Changes between Initial Version and Version 1 of Ticket #231


Timestamp: 2010-04-24T20:13:49Z
Author: Jakub Jermář
Comment: (none)

  • Ticket #231 – Description

    initial → v1

     3  In order to figure out more, I improved the spinlock code to be more sensitive to random lock corruption (which I can thus rule out) and also to be more observable by providing a global ring buffer for recording the locking history. See the attachment for the diff. I am also going to attach screenshots which illustrate the panics.
     4
    -5  Frankly speaking, my suspect number one is actually Qemu (since the HelenOS code looks good to me atm.), but I am logging this ticket anyway just in case I am wrong. One more thing which makes me think that this is rather a Qemu issue is that, with the given ring buffer and the spinlock_lock_debug() code, I would expect the panic to occur in spinlock_lock_debug() on either of the two checks for multiple CPUs in the CS, and not so late in spinlock_unlock().
    +5  Frankly speaking, my suspect number one is actually Qemu (since the HelenOS code looks good to me atm.), but I am logging this ticket anyway just in case I am wrong. One more thing which makes me think that this is rather a Qemu issue is that, with the given ring buffer and the spinlock_lock_debug() code, I would expect the panic to occur in spinlock_lock_debug() on either of the two checks for multiple CPUs in the CS, and not so late in spinlock_unlock(). With this behavior, the simulated CPUs appear to use some very strange memory model (i.e. we observe the effect of the lock_event_record() on both CPUs that manage to "lock" the spinlock, but in most cases do not hit the "not alone in critical section" panic).
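For readers unfamiliar with the debugging approach described in the ticket, below is a minimal, self-contained C sketch of the idea: a global ring buffer recording locking events, plus ownership checks in the lock and unlock paths that panic when two CPUs appear to be inside the critical section at once. The names spinlock_lock_debug(), spinlock_unlock(), and lock_event_record() come from the ticket text; everything else (the spinlock_t layout, cpu_id(), panic(), the buffer size) is assumed for illustration and does not reflect the actual HelenOS code in the attached diff.

/* Sketch only; not the HelenOS implementation. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define LOCK_HISTORY_LEN 1024

typedef enum { EV_LOCK, EV_UNLOCK } lock_event_t;

typedef struct {
    lock_event_t event;
    unsigned cpu;
} lock_record_t;

/* Global ring buffer holding the most recent locking events. */
static lock_record_t lock_history[LOCK_HISTORY_LEN];
static atomic_uint lock_history_next;

static void lock_event_record(lock_event_t ev, unsigned cpu)
{
    unsigned slot = atomic_fetch_add(&lock_history_next, 1) % LOCK_HISTORY_LEN;
    lock_history[slot].event = ev;
    lock_history[slot].cpu = cpu;
}

typedef struct {
    atomic_flag flag;   /* the lock bit itself */
    atomic_uint owner;  /* CPU inside the CS, stored as id + 1; 0 = free */
} spinlock_t;

/* Stub; a real kernel would read the per-CPU id (assumed helper). */
static unsigned cpu_id(void) { return 0; }

/* Assumed stand-in for the kernel's panic(). */
static void panic(const char *msg) { fprintf(stderr, "panic: %s\n", msg); abort(); }

static void spinlock_lock_debug(spinlock_t *lock)
{
    while (atomic_flag_test_and_set_explicit(&lock->flag, memory_order_acquire))
        ;  /* spin until the lock bit is acquired */

    /* Check 1: nobody else may already be inside the critical section. */
    if (atomic_load(&lock->owner) != 0)
        panic("not alone in critical section");

    atomic_store(&lock->owner, cpu_id() + 1);
    lock_event_record(EV_LOCK, cpu_id());

    /* Check 2: our ownership record must not have been overwritten. */
    if (atomic_load(&lock->owner) != cpu_id() + 1)
        panic("not alone in critical section");
}

static void spinlock_unlock(spinlock_t *lock)
{
    /* The ticket's panics fire at this stage: the unlocking CPU finds the
     * lock in an unexpected state even though both checks above passed. */
    if (atomic_load(&lock->owner) != cpu_id() + 1)
        panic("spinlock_unlock(): lock not held by this CPU");

    lock_event_record(EV_UNLOCK, cpu_id());
    atomic_store(&lock->owner, 0);
    atomic_flag_clear_explicit(&lock->flag, memory_order_release);
}

int main(void)
{
    static spinlock_t lock = { .flag = ATOMIC_FLAG_INIT };
    spinlock_lock_debug(&lock);
    spinlock_unlock(&lock);
    return 0;
}

The point of the two checks is that, under a sane memory model, a second CPU entering the critical section should trip one of the panics in spinlock_lock_debug() immediately; the ticket's observation that the panic instead fires later, in spinlock_unlock(), is what directs the suspicion at Qemu's simulated memory model rather than at the HelenOS code.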