Continue to de-oversynchronize the kernel.
- replace as->refcount with an atomic counter; accesses to this
  reference counter must not be made while the as->lock mutex is held;
  this lets us get rid of mutex_lock_active()
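A minimal sketch of the idea, using C11 stdatomic in place of the kernel's
own atomic primitives; the helper names (as_hold, as_release) are
hypothetical:

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct as {
        /* mutex_t lock; -- still guards other fields, but NOT refcount */
        atomic_uint refcount;  /* touched only via atomic ops, never under as->lock */
    } as_t;

    /* Hypothetical helper: take a new reference. */
    static inline void as_hold(as_t *as)
    {
        atomic_fetch_add_explicit(&as->refcount, 1, memory_order_relaxed);
    }

    /* Hypothetical helper: drop a reference; true when the last one is gone. */
    static inline bool as_release(as_t *as)
    {
        return atomic_fetch_sub_explicit(&as->refcount, 1,
            memory_order_acq_rel) == 1;
    }

Because the counter is self-consistent on its own, no caller ever needs to
take as->lock just to bump it, which is what made mutex_lock_active()
unnecessary.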
Remove the possibility of a deadlock between TLB shootdown and asidlock.
- get rid of mutex_lock_active() on as->lock
- when locking the asidlock spinlock, always do so conditionally and with
  preemption disabled; on failure, enable interrupts and try again (see the
  sketch after this list)
- there should be no deadlock between TLB shootdown and the as->lock mutexes
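A sketch of the retry loop described above; the interrupt, preemption and
spinlock primitives are assumed kernel APIs, declared here only so the
example is self-contained:

    /* Assumed primitives -- names and signatures are illustrative. */
    typedef unsigned int ipl_t;
    extern ipl_t interrupts_disable(void);
    extern void interrupts_restore(ipl_t ipl);
    extern void preemption_disable(void);
    extern void preemption_enable(void);

    typedef struct spinlock spinlock_t;
    extern spinlock_t asidlock;
    extern int spinlock_trylock(spinlock_t *lock);
    extern void spinlock_unlock(spinlock_t *lock);

    void asid_critical_section(void)
    {
        ipl_t ipl;

        preemption_disable();  /* stay on this CPU for the whole retry loop */
        for (;;) {
            ipl = interrupts_disable();
            if (spinlock_trylock(&asidlock))
                break;
            /*
             * Another CPU holding asidlock may be waiting in a TLB
             * shootdown for this CPU to answer its IPI.  Re-enable
             * interrupts for a moment so the shootdown can complete,
             * then try again.
             */
            interrupts_restore(ipl);
        }

        /* ... ASID management under asidlock ... */

        spinlock_unlock(&asidlock);
        interrupts_restore(ipl);
        preemption_enable();
    }

Enabling interrupts between attempts is what breaks the cycle: the shootdown
IPI can be serviced even while this CPU is still trying to take the lock.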
Add DEADLOCK_PROBEs to places where we have spinlock_trylock() loops.
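A minimal sketch of what such a probe can look like; the real DEADLOCK_PROBE
macros in the tree may differ in spelling, threshold and output:

    #include <stdio.h>

    /* Illustrative threshold and probe macros. */
    #define DEADLOCK_THRESHOLD  100000000
    #define DEADLOCK_PROBE_INIT(pname)  unsigned long pname = 0
    #define DEADLOCK_PROBE(pname, value) \
        do { \
            if ((pname)++ > (value)) { \
                (pname) = 0; \
                printf("Deadlock probe %s: exceeded %d tries\n", \
                    #pname, (value)); \
            } \
        } while (0)

Usage in a trylock loop, with the spinlock primitives as declared in the
previous sketch:

    DEADLOCK_PROBE_INIT(p_asidlock);
    while (!spinlock_trylock(&asidlock)) {
        DEADLOCK_PROBE(p_asidlock, DEADLOCK_THRESHOLD);
        /* enable interrupts / back off here, as described above */
    }

The probe is purely diagnostic: a loop that spins past the threshold is
reported as a likely deadlock instead of hanging silently.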