Changeset a5b5f17 in mainline for kernel/generic/src/main/main.c


Timestamp:
2024-01-21T16:36:15Z
Author:
Jiří Zárevúcky <zarevucky.jiri@…>
Branches:
master
Children:
1a1e124
Parents:
ed7e057 (diff), d23712e (diff)
Note: this is a merge changeset; the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:

Merge scheduler refactoring to remove the need for thread structure lock

All necessary synchronization is already a product of other operations
that enforce ordering (that is, run queue manipulation and
thread_sleep()/thread_wakeup()). Some fields formally become atomic,
which is only needed because they are read from other threads to print
out statistics. These atomic operations are limited to relaxed
individual reads/writes to native-sized fields, which should, at least
in theory, compile identically to regular volatile variable accesses;
the only difference is that concurrent accesses from different threads
are no longer undefined behavior.

Additionally, it is now possible to switch directly to the new thread's
context instead of going through a separate scheduler stack. A separate
context is only needed and used when no runnable thread is immediately
available, which means switching is optimized for the limiting case
where many threads are waiting for execution. Switching is also avoided
altogether when there is only one runnable thread and it is being
preempted. Originally, the scheduler would switch to a separate stack,
requeue the thread that was running, retrieve that same thread from the
queue, and switch to it again; all of that work is now avoided.

File:
1 edited

  • kernel/generic/src/main/main.c

    r ed7e057 → r a5b5f17

             * starting the thread of kernel threads.
             */
    -       scheduler_run();
    +       current_copy(CURRENT, (current_t *) CPU_LOCAL->stack);
    +       context_replace(scheduler_run, CPU_LOCAL->stack, STACK_SIZE);
            /* not reached */
    }
    …
            ARCH_OP(post_cpu_init);

    -       current_copy(CURRENT, (current_t *) CPU_LOCAL->stack);
    -
            /*
             * If we woke kmp up before we left the kernel stack, we could
    …
             * switch to this cpu's private stack prior to waking kmp up.
             */
    +       current_copy(CURRENT, (current_t *) CPU_LOCAL->stack);
            context_replace(main_ap_separated_stack, CPU_LOCAL->stack, STACK_SIZE);
            /* not reached */