Changes between Version 2 and Version 3 of Genode

2012-08-20T15:14:42Z


= HelenOS as a Genode platform =
This document is intended to help understand the port of the Genode
how these differences have been solved. \\
Corresponding Ticket: #419 \\
Code base:\\
General information about Genode:\\
== Syscall Availability ==
Syscalls are made available to Genode via a separate platform library, which includes some header files from HelenOS's /abi tree and uses these to provide basic syscall bindings.\\
== Porting the IPC Framework ==
=== Design ===
||= IPC Routing    =|| by a global naming service (first task to be loaded in userspace)\\ * every newly created task has an initial ipc connection to the naming service\\ * the naming service has to be asked for at least the first connection\\ * calls are forwarded at will || parent of each single task / thread, identified by a unique parent capability\\ * newly created tasks have a connection to their parent\\ * the parent has to be asked for every connection\\ * a call is routed through the task "tree" until the destination is reached ||
||= Connection Identification =|| by phone handle (simple int value) || by Capability, consisting of a platform-specific argument and a global or local id for identification ||
||= IPC Messages =|| different lengths possible\\ * short messages using at most 6 unsigned long arguments, submitted directly in a short call\\ * longer messages submitted by copying memory from one task to another\\ * longer messages submitted via shared memory\\ * requesting or sharing a connection requires an extra ipc call for every single phone handle || variable sizes\\ * messages have a user-defined maximum length \\ * capabilities can be sent within each message (more than one is possible) ||
From these differences arise several problems which have to be solved by the port:\\
'''IPC Manager Thread & IPC Call Queue:'''
 Threads are divided into two groups: "worker threads" and "ipc manager threads". A single task may consist of an arbitrary number (but at least one) of worker threads, which are created by the running application. Additionally, every task contains one single IPC Manager Thread, which has exclusive access to the task's answerbox. The IPC Manager Thread polls the answerbox, and as soon as a call drops in, it delivers the call to the addressed thread. Every worker thread waiting for incoming calls has to register with the IPC Manager Thread so calls can be delivered to it (since otherwise the IPC Manager Thread has no knowledge of other threads). Every worker thread owns an exclusive IPC Call Queue: waiting operations of the worker threads are performed on their IPC Call Queue, and incoming calls are stored in this queue by the IPC Manager Thread. The IPC Manager Thread is only invoked as soon as one of the worker threads is waiting for an incoming ipc call, to prevent unnecessary overhead.\\
Furthermore, for easy handling, a class has been introduced which conveniently wraps IPC calls and grants easy access to all necessary attributes.\\
'''Sending / Receiving IPC Messages:'''
 Since all messages are of variable size, a call which copies memory from the caller to the callee is used to send the message buffer (a char pointer with a defined amount of payload).\\
'''Sending Capabilities "within messages":'''
 When a Capability is marshalled into a message, it is stored in an extra queue of fixed size (inside the message class). The number of Capabilities stored in this queue is written at the very beginning of the message buffer right before it is sent (as described before), while the Capabilities themselves are not touched. After the message buffer has been sent, each Capability is sent via a separate IPC call, cloning the phone handle of the Capability. Additional arguments for every such call are the corresponding thread id the phone handle points to and the global id of the Capability.
== Porting Core ==
Description taken from [ Genode's documentation] (also suited for further information on Core):
> Core is the first user-level program that takes control when starting up the system. It has access to the raw physical resources and converts them to abstractions that enable multiple programs to use these resources. In particular, core converts the physical address space to higher-level containers called dataspaces. A dataspace represents a contiguous physical address space region with an arbitrary size (at page-size granularity). Multiple processes can make the same dataspace accessible in their local address spaces. The system on top of core never deals with physical memory pages but uses this uniform abstraction to work with memory, memory-mapped I/O regions, and ROM areas.[[BR]]
> Furthermore, core provides all prerequisites to bootstrap the process tree. These prerequisites comprise services for creating processes and threads, for allocating memory, for accessing boot-time-present files, and for managing address-space layouts. Core is almost free from policy. There are no configuration options. The only policy of core is the startup of the init process to which core grants all available resources.
From this description arise several problems in the design of both systems, which have to be solved:
||                  ||= HelenOS offers =||= Genode requires =||
||= userspace memory management =|| only Linux-like virtual memory allocation in userspace || can handle physical address spaces as well as pure virtual ones ||
||= loading modules provided to GRUB =|| kernel loads all provided modules as separate tasks || is able to load tasks provided by GRUB, has to have knowledge about the address space areas the elf image was loaded to ||
||= information about ROM modules\\(modules provided to GRUB) =|| no information about loaded tasks is passed to the userspace || wants to have knowledge about ROM modules ||
||= IPC within first created task =|| not possible with the kernel IPC interface, since the first created task has no phone to itself || IPC between different threads of the first loaded task required ||
'''userspace memory management:'''
 Whilst the Spartan kernel only offers Linux-like virtual memory allocation in userspace, Genode is able to handle physical address spaces on its own, but can use pure virtual address spaces as well.\\ Since Genode already contains an implementation for Linux (base-linux), this implementation has been adapted for use with Spartan. Therefore the syscalls as_area_create (equivalent to Linux's mmap() function) and as_area_destroy (equivalent to Linux's munmap() function) have been implemented.\\
'''loading modules provided to GRUB:'''
 Since the Spartan kernel already loads all modules provided to GRUB and creates tasks for them, Genode does not have to worry about this step.\\
'''information about ROM modules (modules provided to GRUB):'''
 In contrast to the HelenOS userspace, Genode (in particular Core) needs knowledge about all loaded modules (which are referred to as ROM images). Since the Spartan kernel loads all provided modules but does not provide any information about them, a solution had to be found.\\ Luckily, there is already a working implementation coping with this problem in base-hw (introduced to the master branch of Genode just recently), where all modules to be loaded are combined into one single elf image and the information about them is passed to Core via a special assembler file. This solution is being adapted for use with Spartan. [not yet implemented]\\
'''IPC within first created task:'''
 The current IPC implementation does not allow sending local messages between different threads of one and the same task without using syscalls (and thus kernel interference). This applies to Core as well as to any other task. Since Core needs to send messages to itself whilst it (as the first loaded task) has no phone handle to itself, the easiest solution is to load a simple nameserver before starting Core. This simple nameserver answers only the first connection request from Core (which is a connect_to_me() request) and afterwards forwards all incoming calls to Core. Thereby the nameserver does not use the IPC framework, to avoid the creation of the Ipc_manager (which would create a separate, in this case unnecessary, thread). [not yet working properly]\\
Additional implementations for Core:\\
 For handling threads, Genode requires a platform-specific platform_thread implementation, which provides thread creation and destruction.\\
'''special cases for IPC with Core:'''
 # TODO\\
'''facts still to be determined:'''
* how to load elf images with Spartan?
* how do both systems differ in terms of task creation, and how can they be brought together?