
HelenOS as a Genode platform

This document is intended to help the reader understand the port of the Genode Operating System Framework to HelenOS by pointing out major differences in the design, implementation and terminology of both platforms, and by describing how these differences have been resolved.
Corresponding Ticket: #419
Code base: https://github.com/kurbel/genode/tree/base-spartan

General information about Genode: http://genode.org/documentation/index


Syscall Availability

Syscalls are made available to Genode via a separate platform library. This library includes a few header files from HelenOS's /abi tree and uses them to provide a basic syscall interface.
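
A minimal sketch of what such a wrapper could look like is given below. The include path, the trap stub name and the argument types are assumptions made for illustration; the actual binding is architecture specific and lives in the platform library.

    /*
     * Hedged sketch of a thin syscall binding for the base-spartan platform
     * library. The SYS_* IDs come from HelenOS's abi tree; the trap stub
     * itself would be implemented in architecture-specific assembly.
     */
    #include <abi/syscall.h>   /* syscall_t with the SYS_* IDs (assumed include path) */

    /* architecture-specific trap stub, implemented in assembly elsewhere */
    extern "C" unsigned long spartan_syscall(unsigned long a1, unsigned long a2,
                                             unsigned long a3, unsigned long a4,
                                             unsigned long a5, unsigned long a6,
                                             unsigned int id);

    /* example wrapper: terminate the calling thread with a status value */
    inline void spartan_thread_exit(unsigned long status)
    {
        spartan_syscall(status, 0, 0, 0, 0, 0, SYS_THREAD_EXIT);
    }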


Porting the IPC Framework

Design

Before describing the design of the port, it is necessary to understand how both systems differ in their basic approaches:

IPC End Points
  HelenOS offers: task-to-task communication
    * fibrils are used as userspace threads
    * one answerbox per task
  Genode requires: thread-to-thread communication
    * uses kernel primitives for threading
    * calls are delivered to threads

IPC Routing
  HelenOS offers: routing by a global naming service (the first task to be loaded in userspace)
    * every newly created task has an initial IPC connection to the naming service
    * the naming service has to be asked for at least the first connection
    * calls are forwarded at will
  Genode requires: routing via the parent of each single task / thread, identified by a unique parent capability
    * newly created tasks have a connection to their parent
    * the parent has to be asked for every connection
    * a call is routed through the task "tree" until the destination is reached

Connection Identification
  HelenOS offers: identification by phone handle (a simple int value)
  Genode requires: identification by capability, consisting of a platform-specific argument and a global or local ID

IPC Messages
  HelenOS offers: different lengths possible
    * short messages using at most 6 unsigned long arguments in a short call, submitted directly
    * longer messages submitted by copying memory from one task to another
    * longer messages submitted via shared memory
    * requesting or sharing a connection requires an extra IPC call for every single phone handle
  Genode requires: variable sizes
    * messages have a user-defined maximum length
    * capabilities can be sent within each message (more than one is possible)

Several problems arise from these differences and have to be solved by the port:
Accessing the Answerbox:

Since there is only a single answerbox for each task, but Genode requires individual threads to be the endpoints of communication, access to the answerbox has to be regulated. A thread cannot determine whether an incoming call in the answerbox is addressed to it, and calls taken from the answerbox cannot be put back, so a solution has to be found for how calls are taken from the answerbox and delivered to the addressed thread.

IPC destination vs. Capabilities

The purpose of using capabilities is to eliminate the need to know global names of destinations. Since an IPC call in Genode has to be delivered to a thread inside a task, but the Spartan kernel itself can only deliver a call to a specific task, the ID of the addressed thread has to be passed along with the call (as a plain argument). This makes it inevitable that the capability knows the thread ID of its destination, so the receiving task can deliver the call to the addressed thread.
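
A minimal sketch of how such a capability could be represented is given below; the class and member names are illustrative and not taken from the actual base-spartan sources.

    /*
     * Hypothetical representation of a capability on base-spartan: it combines
     * the kernel-level destination (phone handle) with the thread ID needed to
     * route the call inside the receiving task, plus a global ID used by
     * Genode to identify the referenced object.
     */
    #include <stdint.h>

    class Native_capability
    {
        private:

            int      _phone;      /* Spartan phone handle to the destination task */
            uint64_t _thread_id;  /* ID of the addressed thread inside that task  */
            long     _global_id;  /* Genode-level (global or local) ID            */

        public:

            Native_capability(int phone, uint64_t thread_id, long global_id)
            : _phone(phone), _thread_id(thread_id), _global_id(global_id) { }

            int      dst()        const { return _phone; }      /* kernel destination  */
            uint64_t thread_id()  const { return _thread_id; }  /* routing inside task */
            long     local_name() const { return _global_id; }
    };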

Sending Capabilities "within Messages"

The Spartan kernel can share IPC connections (phone handles) only by copying them via a special IPC call. From the point of view of Genode's API, on the other hand, sending capabilities looks as if the capabilities were sent within the message (they are marshalled into the message using the "<<" operator like every other value). Capabilities in messages therefore have to be recognized, stored, and safely sent and received.


To overcome these differences and problems, the port of the IPC framework has been designed as follows:
Threading:

Genode uses real kernel threads for threading, created through a syscall.

IPC Manager Thread & IPC Call Queue:

Threads are divided into two groups: "worker threads" and "IPC manager threads". A single task may consist of an arbitrary number (but at least one) of worker threads, which are created by the running application. Additionally, every task contains exactly one IPC Manager Thread, which has exclusive access to the task's answerbox. The IPC Manager Thread polls the answerbox in a loop and, as soon as a call arrives, delivers it to the addressed thread. Every worker thread waiting for incoming calls has to register with the IPC Manager Thread so that calls can be delivered to it (otherwise the IPC Manager Thread has no knowledge of other threads). Every worker thread owns an exclusive IPC Call Queue: the worker thread performs its waiting operations on this queue, and the IPC Manager Thread stores incoming calls in it. The IPC Manager Thread is only invoked once one of the worker threads is waiting for an incoming IPC call, which prevents unnecessary overhead.

Furthermore, for easy handling, a class has been introduced that conveniently wraps IPC calls and grants easy access to all of their attributes.
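
The following sketch illustrates this division of labour; all names, the queue type and the wait primitive are assumptions made for illustration, not the actual base-spartan code.

    /*
     * Hedged sketch of the manager/worker split. The IPC Manager Thread is the
     * only thread touching the answerbox; worker threads block on their private
     * IPC Call Queue. ipc_wait_for_call() stands in for the real answerbox wait.
     */
    #include <deque>
    #include <mutex>
    #include <condition_variable>
    #include <unordered_map>

    struct Ipc_call { unsigned long dst_thread_id; /* ... further call attributes ... */ };

    class Ipc_call_queue
    {
        std::deque<Ipc_call>    _calls;
        std::mutex              _mutex;
        std::condition_variable _avail;

        public:

            /* called by the IPC Manager Thread to deliver a call */
            void insert(Ipc_call const &call)
            {
                { std::lock_guard<std::mutex> guard(_mutex); _calls.push_back(call); }
                _avail.notify_one();
            }

            /* called by the owning worker thread to wait for a call */
            Ipc_call wait_for_call()
            {
                std::unique_lock<std::mutex> lock(_mutex);
                _avail.wait(lock, [&] { return !_calls.empty(); });
                Ipc_call call = _calls.front();
                _calls.pop_front();
                return call;
            }
    };

    class Ipc_manager
    {
        std::unordered_map<unsigned long, Ipc_call_queue *> _workers;

        Ipc_call ipc_wait_for_call();  /* wraps the Spartan answerbox wait syscall */

        public:

            /* worker threads register themselves before waiting for calls */
            void register_worker(unsigned long thread_id, Ipc_call_queue *queue)
            {
                _workers[thread_id] = queue;
            }

            /* main loop: exclusive access to the task's answerbox */
            void loop()
            {
                for (;;) {
                    Ipc_call call = ipc_wait_for_call();
                    auto it = _workers.find(call.dst_thread_id);
                    if (it != _workers.end())
                        it->second->insert(call);   /* deliver to addressed thread */
                }
            }
    };
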
Sending / Receiving IPC Messages:

Since all messages are of variable size, an IPC call that copies memory from the caller to the callee is used to send the message buffer (a char pointer with a defined amount of payload).

Sending Capabilities "within messages":

When a capability is marshalled into a message, it is stored in an extra fixed-size queue inside the message class. The number of capabilities stored in this queue is written at the very beginning of the message buffer right before the buffer is sent (as described above), while the capabilities themselves are left untouched. After sending the message buffer, all capabilities are sent via IPC calls that clone the phone handle of the respective capability. Additional arguments of each such call are the corresponding thread ID the phone handle points to and the global ID of the capability.
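
A condensed sketch of this send path is shown below. The helpers ipc_data_write() and ipc_clone_phone() are placeholders for the actual Spartan IPC wrappers, and the buffer layout is only meant to illustrate the "count first, capabilities afterwards" scheme.

    /*
     * Hedged sketch of sending a message that carries capabilities.
     * Step 1: prefix the buffer with the number of marshalled capabilities and
     *         transfer it with a memory-copying IPC call.
     * Step 2: for every capability, issue an extra IPC call that clones its
     *         phone handle, passing the destination thread ID and the global
     *         ID of the capability as arguments.
     */
    #include <cstddef>
    #include <cstring>

    struct Cap { int phone; unsigned long thread_id; long global_id; };

    /* placeholders for the real Spartan IPC wrappers (assumed signatures) */
    void ipc_data_write(int phone, const char *buf, std::size_t size);
    void ipc_clone_phone(int phone, int cap_phone,
                         unsigned long dst_thread_id, long cap_global_id);

    bool send_message(int phone, const char *payload, std::size_t payload_size,
                      const Cap *caps, std::size_t cap_count)
    {
        char buf[1024];
        if (payload_size > sizeof(buf) - sizeof(cap_count))
            return false;                 /* no bounds handling in this sketch */

        /* step 1: capability count, followed by the payload */
        std::memcpy(buf, &cap_count, sizeof(cap_count));
        std::memcpy(buf + sizeof(cap_count), payload, payload_size);
        ipc_data_write(phone, buf, sizeof(cap_count) + payload_size);

        /* step 2: one extra call per capability, cloning its phone handle */
        for (std::size_t i = 0; i < cap_count; i++)
            ipc_clone_phone(phone, caps[i].phone,
                            caps[i].thread_id, caps[i].global_id);
        return true;
    }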



Porting Core

Description taken from Genode's documentation (which is also a good source for further information on Core)

Core is the first user-level program that takes control when starting up the system. It has access to the raw physical resources and converts them to abstractions that enable multiple programs to use these resources. In particular, core converts the physical address space to higher-level containers called dataspaces. A dataspace represents a contiguous physical address space region with an arbitrary size (at page-size granularity). Multiple processes can make the same dataspace accessible in their local address spaces. The system on top of core never deals with physical memory pages but uses this uniform abstraction to work with memory, memory-mapped I/O regions, and ROM areas.
Furthermore, core provides all prerequisites to bootstrap the process tree. These prerequisites comprise services for creating processes and threads, for allocating memory, for accessing boot-time-present files, and for managing address-space layouts. Core is almost free from policy. There are no configuration options. The only policy of core is the startup of the init process to which core grants all available resources.


From this description, several problems arise out of the design differences between both systems and have to be solved:

userspace memory management
  HelenOS offers: only Linux-like virtual memory allocation in userspace
  Genode requires: handling of physical address spaces as well as pure virtual ones

loading modules provided to GRUB
  HelenOS offers: the kernel loads all provided modules as separate tasks
  Genode requires: being able to load tasks provided by GRUB, and knowledge about the address space areas the ELF image was loaded to

information about ROM modules (modules provided to GRUB)
  HelenOS offers: no information about loaded tasks is passed to userspace
  Genode requires: knowledge about ROM modules

IPC within the first created task
  HelenOS offers: not possible with the kernel IPC interface, since the first created task has no phone to itself
  Genode requires: IPC between different threads of the first loaded task

userspace memory management:

Whilst the Spartan kernel only offers Linux-like virtual memory allocation in userspace, Genode is able to handle physical address spaces on its own, but can use pure virtual address spaces as well.
Since Genode already contains an implementation for Linux (base-linux), this implementation has been adapted for use with Spartan. To this end, the syscalls as_area_create (roughly equivalent to Linux's mmap()) and as_area_destroy (roughly equivalent to Linux's munmap()) have been implemented.
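
The sketch below illustrates this rough correspondence; the wrapper names, flag values and the exact as_area_create()/as_area_destroy() signatures are assumptions and may differ from the real interface.

    /*
     * Hedged sketch: mmap()/munmap()-style anonymous memory handling mapped
     * onto the Spartan address-space-area syscalls. Signatures and flag values
     * are illustrative only.
     */
    #include <cstddef>

    /* assumed wrappers around the as_area_create / as_area_destroy syscalls */
    extern "C" void *as_area_create(void *address, std::size_t size, unsigned int flags);
    extern "C" int   as_area_destroy(void *address);

    enum { AS_AREA_READ = 1u, AS_AREA_WRITE = 2u };  /* assumed flag values */

    /* mmap()-like allocation used by the adapted base-linux code path */
    void *anon_alloc(void *at, std::size_t size)
    {
        return as_area_create(at, size, AS_AREA_READ | AS_AREA_WRITE);
    }

    /* munmap()-like counterpart */
    void anon_free(void *addr)
    {
        as_area_destroy(addr);
    }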

loading modules provided to GRUB:

Since the Spartan kernel already loads all modules provided to GRUB and creates tasks for them, Genode does not have to take care of this.

information about ROM modules (modules provided to GRUB):

In contrast to the HelenOS userspace, Genode (in particular Core) needs knowledge about all loaded modules (which are referred to as ROM images). Since the Spartan kernel loads all provided modules but does not expose any information about them, a solution has to be found.
Luckily, a working implementation that copes with this problem already exists in base-hw (introduced to the master branch of Genode just recently): all modules to be loaded are combined into one single ELF image, and the information about them is passed to Core via a special assembler file. This solution is being adapted for use with Spartan. [not yet implemented]
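
For illustration, Core's view of such a module table could look roughly like the sketch below; the symbol names and the record layout are assumptions and do not mirror the actual base-hw file.

    /*
     * Hedged sketch of the Core-side view: the generated assembler file is
     * assumed to export a module count and a table of (name, start, size)
     * records that Core turns into ROM modules. All names are invented.
     */
    #include <cstddef>

    struct Boot_module
    {
        const char *name;    /* module name as passed to GRUB                 */
        void       *start;   /* start of the module within the combined image */
        std::size_t size;    /* size of the module in bytes                   */
    };

    /* symbols assumed to be provided by the generated assembler file */
    extern "C" Boot_module const _boot_modules[];
    extern "C" std::size_t const _boot_modules_count;

    /* Core would iterate this table and announce each entry as a ROM image */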

IPC within first created task:

The current IPC implementation does not allow sending local messages between different threads of one and the same task without using syscalls (and thus kernel interference). This applies to Core as well as to any other task. Since Core needs to send messages to itself while it (as the first loaded task) has no phone handle to itself, the easiest solution is to load a simple nameserver before starting Core. This simple nameserver only answers the first connection request from Core (which is a connect_to_me() request) and afterwards forwards all incoming calls to Core. The nameserver deliberately does not use the IPC framework, in order to avoid the creation of the Ipc_manager (which would create a separate, in this case unnecessary, thread). [not yet working properly]
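
Its main loop could look roughly like the following sketch; the helper functions are placeholders for the raw Spartan IPC syscalls and the accept/forward details are heavily simplified.

    /*
     * Hedged sketch of the minimal nameserver loaded before Core. It answers
     * Core's initial connect_to_me() request (thereby obtaining a phone to
     * Core) and from then on forwards every incoming call to Core. All helper
     * functions are placeholders for raw Spartan IPC syscalls.
     */
    struct Call { unsigned long method; int handle; /* ... further arguments ... */ };

    /* assumed wrappers around the Spartan IPC syscalls */
    Call wait_for_call();                            /* block on the own answerbox  */
    void answer_accept(Call const &call);            /* accept a connection request */
    void forward_call(int phone, Call const &call);  /* hand the call onwards       */

    enum { CONNECT_TO_ME = 1 };  /* assumed method number of the connect_to_me() request */

    void nameserver_main()
    {
        int core_phone = -1;   /* phone handle to Core, valid once Core connected */

        for (;;) {
            Call call = wait_for_call();

            if (core_phone < 0 && call.method == CONNECT_TO_ME) {
                /* first request from Core: accept it and keep the offered phone */
                core_phone = call.handle;
                answer_accept(call);
                continue;
            }

            /* everything else is simply forwarded to Core */
            if (core_phone >= 0)
                forward_call(core_phone, call);
        }
    }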

Additional implementations for Core:
threads:

For handling threads, Genode requires a platform-specific platform_thread implementation, which provides access to thread creation and destruction.
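
A skeletal sketch of such a class is given below; the member and wrapper names are assumptions chosen for illustration.

    /*
     * Hedged sketch of a platform_thread implementation for base-spartan.
     * start() would ultimately issue the Spartan thread-creation syscall;
     * the wrapper is only declared here, not implemented.
     */
    #include <cstdint>

    /* assumed wrapper around the thread-creation syscall */
    extern "C" int spartan_thread_create(void (*entry)(void *), void *arg,
                                         const char *name, uint64_t *thread_id);

    class Platform_thread
    {
        private:

            const char *_name;       /* thread name, mainly for debugging     */
            uint64_t    _thread_id;  /* kernel thread ID, valid after start() */

        public:

            explicit Platform_thread(const char *name)
            : _name(name), _thread_id(0) { }

            /* create and start the kernel thread */
            int start(void (*entry)(void *), void *arg)
            {
                return spartan_thread_create(entry, arg, _name, &_thread_id);
            }

            uint64_t id() const { return _thread_id; }

            /* thread destruction would be implemented here as well */
    };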

special cases for IPC with Core:

# TODO



facts still to be determined:

  • how to load ELF images with Spartan?
  • how do both systems differ in terms of task creation, and how can they be brought together?