Changes between Version 6 and Version 7 of IPC


Timestamp:
2018-06-21T13:43:59Z
Author:
Martin Decky
Comment:

update for current async framework API


= IPC for Dummies =

Understanding HelenOS IPC is essential for the development of HelenOS user space servers and services and,
to a much lesser extent, for the development of any HelenOS user space code. This document attempts to concisely explain how
to use the HelenOS IPC. It doesn't aspire to be exhaustive nor to cover the implementation details of the IPC
subsystem itself. The original design motivations are explained in Chapter 8 of the [http://www.helenos.eu/doc/design.pdf HelenOS design documentation].

 * [#IpcIntroRT Introduction to the runtime environment]
     

 * [#IpcSkeletonSrv Writing a skeleton server]

== Introduction to the runtime environment == #IpcIntroRT
     
each task breaks down to one or more independently scheduled ''threads''.

In user space, each thread executes by means of lightweight execution entities called ''fibrils''.
The distinction between threads and fibrils is that the kernel schedules threads and it is completely unaware of fibrils.

The standard library cooperatively schedules fibrils and lets them run on behalf of the underlying thread. Due to this
     
 * The underlying thread is preempted by the kernel

Fibrils were introduced especially to facilitate more straightforward IPC communication.

== Basics of IPC communication == #IpcIntroIPC

Because tasks are isolated from each other, they need to use the kernel's syscall interface for communication with the rest of
the world. In the previous generation of microkernels, the emphasis was put on synchronous IPC communication. In HelenOS, both
synchronous and asynchronous communication is possible, but the HelenOS IPC is primarily asynchronous.

The concepts and terminology of HelenOS IPC are based on the natural abstraction of a telephone dialogue between a man on one
     
Because of that, the call cannot be immediately answered, but needs to be first picked up from the answerbox by the second party.

In HelenOS, IPC communication works as in the following example. A user space fibril uses one of its ''phones'', which is connected to the
callee task's ''answerbox'', and makes a short ''call''. The caller fibril can either make another call or wait for the answer. The callee task
now has a missed call stored in its answerbox. Sooner or later, one of the callee task's fibrils will pick the call up, process it and either answer
     
available fibril to pick it up, but then we could not talk about a connection, and if we tried to preserve the concept of a connection, the code handling
incoming calls would most likely become full of state automata and callbacks. In HelenOS, there is a specialized piece of software called the asynchronous
framework, which forms a layer above the low-level IPC mechanism. The asynchronous framework does all the state automata and callback dirty work
itself and hides the implementation details from the programmer.

     
With the asynchronous framework in place, there are two kinds of fibrils:

 * manager fibrils, and
 * worker fibrils.

     
The benefit of using the asynchronous framework and fibrils is that the programmer can do without callbacks and state automata and still use asynchronous communication.

=== Features of HelenOS IPC ===

The features of HelenOS IPC can be summarized in the following list:

 * short calls, consisting of one argument for the method number and five arguments of payload,
     
 * sharing memory from another task,
 * sharing memory to another task,
 * interrupt notifications for user space device drivers.

The first two items can be considered basic building blocks.
     

{{{
#include <async.h>
...
        /*
         * Use the naming service session that abstracts
         * the phone to the naming service.
         */
        async_exch_t *exch = async_exchange_begin(ns_session);
        if (exch == NULL) {
                /* Handle error creating an exchange */
        }

        async_sess_t *session =
            async_connect_me_to_iface(exch,
            INTERFACE_VFS, SERVICE_VFS, 0);
        async_exchange_end(exch);

        if (session == NULL) {
                /* Handle error connecting to the VFS */
        }
}}}

The ''async_connect_me_to_iface()'' is a wrapper for sending the ''IPC_M_CONNECT_ME_TO'' low-level IPC message to the naming service.
The naming service simply forwards the ''IPC_M_CONNECT_ME_TO'' call to the destination service, provided that such a service exists.
Note that the service to which you intend to connect will create a new fibril for handling the connection from your task.
The newly created fibril in the destination task will receive the ''IPC_M_CONNECT_ME_TO'' call and will be given a chance either
to accept or reject the connection. In the snippet above, the client doesn't make use of the server-defined connection argument.
If the connection is accepted, a new non-negative phone number will be returned to the client task and the asynchronous framework
will create a new session for it. From that time on, the task can use that session for making calls to the service.
The connection exists until either side closes it.

The client uses the ''async_hangup(async_sess_t *session)'' interface to close the connection.

== Passing short IPC messages == #IpcShortMsg
     
protocol-defined methods, the payload arguments will be defined by the protocol in question.

Even though a user space task can use the low-level IPC mechanisms directly, it is strongly discouraged (unless you know what you are doing) in favor of
using the asynchronous framework. Making an asynchronous request via the asynchronous framework is fairly easy, as can be seen in the following example:

{{{
#include <async.h>
...
        async_exch_t *exch = async_exchange_begin(session);
        if (exch == NULL) {
                /* Handle error creating an exchange */
        }

        ipc_call_t answer;
        aid_t req = async_send_3(exch, VFS_IN_OPEN, lflags, oflags, 0, &answer);
        async_exchange_end(exch);
...
        int rc;
        async_wait_for(req, &rc);

        if (rc != EOK) {
                /* Handle error from the server */
        }
}}}

In the example above, the standard library is making an asynchronous call to the VFS server.
The method number is ''VFS_IN_OPEN'', and ''lflags'', ''oflags'' and 0 are three payload arguments defined
by the VFS protocol. Note that the number of arguments figures in the numeric suffix of the ''async_send_3()''
function name. There are analogous interfaces which take from zero to five payload arguments.

     

{{{
#include <async.h>
...
        async_exch_t *exch = async_exchange_begin(session);
        if (exch == NULL) {
                /* Handle error creating an exchange */
        }

        int rc = async_req_1_0(exch, VFS_IN_CLOSE, fildes);
        async_exchange_end(exch);

        if (rc != EOK) {
                /* Handle error from the server */
        }
}}}

The example above illustrates how the standard library synchronously calls the VFS server and asks it to close a file descriptor passed
in the ''fildes'' argument, which is the only payload argument defined for the ''VFS_IN_CLOSE'' method. The interface name encodes the number of input and return arguments in the function name, so there are variants that take or return a different number of arguments. Note that, contrary to the asynchronous example above, the return arguments are stored directly to pointers passed to the function.

The interface for answering calls is ''async_answer_n()'', where ''n'' is the number of return arguments. This is how the VFS server answers the ''VFS_IN_OPEN'' call:

{{{
        async_answer_1(rid, EOK, fd);
}}}

In this example, ''rid'' is the capability of the received call, ''EOK'' is the return value and ''fd'' is the only return argument.

== Passing large data via IPC == #IpcDataCopy

Passing five words of payload in a request and five words of payload in an answer is not very suitable for larger data transfers.
Instead, the application can use these building blocks to negotiate the transfer of a much larger block (currently there is a hard limit
of 64 KiB). The negotiation has three phases:

 * the initial phase in which the client announces its intention to copy memory to or from the recipient,
     
 * the final phase in which the server either accepts or rejects the bid.

We use the terms client and server instead of the terms sender and recipient, because a client can be both the sender and the recipient and
a server can be both the recipient and the sender, depending on the direction of the data transfer. In the following text, we'll cover both.

In theory, the programmer can use the low-level short IPC messages to implement all three phases himself or herself. However, this can be
tedious and error prone, and therefore the standard library offers convenience wrappers for each phase instead.

=== Sending data ===