
Opened 6 years ago

Last modified 5 years ago

#393 new enhancement

Async single connection per session

Reported by: svoboda Owned by:
Priority: major Milestone:
Component: helenos/unspecified Version: mainline
Keywords: Cc:
Blocker for: Depends on:
See also:

Description

Currently EXCHANGE_PARALLEL is implemented using multiple IPC connections. This causes problems: the framework often lacks the information needed to create the additional connections (it does not know how many arguments are consumed by interposed naming services, or how to handle callback connections).

This can all be solved by multiplexing all exchanges into a single connection, which can now be implemented without changes to the async API. The framework would transparently tag each IPC call with an exchange ID (these would need to be allocated by the framework).

Change History (4)

comment:1 Changed 5 years ago by svoboda

It seems preferable to reserve the top half of ARG0 (IMETHOD) for the exchange ID. This means the kernel needs to be updated not to interpret the top half of ARG0, since exchanges can carry system messages (e.g. IPC_M_READ).

comment:2 Changed 5 years ago by svoboda

While the change will be transparent to clients, it cannot currently be made in a manner transparent to servers. A server assumes that each virtual-path handler (exchange-sequence handler) will be passed a new CONNECT_ME_TO call, which its code then validates and answers.

The minimal possible change is to have the server specify a handler for session initiation (which answers the CONNECT_ME_TO call) plus a handler for the virtual path / exchange sequence.

The ideal change is to have the server specify:

  • optional handler/validator for session initiation
  • exchange handler
  • optional handler for session termination

This would address the need of ticket #391 to delimit exchange boundaries. It may overlap in functionality with the client data constructor/destructor functions; the two could possibly be merged somehow.

comment:3 follow-up: Changed 5 years ago by svoboda

As evidenced by #441 (netecho cannot be killed), there is another advantage to this. Right now there is a problem with any blocking call: if you kill the client, the server may never process the hangup message (since the blocking call may never return, and the hangup message is processed in band, in order, after the blocking call returns).

On the other hand, with single connection per session, a server task will get all messages from the answerbox immediately and queue them to individual exchanges. For a hangup message it will execute the session termination handler immediately. This handler can then abort the outstanding exchanges and close any callback sessions.

comment:4 in reply to: ↑ 3 Changed 5 years ago by jermar

Replying to svoboda:

if you kill the client, the server may never process the hangup message (since the blocking call may never return and hangup message is processed in band, in order, after the blocking call returns).

Right now, there is a difference between the behavior on hangup for blocked servers and blocked clients. While the async framework will attempt to wake up a blocked server, it lets a blocked client continue to sleep. We should fix this for blocked clients, and for callers of fibril_condvar_wait() as well.
