Opened 14 years ago

Closed 13 years ago

#204 closed defect (notadefect)

Running nettest1 and nettest2 tests repeatedly will give worse and worse results

Reported by: Jakub Jermář
Owned by:
Priority: major
Milestone: 0.4.3
Component: helenos/net/other
Version: mainline
Keywords:
Cc:
Blocker for:
Depends on:
See also:


Running nettest1 and nettest2 tests repeatedly will give worse and worse execution times.

Change History (5)

comment:1 by Jakub Jermář, 13 years ago

I did some experiments and noticed a couple of things:

  • a run's result is only very rarely better than the previous one; usually it is worse
  • the averaged running times (*) of socket_core() seem to follow the same trend of worsening (and very occasional improvement) as the running times of the entire application
  • after a longer period of UDP inactivity, both kinds of running times improve a bit, but then start to worsen again as before

(*) I modified the sources of socket_core() to report the average number of ticks spent in socket_core() after every 100 created sockets.
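For reference, the instrumentation was roughly of this shape (a minimal sketch; the wrapper name and the clock()-based tick source are my assumptions, not the actual HelenOS code, which has its own tick counter):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Stand-in tick source; the real measurement would use HelenOS's
 * own tick/cycle counter. */
static uint64_t get_ticks(void)
{
	return (uint64_t) clock();
}

static uint64_t total_ticks;
static unsigned created;

/* Hypothetical wrapper over the measured code path: accumulate the
 * ticks spent and print the running average after every 100 created
 * sockets. */
static void socket_core_instrumented(void)
{
	uint64_t start = get_ticks();
	/* ... the body of socket_core() would run here ... */
	total_ticks += get_ticks() - start;
	created++;
	if (created % 100 == 0)
		printf("avg ticks after %u sockets: %llu\n", created,
		    (unsigned long long) (total_ticks / created));
}
```

A steadily growing average printed by such a counter is what pointed the suspicion away from any single slow call and toward cumulative state, such as the allocator's free list.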

socket_core() seems to spend most of its time in malloc(), so I wonder whether the issue can be attributed to the first-fit userspace allocator.
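To illustrate why a first-fit allocator would produce exactly this gradual slowdown, here is a toy model (my sketch, not the actual HelenOS allocator): small free fragments that accumulate near the head of the free list must be scanned over and over by every allocation they cannot satisfy.

```c
#include <assert.h>
#include <stddef.h>

/* Toy first-fit free list: each node is one free block. */
struct block {
	size_t size;
	struct block *next;
};

#define NBLOCKS 4096

static struct block pool[NBLOCKS];
static struct block *free_list;
static unsigned long scanned;	/* total nodes examined by first_fit() */

/* Build a churned heap: many 8-byte fragments at the head of the
 * list, one big block at the tail. */
static void init(void)
{
	free_list = NULL;
	for (int i = 0; i < NBLOCKS; i++) {
		pool[i].size = (i == 0) ? (1u << 20) : 8;
		pool[i].next = free_list;
		free_list = &pool[i];	/* pool[0] ends up last */
	}
}

/* Classic first fit: walk the list and return the first block that
 * is large enough, counting how many nodes were examined. */
static struct block *first_fit(size_t size)
{
	for (struct block *b = free_list; b != NULL; b = b->next) {
		scanned++;
		if (b->size >= size)
			return b;
	}
	return NULL;
}
```

In this model an 8-byte request is satisfied by the very first node, but a 64-byte request must scan past all 4095 small fragments before reaching the big block. As alloc/free churn leaves more fragments behind, every allocation gets slower, which matches the worsening trend observed here.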

comment:2 by Jakub Jermář, 13 years ago


comment:3 by Jakub Jermář, 13 years ago

Resolution: deferred
Status: new → closed

I've just made some finer measurements of the average running time of socket_create(). When the function is split into three segments as follows:

1st segment:

int socket_create(socket_cores_ref local_sockets, int app_phone,
    void *specific_data, int *socket_id)
{
        socket_core_ref socket;
        int positive;
        int rc;

        if (!socket_id)
                return EINVAL;

        // store the socket
        if (*socket_id <= 0) {
                positive = (*socket_id == 0);
                *socket_id = socket_generate_new_id(local_sockets, positive);
                if (*socket_id <= 0)
                        return *socket_id;
                if (!positive)
                        *socket_id *= -1;
        } else if (socket_cores_find(local_sockets, *socket_id)) {
                return EEXIST;
        }
2nd segment:

        socket = (socket_core_ref) malloc(sizeof(*socket));
        if (!socket)
                return ENOMEM;

3rd segment:

        socket->phone = app_phone;
        socket->port = -1;
        socket->key = NULL;
        socket->key_length = 0;
        socket->specific_data = specific_data;

        rc = dyn_fifo_initialize(&socket->received, SOCKET_INITIAL_RECEIVED_SIZE);
        if (rc != EOK)
                return rc;

        rc = dyn_fifo_initialize(&socket->accepted, SOCKET_INITIAL_ACCEPTED_SIZE);
        if (rc != EOK)
                return rc;

        socket->socket_id = *socket_id;
        rc = socket_cores_add(local_sockets, socket->socket_id, socket);
        if (rc < 0)
                return rc;
we can measure that the first segment still executes in about the same time.

The remaining two segments, however, take longer and longer to execute.

The second segment is just the call to malloc().

In the third segment, we have the following calls:

  • dyn_fifo_initialize() → malloc()
  • socket_cores_add() → realloc()

So as can be seen above, all of these calls end up in the memory allocator. Apart from these calls, there are no loops in the function that could be slowing it down.

It really looks like the memory allocator is to blame. I am closing this ticket, as this does not appear to be a networking-related issue.

comment:4 by Jakub Jermář, 13 years ago

Resolution: deferred
Status: closed → reopened

Reopening to close again as not-a-defect.

comment:5 by Jakub Jermář, 13 years ago

Resolution: notadefect
Status: reopened → closed