
Changes between Version 4 and Version 5 of ConcurrentHashTable


Timestamp:
2012-08-08T12:03:57Z
Author:
Adam Hraska
Comment:

rcu update overhead benchmark results evaluation

  • ConcurrentHashTable

compared to acquiring a spinlock. The figure above shows the number of traversals
or updates of a five element list with an increasing percentage of updates.
The benchmark ran in 4 threads/CPUs. In each iteration each thread selected at random
whether to walk the entire list or to replace an item in the list (ie to update the
list). All items were preallocated. More is better ;-). The figure on the right
details the effects of increasing the update percentage by leaving out the data point for 0% updates.
- //ideal// - the list was accessed without any synchronization whatsoever on a single cpu;
  and the result multiplied by the number of cpus (ie 4)
- //a-rcu + spinlock// - each list traversal and update was protected by A-RCU; concurrent updates
  were synchronized by means of a spinlock, but removed items were freed via {{{rcu_call()}}}
- //podzimek-rcu + spinlock// - same as a-rcu but protected by the preemptible version of Podzimek's RCU
- //spinlock// - guarded by an ordinary preemption disabling spinlock

When the update fraction is small, RCU protected list traversals and modifications
significantly outperform a spinlock guarded list. However, as the percentage of updates
increases, the cost of acquiring a spinlock to modify the list quickly begins to dominate.
In the case of RCU protected lists, updating a list involves not only locking a spinlock
but also posting an RCU callback. Therefore, protecting an often modified list with a
spinlock eventually outperforms RCU guarded lists. The crossover point is around 40%
and 60% of updates for Podzimek's RCU and A-RCU respectively. Again, A-RCU fares a bit
better than the preemptible version of Podzimek's RCU.

To reproduce these results, switch to the kernel console and run: