          
Second, we examine the overhead of RCU writers (i.e. invoking {{{rcu_call()}}} and processing the callback in the background) compared to acquiring a spinlock. The figure above shows the number of traversals or updates of a five-element list with an increasing percentage of updates. The benchmark ran in 4 threads, one per CPU. In each iteration a thread selected at random whether to walk the entire list or to replace an item in the list (i.e. to update the list). All items were preallocated. Higher is better ;-).
          
- //ideal// - the list was accessed without any synchronization whatsoever on a single CPU, and the result was multiplied by the number of CPUs (i.e. 4)
- //a-rcu + spinlock// - each list traversal and update was protected by A-RCU; concurrent updates were synchronized by means of a spinlock
- //podzimek-rcu + spinlock// - same as //a-rcu// but protected by the preemptible version of Podzimek's RCU
- //spinlock// - both traversals and updates were guarded by an ordinary preemption-disabling spinlock
          

To reproduce these results, switch to the kernel console and run:
{{{
chtbench 6 1 0 -w
chtbench 7 4 0 -w
chtbench 7 4 5 -w
chtbench 7 4 10 -w
chtbench 7 4 20 -w
chtbench 7 4 30 -w
chtbench 7 4 40 -w
chtbench 7 4 60 -w
chtbench 7 4 100 -w
chtbench 8 4 0 -w
chtbench 8 4 5 -w
chtbench 8 4 10 -w
chtbench 8 4 20 -w
chtbench 8 4 30 -w
chtbench 8 4 40 -w
chtbench 8 4 60 -w
chtbench 8 4 100 -w
}}}
          
Then rebuild with Podzimek-RCU and rerun:
{{{
chtbench 7 4 0 -w
chtbench 7 4 5 -w
chtbench 7 4 10 -w
chtbench 7 4 20 -w
chtbench 7 4 30 -w
chtbench 7 4 40 -w
chtbench 7 4 60 -w
chtbench 7 4 100 -w
}}}
          
          
=== Hash table lookup scalability ===