source: mainline/kernel/generic/src/adt/cht.c@ 9fbc7fa

Last change on this file since 9fbc7fa was 9fbc7fa, checked in by Jakub Jermar <jakub@…>, 9 years ago

cht: Make sure done is initialized before use

1/*
2 * Copyright (c) 2012 Adam Hraska
3 * All rights reserved.
4 *
5 * Redistribution and use in source and binary forms, with or without
6 * modification, are permitted provided that the following conditions
7 * are met:
8 *
9 * - Redistributions of source code must retain the above copyright
10 * notice, this list of conditions and the following disclaimer.
11 * - Redistributions in binary form must reproduce the above copyright
12 * notice, this list of conditions and the following disclaimer in the
13 * documentation and/or other materials provided with the distribution.
14 * - The name of the author may not be used to endorse or promote products
15 * derived from this software without specific prior written permission.
16 *
17 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
18 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
19 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
20 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
21 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
22 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
23 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
24 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
26 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27 */
28
29
30/** @addtogroup genericadt
31 * @{
32 */
33
34/**
35 * @file
36 * @brief Scalable, resizable, concurrent lock-free hash table.
37 *
38 * CHT is a concurrent hash table that is scalable, resizable and lock-free.
39 * resizable = the number of buckets of the table increases or decreases
40 * depending on the average number of elements per bucket (ie load)
41 * scalable = accessing the table from more cpus increases performance
42 * almost linearly
43 * lock-free = common operations never block; even if any of the operations
44 * is preempted or interrupted at any time, other operations will still
45 * make forward progress
46 *
47 * CHT is designed for read mostly scenarios. Performance degrades as the
48 * fraction of updates (insert/remove) increases. Other data structures
49 * significantly outperform CHT if the fraction of updates exceeds ~40%.
50 *
51 * CHT tolerates hardware exceptions and may be accessed from exception
52 * handlers as long as the underlying RCU implementation is exception safe.
53 *
54 * @par Caveats
55 *
56 * 0) Never assume an item is still in the table.
57 * The table may be accessed concurrently; therefore, other threads may
58 * insert or remove an item at any time. Do not assume an item is still
59 * in the table if cht_find() just returned it to you. Similarly, an
60 * item may have already been inserted by the time cht_find() returns NULL.
61 *
62 * 1) Always use RCU read locks when searching the table.
63 * Holding an RCU lock guarantees that an item found in the table remains
64 * valid (eg is not freed) even if the item was removed from the table
65 * in the meantime by another thread.
66 *
67 * 2) Never update values in place.
68 * Do not update items in the table in place, ie directly. The changes
69 * will not propagate to other readers (on other cpus) immediately or even
70 * correctly. Some readers may then encounter items that have only some
71 * of their fields changed or are completely inconsistent.
72 *
73 * Instead consider inserting an updated/changed copy of the item and
74 * removing the original item. Or contact the maintainer to provide
75 * you with a function that atomically replaces an item with a copy.
76 *
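 * A hedged sketch of the copy-based update suggested above (item_t, its
 * cht_link_t member named link, item_dup() and new_value are illustrative
 * assumptions, not part of the CHT API):
 * @code
 * rcu_read_lock();
 * cht_link_t *found = cht_find(&h, key);
 * if (found) {
 *     item_t *old_item = member_to_inst(found, item_t, link);
 *     item_t *new_item = item_dup(old_item);  // Allocate and copy all fields.
 *     new_item->value = new_value;            // Change the copy, not the original.
 *     cht_insert(&h, &new_item->link);        // Publish the updated copy.
 *     cht_remove_item(&h, found);             // Retire the original.
 * }
 * rcu_read_unlock();
 * @endcode
 *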
77 * 3) Use cht_insert_unique() instead of checking for duplicates with cht_find()
78 * The following code is prone to race conditions:
79 * @code
80 * if (NULL == cht_find(&h, key)) {
81 * // If another thread inserts an item here, we'll insert a duplicate.
82 * cht_insert(&h, item);
83 * }
84 * @endcode
85 * See cht_insert_unique() on how to correctly fix this.
86 *
87 *
88 * @par Semantics
89 *
90 * Lazy readers = cht_find_lazy(), cht_find_next_lazy()
91 * Readers = lazy readers, cht_find(), cht_find_next()
92 * Updates = cht_insert(), cht_insert_unique(), cht_remove_key(),
93 * cht_remove_item()
94 *
95 * Readers (but not lazy readers) are guaranteed to see the effects
96 * of @e completed updates. In other words, if cht_find() is invoked
97 * after a cht_insert() @e returned eg on another cpu, cht_find() is
98 * guaranteed to see the inserted item.
99 *
100 * Similarly, updates see the effects of @e completed updates. For example,
101 * issuing cht_remove() after a cht_insert() for that key returned (even
102 * on another cpu) is guaranteed to remove the inserted item.
103 *
104 * Reading or updating the table concurrently with other updates
105 * always returns consistent data and never corrupts the table.
106 * However the effects of concurrent updates may or may not be
107 * visible to all other concurrent readers or updaters. Eg, not
108 * all readers may see that an item has already been inserted
109 * if cht_insert() has not yet returned.
110 *
111 * Lazy readers are guaranteed to eventually see updates but it
112 * may take some time (possibly milliseconds) after the update
113 * completes for the change to propagate to lazy readers on all
114 * cpus.
115 *
116 * @par Implementation
117 *
118 * Collisions in CHT are resolved with chaining. The number of buckets
119 * is always a power of 2. Each bucket is represented with a single linked
120 * lock-free list [1]. Items in buckets are sorted by their mixed hashes
121 * in ascending order. All buckets are terminated with a single global
122 * sentinel node whose mixed hash value is the greatest possible.
123 *
124 * CHT with 2^k buckets uses the k most significant bits of a hash value
125 * to determine the bucket number where an item is to be stored. To
126 * avoid storing all items in a single bucket if the user supplied
127 * hash function does not produce uniform hashes, hash values are
128 * mixed first so that the top bits of a mixed hash change even if hash
129 * values differ only in the least significant bits. The mixed hash
130 * values are cached in cht_link.hash (which is overwritten once the
131 * item is scheduled for removal via rcu_call).
132 *
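 * As a rough illustration of the above (a sketch only; the authoritative
 * helpers are memoize_node_hash() and calc_bucket_idx() further below), a
 * table with 2^k buckets derives an item's bucket index roughly like this:
 * @code
 * size_t mixed = hash_mix(h->op->hash(item));       // Spread entropy into the top bits.
 * size_t idx = mixed >> (8 * sizeof(size_t) - k);   // Keep the k most significant bits.
 * @endcode
 *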
133 * A new item is inserted before all other existing items in the bucket
134 * with the same hash value as the newly inserted item (a la the original
135 * lock-free list [2]). Placing new items at the start of a same-hash
136 * sequence of items (eg duplicates) allows us to easily check for duplicates
137 * in cht_insert_unique(). The function can first check that there are
138 * no duplicates of the newly inserted item amongst the items with the
139 * same hash as the new item. If there were no duplicates the new item
140 * is linked before the same-hash items. Inserting a duplicate while
141 * the function is checking for duplicates is detected as a change of
142 * the link to the first checked same-hash item (and the search for
143 * duplicates can be restarted).
144 *
145 * @par Table resize algorithm
146 *
147 * Table resize is based on [3] and [5]. First, a new bucket head array
148 * is allocated and initialized. Second, old bucket heads are moved
149 * to the new bucket head array with the protocol mentioned in [5].
150 * At this point updaters start using the new bucket heads. Third,
151 * buckets are split (or joined) so that the table can make use of
152 * the extra bucket head slots in the new array (or stop wasting space
153 * with the unnecessary extra slots in the old array). Splitting
154 * or joining buckets employs a custom protocol. Last, the new array
155 * replaces the original bucket array.
156 *
157 * A single background work item (of the system work queue) guides
158 * resizing of the table. If an updater detects that the bucket it
159 * is about to access is undergoing a resize (ie its head is moving
160 * or it needs to be split/joined), it helps out and completes the
161 * head move or the bucket split/join.
162 *
163 * The table always grows or shrinks by a factor of 2. Because items
164 * are assigned a bucket based on the top k bits of their mixed hash
165 * values, when growing the table each bucket is split into two buckets
166 * and all items of the two new buckets come from the single bucket in the
167 * original table. Ie items from separate buckets in the original table
168 * never intermix in the new buckets. Moreover
169 * since the buckets are sorted by their mixed hash values the items
170 * at the beginning of the old bucket will end up in the first new
171 * bucket while all the remaining items of the old bucket will end up
172 * in the second new bucket. Therefore, there is a single point where
173 * to split the linked list of the old bucket into two correctly sorted
174 * linked lists of the new buckets:
175 *                            .- bucket split
176 *                            |
177 *            <-- first -->   v <-- second -->
178 *  [old] --> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
179 *              ^                 ^
180 *  [new0] -----+                 |
181 *  [new1] -----------------------+
182 *
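 * In terms of bucket indices: when growing, the items of old bucket i end up
 * in new buckets 2*i and 2*i + 1 (cf grow_idx() and grow_to_split_idx()
 * declared below). A sketch of the index arithmetic these helpers are
 * expected to perform:
 * @code
 * size_t new_first_idx = old_idx << 1;         // Bucket receiving the first part.
 * size_t new_second_idx = (old_idx << 1) | 1;  // Bucket receiving the second part.
 * @endcode
 *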
183 * Resize in greater detail:
184 *
185 * a) First, a resizer (a single background system work queue item
186 * in charge of resizing the table) allocates and initializes a new
187 * bucket head array. New bucket heads are pointed to the sentinel
188 * and marked Invalid (in the lower order bits of the pointer to the
189 * next item, ie the sentinel in this case):
190 *
191 *  [old, N] --> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
192 *                                                     ^   ^
193 *  [new0, Inv] ---------------------------------------+   |
194 *  [new1, Inv] -------------------------------------------+
195 *
196 *
197 * b) Second, the resizer starts moving old bucket heads with the following
198 * lock-free protocol (from [5]) where cas(variable, expected_val, new_val)
199 * is short for compare-and-swap:
200 *
201 *  old head       new0 head      transition to next state
202 *  --------       ---------      ------------------------
203 *  addr, N        sentinel, Inv  cas(old, (addr, N), (addr, Const))
204 *                                .. mark the old head as immutable, so that
205 *                                   updaters do not relink it to other nodes
206 *                                   until the head move is done.
207 *  addr, Const    sentinel, Inv  cas(new0, (sentinel, Inv), (addr, N))
208 *                                .. move the address to the new head and mark
209 *                                   the new head normal so updaters can start
210 *                                   using it.
211 *  addr, Const    addr, N        cas(old, (addr, Const), (addr, Inv))
212 *                                .. mark the old head Invalid to signify
213 *                                   the head move is done.
214 *  addr, Inv      addr, N
215 *
216 * Notice that concurrent updaters may step in at any point and correctly
217 * complete the head move without disrupting the resizer. At worst, the
218 * resizer or other concurrent updaters will attempt a number of CAS() that
219 * will correctly fail.
220 *
221 *  [old, Inv] -> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
222 *                  ^                                   ^
223 *  [new0, N] ------+                                   |
224 *  [new1, Inv] ----------------------------------------+
225 *
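 * As a hedged sketch in terms of this file's own primitives (the real
 * implementation lives in mark_const() and complete_head_move() below; any
 * of these CASes may legitimately fail if another cpu already made the step):
 * @code
 * cas_link(old_head, addr, N_NORMAL, addr, N_CONST);        // Freeze the old head.
 * cas_link(new_head, &sentinel, N_INVALID, addr, N_NORMAL); // Publish it via the new head.
 * cas_link(old_head, addr, N_CONST, addr, N_INVALID);       // Mark the old head Invalid/done.
 * @endcode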
226 *
227 * c) Third, buckets are split if the table is growing; or joined if
228 * shrinking (by the resizer or updaters depending on whoever accesses
229 * the bucket first). See split_bucket() and join_buckets() for details.
230 *
231 * 1) Mark the last item of new0 with JOIN_FOLLOWS:
232 *  [old, Inv] -> [00b] -> [01b, JF] -> [10b] -> [11b] -> sentinel
233 *                  ^                                       ^
234 *  [new0, N] ------+                                       |
235 *  [new1, Inv] --------------------------------------------+
236 *
237 * 2) Mark the first item of new1 with JOIN_NODE:
238 *  [old, Inv] -> [00b] -> [01b, JF] -> [10b, JN] -> [11b] -> sentinel
239 *                  ^                                           ^
240 *  [new0, N] ------+                                           |
241 *  [new1, Inv] ------------------------------------------------+
242 *
243 * 3) Point new1 to the join-node and mark new1 NORMAL.
244 *  [old, Inv] -> [00b] -> [01b, JF] -> [10b, JN] -> [11b] -> sentinel
245 *                  ^                     ^
246 *  [new0, N] ------+                     |
247 *  [new1, N] ----------------------------+
248 *
249 *
250 * d) Fourth, the resizer cleans up extra marks added during bucket
251 * splits/joins but only when it is sure all updaters are accessing
252 * the table via the new bucket heads only (ie it is certain there
253 * are no delayed updaters unaware of the resize and accessing the
254 * table via the old bucket head).
255 *
256 *  [old, Inv] ---+
257 *                v
258 *  [new0, N] --> [00b] -> [01b, N] ---+
259 *                                     v
260 *  [new1, N] --> [10b, N] -> [11b] -> sentinel
261 *
262 *
263 * e) Last, the resizer publishes the new bucket head array for everyone
264 * to see and use. This signals the end of the resize and the old bucket
265 * array is freed.
266 *
267 *
268 * To understand details of how the table is resized, read [1, 3, 5]
269 * and comments in join_buckets(), split_bucket().
270 *
271 *
272 * [1] High performance dynamic lock-free hash tables and list-based sets,
273 * Michael, 2002
274 * http://www.research.ibm.com/people/m/michael/spaa-2002.pdf
275 * [2] Lock-free linked lists using compare-and-swap,
276 * Valois, 1995
277 * http://people.csail.mit.edu/bushl2/rpi/portfolio/lockfree-grape/documents/lock-free-linked-lists.pdf
278 * [3] Resizable, scalable, concurrent hash tables via relativistic programming,
279 * Triplett, 2011
280 * http://www.usenix.org/event/atc11/tech/final_files/Triplett.pdf
281 * [4] Split-ordered Lists: Lock-free Extensible Hash Tables,
282 * Shavit, 2006
283 * http://www.cs.ucf.edu/~dcm/Teaching/COT4810-Spring2011/Literature/SplitOrderedLists.pdf
284 * [5] Towards a Scalable Non-blocking Coding Style,
285 * Click, 2008
286 * http://www.azulsystems.com/events/javaone_2008/2008_CodingNonBlock.pdf
287 */
288
289
290#include <adt/cht.h>
291#include <adt/hash.h>
292#include <debug.h>
293#include <memstr.h>
294#include <mm/slab.h>
295#include <arch/barrier.h>
296#include <compiler/barrier.h>
297#include <atomic.h>
298#include <synch/rcu.h>
299
300#ifdef CONFIG_DEBUG
301/* Do not enclose in parentheses. */
302#define DBG(x) x
303#else
304#define DBG(x)
305#endif
306
307/* Logarithm of the min bucket count. Must be at least 3. 2^6 == 64 buckets. */
308#define CHT_MIN_ORDER 6
309/* Logarithm of the max bucket count. */
310#define CHT_MAX_ORDER (8 * sizeof(size_t))
311/* Minimum number of hash table buckets. */
312#define CHT_MIN_BUCKET_CNT (1 << CHT_MIN_ORDER)
313/* Does not have to be a power of 2. */
314#define CHT_MAX_LOAD 2
315
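/*
 * Illustration only (a sketch of the intent, not the exact trigger used by
 * the insert/remove paths further below): with 2^order buckets and the
 * default max_load, the table asks the resizer to grow roughly once
 *
 *   item_cnt > CHT_MAX_LOAD * (1 << order)
 *
 * and to shrink again once the load falls well below that bound.
 */
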
316typedef cht_ptr_t marked_ptr_t;
317typedef bool (*equal_pred_t)(void *arg, const cht_link_t *item);
318
319/** The following marks are used to mark items and bucket heads.
320 *
321 * They are stored in the two low order bits of the next item pointers.
322 * Some marks may be combined. Some marks share the same binary value and
323 * are distinguished only by context (eg bucket head vs an ordinary item),
324 * in particular by walk_mode_t.
325 */
326typedef enum mark {
327 /** Normal non-deleted item or a valid bucket head. */
328 N_NORMAL = 0,
329 /** Logically deleted item that might have already been unlinked.
330 *
331 * May be combined with N_JOIN and N_JOIN_FOLLOWS. Applicable only
332 * to items; never to bucket heads.
333 *
334 * Once marked deleted an item remains marked deleted.
335 */
336 N_DELETED = 1,
337 /** Immutable bucket head.
338 *
339 * The bucket is being moved or joined with another and its (old) head
340 * must not be modified.
341 *
342 * May be combined with N_INVALID. Applicable only to old bucket heads,
343 * ie cht_t.b and not cht_t.new_b.
344 */
345 N_CONST = 1,
346 /** Invalid bucket head. The bucket head must not be modified.
347 *
348 * Old bucket heads (ie cht_t.b) are marked invalid if they have
349 * already been moved to cht_t.new_b or if the bucket had already
350 * been merged with another when shrinking the table. New bucket
351 * heads (ie cht_t.new_b) are marked invalid if the old bucket had
352 * not yet been moved or if an old bucket had not yet been split
353 * when growing the table.
354 */
355 N_INVALID = 3,
356 /** The item is a join node, ie joining two buckets
357 *
358 * A join node is either the first node of the second part of
359 * a bucket to be split; or it is the first node of the bucket
360 * to be merged into/appended to/joined with another bucket.
361 *
362 * May be combined with N_DELETED. Applicable only to items, never
363 * to bucket heads.
364 *
365 * Join nodes are referred to from two different buckets and may,
366 * therefore, not be safely/atomically unlinked from both buckets.
367 * As a result join nodes are not unlinked but rather just marked
368 * deleted. Once resize completes join nodes marked deleted are
369 * garbage collected.
370 */
371 N_JOIN = 2,
372 /** The next node is a join node and will soon be marked so.
373 *
374 * A join-follows node is the last node of the first part of bucket
375 * that is to be split, ie it is the last node that will remain
376 * in the same bucket after splitting it.
377 *
378 * May be combined with N_DELETED. Applicable to items as well
379 * as to bucket heads of the bucket to be split (but only in cht_t.new_b).
380 */
381 N_JOIN_FOLLOWS = 2,
382 /** Bit mask to filter out the address to the next item from the next ptr. */
383 N_MARK_MASK = 3
384} mark_t;
385
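/*
 * Illustration only: the marks above are stored in the two low order bits of
 * the next pointer. make_link(), get_next() and get_mark() (declared further
 * below) are expected to pack and unpack them roughly like this sketch:
 *
 *     marked_ptr_t make_link(const cht_link_t *next, mark_t mark)
 *     {
 *         return (marked_ptr_t) ((uintptr_t) next | mark);
 *     }
 *
 *     cht_link_t *get_next(marked_ptr_t link)
 *     {
 *         return (cht_link_t *) (link & ~(marked_ptr_t) N_MARK_MASK);
 *     }
 *
 *     mark_t get_mark(marked_ptr_t link)
 *     {
 *         return (mark_t) (link & N_MARK_MASK);
 *     }
 */
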
386/** Determines how a bucket chain is walked, ie how node marks are to be interpreted. */
387typedef enum walk_mode {
388 /** The table is not resizing. */
389 WM_NORMAL = 4,
390 /** The table is undergoing a resize. Join nodes may be encountered. */
391 WM_LEAVE_JOIN,
392 /** The table is growing. A join-follows node may be encountered. */
393 WM_MOVE_JOIN_FOLLOWS
394} walk_mode_t;
395
396/** Bucket position window. */
397typedef struct wnd {
398 /** Pointer to cur's predecessor. */
399 marked_ptr_t *ppred;
400 /** Current item. */
401 cht_link_t *cur;
402 /** Last encountered item. Deleted or not. */
403 cht_link_t *last;
404} wnd_t;
405
406
407/* Sentinel node used by all buckets. Stores the greatest possible hash value.*/
408static const cht_link_t sentinel = {
409 /* NULL and N_NORMAL */
410 .link = 0 | N_NORMAL,
411 .hash = -1
412};
413
414
415static size_t size_to_order(size_t bucket_cnt, size_t min_order);
416static cht_buckets_t *alloc_buckets(size_t order, bool set_invalid,
417 bool can_block);
418static inline cht_link_t *find_lazy(cht_t *h, void *key);
419static cht_link_t *search_bucket(cht_t *h, marked_ptr_t head, void *key,
420 size_t search_hash);
421static cht_link_t *find_resizing(cht_t *h, void *key, size_t hash,
422 marked_ptr_t old_head, size_t old_idx);
423static bool insert_impl(cht_t *h, cht_link_t *item, cht_link_t **dup_item);
424static bool insert_at(cht_link_t *item, const wnd_t *wnd, walk_mode_t walk_mode,
425 bool *resizing);
426static bool has_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
427 cht_link_t *cur, cht_link_t **dup_item);
428static cht_link_t *find_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
429 cht_link_t *start);
430static bool remove_pred(cht_t *h, size_t hash, equal_pred_t pred, void *pred_arg);
431static bool delete_at(cht_t *h, wnd_t *wnd, walk_mode_t walk_mode,
432 bool *deleted_but_gc, bool *resizing);
433static bool mark_deleted(cht_link_t *cur, walk_mode_t walk_mode, bool *resizing);
434static bool unlink_from_pred(wnd_t *wnd, walk_mode_t walk_mode, bool *resizing);
435static bool find_wnd_and_gc_pred(cht_t *h, size_t hash, walk_mode_t walk_mode,
436 equal_pred_t pred, void *pred_arg, wnd_t *wnd, bool *resizing);
437static bool find_wnd_and_gc(cht_t *h, size_t hash, walk_mode_t walk_mode,
438 wnd_t *wnd, bool *resizing);
439static bool gc_deleted_node(cht_t *h, walk_mode_t walk_mode, wnd_t *wnd,
440 bool *resizing);
441static bool join_completed(cht_t *h, const wnd_t *wnd);
442static void upd_resizing_head(cht_t *h, size_t hash, marked_ptr_t **phead,
443 bool *join_finishing, walk_mode_t *walk_mode);
444static void item_removed(cht_t *h);
445static void item_inserted(cht_t *h);
446static void free_later(cht_t *h, cht_link_t *item);
447static void help_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head);
448static void start_head_move(marked_ptr_t *psrc_head);
449static void mark_const(marked_ptr_t *psrc_head);
450static void complete_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head);
451static void split_bucket(cht_t *h, marked_ptr_t *psrc_head,
452 marked_ptr_t *pdest_head, size_t split_hash);
453static void mark_join_follows(cht_t *h, marked_ptr_t *psrc_head,
454 size_t split_hash, wnd_t *wnd);
455static void mark_join_node(cht_link_t *join_node);
456static void join_buckets(cht_t *h, marked_ptr_t *psrc_head,
457 marked_ptr_t *pdest_head, size_t split_hash);
458static void link_to_join_node(cht_t *h, marked_ptr_t *pdest_head,
459 cht_link_t *join_node, size_t split_hash);
460static void resize_table(work_t *arg);
461static void grow_table(cht_t *h);
462static void shrink_table(cht_t *h);
463static void cleanup_join_node(cht_t *h, marked_ptr_t *new_head);
464static void clear_join_and_gc(cht_t *h, cht_link_t *join_node,
465 marked_ptr_t *new_head);
466static void cleanup_join_follows(cht_t *h, marked_ptr_t *new_head);
467static marked_ptr_t make_link(const cht_link_t *next, mark_t mark);
468static cht_link_t * get_next(marked_ptr_t link);
469static mark_t get_mark(marked_ptr_t link);
470static void next_wnd(wnd_t *wnd);
471static bool same_node_pred(void *node, const cht_link_t *item2);
472static size_t calc_key_hash(cht_t *h, void *key);
473static size_t node_hash(cht_t *h, const cht_link_t *item);
474static size_t calc_node_hash(cht_t *h, const cht_link_t *item);
475static void memoize_node_hash(cht_t *h, cht_link_t *item);
476static size_t calc_split_hash(size_t split_idx, size_t order);
477static size_t calc_bucket_idx(size_t hash, size_t order);
478static size_t grow_to_split_idx(size_t old_idx);
479static size_t grow_idx(size_t idx);
480static size_t shrink_idx(size_t idx);
481static marked_ptr_t cas_link(marked_ptr_t *link, const cht_link_t *cur_next,
482 mark_t cur_mark, const cht_link_t *new_next, mark_t new_mark);
483static marked_ptr_t _cas_link(marked_ptr_t *link, marked_ptr_t cur,
484 marked_ptr_t new);
485static void cas_order_barrier(void);
486
487static void dummy_remove_callback(cht_link_t *item)
488{
489 /* empty */
490}
491
492/** Creates a concurrent hash table.
493 *
494 * @param h Valid pointer to a cht_t instance.
495 * @param op Item specific operations. All operations are compulsory.
496 * @return True if successfully created the table. False otherwise.
497 */
498bool cht_create_simple(cht_t *h, cht_ops_t *op)
499{
500 return cht_create(h, 0, 0, 0, false, op);
501}
502
503/** Creates a concurrent hash table.
504 *
505 * @param h Valid pointer to a cht_t instance.
506 * @param init_size The initial number of buckets the table should contain.
507 * The table may be shrunk below this value if deemed necessary.
508 * Uses the default value if 0.
509 * @param min_size Minimum number of buckets that the table should contain.
510 * The number of buckets never drops below this value,
511 * although it may be rounded up internally as appropriate.
512 * Uses the default value if 0.
513 * @param max_load Maximum average number of items per bucket that is allowed
514 * before the table grows.
515 * @param can_block If true creating the table blocks until enough memory
516 * is available (possibly indefinitely). Otherwise,
517 * table creation does not block and returns immediately
518 * even if not enough memory is available.
519 * @param op Item specific operations. All operations are compulsory.
520 * @return True if successfully created the table. False otherwise.
521 */
522bool cht_create(cht_t *h, size_t init_size, size_t min_size, size_t max_load,
523 bool can_block, cht_ops_t *op)
524{
525 ASSERT(h);
526 ASSERT(op && op->hash && op->key_hash && op->equal && op->key_equal);
527 /* Memoized hashes are stored in the rcu_link.func function pointer. */
528 STATIC_ASSERT(sizeof(size_t) == sizeof(rcu_func_t));
529 ASSERT(sentinel.hash == (uintptr_t)sentinel.rcu_link.func);
530
531 /* All operations are compulsory. */
532 if (!op || !op->hash || !op->key_hash || !op->equal || !op->key_equal)
533 return false;
534
535 size_t min_order = size_to_order(min_size, CHT_MIN_ORDER);
536 size_t order = size_to_order(init_size, min_order);
537
538 h->b = alloc_buckets(order, false, can_block);
539
540 if (!h->b)
541 return false;
542
543 h->max_load = (max_load == 0) ? CHT_MAX_LOAD : max_load;
544 h->min_order = min_order;
545 h->new_b = NULL;
546 h->op = op;
547 atomic_set(&h->item_cnt, 0);
548 atomic_set(&h->resize_reqs, 0);
549
550 if (NULL == op->remove_callback) {
551 h->op->remove_callback = dummy_remove_callback;
552 }
553
554 /*
555 * Cached item hashes are stored in item->rcu_link.func. Once the item
556 * is deleted rcu_link.func will contain the value of invalid_hash.
557 */
558 h->invalid_hash = (uintptr_t)h->op->remove_callback;
559
560 /* Ensure the initialization takes place before we start using the table. */
561 write_barrier();
562
563 return true;
564}
565
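/*
 * Usage sketch (illustrative only; val_t, its link member, the key encoding
 * and the callback bodies are assumptions made for this example - the
 * authoritative cht_ops_t definition lives in <adt/cht.h>; member_to_inst()
 * converts a member pointer back to its enclosing structure):
 *
 *     typedef struct {
 *         cht_link_t link;
 *         size_t key;
 *     } val_t;
 *
 *     static size_t val_hash(const cht_link_t *item)
 *     {
 *         return member_to_inst(item, val_t, link)->key;
 *     }
 *
 *     static size_t val_key_hash(void *key)
 *     {
 *         return (size_t) key;
 *     }
 *
 *     static bool val_equal(const cht_link_t *item1, const cht_link_t *item2)
 *     {
 *         return val_hash(item1) == val_hash(item2);
 *     }
 *
 *     static bool val_key_equal(void *key, const cht_link_t *item)
 *     {
 *         return (size_t) key == val_hash(item);
 *     }
 *
 *     static cht_ops_t val_ops = {
 *         .hash = val_hash,
 *         .key_hash = val_key_hash,
 *         .equal = val_equal,
 *         .key_equal = val_key_equal,
 *         .remove_callback = NULL
 *     };
 *
 *     cht_t values;
 *     if (!cht_create_simple(&values, &val_ops)) {
 *         // Handle the allocation failure.
 *     }
 */
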
566/** Allocates and initializes 2^order buckets.
567 *
568 * All bucket heads are initialized to point to the sentinel node.
569 *
570 * @param order The number of buckets to allocate is 2^order.
571 * @param set_invalid Bucket heads are marked invalid if true; otherwise
572 * they are marked N_NORMAL.
573 * @param can_block If true memory allocation blocks until enough memory
574 * is available (possibly indefinitely). Otherwise,
575 * memory allocation does not block.
576 * @return Newly allocated and initialized buckets or NULL if not enough memory.
577 */
578static cht_buckets_t *alloc_buckets(size_t order, bool set_invalid, bool can_block)
579{
580 size_t bucket_cnt = (1 << order);
581 size_t bytes =
582 sizeof(cht_buckets_t) + (bucket_cnt - 1) * sizeof(marked_ptr_t);
583 cht_buckets_t *b = malloc(bytes, can_block ? 0 : FRAME_ATOMIC);
584
585 if (!b)
586 return NULL;
587
588 b->order = order;
589
590 marked_ptr_t head_link = set_invalid
591 ? make_link(&sentinel, N_INVALID)
592 : make_link(&sentinel, N_NORMAL);
593
594 for (size_t i = 0; i < bucket_cnt; ++i) {
595 b->head[i] = head_link;
596 }
597
598 return b;
599}
600
601/** Returns the smallest k such that bucket_cnt <= 2^k and min_order <= k.*/
602static size_t size_to_order(size_t bucket_cnt, size_t min_order)
603{
604 size_t order = min_order;
605
606 /* Find a power of two such that bucket_cnt <= 2^order */
607 do {
608 if (bucket_cnt <= ((size_t)1 << order))
609 return order;
610
611 ++order;
612 } while (order < CHT_MAX_ORDER);
613
614 return order;
615}
616
617/** Destroys a CHT successfully created via cht_create().
618 *
619 * Waits for all outstanding concurrent operations to complete and
620 * frees internal allocated resources. The table is however not cleared
621 * and items already present in the table (if any) are leaked.
622 */
623void cht_destroy(cht_t *h)
624{
625 cht_destroy_unsafe(h);
626
627 /* You must clear the table of items. Otherwise cht_destroy will leak. */
628 ASSERT(atomic_get(&h->item_cnt) == 0);
629}
630
631/** Destroys a successfully created CHT but does no error checking. */
632void cht_destroy_unsafe(cht_t *h)
633{
634 /* Wait for resize to complete. */
635 while (0 < atomic_get(&h->resize_reqs)) {
636 rcu_barrier();
637 }
638
639 /* Wait for all remove_callback()s to complete. */
640 rcu_barrier();
641
642 free(h->b);
643 h->b = NULL;
644}
645
646/** Returns the first item equal to the search key or NULL if not found.
647 *
648 * The call must be enclosed in a rcu_read_lock()/unlock() pair. The
649 * returned item is guaranteed to be allocated until rcu_read_unlock()
650 * although the item may be concurrently removed from the table by another
651 * cpu.
652 *
653 * Further items matching the key may be retrieved via cht_find_next().
654 *
655 * cht_find() sees the effects of any completed cht_remove(), cht_insert().
656 * If a concurrent remove or insert had not yet completed cht_find() may
657 * or may not see the effects of it (eg it may find an item being removed).
658 *
659 * @param h CHT to operate on.
660 * @param key Search key as defined by cht_ops_t.key_equal() and .key_hash().
661 * @return First item equal to the key or NULL if such an item does not exist.
662 */
663cht_link_t *cht_find(cht_t *h, void *key)
664{
665 /* Make the most recent changes to the table visible. */
666 read_barrier();
667 return cht_find_lazy(h, key);
668}
669
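/*
 * Lookup sketch (illustrative; val_t and its link member are the assumptions
 * carried over from the creation example above):
 *
 *     rcu_read_lock();
 *     cht_link_t *link = cht_find(&values, (void *) key);
 *     if (link) {
 *         val_t *v = member_to_inst(link, val_t, link);
 *         // Use v only while this RCU read section is held.
 *     }
 *     rcu_read_unlock();
 */
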
670/** Returns the first item equal to the search key or NULL if not found.
671 *
672 * Unlike cht_find(), cht_find_lazy() may not see the effects of
673 * cht_remove() or cht_insert() even though they have already completed.
674 * It may take a couple of milliseconds for those changes to propagate
675 * and become visible to cht_find_lazy(). On the other hand, cht_find_lazy()
676 * operates a bit faster than cht_find().
677 *
678 * See cht_find() for more details.
679 */
680cht_link_t *cht_find_lazy(cht_t *h, void *key)
681{
682 return find_lazy(h, key);
683}
684
685/** Finds the first item equal to the search key. */
686static inline cht_link_t *find_lazy(cht_t *h, void *key)
687{
688 ASSERT(h);
689 /* See docs to cht_find() and cht_find_lazy(). */
690 ASSERT(rcu_read_locked());
691
692 size_t hash = calc_key_hash(h, key);
693
694 cht_buckets_t *b = rcu_access(h->b);
695 size_t idx = calc_bucket_idx(hash, b->order);
696 /*
697 * No need for access_once. b->head[idx] will point to an allocated node
698 * even if marked invalid until we exit rcu read section.
699 */
700 marked_ptr_t head = b->head[idx];
701
702 /* Undergoing a resize - take the slow path. */
703 if (N_INVALID == get_mark(head))
704 return find_resizing(h, key, hash, head, idx);
705
706 return search_bucket(h, head, key, hash);
707}
708
709/** Returns the next item matching \a item.
710 *
711 * Must be enclosed in a rcu_read_lock()/unlock() pair. Effects of
712 * completed cht_remove(), cht_insert() are guaranteed to be visible
713 * to cht_find_next().
714 *
715 * See cht_find() for more details.
716 */
717cht_link_t *cht_find_next(cht_t *h, const cht_link_t *item)
718{
719 /* Make the most recent changes to the table visible. */
720 read_barrier();
721 return cht_find_next_lazy(h, item);
722}
723
724/** Returns the next item matching \a item.
725 *
726 * Must be enclosed in a rcu_read_lock()/unlock() pair. Effects of
727 * completed cht_remove(), cht_insert() may or may not be visible
728 * to cht_find_next_lazy().
729 *
730 * See cht_find_lazy() for more details.
731 */
732cht_link_t *cht_find_next_lazy(cht_t *h, const cht_link_t *item)
733{
734 ASSERT(h);
735 ASSERT(rcu_read_locked());
736 ASSERT(item);
737
738 return find_duplicate(h, item, calc_node_hash(h, item), get_next(item->link));
739}
740
741/** Searches the bucket at head for key using search_hash. */
742static inline cht_link_t *search_bucket(cht_t *h, marked_ptr_t head, void *key,
743 size_t search_hash)
744{
745 /*
746 * It is safe to access nodes even outside of this bucket (eg when
747 * splitting the bucket). The resizer makes sure that any node we
748 * may find by following the next pointers is allocated.
749 */
750
751 cht_link_t *cur = NULL;
752 marked_ptr_t prev = head;
753
754try_again:
755 /* Filter out items with different hashes. */
756 do {
757 cur = get_next(prev);
758 ASSERT(cur);
759 prev = cur->link;
760 } while (node_hash(h, cur) < search_hash);
761
762 /*
763 * Only search for an item with an equal key if cur is not the sentinel
764 * node or a node with a different hash.
765 */
766 while (node_hash(h, cur) == search_hash) {
767 if (h->op->key_equal(key, cur)) {
768 if (!(N_DELETED & get_mark(cur->link)))
769 return cur;
770 }
771
772 cur = get_next(cur->link);
773 ASSERT(cur);
774 }
775
776 /*
777 * In the unlikely case that we have encountered a node whose cached
778 * hash has been overwritten due to a pending rcu_call for it, skip
779 * the node and try again.
780 */
781 if (node_hash(h, cur) == h->invalid_hash) {
782 prev = cur->link;
783 goto try_again;
784 }
785
786 return NULL;
787}
788
789/** Searches for the key while the table is undergoing a resize. */
790static cht_link_t *find_resizing(cht_t *h, void *key, size_t hash,
791 marked_ptr_t old_head, size_t old_idx)
792{
793 ASSERT(N_INVALID == get_mark(old_head));
794 ASSERT(h->new_b);
795
796 size_t new_idx = calc_bucket_idx(hash, h->new_b->order);
797 marked_ptr_t new_head = h->new_b->head[new_idx];
798 marked_ptr_t search_head = new_head;
799
800 /* Growing. */
801 if (h->b->order < h->new_b->order) {
802 /*
803 * Old bucket head is invalid, so it must have been already
804 * moved. Make the new head visible if still not visible, ie
805 * invalid.
806 */
807 if (N_INVALID == get_mark(new_head)) {
808 /*
809 * We should be searching a newly added bucket but the old
810 * moved bucket has not yet been split (it's marked invalid)
811 * or we have not yet seen the split.
812 */
813 if (grow_idx(old_idx) != new_idx) {
814 /*
815 * Search the moved bucket. It is guaranteed to contain
816 * items of the newly added bucket that were present
817 * before the moved bucket was split.
818 */
819 new_head = h->new_b->head[grow_idx(old_idx)];
820 }
821
822 /* new_head is now the moved bucket, either valid or invalid. */
823
824 /*
825 * The old bucket was definitely moved to new_head but the
826 * change of new_head had not yet propagated to this cpu.
827 */
828 if (N_INVALID == get_mark(new_head)) {
829 /*
830 * We could issue a read_barrier() and make the now valid
831 * moved bucket head new_head visible, but instead fall back
832 * on using the old bucket. Although the old bucket head is
833 * invalid, it points to a node that is allocated and in the
834 * right bucket. Before the node can be freed, it must be
835 * unlinked from the head (or another item after that item
836 * modified the new_head) and a grace period must elapse.
837 * As a result had the node been already freed the grace
838 * period preceding the free() would make the unlink and
839 * any changes to new_head visible. Therefore, it is safe
840 * to use the node pointed to from the old bucket head.
841 */
842
843 search_head = old_head;
844 } else {
845 search_head = new_head;
846 }
847 }
848
849 return search_bucket(h, search_head, key, hash);
850 } else if (h->b->order > h->new_b->order) {
851 /* Shrinking. */
852
853 /* Index of the bucket in the old table that was moved. */
854 size_t move_src_idx = grow_idx(new_idx);
855 marked_ptr_t moved_old_head = h->b->head[move_src_idx];
856
857 /*
858 * h->b->head[move_src_idx] had already been moved to new_head
859 * but the change to new_head had not yet propagated to us.
860 */
861 if (N_INVALID == get_mark(new_head)) {
862 /*
863 * new_head is definitely valid and we could make it visible
864 * to this cpu with a read_barrier(). Instead, use the bucket
865 * in the old table that was moved even though it is now marked
866 * as invalid. The node it points to must be allocated because
867 * a grace period would have to elapse before it could be freed;
868 * and the grace period would make the now valid new_head
869 * visible to all cpus.
870 *
871 * Note that move_src_idx may not be the same as old_idx.
872 * If move_src_idx != old_idx then old_idx is the bucket
873 * in the old table that is not moved but instead it is
874 * appended to the moved bucket, ie it is added at the tail
875 * of new_head. In that case an invalid old_head notes that
876 * it had already been merged into (the moved) new_head.
877 * We will try to search that bucket first because it
878 * may contain some newly added nodes after the bucket
879 * join. Moreover, the bucket joining link may already be
880 * visible even if new_head is not. Therefore, if we're
881 * lucky we'll find the item via moved_old_head. In any
882 * case, we'll retry in proper old_head if not found.
883 */
884 search_head = moved_old_head;
885 }
886
887 cht_link_t *ret = search_bucket(h, search_head, key, hash);
888
889 if (ret)
890 return ret;
891 /*
892 * Bucket old_head was already joined with moved_old_head
893 * in the new table but we have not yet seen change of the
894 * joining link (or the item is not in the table).
895 */
896 if (move_src_idx != old_idx && get_next(old_head) != &sentinel) {
897 /*
898 * Note that old_head (the bucket to be merged into new_head)
899 * points to an allocated join node (if non-null) even if marked
900 * invalid. Before the resizer lets join nodes be unlinked
901 * (and freed) it sets old_head to NULL and waits for a grace period.
902 * So either the invalid old_head points to join node; or old_head
903 * is null and we would have seen a completed bucket join while
904 * traversing search_head.
905 */
906 ASSERT(N_JOIN & get_mark(get_next(old_head)->link));
907 return search_bucket(h, old_head, key, hash);
908 }
909
910 return NULL;
911 } else {
912 /*
913 * Resize is almost done. The resizer is waiting to make
914 * sure all cpus see that the new table replaced the old one.
915 */
916 ASSERT(h->b->order == h->new_b->order);
917 /*
918 * The resizer must ensure all new bucket heads are visible before
919 * replacing the old table.
920 */
921 ASSERT(N_NORMAL == get_mark(new_head));
922 return search_bucket(h, new_head, key, hash);
923 }
924}
925
926/** Inserts an item. Succeeds even if an equal item is already present. */
927void cht_insert(cht_t *h, cht_link_t *item)
928{
929 insert_impl(h, item, NULL);
930}
931
932/** Inserts a unique item. Returns false if an equal item was already present.
933 *
934 * Use this function to atomically check if an equal/duplicate item had
935 * not yet been inserted into the table and to insert this item into the
936 * table.
937 *
938 * The following is @e NOT thread-safe, so do not use:
939 * @code
940 * if (!cht_find(h, key)) {
941 * // A concurrent insert here may go unnoticed by cht_find() above.
942 * item = malloc(..);
943 * cht_insert(h, item);
944 * // Now we may have two items with equal search keys.
945 * }
946 * @endcode
947 *
948 * Replace such code with:
949 * @code
950 * item = malloc(..);
951 * if (!cht_insert_unique(h, item, &dup_item)) {
952 * // Whoops, someone beat us to it - an equal item 'dup_item'
953 * // had already been inserted.
954 * free(item);
955 * } else {
956 * // Successfully inserted the item and we are guaranteed that
957 * // there are no other equal items.
958 * }
959 * @endcode
960 *
961 */
962bool cht_insert_unique(cht_t *h, cht_link_t *item, cht_link_t **dup_item)
963{
964 ASSERT(rcu_read_locked());
965 ASSERT(dup_item);
966 return insert_impl(h, item, dup_item);
967}
968
969/** Inserts the item into the table and checks for duplicates if dup_item. */
970static bool insert_impl(cht_t *h, cht_link_t *item, cht_link_t **dup_item)
971{
972 rcu_read_lock();
973
974 cht_buckets_t *b = rcu_access(h->b);
975 memoize_node_hash(h, item);
976 size_t hash = node_hash(h, item);
977 size_t idx = calc_bucket_idx(hash, b->order);
978 marked_ptr_t *phead = &b->head[idx];
979
980 bool resizing = false;
981 bool inserted = false;
982
983 do {
984 walk_mode_t walk_mode = WM_NORMAL;
985 bool join_finishing;
986
987 resizing = resizing || (N_NORMAL != get_mark(*phead));
988
989 /* The table is resizing. Get the correct bucket head. */
990 if (resizing) {
991 upd_resizing_head(h, hash, &phead, &join_finishing, &walk_mode);
992 }
993
994 wnd_t wnd = {
995 .ppred = phead,
996 .cur = get_next(*phead),
997 .last = NULL
998 };
999
1000 if (!find_wnd_and_gc(h, hash, walk_mode, &wnd, &resizing)) {
1001 /* Could not GC a node; or detected an unexpected resize. */
1002 continue;
1003 }
1004
1005 if (dup_item && has_duplicate(h, item, hash, wnd.cur, dup_item)) {
1006 rcu_read_unlock();
1007 return false;
1008 }
1009
1010 inserted = insert_at(item, &wnd, walk_mode, &resizing);
1011 } while (!inserted);
1012
1013 rcu_read_unlock();
1014
1015 item_inserted(h);
1016 return true;
1017}
1018
1019/** Inserts item between wnd.ppred and wnd.cur.
1020 *
1021 * @param item Item to link to wnd.ppred and wnd.cur.
1022 * @param wnd The item will be inserted before wnd.cur. Wnd.ppred
1023 * must be N_NORMAL.
1024 * @param walk_mode Bucket chain walk mode.
1025 * @param resizing Set to true only if the table is undergoing resize
1026 * and it was not expected (ie walk_mode == WM_NORMAL).
1027 * @return True if the item was successfully linked to wnd.ppred. False
1028 * if the whole insert operation must be retried because the predecessor
1029 * of wnd.cur has changed.
1030 */
1031inline static bool insert_at(cht_link_t *item, const wnd_t *wnd,
1032 walk_mode_t walk_mode, bool *resizing)
1033{
1034 marked_ptr_t ret;
1035
1036 if (walk_mode == WM_NORMAL) {
1037 item->link = make_link(wnd->cur, N_NORMAL);
1038 /* Initialize the item before adding it to a bucket. */
1039 memory_barrier();
1040
1041 /* Link a clean/normal predecessor to the item. */
1042 ret = cas_link(wnd->ppred, wnd->cur, N_NORMAL, item, N_NORMAL);
1043
1044 if (ret == make_link(wnd->cur, N_NORMAL)) {
1045 return true;
1046 } else {
1047 /* This includes an invalid head but not a const head. */
1048 *resizing = ((N_JOIN_FOLLOWS | N_JOIN) & get_mark(ret));
1049 return false;
1050 }
1051 } else if (walk_mode == WM_MOVE_JOIN_FOLLOWS) {
1052 /* Move JOIN_FOLLOWS mark but filter out the DELETED mark. */
1053 mark_t jf_mark = get_mark(*wnd->ppred) & N_JOIN_FOLLOWS;
1054 item->link = make_link(wnd->cur, jf_mark);
1055 /* Initialize the item before adding it to a bucket. */
1056 memory_barrier();
1057
1058 /* Link the not-deleted predecessor to the item. Move its JF mark. */
1059 ret = cas_link(wnd->ppred, wnd->cur, jf_mark, item, N_NORMAL);
1060
1061 return ret == make_link(wnd->cur, jf_mark);
1062 } else {
1063 ASSERT(walk_mode == WM_LEAVE_JOIN);
1064
1065 item->link = make_link(wnd->cur, N_NORMAL);
1066 /* Initialize the item before adding it to a bucket. */
1067 memory_barrier();
1068
1069 mark_t pred_mark = get_mark(*wnd->ppred);
1070 /* If the predecessor is a join node it may be marked deleted.*/
1071 mark_t exp_pred_mark = (N_JOIN & pred_mark) ? pred_mark : N_NORMAL;
1072
1073 ret = cas_link(wnd->ppred, wnd->cur, exp_pred_mark, item, exp_pred_mark);
1074 return ret == make_link(wnd->cur, exp_pred_mark);
1075 }
1076}
1077
1078/** Returns true if the chain starting at cur has an item equal to \a item.
1079 *
1080 * @param h CHT to operate on.
1081 * @param item Item whose duplicates the function looks for.
1082 * @param hash Hash of \a item.
1083 * @param[in] cur The first node with a hash greater than or equal to item's hash.
1084 * @param[out] dup_item The first duplicate item encountered.
1085 * @return True if a non-deleted item equal to \a item exists in the table.
1086 */
1087static inline bool has_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
1088 cht_link_t *cur, cht_link_t **dup_item)
1089{
1090 ASSERT(cur);
1091 ASSERT(cur == &sentinel || hash <= node_hash(h, cur)
1092 || node_hash(h, cur) == h->invalid_hash);
1093
1094 /* hash < node_hash(h, cur) */
1095 if (hash != node_hash(h, cur) && h->invalid_hash != node_hash(h, cur))
1096 return false;
1097
1098 /*
1099 * Load the most recent node marks. Otherwise we might pronounce a
1100 * logically deleted node for a duplicate of the item just because
1101 * the deleted node's DEL mark had not yet propagated to this cpu.
1102 */
1103 read_barrier();
1104
1105 *dup_item = find_duplicate(h, item, hash, cur);
1106 return NULL != *dup_item;
1107}
1108
1109/** Returns an item that is equal to \a item starting in a chain at \a start. */
1110static cht_link_t *find_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
1111 cht_link_t *start)
1112{
1113 ASSERT(hash <= node_hash(h, start) || h->invalid_hash == node_hash(h, start));
1114
1115 cht_link_t *cur = start;
1116
1117try_again:
1118 ASSERT(cur);
1119
1120 while (node_hash(h, cur) == hash) {
1121 ASSERT(cur != &sentinel);
1122
1123 bool deleted = (N_DELETED & get_mark(cur->link));
1124
1125 /* Skip logically deleted nodes. */
1126 if (!deleted && h->op->equal(item, cur))
1127 return cur;
1128
1129 cur = get_next(cur->link);
1130 ASSERT(cur);
1131 }
1132
1133 /* Skip logically deleted nodes with rcu_call() in progress. */
1134 if (h->invalid_hash == node_hash(h, cur)) {
1135 cur = get_next(cur->link);
1136 goto try_again;
1137 }
1138
1139 return NULL;
1140}
1141
1142/** Removes all items matching the search key. Returns the number of items removed.*/
1143size_t cht_remove_key(cht_t *h, void *key)
1144{
1145 ASSERT(h);
1146
1147 size_t hash = calc_key_hash(h, key);
1148 size_t removed = 0;
1149
1150 while (remove_pred(h, hash, h->op->key_equal, key))
1151 ++removed;
1152
1153 return removed;
1154}
1155
1156/** Removes a specific item from the table.
1157 *
1158 * The caller must hold the rcu read lock.
1159 *
1160 * @param item Item presumably present in the table and to be removed.
1161 * @return True if the item was removed successfully; or false if it had
1162 * already been deleted.
1163 */
1164bool cht_remove_item(cht_t *h, cht_link_t *item)
1165{
1166 ASSERT(h);
1167 ASSERT(item);
1168 /* Otherwise a concurrent cht_remove_key might free the item. */
1169 ASSERT(rcu_read_locked());
1170
1171 /*
1172 * Even though we know the node we want to delete we must unlink it
1173 * from the correct bucket and from a clean/normal predecessor. Therefore,
1174 * we search for it again from the beginning of the correct bucket.
1175 */
1176 size_t hash = calc_node_hash(h, item);
1177 return remove_pred(h, hash, same_node_pred, item);
1178}
1179
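/*
 * Removal sketch (illustrative; continues the val_t example above). The RCU
 * read section guarantees a concurrent remover cannot free the found item
 * before it is passed to cht_remove_item():
 *
 *     rcu_read_lock();
 *     cht_link_t *link = cht_find(&values, (void *) key);
 *     if (link)
 *         cht_remove_item(&values, link);
 *     rcu_read_unlock();
 */
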
1180/** Removes an item equal to pred_arg according to the predicate pred. */
1181static bool remove_pred(cht_t *h, size_t hash, equal_pred_t pred, void *pred_arg)
1182{
1183 rcu_read_lock();
1184
1185 bool resizing = false;
1186 bool deleted = false;
1187 bool deleted_but_gc = false;
1188
1189 cht_buckets_t *b = rcu_access(h->b);
1190 size_t idx = calc_bucket_idx(hash, b->order);
1191 marked_ptr_t *phead = &b->head[idx];
1192
1193 do {
1194 walk_mode_t walk_mode = WM_NORMAL;
1195 bool join_finishing = false;
1196
1197 resizing = resizing || (N_NORMAL != get_mark(*phead));
1198
1199 /* The table is resizing. Get the correct bucket head. */
1200 if (resizing) {
1201 upd_resizing_head(h, hash, &phead, &join_finishing, &walk_mode);
1202 }
1203
1204 wnd_t wnd = {
1205 .ppred = phead,
1206 .cur = get_next(*phead),
1207 .last = NULL
1208 };
1209
1210 if (!find_wnd_and_gc_pred(
1211 h, hash, walk_mode, pred, pred_arg, &wnd, &resizing)) {
1212 /* Could not GC a node; or detected an unexpected resize. */
1213 continue;
1214 }
1215
1216 /*
1217 * The item lookup is affected by a bucket join but effects of
1218 * the bucket join have not been seen while searching for the item.
1219 */
1220 if (join_finishing && !join_completed(h, &wnd)) {
1221 /*
1222 * Bucket was appended at the end of another but the next
1223 * ptr linking them together was not visible on this cpu.
1224 * join_completed() makes this appended bucket visible.
1225 */
1226 continue;
1227 }
1228
1229 /* Already deleted, but delete_at() requested one GC pass. */
1230 if (deleted_but_gc)
1231 break;
1232
1233 bool found = (wnd.cur != &sentinel && pred(pred_arg, wnd.cur));
1234
1235 if (!found) {
1236 rcu_read_unlock();
1237 return false;
1238 }
1239
1240 deleted = delete_at(h, &wnd, walk_mode, &deleted_but_gc, &resizing);
1241 } while (!deleted || deleted_but_gc);
1242
1243 rcu_read_unlock();
1244 return true;
1245}
1246
1247/** Unlinks wnd.cur from wnd.ppred and schedules a deferred free for the item.
1248 *
1249 * Ignores nodes marked N_JOIN if walk mode is WM_LEAVE_JOIN.
1250 *
1251 * @param h CHT to operate on.
1252 * @param wnd Points to the item to delete and its N_NORMAL predecessor.
1253 * @param walk_mode Bucket chain walk mode.
1254 * @param deleted_but_gc Set to true if the item had been logically deleted,
1255 * but a garbage collecting walk of the bucket is in order for
1256 * it to be fully unlinked.
1257 * @param resizing Set to true if the table is undergoing an unexpected
1258 * resize (ie walk_mode == WM_NORMAL).
1259 * @return False if the wnd.ppred changed in the meantime and the whole
1260 * delete operation must be retried.
1261 */
1262static inline bool delete_at(cht_t *h, wnd_t *wnd, walk_mode_t walk_mode,
1263 bool *deleted_but_gc, bool *resizing)
1264{
1265 ASSERT(wnd->cur && wnd->cur != &sentinel);
1266
1267 *deleted_but_gc = false;
1268
1269 if (!mark_deleted(wnd->cur, walk_mode, resizing)) {
1270 /* Already deleted, or unexpectedly marked as JOIN/JOIN_FOLLOWS. */
1271 return false;
1272 }
1273
1274 /* Marked deleted. Unlink from the bucket. */
1275
1276 /* Never unlink join nodes. */
1277 if (walk_mode == WM_LEAVE_JOIN && (N_JOIN & get_mark(wnd->cur->link)))
1278 return true;
1279
1280 cas_order_barrier();
1281
1282 if (unlink_from_pred(wnd, walk_mode, resizing)) {
1283 free_later(h, wnd->cur);
1284 } else {
1285 *deleted_but_gc = true;
1286 }
1287
1288 return true;
1289}
1290
1291/** Marks cur logically deleted. Returns false to request a retry. */
1292static inline bool mark_deleted(cht_link_t *cur, walk_mode_t walk_mode,
1293 bool *resizing)
1294{
1295 ASSERT(cur && cur != &sentinel);
1296
1297 /*
1298 * Btw, we could loop here if the cas fails but let's not complicate
1299 * things and let's retry from the head of the bucket.
1300 */
1301
1302 cht_link_t *next = get_next(cur->link);
1303
1304 if (walk_mode == WM_NORMAL) {
1305 /* Only mark clean/normal nodes - JF/JN is used only during resize. */
1306 marked_ptr_t ret = cas_link(&cur->link, next, N_NORMAL, next, N_DELETED);
1307
1308 if (ret != make_link(next, N_NORMAL)) {
1309 *resizing = (N_JOIN | N_JOIN_FOLLOWS) & get_mark(ret);
1310 return false;
1311 }
1312 } else {
1313 STATIC_ASSERT(N_JOIN == N_JOIN_FOLLOWS);
1314
1315 /* Keep the N_JOIN/N_JOIN_FOLLOWS mark but strip N_DELETED. */
1316 mark_t cur_mark = get_mark(cur->link) & N_JOIN_FOLLOWS;
1317
1318 marked_ptr_t ret =
1319 cas_link(&cur->link, next, cur_mark, next, cur_mark | N_DELETED);
1320
1321 if (ret != make_link(next, cur_mark))
1322 return false;
1323 }
1324
1325 return true;
1326}
1327
1328/** Unlinks wnd.cur from wnd.ppred. Returns false if it should be retried. */
1329static inline bool unlink_from_pred(wnd_t *wnd, walk_mode_t walk_mode,
1330 bool *resizing)
1331{
1332 ASSERT(wnd->cur != &sentinel);
1333 ASSERT(wnd->cur && (N_DELETED & get_mark(wnd->cur->link)));
1334
1335 cht_link_t *next = get_next(wnd->cur->link);
1336
1337 if (walk_mode == WM_LEAVE_JOIN) {
1338 /* Never try to unlink join nodes. */
1339 ASSERT(!(N_JOIN & get_mark(wnd->cur->link)));
1340
1341 mark_t pred_mark = get_mark(*wnd->ppred);
1342 /* Succeed only if the predecessor is clean/normal or a join node. */
1343 mark_t exp_pred_mark = (N_JOIN & pred_mark) ? pred_mark : N_NORMAL;
1344
1345 marked_ptr_t pred_link = make_link(wnd->cur, exp_pred_mark);
1346 marked_ptr_t next_link = make_link(next, exp_pred_mark);
1347
1348 if (pred_link != _cas_link(wnd->ppred, pred_link, next_link))
1349 return false;
1350 } else {
1351 ASSERT(walk_mode == WM_MOVE_JOIN_FOLLOWS || walk_mode == WM_NORMAL);
1352 /* Move the JF mark if set. Clear DEL mark. */
1353 mark_t cur_mark = N_JOIN_FOLLOWS & get_mark(wnd->cur->link);
1354
1355 /* The predecessor must be clean/normal. */
1356 marked_ptr_t pred_link = make_link(wnd->cur, N_NORMAL);
1357 /* Link to cur's successor keeping/copying cur's JF mark. */
1358 marked_ptr_t next_link = make_link(next, cur_mark);
1359
1360 marked_ptr_t ret = _cas_link(wnd->ppred, pred_link, next_link);
1361
1362 if (pred_link != ret) {
1363 /* If we're not resizing the table there are no JF/JN nodes. */
1364 *resizing = (walk_mode == WM_NORMAL)
1365 && (N_JOIN_FOLLOWS & get_mark(ret));
1366 return false;
1367 }
1368 }
1369
1370 return true;
1371}
1372
1373/** Finds the first non-deleted item equal to \a pred_arg according to \a pred.
1374 *
1375 * The function returns the candidate item in \a wnd. Logically deleted
1376 * nodes are garbage collected so the predecessor will most likely not
1377 * be marked as deleted.
1378 *
1379 * Unlike find_wnd_and_gc(), this function never returns a node that
1380 * is known to have already been marked N_DELETED.
1381 *
1382 * Any logically deleted nodes (ie those marked N_DELETED) are garbage
1383 * collected, ie freed in the background via rcu_call (except for join-nodes
1384 * if walk_mode == WM_LEAVE_JOIN).
1385 *
1386 * @param h CHT to operate on.
1387 * @param hash Hash to search for.
1388 * @param walk_mode Bucket chain walk mode.
1389 * @param pred Predicate used to find an item equal to pred_arg.
1390 * @param pred_arg Argument to pass to the equality predicate \a pred.
1391 * @param[in,out] wnd The search starts with wnd.cur. If the desired
1392 * item is found wnd.cur will point to it.
1393 * @param resizing Set to true if the table is resizing but it was not
1394 * expected (ie walk_mode == WM_NORMAL).
1395 * @return False if the operation has to be retried. True otherwise
1396 * (even if an equal item had not been found).
1397 */
1398static bool find_wnd_and_gc_pred(cht_t *h, size_t hash, walk_mode_t walk_mode,
1399 equal_pred_t pred, void *pred_arg, wnd_t *wnd, bool *resizing)
1400{
1401 ASSERT(wnd->cur);
1402
1403 if (wnd->cur == &sentinel)
1404 return true;
1405
1406 /*
1407 * A read barrier is not needed here to bring up the most recent
1408 * node marks (esp the N_DELETED). At worst we'll try to delete
1409 * an already deleted node; fail in delete_at(); and retry.
1410 */
1411
1412 size_t cur_hash;
1413
1414try_again:
1415 cur_hash = node_hash(h, wnd->cur);
1416
1417 while (cur_hash <= hash) {
1418 ASSERT(wnd->cur && wnd->cur != &sentinel);
1419
1420 /* GC any deleted nodes on the way. */
1421 if (N_DELETED & get_mark(wnd->cur->link)) {
1422 if (!gc_deleted_node(h, walk_mode, wnd, resizing)) {
1423 /* Retry from the head of a bucket. */
1424 return false;
1425 }
1426 } else {
1427 /* Is this the node we were looking for? */
1428 if (cur_hash == hash && pred(pred_arg, wnd->cur))
1429 return true;
1430
1431 next_wnd(wnd);
1432 }
1433
1434 cur_hash = node_hash(h, wnd->cur);
1435 }
1436
1437 if (cur_hash == h->invalid_hash) {
1438 next_wnd(wnd);
1439 ASSERT(wnd->cur);
1440 goto try_again;
1441 }
1442
1443 /* The searched for node is not in the current bucket. */
1444 return true;
1445}
1446
1447/** Find the first item (deleted or not) with a hash greater or equal to \a hash.
1448 *
1449 * The function returns the first item with a hash that is greater or
1450 * equal to \a hash in \a wnd. Moreover it garbage collects logically
1451 * deleted nodes that have not yet been unlinked and freed. Therefore,
1452 * the returned node's predecessor will most likely be N_NORMAL.
1453 *
1454 * Unlike find_wnd_and_gc_pred(), this function may return a node
1455 * that is known to have been marked N_DELETED.
1456 *
1457 * @param h CHT to operate on.
1458 * @param hash Hash of the item to find.
1459 * @param walk_mode Bucket chain walk mode.
1460 * @param[in,out] wnd wnd.cur denotes the first node of the chain. If the
1461 * operation is successful, \a wnd points to the desired
1462 * item.
1463 * @param resizing Set to true if a table resize was detected but walk_mode
1464 * suggested the table was not undergoing a resize.
1465 * @return False indicates the operation must be retried. True otherwise
1466 * (even if an item with exactly the same hash was not found).
1467 */
1468static bool find_wnd_and_gc(cht_t *h, size_t hash, walk_mode_t walk_mode,
1469 wnd_t *wnd, bool *resizing)
1470{
1471try_again:
1472 ASSERT(wnd->cur);
1473
1474 while (node_hash(h, wnd->cur) < hash) {
1475 /* GC any deleted nodes along the way to our desired node. */
1476 if (N_DELETED & get_mark(wnd->cur->link)) {
1477 if (!gc_deleted_node(h, walk_mode, wnd, resizing)) {
1478 /* Failed to remove the garbage node. Retry. */
1479 return false;
1480 }
1481 } else {
1482 next_wnd(wnd);
1483 }
1484
1485 ASSERT(wnd->cur);
1486 }
1487
1488 if (node_hash(h, wnd->cur) == h->invalid_hash) {
1489 next_wnd(wnd);
1490 goto try_again;
1491 }
1492
1493 /* wnd->cur may be NULL or even marked N_DELETED. */
1494 return true;
1495}
1496
1497/** Garbage collects the N_DELETED node at \a wnd skipping join nodes. */
1498static bool gc_deleted_node(cht_t *h, walk_mode_t walk_mode, wnd_t *wnd,
1499 bool *resizing)
1500{
1501 ASSERT(N_DELETED & get_mark(wnd->cur->link));
1502
1503 /* Skip deleted JOIN nodes. */
1504 if (walk_mode == WM_LEAVE_JOIN && (N_JOIN & get_mark(wnd->cur->link))) {
1505 next_wnd(wnd);
1506 } else {
1507 /* Ordinary deleted node or a deleted JOIN_FOLLOWS. */
1508 ASSERT(walk_mode != WM_LEAVE_JOIN
1509 || !((N_JOIN | N_JOIN_FOLLOWS) & get_mark(wnd->cur->link)));
1510
1511 /* Unlink an ordinary deleted node, move JOIN_FOLLOWS mark. */
1512 if (!unlink_from_pred(wnd, walk_mode, resizing)) {
1513 /* Retry. The predecessor was deleted, invalid, const, join_follows. */
1514 return false;
1515 }
1516
1517 free_later(h, wnd->cur);
1518
1519 /* Leave ppred as is. */
1520 wnd->last = wnd->cur;
1521 wnd->cur = get_next(wnd->cur->link);
1522 }
1523
1524 return true;
1525}
1526
1527/** Returns true if a bucket join had already completed.
1528 *
1529 * May only be called if upd_resizing_head() indicates a bucket join
1530 * may be in progress.
1531 *
1532 * If it returns false, the search must be retried in order to guarantee
1533 * all items that should have been encountered have been seen.
1534 */
1535static bool join_completed(cht_t *h, const wnd_t *wnd)
1536{
1537 /*
1538 * The table is shrinking and the searched for item is in a bucket
1539 * appended to another. Check that the link joining these two buckets
1540 * is visible and if not, make it visible to this cpu.
1541 */
1542
1543 /*
1544 * Resizer ensures h->b->order stays the same for the duration of this
1545 * func. We got here because there was an alternative head to search.
1546 * The resizer waits for all preexisting readers to finish after making the
1547 * join visible and only then installs the new table and frees the old one.
1548 */
1549 ASSERT(h->b->order > h->new_b->order);
1550 ASSERT(wnd->cur);
1551
1552 /* Either we did not need the joining link or we have already followed it.*/
1553 if (wnd->cur != &sentinel)
1554 return true;
1555
1556 /* We have reached the end of a bucket. */
1557
1558 if (wnd->last != &sentinel) {
1559 size_t last_seen_hash = node_hash(h, wnd->last);
1560
1561 if (last_seen_hash == h->invalid_hash) {
1562 last_seen_hash = calc_node_hash(h, wnd->last);
1563 }
1564
1565 size_t last_old_idx = calc_bucket_idx(last_seen_hash, h->b->order);
1566 size_t move_src_idx = grow_idx(shrink_idx(last_old_idx));
1567
1568 /*
1569 * Last node seen was in the joining bucket - if the searched
1570 * for node is there we will find it.
1571 */
1572 if (move_src_idx != last_old_idx)
1573 return true;
1574 }
1575
1576 /*
1577 * Reached the end of the bucket but no nodes from the joining bucket
1578 * were seen. There should have at least been a JOIN node so we have
1579 * definitely not seen (and followed) the joining link. Make the link
1580 * visible and retry.
1581 */
1582 read_barrier();
1583 return false;
1584}
1585
1586/** When resizing, returns in \a phead the bucket head to start the search with.
1587 *
1588 * If a resize had been detected (eg cht_t.b.head[idx] is marked immutable),
1589 * upd_resizing_head() moves the bucket for \a hash from the old head
1590 * to the new head. Moreover, it splits or joins buckets as necessary.
1591 *
1592 * @param h CHT to operate on.
1593 * @param hash Hash of an item whose chain we would like to traverse.
1594 * @param[out] phead Head of the bucket to search for \a hash.
1595 * @param[out] join_finishing Set to true if a bucket join might be
1596 * in progress and the bucket may have to be traversed again
1597 * as indicated by join_completed().
1598 * @param[out] walk_mode Specifies how to interpret node marks.
1599 */
1600static void upd_resizing_head(cht_t *h, size_t hash, marked_ptr_t **phead,
1601 bool *join_finishing, walk_mode_t *walk_mode)
1602{
1603 cht_buckets_t *b = rcu_access(h->b);
1604 size_t old_idx = calc_bucket_idx(hash, b->order);
1605 size_t new_idx = calc_bucket_idx(hash, h->new_b->order);
1606
1607 marked_ptr_t *pold_head = &b->head[old_idx];
1608 marked_ptr_t *pnew_head = &h->new_b->head[new_idx];
1609
1610 /* In any case, use the bucket in the new table. */
1611 *phead = pnew_head;
1612
1613 /* Growing the table. */
1614 if (b->order < h->new_b->order) {
1615 size_t move_dest_idx = grow_idx(old_idx);
1616 marked_ptr_t *pmoved_head = &h->new_b->head[move_dest_idx];
1617
1618 /* Complete moving the bucket from the old to the new table. */
1619 help_head_move(pold_head, pmoved_head);
1620
1621 /* The hash belongs to the moved bucket. */
1622 if (move_dest_idx == new_idx) {
1623 ASSERT(pmoved_head == pnew_head);
1624 /*
1625 * move_head() makes the new head of the moved bucket visible.
1626 * The new head may be marked with a JOIN_FOLLOWS
1627 */
1628 ASSERT(!(N_CONST & get_mark(*pmoved_head)));
1629 *walk_mode = WM_MOVE_JOIN_FOLLOWS;
1630 } else {
1631 ASSERT(pmoved_head != pnew_head);
1632 /*
1633 * The hash belongs to the bucket that is the result of splitting
1634 * the old/moved bucket, ie the bucket that contains the second
1635 * half of the split/old/moved bucket.
1636 */
1637
1638 /* The moved bucket has not yet been split. */
1639 if (N_NORMAL != get_mark(*pnew_head)) {
1640 size_t split_hash = calc_split_hash(new_idx, h->new_b->order);
1641 split_bucket(h, pmoved_head, pnew_head, split_hash);
1642 /*
1643 * split_bucket() makes the new head visible. No
1644 * JOIN_FOLLOWS in this part of split bucket.
1645 */
1646 ASSERT(N_NORMAL == get_mark(*pnew_head));
1647 }
1648
1649 *walk_mode = WM_LEAVE_JOIN;
1650 }
1651 } else if (h->new_b->order < b->order) {
1652 /* Shrinking the table. */
1653
1654 size_t move_src_idx = grow_idx(new_idx);
1655
1656 /*
1657 * Complete moving the bucket from the old to the new table.
1658 * Makes a valid pnew_head visible if already moved.
1659 */
1660 help_head_move(&b->head[move_src_idx], pnew_head);
1661
1662 /* Hash belongs to the bucket to be joined with the moved bucket. */
1663 if (move_src_idx != old_idx) {
1664 /* Bucket join not yet completed. */
1665 if (N_INVALID != get_mark(*pold_head)) {
1666 size_t split_hash = calc_split_hash(old_idx, b->order);
1667 join_buckets(h, pold_head, pnew_head, split_hash);
1668 }
1669
1670 /*
1671 * The resizer sets pold_head to &sentinel when all cpus are
1672 * guaranteed to see the bucket join.
1673 */
1674 *join_finishing = (&sentinel != get_next(*pold_head));
1675 }
1676
1677 /* move_head() or join_buckets() makes it so or makes the mark visible.*/
1678 ASSERT(N_INVALID == get_mark(*pold_head));
1679 /* move_head() makes it visible. No JOIN_FOLLOWS used when shrinking. */
1680 ASSERT(N_NORMAL == get_mark(*pnew_head));
1681
1682 *walk_mode = WM_LEAVE_JOIN;
1683 } else {
1684 /*
1685 * Final stage of resize. The resizer is waiting for all
1686 * readers to notice that the old table had been replaced.
1687 */
1688 ASSERT(b == h->new_b);
1689 *walk_mode = WM_NORMAL;
1690 }
1691}
1692
1693
1694#if 0
1695static void move_head(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head)
1696{
1697 start_head_move(psrc_head);
1698 cas_order_barrier();
1699 complete_head_move(psrc_head, pdest_head);
1700}
1701#endif
1702
1703/** Moves an immutable head \a psrc_head of cht_t.b to \a pdest_head of cht_t.new_b.
1704 *
1705 * The function guarantees the move will be visible on this cpu once
1706 * it completes. In particular, *pdest_head will not be N_INVALID.
1707 *
1708 * Unlike complete_head_move(), help_head_move() checks if the head had already
1709 * been moved and tries to avoid moving the bucket heads if possible.
1710 */
1711static inline void help_head_move(marked_ptr_t *psrc_head,
1712 marked_ptr_t *pdest_head)
1713{
1714 /* Head move has to be in progress already when calling this func. */
1715 ASSERT(N_CONST & get_mark(*psrc_head));
1716
1717 /* Head already moved. */
1718 if (N_INVALID == get_mark(*psrc_head)) {
1719 /* Effects of the head move have not yet propagated to this cpu. */
1720 if (N_INVALID == get_mark(*pdest_head)) {
1721 /* Make the move visible on this cpu. */
1722 read_barrier();
1723 }
1724 } else {
1725 complete_head_move(psrc_head, pdest_head);
1726 }
1727
1728 ASSERT(!(N_CONST & get_mark(*pdest_head)));
1729}
1730
1731/** Initiates the move of the old head \a psrc_head.
1732 *
1733 * The move may be completed with help_head_move().
1734 */
1735static void start_head_move(marked_ptr_t *psrc_head)
1736{
1737 /* Mark src head immutable. */
1738 mark_const(psrc_head);
1739}
1740
1741/** Marks the head immutable. */
1742static void mark_const(marked_ptr_t *psrc_head)
1743{
1744 marked_ptr_t ret, src_link;
1745
1746 /* Mark src head immutable. */
1747 do {
1748 cht_link_t *next = get_next(*psrc_head);
1749 src_link = make_link(next, N_NORMAL);
1750
1751 /* Mark the normal/clean src link immutable/const. */
1752 ret = cas_link(psrc_head, next, N_NORMAL, next, N_CONST);
1753 } while(ret != src_link && !(N_CONST & get_mark(ret)));
1754}
1755
1756/** Completes moving head psrc_head to pdest_head (started by start_head_move()).*/
1757static void complete_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head)
1758{
1759 ASSERT(N_JOIN_FOLLOWS != get_mark(*psrc_head));
1760 ASSERT(N_CONST & get_mark(*psrc_head));
1761
1762 cht_link_t *next = get_next(*psrc_head);
1763
1764 DBG(marked_ptr_t ret = )
1765 cas_link(pdest_head, &sentinel, N_INVALID, next, N_NORMAL);
1766 ASSERT(ret == make_link(&sentinel, N_INVALID) || (N_NORMAL == get_mark(ret)));
1767 cas_order_barrier();
1768
1769 DBG(ret = )
1770 cas_link(psrc_head, next, N_CONST, next, N_INVALID);
1771 ASSERT(ret == make_link(next, N_CONST) || (N_INVALID == get_mark(ret)));
1772 cas_order_barrier();
1773}
1774
1775/** Splits the bucket at psrc_head and links to the remainder from pdest_head.
1776 *
1777 * Items with hashes greater or equal to \a split_hash are moved to bucket
1778 * with head at \a pdest_head.
1779 *
1780 * @param h CHT to operate on.
1781 * @param psrc_head Head of the bucket to split (in cht_t.new_b).
1782 * @param pdest_head Head of the bucket that points to the second part
1783 * of the split bucket in psrc_head. (in cht_t.new_b)
1784 * @param split_hash Hash of the first possible item in the remainder of
1785 * psrc_head, ie the smallest hash pdest_head is allowed
1786 * to point to.
1787 */
1788static void split_bucket(cht_t *h, marked_ptr_t *psrc_head,
1789 marked_ptr_t *pdest_head, size_t split_hash)
1790{
1791 /* Already split. */
1792 if (N_NORMAL == get_mark(*pdest_head))
1793 return;
1794
1795 /*
1796 * L == Last node of the first part of the split bucket. That part
1797 * remains in the original/src bucket.
1798 * F == First node of the second part of the split bucket. That part
1799 * will be referenced from the dest bucket head.
1800 *
1801 * We want to first mark a clean L as JF so that updaters unaware of
1802 * the split (or table resize):
1803 * - do not insert a new node between L and F
1804 * - do not unlink L (that is why it has to be clean/normal)
1805 * - do not unlink F
1806 *
1807 * Then we can safely mark F as JN even if it has been marked deleted.
1808 * Once F is marked as JN updaters aware of table resize will not
1809 * attempt to unlink it (JN will have two predecessors - we cannot
1810 * safely unlink from both at the same time). Updaters unaware of
1811 * ongoing resize can reach F only via L and that node is already
1812 * marked JF, so they won't unlink F.
1813 *
1814 * Last, link the new/dest head to F.
1815 *
1816 *
1817 * 0) ,-- split_hash, first hash of the dest bucket
1818 * v
1819 * [src_head | N] -> .. -> [L] -> [F]
1820 * [dest_head | Inv]
1821 *
1822 * 1) ,-- split_hash
1823 * v
1824 * [src_head | N] -> .. -> [JF] -> [F]
1825 * [dest_head | Inv]
1826 *
1827 * 2) ,-- split_hash
1828 * v
1829 * [src_head | N] -> .. -> [JF] -> [JN]
1830 * [dest_head | Inv]
1831 *
1832 * 3) ,-- split_hash
1833 * v
1834 * [src_head | N] -> .. -> [JF] -> [JN]
1835 * ^
1836 * [dest_head | N] -----------------'
1837 */
1838 wnd_t wnd;
1839
1840 rcu_read_lock();
1841
1842 /* Mark the last node of the first part of the split bucket as JF. */
1843 mark_join_follows(h, psrc_head, split_hash, &wnd);
1844 cas_order_barrier();
1845
1846 /* There are nodes in the dest bucket, ie the second part of the split. */
1847 if (wnd.cur != &sentinel) {
1848 /*
1849 * Mark the first node of the dest bucket as a join node so
1850 * updaters do not attempt to unlink it if it is deleted.
1851 */
1852 mark_join_node(wnd.cur);
1853 cas_order_barrier();
1854 } else {
1855 /*
1856 * Second part of the split bucket is empty. There are no nodes
1857 * to mark as JOIN nodes and there never will be.
1858 */
1859 }
1860
1861 /* Link the dest head to the second part of the split. */
1862 DBG(marked_ptr_t ret = )
1863 cas_link(pdest_head, &sentinel, N_INVALID, wnd.cur, N_NORMAL);
1864 ASSERT(ret == make_link(&sentinel, N_INVALID) || (N_NORMAL == get_mark(ret)));
1865 cas_order_barrier();
1866
1867 rcu_read_unlock();
1868}
1869
1870/** Finds and marks the last node of psrc_head w/ hash less than split_hash.
1871 *
1872 * Finds a node in psrc_head with the greatest hash that is strictly less
1873 * than split_hash and marks it with N_JOIN_FOLLOWS.
1874 *
1875 * Returns a window pointing to that node.
1876 *
1877 * Any logically deleted nodes along the way are
1878 * garbage collected; therefore, the predecessor node (if any) will most
1879 * likely not be marked N_DELETED.
1880 *
1881 * @param h CHT to operate on.
1882 * @param psrc_head Bucket head.
1883 * @param split_hash The smallest hash a join node (ie the node following
1884 * the desired join-follows node) may have.
1885 * @param[out] wnd Points to the node marked with N_JOIN_FOLLOWS.
1886 */
1887static void mark_join_follows(cht_t *h, marked_ptr_t *psrc_head,
1888 size_t split_hash, wnd_t *wnd)
1889{
1890 /* See comment in split_bucket(). */
1891
1892 bool done = false;
1893
1894 do {
1895 bool resizing = false;
1896 wnd->ppred = psrc_head;
1897 wnd->cur = get_next(*psrc_head);
1898
1899 /*
1900 * Find the split window, ie the last node of the first part of
1901 * the split bucket and its successor - the first node of
1902 * the second part of the split bucket. Retry if GC failed.
1903 */
1904 if (!find_wnd_and_gc(h, split_hash, WM_MOVE_JOIN_FOLLOWS, wnd, &resizing))
1905 continue;
1906
1907 /* Must not report that the table is resizing if WM_MOVE_JOIN_FOLLOWS.*/
1908 ASSERT(!resizing);
1909 /*
1910 * Mark the last node of the first half of the split bucket
1911 * that a join node follows. It must be clean/normal.
1912 */
1913 marked_ptr_t ret
1914 = cas_link(wnd->ppred, wnd->cur, N_NORMAL, wnd->cur, N_JOIN_FOLLOWS);
1915
1916 /*
1917 * Successfully marked as a JF node or already marked that way (even
1918 * if also marked deleted - unlinking the node will move the JF mark).
1919 */
1920 done = (ret == make_link(wnd->cur, N_NORMAL))
1921 || (N_JOIN_FOLLOWS & get_mark(ret));
1922 } while (!done);
1923}
1924
1925/** Marks join_node with N_JOIN. */
1926static void mark_join_node(cht_link_t *join_node)
1927{
1928 /* See comment in split_bucket(). */
1929
1930 bool done;
1931 do {
1932 cht_link_t *next = get_next(join_node->link);
1933 mark_t mark = get_mark(join_node->link);
1934
1935 /*
1936 * May already be marked as deleted, but it won't be unlinked
1937 * because its predecessor is marked with JOIN_FOLLOWS or CONST.
1938 */
1939 marked_ptr_t ret
1940 = cas_link(&join_node->link, next, mark, next, mark | N_JOIN);
1941
1942 /* Successfully marked or already marked as a join node. */
1943 done = (ret == make_link(next, mark))
1944 || (N_JOIN & get_mark(ret));
1945 } while(!done);
1946}
1947
1948/** Appends the bucket at psrc_head to the bucket at pdest_head.
1949 *
1950 * @param h CHT to operate on.
1951 * @param psrc_head Bucket to merge with pdest_head.
1952 * @param pdest_head Bucket to be joined by psrc_head.
1953 * @param split_hash The smallest hash psrc_head may contain.
1954 */
1955static void join_buckets(cht_t *h, marked_ptr_t *psrc_head,
1956 marked_ptr_t *pdest_head, size_t split_hash)
1957{
1958 /* Buckets already joined. */
1959 if (N_INVALID == get_mark(*psrc_head))
1960 return;
1961 /*
1962 * F == First node of psrc_head, ie the bucket we want to append
1963 * to (ie join with) the bucket starting at pdest_head.
1964 * L == Last node of pdest_head, ie the bucket that psrc_head will
1965 * be appended to.
1966 *
1967 * (1) We first mark psrc_head immutable to signal that a join is
1968 * in progress and so that updaters unaware of the join (or table
1969 * resize):
1970 * - do not insert new nodes between the head psrc_head and F
1971 * - do not unlink F (it may already be marked deleted)
1972 *
1973 * (2) Next, F is marked as a join node. Updaters aware of table resize
1974 * will not attempt to unlink it. We cannot safely/atomically unlink
1975 * the join node because it will be pointed to from two different
1976 * buckets. Updaters unaware of resize will fail to unlink the join
1977 * node due to the head being marked immutable.
1978 *
1979 * (3) Then the tail of the bucket at pdest_head is linked to the join
1980 * node. From now on, nodes in both buckets can be found via pdest_head.
1981 *
1982 * (4) Last, mark immutable psrc_head as invalid. It signals updaters
1983 * that the join is complete and they can insert new nodes (originally
1984 * destined for psrc_head) into pdest_head.
1985 *
1986 * Note that psrc_head keeps pointing at the join node. This allows
1987 * lookups and updaters to determine if they should see a link between
1988 * the tail L and F when searching for nodes originally in psrc_head
1989 * via pdest_head. If they reach the tail of pdest_head without
1990 * encountering any nodes of psrc_head, either there were no nodes
1991 * in psrc_head to begin with or the link between L and F did not
1992 * yet propagate to their cpus. If psrc_head was empty, it keeps pointing
1993 * to the sentinel. Otherwise psrc_head points to a join node (it will not be
1994 * unlinked until table resize completes) and updaters/lookups
1995 * should issue a read_barrier() to make the link [L]->[JN] visible.
1996 *
1997 * 0) ,-- split_hash, first hash of the src bucket
1998 * v
1999 * [dest_head | N]-> .. -> [L]
2000 * [src_head | N]--> [F] -> ..
2001 * ^
2002 * ` split_hash, first hash of the src bucket
2003 *
2004 * 1) ,-- split_hash
2005 * v
2006 * [dest_head | N]-> .. -> [L]
2007 * [src_head | C]--> [F] -> ..
2008 *
2009 * 2) ,-- split_hash
2010 * v
2011 * [dest_head | N]-> .. -> [L]
2012 * [src_head | C]--> [JN] -> ..
2013 *
2014 * 3) ,-- split_hash
2015 * v
2016 * [dest_head | N]-> .. -> [L] --+
2017 * v
2018 * [src_head | C]-------------> [JN] -> ..
2019 *
2020 * 4) ,-- split_hash
2021 * v
2022 * [dest_head | N]-> .. -> [L] --+
2023 * v
2024 * [src_head | Inv]-----------> [JN] -> ..
2025 */
2026
2027 rcu_read_lock();
2028
2029 /* Mark src_head immutable - signals updaters that bucket join started. */
2030 mark_const(psrc_head);
2031 cas_order_barrier();
2032
2033 cht_link_t *join_node = get_next(*psrc_head);
2034
2035 if (join_node != &sentinel) {
2036 mark_join_node(join_node);
2037 cas_order_barrier();
2038
2039 link_to_join_node(h, pdest_head, join_node, split_hash);
2040 cas_order_barrier();
2041 }
2042
2043 DBG(marked_ptr_t ret = )
2044 cas_link(psrc_head, join_node, N_CONST, join_node, N_INVALID);
2045 ASSERT(ret == make_link(join_node, N_CONST) || (N_INVALID == get_mark(ret)));
2046 cas_order_barrier();
2047
2048 rcu_read_unlock();
2049}
2050
2051/** Links the tail of pdest_head to join_node.
2052 *
2053 * @param h CHT to operate on.
2054 * @param pdest_head Head of the bucket whose tail is to be linked to join_node.
2055 * @param join_node A node marked N_JOIN with a hash greater or equal to
2056 * split_hash.
2057 * @param split_hash The least hash that is greater than the hash of any items
2058 * (originally) in pdest_head.
2059 */
2060static void link_to_join_node(cht_t *h, marked_ptr_t *pdest_head,
2061 cht_link_t *join_node, size_t split_hash)
2062{
2063 bool done = false;
2064
2065 do {
2066 wnd_t wnd = {
2067 .ppred = pdest_head,
2068 .cur = get_next(*pdest_head)
2069 };
2070
2071 bool resizing = false;
2072
2073 if (!find_wnd_and_gc(h, split_hash, WM_LEAVE_JOIN, &wnd, &resizing))
2074 continue;
2075
2076 ASSERT(!resizing);
2077
2078 if (wnd.cur != &sentinel) {
2079 /* Must be from the new appended bucket. */
2080 ASSERT(split_hash <= node_hash(h, wnd.cur)
2081 || h->invalid_hash == node_hash(h, wnd.cur));
2082 return;
2083 }
2084
2085 /* Reached the tail of pdest_head - link it to the join node. */
2086 marked_ptr_t ret =
2087 cas_link(wnd.ppred, &sentinel, N_NORMAL, join_node, N_NORMAL);
2088
2089 done = (ret == make_link(&sentinel, N_NORMAL));
2090 } while (!done);
2091}
2092
2093/** Instructs RCU to free the item once all preexisting references are dropped.
2094 *
2095 * The item is freed via op->remove_callback().
2096 */
2097static void free_later(cht_t *h, cht_link_t *item)
2098{
2099 ASSERT(item != &sentinel);
2100
2101 /*
2102 * remove_callback only works as rcu_func_t because rcu_link is the first
2103 * field in cht_link_t.
2104 */
2105 rcu_call(&item->rcu_link, (rcu_func_t)h->op->remove_callback);
2106
2107 item_removed(h);
2108}
2109
2110/** Notes that an item had been unlinked from the table and shrinks it if needed.
2111 *
2112 * If the number of items in the table drops below 1/4 of the maximum
2113 * allowed load, the table is shrunk in the background.
2114 */
2115static inline void item_removed(cht_t *h)
2116{
2117 size_t items = (size_t) atomic_predec(&h->item_cnt);
2118 size_t bucket_cnt = (1 << h->b->order);
2119
2120 bool need_shrink = (items == h->max_load * bucket_cnt / 4);
2121 bool missed_shrink = (items == h->max_load * bucket_cnt / 8);
2122
2123 if ((need_shrink || missed_shrink) && h->b->order > h->min_order) {
2124 atomic_count_t resize_reqs = atomic_preinc(&h->resize_reqs);
2125 /* The first resize request. Start the resizer. */
2126 if (1 == resize_reqs) {
2127 workq_global_enqueue_noblock(&h->resize_work, resize_table);
2128 }
2129 }
2130}
2131
2132/** Notes an item had been inserted and grows the table if needed.
2133 *
2134 * The table is resized in the background.
2135 */
2136static inline void item_inserted(cht_t *h)
2137{
2138 size_t items = (size_t) atomic_preinc(&h->item_cnt);
2139 size_t bucket_cnt = (1 << h->b->order);
2140
2141 bool need_grow = (items == h->max_load * bucket_cnt);
2142 bool missed_grow = (items == 2 * h->max_load * bucket_cnt);
2143
2144 if ((need_grow || missed_grow) && h->b->order < CHT_MAX_ORDER) {
2145 atomic_count_t resize_reqs = atomic_preinc(&h->resize_reqs);
2146 /* The first resize request. Start the resizer. */
2147 if (1 == resize_reqs) {
2148 workq_global_enqueue_noblock(&h->resize_work, resize_table);
2149 }
2150 }
2151}
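
/*
 * Illustrative sketch only (disabled, not part of the table implementation):
 * how the grow/shrink trigger points in item_inserted() and item_removed()
 * relate for a hypothetical max_load of 4 and a table of order 3 (8 buckets).
 * The function name and all numbers are made up for this example and are not
 * configuration values.
 */
#if 0
static void resize_trigger_example(void)
{
	size_t max_load = 4;                  /* hypothetical items per bucket */
	size_t bucket_cnt = 1 << 3;           /* order 3 => 8 buckets */

	/* item_inserted() requests a grow exactly at the maximum load ... */
	size_t grow_at = max_load * bucket_cnt;                /* 32 items */
	/* ... and again at twice that in case the first request was missed. */
	size_t missed_grow_at = 2 * max_load * bucket_cnt;     /* 64 items */

	/* item_removed() requests a shrink at a quarter of the maximum load ... */
	size_t shrink_at = max_load * bucket_cnt / 4;          /* 8 items */
	/* ... and again at an eighth in case the first request was missed. */
	size_t missed_shrink_at = max_load * bucket_cnt / 8;   /* 4 items */

	(void) grow_at;
	(void) missed_grow_at;
	(void) shrink_at;
	(void) missed_shrink_at;
}
#endif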
2152
2153/** Resize request handler. Invoked on the system work queue. */
2154static void resize_table(work_t *arg)
2155{
2156 cht_t *h = member_to_inst(arg, cht_t, resize_work);
2157
2158#ifdef CONFIG_DEBUG
2159 ASSERT(h->b);
2160 /* Make resize_reqs visible. */
2161 read_barrier();
2162 ASSERT(0 < atomic_get(&h->resize_reqs));
2163#endif
2164
2165 bool done;
2166 do {
2167 /* Load the most recent h->item_cnt. */
2168 read_barrier();
2169 size_t cur_items = (size_t) atomic_get(&h->item_cnt);
2170 size_t bucket_cnt = (1 << h->b->order);
2171 size_t max_items = h->max_load * bucket_cnt;
2172
2173 if (cur_items >= max_items && h->b->order < CHT_MAX_ORDER) {
2174 grow_table(h);
2175 } else if (cur_items <= max_items / 4 && h->b->order > h->min_order) {
2176 shrink_table(h);
2177 } else {
2178 /* Table is just the right size. */
2179 atomic_count_t reqs = atomic_predec(&h->resize_reqs);
2180 done = (reqs == 0);
2181 }
2182 } while (!done);
2183}
2184
2185/** Increases the number of buckets two-fold. Blocks until done. */
2186static void grow_table(cht_t *h)
2187{
2188 if (h->b->order >= CHT_MAX_ORDER)
2189 return;
2190
2191 h->new_b = alloc_buckets(h->b->order + 1, true, false);
2192
2193 /* Failed to alloc a new table - try next time the resizer is run. */
2194 if (!h->new_b)
2195 return;
2196
2197 /* Wait for all readers and updaters to see the initialized new table. */
2198 rcu_synchronize();
2199 size_t old_bucket_cnt = (1 << h->b->order);
2200
2201 /*
2202 * Give updaters a chance to help out with the resize. Do the minimum
2203 * work needed to announce a resize is in progress, ie start moving heads.
2204 */
2205 for (size_t idx = 0; idx < old_bucket_cnt; ++idx) {
2206 start_head_move(&h->b->head[idx]);
2207 }
2208
2209 /* Order start_head_move() wrt complete_head_move(). */
2210 cas_order_barrier();
2211
2212 /* Complete moving heads and split any buckets not yet split by updaters. */
2213 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2214 marked_ptr_t *move_dest_head = &h->new_b->head[grow_idx(old_idx)];
2215 marked_ptr_t *move_src_head = &h->b->head[old_idx];
2216
2217 /* Head move not yet completed. */
2218 if (N_INVALID != get_mark(*move_src_head)) {
2219 complete_head_move(move_src_head, move_dest_head);
2220 }
2221
2222 size_t split_idx = grow_to_split_idx(old_idx);
2223 size_t split_hash = calc_split_hash(split_idx, h->new_b->order);
2224 marked_ptr_t *split_dest_head = &h->new_b->head[split_idx];
2225
2226 split_bucket(h, move_dest_head, split_dest_head, split_hash);
2227 }
2228
2229 /*
2230 * Wait for all updaters to notice the new heads. Once everyone sees
2231 * the invalid old bucket heads they will know a resize is in progress
2232 * and updaters will modify the correct new buckets.
2233 */
2234 rcu_synchronize();
2235
2236 /* Clear the JOIN_FOLLOWS mark and remove the link between the split buckets.*/
2237 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2238 size_t new_idx = grow_idx(old_idx);
2239
2240 cleanup_join_follows(h, &h->new_b->head[new_idx]);
2241 }
2242
2243 /*
2244 * Wait for everyone to notice that buckets were split, ie link connecting
2245 * the join follows and join node has been cut.
2246 */
2247 rcu_synchronize();
2248
2249 /* Clear the JOIN mark and GC any deleted join nodes. */
2250 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2251 size_t new_idx = grow_to_split_idx(old_idx);
2252
2253 cleanup_join_node(h, &h->new_b->head[new_idx]);
2254 }
2255
2256 /* Wait for everyone to see that the table is clear of any resize marks. */
2257 rcu_synchronize();
2258
2259 cht_buckets_t *old_b = h->b;
2260 rcu_assign(h->b, h->new_b);
2261
2262 /* Wait for everyone to start using the new table. */
2263 rcu_synchronize();
2264
2265 free(old_b);
2266
2267 /* Not needed; just for increased readability. */
2268 h->new_b = NULL;
2269}
2270
2271/** Halves the number of buckets. Blocks until done. */
2272static void shrink_table(cht_t *h)
2273{
2274 if (h->b->order <= h->min_order)
2275 return;
2276
2277 h->new_b = alloc_buckets(h->b->order - 1, true, false);
2278
2279 /* Failed to alloc a new table - try next time the resizer is run. */
2280 if (!h->new_b)
2281 return;
2282
2283 /* Wait for all readers and updaters to see the initialized new table. */
2284 rcu_synchronize();
2285
2286 size_t old_bucket_cnt = (1 << h->b->order);
2287
2288 /*
2289 * Give updaters a chance to help out with the resize. Do the minimum
2290 * work needed to announce a resize is in progress, ie start moving heads.
2291 */
2292 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2293 size_t new_idx = shrink_idx(old_idx);
2294
2295 /* This bucket should be moved. */
2296 if (grow_idx(new_idx) == old_idx) {
2297 start_head_move(&h->b->head[old_idx]);
2298 } else {
2299 /* This bucket should join the moved bucket once the move is done.*/
2300 }
2301 }
2302
2303 /* Order start_head_move() wrt complete_head_move(). */
2304 cas_order_barrier();
2305
2306 /* Complete moving heads and join buckets with the moved buckets. */
2307 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2308 size_t new_idx = shrink_idx(old_idx);
2309 size_t move_src_idx = grow_idx(new_idx);
2310
2311 /* This bucket should be moved. */
2312 if (move_src_idx == old_idx) {
2313 /* Head move not yet completed. */
2314 if (N_INVALID != get_mark(h->b->head[old_idx])) {
2315 complete_head_move(&h->b->head[old_idx], &h->new_b->head[new_idx]);
2316 }
2317 } else {
2318 /* This bucket should join the moved bucket. */
2319 size_t split_hash = calc_split_hash(old_idx, h->b->order);
2320 join_buckets(h, &h->b->head[old_idx], &h->new_b->head[new_idx],
2321 split_hash);
2322 }
2323 }
2324
2325 /*
2326 * Wait for all updaters to notice the new heads. Once everyone sees
2327 * the invalid old bucket heads they will know a resize is in progress
2328 * and updaters will modify the correct new buckets.
2329 */
2330 rcu_synchronize();
2331
2332 /* Let everyone know joins are complete and fully visible. */
2333 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2334 size_t move_src_idx = grow_idx(shrink_idx(old_idx));
2335
2336 /* Make the invalid joinee head point to the sentinel. */
2337 if (old_idx != move_src_idx) {
2338 ASSERT(N_INVALID == get_mark(h->b->head[old_idx]));
2339
2340 if (&sentinel != get_next(h->b->head[old_idx]))
2341 h->b->head[old_idx] = make_link(&sentinel, N_INVALID);
2342 }
2343 }
2344
2345 /* TODO: comment on the join node vs the reset joinee head. */
2346 rcu_synchronize();
2347
2348 size_t new_bucket_cnt = (1 << h->new_b->order);
2349
2350 /* Clear the JOIN mark and GC any deleted join nodes. */
2351 for (size_t new_idx = 0; new_idx < new_bucket_cnt; ++new_idx) {
2352 cleanup_join_node(h, &h->new_b->head[new_idx]);
2353 }
2354
2355 /* Wait for everyone to see that the table is clear of any resize marks. */
2356 rcu_synchronize();
2357
2358 cht_buckets_t *old_b = h->b;
2359 rcu_assign(h->b, h->new_b);
2360
2361 /* Wait for everyone to start using the new table. */
2362 rcu_synchronize();
2363
2364 free(old_b);
2365
2366 /* Not needed; just for increased readability. */
2367 h->new_b = NULL;
2368}
2369
2370/** Finds and clears the N_JOIN mark from a node in new_head (if present). */
2371static void cleanup_join_node(cht_t *h, marked_ptr_t *new_head)
2372{
2373 rcu_read_lock();
2374
2375 cht_link_t *cur = get_next(*new_head);
2376
2377 while (cur != &sentinel) {
2378 /* Clear the join node's JN mark - even if it is marked as deleted. */
2379 if (N_JOIN & get_mark(cur->link)) {
2380 clear_join_and_gc(h, cur, new_head);
2381 break;
2382 }
2383
2384 cur = get_next(cur->link);
2385 }
2386
2387 rcu_read_unlock();
2388}
2389
2390/** Clears the join_node's N_JOIN mark and frees it if marked N_DELETED as well. */
2391static void clear_join_and_gc(cht_t *h, cht_link_t *join_node,
2392 marked_ptr_t *new_head)
2393{
2394 ASSERT(join_node != &sentinel);
2395 ASSERT(join_node && (N_JOIN & get_mark(join_node->link)));
2396
2397 bool done;
2398
2399 /* Clear the JN mark. */
2400 do {
2401 marked_ptr_t jn_link = join_node->link;
2402 cht_link_t *next = get_next(jn_link);
2403 /* Clear the JOIN mark but keep the DEL mark if present. */
2404 mark_t cleared_mark = get_mark(jn_link) & N_DELETED;
2405
2406 marked_ptr_t ret =
2407 _cas_link(&join_node->link, jn_link, make_link(next, cleared_mark));
2408
2409 /* Done if the mark was cleared. Retry if a new node was inserted. */
2410 done = (ret == jn_link);
2411 ASSERT(ret == jn_link || (get_mark(ret) & N_JOIN));
2412 } while (!done);
2413
2414 if (!(N_DELETED & get_mark(join_node->link)))
2415 return;
2416
2417 /* The join node had been marked as deleted - GC it. */
2418
2419 /* Clear the JOIN mark before trying to unlink the deleted join node.*/
2420 cas_order_barrier();
2421
2422 size_t jn_hash = node_hash(h, join_node);
2423 do {
2424 bool resizing = false;
2425
2426 wnd_t wnd = {
2427 .ppred = new_head,
2428 .cur = get_next(*new_head)
2429 };
2430
2431 done = find_wnd_and_gc_pred(h, jn_hash, WM_NORMAL, same_node_pred,
2432 join_node, &wnd, &resizing);
2433
2434 ASSERT(!resizing);
2435 } while (!done);
2436}
2437
2438/** Finds a non-deleted node with N_JOIN_FOLLOWS and clears the mark. */
2439static void cleanup_join_follows(cht_t *h, marked_ptr_t *new_head)
2440{
2441 ASSERT(new_head);
2442
2443 rcu_read_lock();
2444
2445 wnd_t wnd = {
2446 .ppred = NULL,
2447 .cur = NULL
2448 };
2449 marked_ptr_t *cur_link = new_head;
2450
2451 /*
2452 * Find the non-deleted node with a JF mark and clear the JF mark.
2453 * The JF node may be deleted and/or the mark moved to its neighbors
2454 * at any time. Therefore, we GC deleted nodes until we find the JF
2455 * node in order to remove stale/deleted JF nodes left behind eg by
2456 * delayed threads that did not yet get a chance to unlink the deleted
2457 * JF node and move its mark.
2458 *
2459 * Note that the head may be marked JF (but never DELETED).
2460 */
2461 while (true) {
2462 bool is_jf_node = N_JOIN_FOLLOWS & get_mark(*cur_link);
2463
2464 /* GC any deleted nodes on the way - even deleted JOIN_FOLLOWS. */
2465 if (N_DELETED & get_mark(*cur_link)) {
2466 ASSERT(cur_link != new_head);
2467 ASSERT(wnd.ppred && wnd.cur && wnd.cur != &sentinel);
2468 ASSERT(cur_link == &wnd.cur->link);
2469
2470 bool dummy;
2471 bool deleted = gc_deleted_node(h, WM_MOVE_JOIN_FOLLOWS, &wnd, &dummy);
2472
2473 /* Failed to GC or collected a deleted JOIN_FOLLOWS. */
2474 if (!deleted || is_jf_node) {
2475 /* Retry from the head of the bucket. */
2476 cur_link = new_head;
2477 continue;
2478 }
2479 } else {
2480 /* Found a non-deleted JF. Clear its JF mark. */
2481 if (is_jf_node) {
2482 cht_link_t *next = get_next(*cur_link);
2483 marked_ptr_t ret =
2484 cas_link(cur_link, next, N_JOIN_FOLLOWS, &sentinel, N_NORMAL);
2485
2486 ASSERT(next == &sentinel
2487 || ((N_JOIN | N_JOIN_FOLLOWS) & get_mark(ret)));
2488
2489 /* Successfully cleared the JF mark of a non-deleted node. */
2490 if (ret == make_link(next, N_JOIN_FOLLOWS)) {
2491 break;
2492 } else {
2493 /*
2494 * The JF node had been deleted or a new node inserted
2495 * right after it. Retry from the head.
2496 */
2497 cur_link = new_head;
2498 continue;
2499 }
2500 } else {
2501 wnd.ppred = cur_link;
2502 wnd.cur = get_next(*cur_link);
2503 }
2504 }
2505
2506 /* We must encounter a JF node before we reach the end of the bucket. */
2507 ASSERT(wnd.cur && wnd.cur != &sentinel);
2508 cur_link = &wnd.cur->link;
2509 }
2510
2511 rcu_read_unlock();
2512}
2513
2514/** Returns the first possible hash following a bucket split point.
2515 *
2516 * In other words the returned hash is the smallest possible hash
2517 * the remainder of the split bucket may contain.
2518 */
2519static inline size_t calc_split_hash(size_t split_idx, size_t order)
2520{
2521 ASSERT(1 <= order && order <= 8 * sizeof(size_t));
2522 return split_idx << (8 * sizeof(size_t) - order);
2523}
2524
2525/** Returns the bucket head index given the table size order and item hash. */
2526static inline size_t calc_bucket_idx(size_t hash, size_t order)
2527{
2528 ASSERT(1 <= order && order <= 8 * sizeof(size_t));
2529 return hash >> (8 * sizeof(size_t) - order);
2530}
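
/*
 * Illustrative sketch only (disabled, not part of the table implementation):
 * the bucket index is simply the top 'order' bits of the mixed hash, and
 * calc_split_hash() is its inverse for the smallest hash a bucket may hold.
 * The function name and the concrete values are made up for this example.
 */
#if 0
static void bucket_idx_example(void)
{
	size_t order = 3;    /* 8 buckets */
	size_t hash = ((size_t)5 << (8 * sizeof(size_t) - order)) | 0x1234;

	/* The top 3 bits of the hash select the bucket. */
	ASSERT(calc_bucket_idx(hash, order) == 5);

	/* The split hash of bucket 5 is the least hash that bucket may hold. */
	ASSERT(calc_split_hash(5, order) <= hash);
	ASSERT(calc_bucket_idx(calc_split_hash(5, order), order) == 5);
}
#endif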
2531
2532/** Returns the index of the new bucket that receives the second half of the split bucket. */
2533static inline size_t grow_to_split_idx(size_t old_idx)
2534{
2535 return grow_idx(old_idx) | 1;
2536}
2537
2538/** Returns the destination index of a bucket head when the table is growing. */
2539static inline size_t grow_idx(size_t idx)
2540{
2541 return idx << 1;
2542}
2543
2544/** Returns the destination index of a bucket head when the table is shrinking.*/
2545static inline size_t shrink_idx(size_t idx)
2546{
2547 return idx >> 1;
2548}
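
/*
 * Illustrative sketch only (disabled, not part of the table implementation):
 * how old and new bucket indices map to each other during a resize. Growing
 * moves old bucket i to new bucket 2i and splits its upper half into new
 * bucket 2i + 1; shrinking joins buckets 2i and 2i + 1 back into bucket i.
 * The function name and the index 5 are arbitrary choices for this example.
 */
#if 0
static void resize_idx_example(void)
{
	/* Growing: old bucket 5 moves to 10; its second half splits into 11. */
	ASSERT(grow_idx(5) == 10);
	ASSERT(grow_to_split_idx(5) == 11);

	/* Shrinking: new bucket 5 receives old buckets 10 (moved) and 11 (joined). */
	ASSERT(shrink_idx(10) == 5);
	ASSERT(shrink_idx(11) == 5);
	ASSERT(grow_idx(shrink_idx(11)) == 10);   /* the move-source bucket */
}
#endif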
2549
2550/** Returns a mixed hash of the search key.*/
2551static inline size_t calc_key_hash(cht_t *h, void *key)
2552{
2553 /* Mimic calc_node_hash. */
2554 return hash_mix(h->op->key_hash(key)) & ~(size_t)1;
2555}
2556
2557/** Returns a memoized mixed hash of the item. */
2558static inline size_t node_hash(cht_t *h, const cht_link_t *item)
2559{
2560 ASSERT(item->hash == h->invalid_hash
2561 || item->hash == sentinel.hash
2562 || item->hash == calc_node_hash(h, item));
2563
2564 return item->hash;
2565}
2566
2567/** Calculates and mixes the hash of the item. */
2568static inline size_t calc_node_hash(cht_t *h, const cht_link_t *item)
2569{
2570 ASSERT(item != &sentinel);
2571 /*
2572 * Clear the lowest order bit in order for sentinel's node hash
2573 * to be the greatest possible.
2574 */
2575 return hash_mix(h->op->hash(item)) & ~(size_t)1;
2576}
2577
2578/** Computes and memoizes the hash of the item. */
2579static inline void memoize_node_hash(cht_t *h, cht_link_t *item)
2580{
2581 item->hash = calc_node_hash(h, item);
2582}
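
/*
 * Illustrative sketch only (disabled, not part of the table implementation):
 * key hashes and node hashes are mixed the same way and both clear bit 0, so
 * a lookup can compare calc_key_hash() of the search key directly against the
 * memoized item->hash. It is assumed here that h->op->hash(item) and
 * h->op->key_hash(key) agree for a matching item/key pair, as the table
 * operations require. The function name is made up for this example.
 */
#if 0
static void node_hash_example(cht_t *h, cht_link_t *item, void *key)
{
	memoize_node_hash(h, item);

	/* Mixed hashes of a matching key and item compare equal. */
	ASSERT(calc_key_hash(h, key) == node_hash(h, item));

	/* Bit 0 is always cleared, which keeps the sentinel's hash (the
	 * greatest possible value) above the hash of any real node. */
	ASSERT((node_hash(h, item) & 1) == 0);
}
#endif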
2583
2584/** Packs the next pointer address and the mark into a single pointer. */
2585static inline marked_ptr_t make_link(const cht_link_t *next, mark_t mark)
2586{
2587 marked_ptr_t ptr = (marked_ptr_t) next;
2588
2589 ASSERT(!(ptr & N_MARK_MASK));
2590 ASSERT(!((unsigned)mark & ~N_MARK_MASK));
2591
2592 return ptr | mark;
2593}
2594
2595/** Strips any marks from the next item link and returns the next item's address.*/
2596static inline cht_link_t * get_next(marked_ptr_t link)
2597{
2598 return (cht_link_t*)(link & ~N_MARK_MASK);
2599}
2600
2601/** Returns the current node's mark stored in the next item link. */
2602static inline mark_t get_mark(marked_ptr_t link)
2603{
2604 return (mark_t)(link & N_MARK_MASK);
2605}
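
/*
 * Illustrative sketch only (disabled, not part of the table implementation):
 * a marked pointer packs the successor's address and the current node's mark
 * into one word, relying on cht_link_t being aligned so the two lowest
 * address bits are free. The function name is made up for this example and
 * the node parameter stands in for any real, properly aligned item.
 */
#if 0
static void marked_ptr_example(cht_link_t *node)
{
	/* Pack the successor pointer together with a deleted mark. */
	marked_ptr_t link = make_link(node, N_DELETED);

	/* Unpacking strips the mark bits from the address and vice versa. */
	ASSERT(get_next(link) == node);
	ASSERT(get_mark(link) == N_DELETED);
	ASSERT(!(N_JOIN & get_mark(link)));
}
#endif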
2606
2607/** Moves the window by one item so that it points to the next item. */
2608static inline void next_wnd(wnd_t *wnd)
2609{
2610 ASSERT(wnd);
2611 ASSERT(wnd->cur);
2612
2613 wnd->last = wnd->cur;
2614 wnd->ppred = &wnd->cur->link;
2615 wnd->cur = get_next(wnd->cur->link);
2616}
2617
2618/** Predicate that matches only exactly the same node. */
2619static bool same_node_pred(void *node, const cht_link_t *item2)
2620{
2621 const cht_link_t *item1 = (const cht_link_t*) node;
2622 return item1 == item2;
2623}
2624
2625/** Compare-and-swaps a next item link. */
2626static inline marked_ptr_t cas_link(marked_ptr_t *link, const cht_link_t *cur_next,
2627 mark_t cur_mark, const cht_link_t *new_next, mark_t new_mark)
2628{
2629 return _cas_link(link, make_link(cur_next, cur_mark),
2630 make_link(new_next, new_mark));
2631}
2632
2633/** Compare-and-swaps a next item link. */
2634static inline marked_ptr_t _cas_link(marked_ptr_t *link, marked_ptr_t cur,
2635 marked_ptr_t new)
2636{
2637 ASSERT(link != &sentinel.link);
2638 /*
2639 * cas(x) operations on the same location x on one cpu must be ordered, but
2640 * do not have to be ordered wrt other cas(y) operations to a different location y
2641 * on the same cpu.
2642 *
2643 * cas(x) must act as a write barrier on x, ie if cas(x) succeeds
2644 * and is observed by another cpu, then all cpus must be able to
2645 * make the effects of cas(x) visible just by issuing a load barrier.
2646 * For example:
2647 * cpu1 cpu2 cpu3
2648 * cas(x, 0 -> 1), succeeds
2649 * cas(x, 0 -> 1), fails
2650 * MB, to order load of x in cas and store to y
2651 * y = 7
2652 * sees y == 7
2653 * loadMB must be enough to make cpu1's cas(x) visible to cpu3, ie x == 1.
2654 *
2655 * If cas() did not work this way:
2656 * a) our head move protocol would not be correct.
2657 * b) freeing an item linked to a moved head after another item was
2658 * inserted in front of it, would require more than one grace period.
2659 *
2660 * Ad (a): In the following example, cpu1 starts moving old_head
2661 * to new_head, cpu2 completes the move and cpu3 notices cpu2
2662 * completed the move before cpu1 gets a chance to notice cpu2
2663 * had already completed the move. Our requirements for cas()
2664 * assume cpu3 will see a valid and mutable value in new_head
2665 * after issuing a load memory barrier once it has determined
2666 * the old_head's value had been successfully moved to new_head
2667 * (because it sees old_head marked invalid).
2668 *
2669 * cpu1 cpu2 cpu3
2670 * cas(old_head, <addr, N>, <addr, Const>), succeeds
2671 * cas-order-barrier
2672 * // Move from old_head to new_head started, now the interesting stuff:
2673 * cas(new_head, <0, Inv>, <addr, N>), succeeds
2674 *
2675 * cas(new_head, <0, Inv>, <addr, N>), but fails
2676 * cas-order-barrier
2677 * cas(old_head, <addr, Const>, <addr, Inv>), succeeds
2678 *
2679 * Sees old_head marked Inv (by cpu2)
2680 * load-MB
2681 * assert(new_head == <addr, N>)
2682 *
2683 * cas-order-barrier
2684 *
2685 * Even though cpu1 did not yet issue a cas-order-barrier, cpu1's store
2686 * to new_head (successful cas()) must be made visible to cpu3 with
2687 * a load memory barrier if cpu1's store to new_head is visible
2688 * on another cpu (cpu2) and that cpu's (cpu2's) store to old_head
2689 * is already visible to cpu3.
2690 */
2691 void *expected = (void*)cur;
2692
2693 /*
2694 * Use the acquire-release model, although we could probably
2695 * get away even with the relaxed memory model due to our use
2696 * of explicit memory barriers.
2697 */
2698 __atomic_compare_exchange_n((void**)link, &expected, (void *)new, false,
2699 __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
2700
2701 return (marked_ptr_t) expected;
2702}
2703
2704/** Orders compare-and-swaps to different memory locations. */
2705static inline void cas_order_barrier(void)
2706{
2707 /* Make sure CAS to different memory locations are ordered. */
2708 write_barrier();
2709}
2710
2711
2712/** @}
2713 */