/*
 * Copyright (c) 2012 Adam Hraska
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 * - Redistributions in binary form must reproduce the above copyright
 *   notice, this list of conditions and the following disclaimer in the
 *   documentation and/or other materials provided with the distribution.
 * - The name of the author may not be used to endorse or promote products
 *   derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/** @addtogroup genericadt
 * @{
 */

/**
 * @file
 * @brief Scalable resizable concurrent lock-free hash table.
 *
 * CHT is a concurrent hash table that is scalable, resizable, and lock-free.
 * resizable = the number of buckets of the table increases or decreases
 *   depending on the average number of elements per bucket (ie the load)
 * scalable = accessing the table from more cpus increases performance
 *   almost linearly
 * lock-free = common operations never block; even if any of the operations
 *   is preempted or interrupted at any time, other operations will still
 *   make forward progress
 *
 * CHT is designed for read-mostly scenarios. Performance degrades as the
 * fraction of updates (insert/remove) increases. Other data structures
 * significantly outperform CHT if the fraction of updates exceeds ~40%.
 *
 * CHT tolerates hardware exceptions and may be accessed from exception
 * handlers as long as the underlying RCU implementation is exception safe.
 *
 * @par Caveats
 *
 * 0) Never assume an item is still in the table.
 * The table may be accessed concurrently; therefore, other threads may
 * insert or remove an item at any time. Do not assume an item is still
 * in the table if cht_find() just returned it to you. Similarly, an
 * item may have already been inserted by the time cht_find() returns NULL.
 *
 * 1) Always use RCU read locks when searching the table.
 * Holding an RCU lock guarantees that an item found in the table remains
 * valid (eg is not freed) even if the item was removed from the table
 * in the meantime by another thread.
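 *
 * A minimal usage sketch (my_item_t and its link member are hypothetical;
 * member_to_inst() is the usual container-of macro):
 * @code
 * rcu_read_lock();
 * cht_link_t *link = cht_find(&h, key);
 * if (link) {
 *     my_item_t *item = member_to_inst(link, my_item_t, link);
 *     // item stays allocated at least until rcu_read_unlock(),
 *     // even if another cpu removes it from the table meanwhile.
 * }
 * rcu_read_unlock();
 * @endcode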
 *
 * 2) Never update values in place.
 * Do not update items in the table in place, ie directly. The changes
 * will not propagate to other readers (on other cpus) immediately or even
 * correctly. Some readers may then encounter items that have only some
 * of their fields changed or are completely inconsistent.
 *
 * Instead consider inserting an updated/changed copy of the item and
 * removing the original item. Or contact the maintainer to provide
 * you with a function that atomically replaces an item with a copy.
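 *
 * A copy-update sketch along those lines (my_item_t and dup_item() are
 * hypothetical; note that readers may briefly see both versions):
 * @code
 * rcu_read_lock();
 * cht_link_t *link = cht_find(&h, key);
 * if (link) {
 *     my_item_t *old = member_to_inst(link, my_item_t, link);
 *     my_item_t *copy = dup_item(old);   // allocate + copy + modify
 *     cht_insert(&h, &copy->link);       // publish the new version
 *     cht_remove_item(&h, &old->link);   // old version freed via rcu
 * }
 * rcu_read_unlock();
 * @endcode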
 *
 * 3) Use cht_insert_unique() instead of checking for duplicates with cht_find().
 * The following code is prone to race conditions:
 * @code
 * if (NULL == cht_find(&h, key)) {
 *     // If another thread inserts an item here, we'll insert a duplicate.
 *     cht_insert(&h, item);
 * }
 * @endcode
 * See cht_insert_unique() on how to correctly fix this.
 *
 *
 * @par Semantics
 *
 * Lazy readers = cht_find_lazy(), cht_find_next_lazy()
 * Readers = lazy readers, cht_find(), cht_find_next()
 * Updates = cht_insert(), cht_insert_unique(), cht_remove_key(),
 *   cht_remove_item()
 *
 * Readers (but not lazy readers) are guaranteed to see the effects
 * of @e completed updates. In other words, if cht_find() is invoked
 * after a cht_insert() @e returned eg on another cpu, cht_find() is
 * guaranteed to see the inserted item.
 *
 * Similarly, updates see the effects of @e completed updates. For example,
 * issuing cht_remove() after a cht_insert() for that key returned (even
 * on another cpu) is guaranteed to remove the inserted item.
 *
 * Reading or updating the table concurrently with other updates
 * always returns consistent data and never corrupts the table.
 * However the effects of concurrent updates may or may not be
 * visible to all other concurrent readers or updaters. Eg, not
 * all readers may see that an item has already been inserted
 * if cht_insert() has not yet returned.
 *
 * Lazy readers are guaranteed to eventually see updates but it
 * may take some time (possibly milliseconds) after the update
 * completes for the change to propagate to lazy readers on all
 * cpus.
 *
 * @par Implementation
 *
 * Collisions in CHT are resolved with chaining. The number of buckets
 * is always a power of 2. Each bucket is represented with a singly linked
 * lock-free list [1]. Items in buckets are sorted by their mixed hashes
 * in ascending order. All buckets are terminated with a single global
 * sentinel node whose mixed hash value is the greatest possible.
 *
 * CHT with 2^k buckets uses the k most significant bits of a hash value
 * to determine the bucket number where an item is to be stored. To
 * avoid storing all items in a single bucket if the user supplied
 * hash function does not produce uniform hashes, hash values are
 * mixed first so that the top bits of a mixed hash change even if hash
 * values differ only in the least significant bits. The mixed hash
 * values are cached in cht_link.hash (which is overwritten once the
 * item is scheduled for removal via rcu_call).
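 *
 * In effect, the bucket index is the mixed hash shifted down so that only
 * its top k bits remain (a sketch of the idea behind calc_bucket_idx() and
 * memoize_node_hash() declared below; hash_mix() comes from <adt/hash.h>):
 * @code
 * // 2^order buckets; keep the top `order` bits of the mixed hash.
 * size_t bucket_idx = hash_mix(raw_hash) >> (8 * sizeof(size_t) - order);
 * @endcode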
 *
 * A new item is inserted before all other existing items in the bucket
 * with the same hash value as the newly inserted item (a la the original
 * lock-free list [2]). Placing new items at the start of a same-hash
 * sequence of items (eg duplicates) allows us to easily check for duplicates
 * in cht_insert_unique(). The function can first check that there are
 * no duplicates of the newly inserted item amongst the items with the
 * same hash as the new item. If there were no duplicates the new item
 * is linked before the same-hash items. Inserting a duplicate while
 * the function is checking for duplicates is detected as a change of
 * the link to the first checked same-hash item (and the search for
 * duplicates can be restarted).
 *
 * @par Table resize algorithm
 *
 * Table resize is based on [3] and [5]. First, a new bucket head array
 * is allocated and initialized. Second, old bucket heads are moved
 * to the new bucket head array with the protocol mentioned in [5].
 * At this point updaters start using the new bucket heads. Third,
 * buckets are split (or joined) so that the table can make use of
 * the extra bucket head slots in the new array (or stop wasting space
 * with the unnecessary extra slots in the old array). Splitting
 * or joining buckets employs a custom protocol. Last, the new array
 * replaces the original bucket array.
 *
 * A single background work item (of the system work queue) guides
 * resizing of the table. If an updater detects that the bucket it
 * is about to access is undergoing a resize (ie its head is moving
 * or it needs to be split/joined), it helps out and completes the
 * head move or the bucket split/join.
 *
 * The table always grows or shrinks by a factor of 2. Because items
 * are assigned a bucket based on the top k bits of their mixed hash
 * values, when growing the table each bucket is split into two buckets
 * and all items of the two new buckets come from the single bucket in the
 * original table. Ie items from separate buckets in the original table
 * never intermix in the new buckets. Moreover,
 * since the buckets are sorted by their mixed hash values the items
 * at the beginning of the old bucket will end up in the first new
 * bucket while all the remaining items of the old bucket will end up
 * in the second new bucket. Therefore, there is a single point where
 * to split the linked list of the old bucket into two correctly sorted
 * linked lists of the new buckets:
 *
 *                             .- bucket split
 *                             |
 *              <-- first --> v <-- second -->
 *   [old] --> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
 *               ^                 ^
 *   [new0] -----+                 |
 *   [new1] -----------------------+
 *
 * Resize in greater detail:
 *
 * a) First, a resizer (a single background system work queue item
 * in charge of resizing the table) allocates and initializes a new
 * bucket head array. New bucket heads are pointed to the sentinel
 * and marked Invalid (in the lower order bits of the pointer to the
 * next item, ie the sentinel in this case):
 *
 *   [old, N] --> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
 *                                                       ^  ^
 *   [new0, Inv] ----------------------------------------+  |
 *   [new1, Inv] -------------------------------------------+
 *
 *
 * b) Second, the resizer starts moving old bucket heads with the following
 * lock-free protocol (from [5]) where cas(variable, expected_val, new_val)
 * is short for compare-and-swap:
 *
 *   old head      new0 head      transition to next state
 *   --------      ---------      ------------------------
 *   addr, N       sentinel, Inv  cas(old, (addr, N), (addr, Const))
 *                                .. mark the old head as immutable, so that
 *                                   updaters do not relink it to other nodes
 *                                   until the head move is done.
 *   addr, Const   sentinel, Inv  cas(new0, (sentinel, Inv), (addr, N))
 *                                .. move the address to the new head and mark
 *                                   the new head normal so updaters can start
 *                                   using it.
 *   addr, Const   addr, N        cas(old, (addr, Const), (addr, Inv))
 *                                .. mark the old head Invalid to signify
 *                                   the head move is done.
 *   addr, Inv     addr, N
 *
 * Notice that concurrent updaters may step in at any point and correctly
 * complete the head move without disrupting the resizer. At worst, the
 * resizer or other concurrent updaters will attempt a number of CAS() that
 * will correctly fail.
 *
 *   [old, Inv] -> [00b] -> [01b] -> [10b] -> [11b] -> sentinel
 *                   ^                                   ^
 *   [new0, N] ------+                                   |
 *   [new1, Inv] ----------------------------------------+
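 *
 * In code, the protocol is roughly the following sequence of CAS attempts,
 * each of which may be executed (and may fail benignly) on any cpu that
 * helps out (a sketch of start_head_move()/help_head_move() declared below;
 * `addr` stands for the old head's next-item address):
 * @code
 * cas_link(old, addr, N_NORMAL, addr, N_CONST);          // freeze old head
 * cas_link(new0, &sentinel, N_INVALID, addr, N_NORMAL);  // publish in new head
 * cas_link(old, addr, N_CONST, addr, N_INVALID);         // retire old head
 * @endcode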
 *
 *
 * c) Third, buckets are split if the table is growing; or joined if
 * shrinking (by the resizer or updaters depending on whoever accesses
 * the bucket first). See split_bucket() and join_buckets() for details.
 *
 * 1) Mark the last item of new0 with JOIN_FOLLOWS:
 *   [old, Inv] -> [00b] -> [01b, JF] -> [10b] -> [11b] -> sentinel
 *                   ^                                       ^
 *   [new0, N] ------+                                       |
 *   [new1, Inv] --------------------------------------------+
 *
 * 2) Mark the first item of new1 with JOIN_NODE:
 *   [old, Inv] -> [00b] -> [01b, JF] -> [10b, JN] -> [11b] -> sentinel
 *                   ^                                           ^
 *   [new0, N] ------+                                           |
 *   [new1, Inv] ------------------------------------------------+
 *
 * 3) Point new1 to the join-node and mark new1 NORMAL.
 *   [old, Inv] -> [00b] -> [01b, JF] -> [10b, JN] -> [11b] -> sentinel
 *                   ^                      ^
 *   [new0, N] ------+                      |
 *   [new1, N] -----------------------------+
 *
 *
 * d) Fourth, the resizer cleans up extra marks added during bucket
 * splits/joins but only when it is sure all updaters are accessing
 * the table via the new bucket heads only (ie it is certain there
 * are no delayed updaters unaware of the resize and accessing the
 * table via the old bucket head).
 *
 *   [old, Inv] ---+
 *                 v
 *   [new0, N] --> [00b] -> [01b, N] ---+
 *                                      v
 *   [new1, N] --> [10b, N] -> [11b] -> sentinel
 *
 *
 * e) Last, the resizer publishes the new bucket head array for everyone
 * to see and use. This signals the end of the resize and the old bucket
 * array is freed.
 *
 *
 * To understand details of how the table is resized, read [1, 3, 5]
 * and comments in join_buckets(), split_bucket().
 *
 *
 * [1] High performance dynamic lock-free hash tables and list-based sets,
 *     Michael, 2002
 *     http://www.research.ibm.com/people/m/michael/spaa-2002.pdf
 * [2] Lock-free linked lists using compare-and-swap,
 *     Valois, 1995
 *     http://people.csail.mit.edu/bushl2/rpi/portfolio/lockfree-grape/documents/lock-free-linked-lists.pdf
 * [3] Resizable, scalable, concurrent hash tables via relativistic programming,
 *     Triplett, 2011
 *     http://www.usenix.org/event/atc11/tech/final_files/Triplett.pdf
 * [4] Split-ordered Lists: Lock-free Extensible Hash Tables,
 *     Shavit, 2006
 *     http://www.cs.ucf.edu/~dcm/Teaching/COT4810-Spring2011/Literature/SplitOrderedLists.pdf
 * [5] Towards a Scalable Non-blocking Coding Style,
 *     Click, 2008
 *     http://www.azulsystems.com/events/javaone_2008/2008_CodingNonBlock.pdf
 */

#include <adt/cht.h>
#include <adt/hash.h>
#include <assert.h>
#include <mm/slab.h>
#include <barrier.h>
#include <atomic.h>
#include <synch/rcu.h>

/* Logarithm of the min bucket count; must be at least 3. 2^6 == 64 buckets. */
#define CHT_MIN_ORDER 6
/* Logarithm of the max bucket count. */
#define CHT_MAX_ORDER (8 * sizeof(size_t))
/* Minimum number of hash table buckets. */
#define CHT_MIN_BUCKET_CNT (1 << CHT_MIN_ORDER)
/* Does not have to be a power of 2. */
#define CHT_MAX_LOAD 2
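/*
 * E.g. with the minimum of 2^6 == 64 buckets and CHT_MAX_LOAD == 2, the
 * table is scheduled to grow once it holds roughly 128 items, ie once the
 * average bucket load reaches CHT_MAX_LOAD (see cht_create()'s max_load).
 */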

typedef cht_ptr_t marked_ptr_t;
typedef bool (*equal_pred_t)(void *arg, const cht_link_t *item);

/** The following marks tag items and bucket heads.
 *
 * They are stored in the two low order bits of the next item pointers.
 * Some marks may be combined. Some marks share the same binary value and
 * are distinguished only by context (eg bucket head vs an ordinary item),
 * in particular by walk_mode_t.
 */
typedef enum mark {
        /** Normal non-deleted item or a valid bucket head. */
        N_NORMAL = 0,
        /** Logically deleted item that might have already been unlinked.
         *
         * May be combined with N_JOIN and N_JOIN_FOLLOWS. Applicable only
         * to items; never to bucket heads.
         *
         * Once marked deleted an item remains marked deleted.
         */
        N_DELETED = 1,
        /** Immutable bucket head.
         *
         * The bucket is being moved or joined with another and its (old) head
         * must not be modified.
         *
         * May be combined with N_INVALID. Applicable only to old bucket heads,
         * ie cht_t.b and not cht_t.new_b.
         */
        N_CONST = 1,
        /** Invalid bucket head. The bucket head must not be modified.
         *
         * Old bucket heads (ie cht_t.b) are marked invalid if they have
         * already been moved to cht_t.new_b or if the bucket had already
         * been merged with another when shrinking the table. New bucket
         * heads (ie cht_t.new_b) are marked invalid if the old bucket had
         * not yet been moved or if an old bucket had not yet been split
         * when growing the table.
         */
        N_INVALID = 3,
        /** The item is a join node, ie it joins two buckets.
         *
         * A join node is either the first node of the second part of
         * a bucket to be split; or it is the first node of the bucket
         * to be merged into/appended to/joined with another bucket.
         *
         * May be combined with N_DELETED. Applicable only to items, never
         * to bucket heads.
         *
         * Join nodes are referred to from two different buckets and may,
         * therefore, not be safely/atomically unlinked from both buckets.
         * As a result join nodes are not unlinked but rather just marked
         * deleted. Once resize completes join nodes marked deleted are
         * garbage collected.
         */
        N_JOIN = 2,
        /** The next node is a join node and will soon be marked so.
         *
         * A join-follows node is the last node of the first part of a bucket
         * that is to be split, ie it is the last node that will remain
         * in the same bucket after splitting it.
         *
         * May be combined with N_DELETED. Applicable to items as well
         * as to bucket heads of the bucket to be split (but only in cht_t.new_b).
         */
        N_JOIN_FOLLOWS = 2,
        /** Bit mask to filter out the address to the next item from the next ptr. */
        N_MARK_MASK = 3
} mark_t;
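
/*
 * A mark occupies the two low-order bits of a marked_ptr_t; the mark and
 * the next-item pointer are extracted roughly as follows (a sketch of the
 * get_mark() and get_next() helpers declared below):
 *
 *   mark_t get_mark(marked_ptr_t link) { return link & N_MARK_MASK; }
 *   cht_link_t *get_next(marked_ptr_t link)
 *   {
 *       return (cht_link_t *)(link & ~(marked_ptr_t)N_MARK_MASK);
 *   }
 */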

/** Determines how node marks in a bucket chain are to be interpreted. */
typedef enum walk_mode {
        /** The table is not resizing. */
        WM_NORMAL = 4,
        /** The table is undergoing a resize. Join nodes may be encountered. */
        WM_LEAVE_JOIN,
        /** The table is growing. A join-follows node may be encountered. */
        WM_MOVE_JOIN_FOLLOWS
} walk_mode_t;

/** Bucket position window. */
typedef struct wnd {
        /** Pointer to cur's predecessor. */
        marked_ptr_t *ppred;
        /** Current item. */
        cht_link_t *cur;
        /** Last encountered item. Deleted or not. */
        cht_link_t *last;
} wnd_t;

/* Sentinel node used by all buckets. Stores the greatest possible hash value. */
static const cht_link_t sentinel = {
        /* NULL and N_NORMAL */
        .link = 0 | N_NORMAL,
        .hash = -1
};

static size_t size_to_order(size_t bucket_cnt, size_t min_order);
static cht_buckets_t *alloc_buckets(size_t order, bool set_invalid,
    bool can_block);
static inline cht_link_t *find_lazy(cht_t *h, void *key);
static cht_link_t *search_bucket(cht_t *h, marked_ptr_t head, void *key,
    size_t search_hash);
static cht_link_t *find_resizing(cht_t *h, void *key, size_t hash,
    marked_ptr_t old_head, size_t old_idx);
static bool insert_impl(cht_t *h, cht_link_t *item, cht_link_t **dup_item);
static bool insert_at(cht_link_t *item, const wnd_t *wnd, walk_mode_t walk_mode,
    bool *resizing);
static bool has_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
    cht_link_t *cur, cht_link_t **dup_item);
static cht_link_t *find_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
    cht_link_t *start);
static bool remove_pred(cht_t *h, size_t hash, equal_pred_t pred, void *pred_arg);
static bool delete_at(cht_t *h, wnd_t *wnd, walk_mode_t walk_mode,
    bool *deleted_but_gc, bool *resizing);
static bool mark_deleted(cht_link_t *cur, walk_mode_t walk_mode, bool *resizing);
static bool unlink_from_pred(wnd_t *wnd, walk_mode_t walk_mode, bool *resizing);
static bool find_wnd_and_gc_pred(cht_t *h, size_t hash, walk_mode_t walk_mode,
    equal_pred_t pred, void *pred_arg, wnd_t *wnd, bool *resizing);
static bool find_wnd_and_gc(cht_t *h, size_t hash, walk_mode_t walk_mode,
    wnd_t *wnd, bool *resizing);
static bool gc_deleted_node(cht_t *h, walk_mode_t walk_mode, wnd_t *wnd,
    bool *resizing);
static bool join_completed(cht_t *h, const wnd_t *wnd);
static void upd_resizing_head(cht_t *h, size_t hash, marked_ptr_t **phead,
    bool *join_finishing, walk_mode_t *walk_mode);
static void item_removed(cht_t *h);
static void item_inserted(cht_t *h);
static void free_later(cht_t *h, cht_link_t *item);
static void help_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head);
static void start_head_move(marked_ptr_t *psrc_head);
static void mark_const(marked_ptr_t *psrc_head);
static void complete_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head);
static void split_bucket(cht_t *h, marked_ptr_t *psrc_head,
    marked_ptr_t *pdest_head, size_t split_hash);
static void mark_join_follows(cht_t *h, marked_ptr_t *psrc_head,
    size_t split_hash, wnd_t *wnd);
static void mark_join_node(cht_link_t *join_node);
static void join_buckets(cht_t *h, marked_ptr_t *psrc_head,
    marked_ptr_t *pdest_head, size_t split_hash);
static void link_to_join_node(cht_t *h, marked_ptr_t *pdest_head,
    cht_link_t *join_node, size_t split_hash);
static void resize_table(work_t *arg);
static void grow_table(cht_t *h);
static void shrink_table(cht_t *h);
static void cleanup_join_node(cht_t *h, marked_ptr_t *new_head);
static void clear_join_and_gc(cht_t *h, cht_link_t *join_node,
    marked_ptr_t *new_head);
static void cleanup_join_follows(cht_t *h, marked_ptr_t *new_head);
static marked_ptr_t make_link(const cht_link_t *next, mark_t mark);
static cht_link_t *get_next(marked_ptr_t link);
static mark_t get_mark(marked_ptr_t link);
static void next_wnd(wnd_t *wnd);
static bool same_node_pred(void *node, const cht_link_t *item2);
static size_t calc_key_hash(cht_t *h, void *key);
static size_t node_hash(cht_t *h, const cht_link_t *item);
static size_t calc_node_hash(cht_t *h, const cht_link_t *item);
static void memoize_node_hash(cht_t *h, cht_link_t *item);
static size_t calc_split_hash(size_t split_idx, size_t order);
static size_t calc_bucket_idx(size_t hash, size_t order);
static size_t grow_to_split_idx(size_t old_idx);
static size_t grow_idx(size_t idx);
static size_t shrink_idx(size_t idx);
static marked_ptr_t cas_link(marked_ptr_t *link, const cht_link_t *cur_next,
    mark_t cur_mark, const cht_link_t *new_next, mark_t new_mark);
static marked_ptr_t _cas_link(marked_ptr_t *link, marked_ptr_t cur,
    marked_ptr_t new);
static void cas_order_barrier(void);

static void dummy_remove_callback(cht_link_t *item)
{
        /* empty */
}

/** Creates a concurrent hash table.
 *
 * @param h  Valid pointer to a cht_t instance.
 * @param op Item specific operations. All operations are compulsory.
 * @return True if successfully created the table. False otherwise.
 */
bool cht_create_simple(cht_t *h, cht_ops_t *op)
{
        return cht_create(h, 0, 0, 0, false, op);
}

/** Creates a concurrent hash table.
 *
 * @param h         Valid pointer to a cht_t instance.
 * @param init_size The initial number of buckets the table should contain.
 *                  The table may be shrunk below this value if deemed necessary.
 *                  Uses the default value if 0.
 * @param min_size  Minimum number of buckets that the table should contain.
 *                  The number of buckets never drops below this value,
 *                  although it may be rounded up internally as appropriate.
 *                  Uses the default value if 0.
 * @param max_load  Maximum average number of items per bucket that is allowed
 *                  before the table grows.
 * @param can_block If true creating the table blocks until enough memory
 *                  is available (possibly indefinitely). Otherwise,
 *                  table creation does not block and returns immediately
 *                  even if not enough memory is available.
 * @param op        Item specific operations. All operations are compulsory.
 * @return True if successfully created the table. False otherwise.
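 *
 * A setup sketch (my_item_t, the my_* callbacks, and the key type are
 * hypothetical; the callback signatures follow how cht.c invokes them):
 * @code
 * static size_t my_hash(const cht_link_t *item);      // hash of an item
 * static size_t my_key_hash(void *key);               // hash of a key
 * static bool my_equal(const cht_link_t *item1, const cht_link_t *item2);
 * static bool my_key_equal(void *key, const cht_link_t *item);
 * static void my_removed(cht_link_t *item);           // free the item
 *
 * static cht_ops_t my_ops = {
 *     .hash = my_hash,
 *     .key_hash = my_key_hash,
 *     .equal = my_equal,
 *     .key_equal = my_key_equal,
 *     .remove_callback = my_removed,
 * };
 *
 * cht_t h;
 * if (!cht_create_simple(&h, &my_ops))
 *     return ENOMEM;
 * @endcode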
 */
bool cht_create(cht_t *h, size_t init_size, size_t min_size, size_t max_load,
    bool can_block, cht_ops_t *op)
{
        assert(h);
        assert(op && op->hash && op->key_hash && op->equal && op->key_equal);
        /* Memoized hashes are stored in the rcu_link.func function pointer. */
        static_assert(sizeof(size_t) == sizeof(rcu_func_t), "");
        assert(sentinel.hash == (uintptr_t)sentinel.rcu_link.func);

        /* All operations are compulsory. */
        if (!op || !op->hash || !op->key_hash || !op->equal || !op->key_equal)
                return false;

        size_t min_order = size_to_order(min_size, CHT_MIN_ORDER);
        size_t order = size_to_order(init_size, min_order);

        h->b = alloc_buckets(order, false, can_block);

        if (!h->b)
                return false;

        h->max_load = (max_load == 0) ? CHT_MAX_LOAD : max_load;
        h->min_order = min_order;
        h->new_b = NULL;
        h->op = op;
        atomic_store(&h->item_cnt, 0);
        atomic_store(&h->resize_reqs, 0);

        if (NULL == op->remove_callback) {
                h->op->remove_callback = dummy_remove_callback;
        }

        /*
         * Cached item hashes are stored in item->rcu_link.func. Once the item
         * is deleted rcu_link.func will contain the value of invalid_hash.
         */
        h->invalid_hash = (uintptr_t)h->op->remove_callback;

        /* Ensure the initialization takes place before we start using the table. */
        write_barrier();

        return true;
}

/** Allocates and initializes 2^order buckets.
 *
 * All bucket heads are initialized to point to the sentinel node.
 *
 * @param order       The number of buckets to allocate is 2^order.
 * @param set_invalid Bucket heads are marked invalid if true; otherwise
 *                    they are marked N_NORMAL.
 * @param can_block   If true memory allocation blocks until enough memory
 *                    is available (possibly indefinitely). Otherwise,
 *                    memory allocation does not block.
 * @return Newly allocated and initialized buckets or NULL if not enough memory.
 */
static cht_buckets_t *alloc_buckets(size_t order, bool set_invalid, bool can_block)
{
        size_t bucket_cnt = (1 << order);
        size_t bytes =
            sizeof(cht_buckets_t) + (bucket_cnt - 1) * sizeof(marked_ptr_t);
        cht_buckets_t *b = can_block ? nfmalloc(bytes) : malloc(bytes);

        if (!b)
                return NULL;

        b->order = order;

        marked_ptr_t head_link = set_invalid ?
            make_link(&sentinel, N_INVALID) :
            make_link(&sentinel, N_NORMAL);

        for (size_t i = 0; i < bucket_cnt; ++i) {
                b->head[i] = head_link;
        }

        return b;
}

/** Returns the smallest k such that bucket_cnt <= 2^k and min_order <= k. */
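/* E.g. size_to_order(100, 6) == 7, because 2^6 == 64 < 100 <= 128 == 2^7. */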
static size_t size_to_order(size_t bucket_cnt, size_t min_order)
{
        size_t order = min_order;

        /* Find a power of two such that bucket_cnt <= 2^order */
        do {
                if (bucket_cnt <= ((size_t)1 << order))
                        return order;

                ++order;
        } while (order < CHT_MAX_ORDER);

        return order;
}

/** Destroys a CHT successfully created via cht_create().
 *
 * Waits for all outstanding concurrent operations to complete and
 * frees internally allocated resources. The table is, however, not cleared
 * and items already present in the table (if any) are leaked.
 */
void cht_destroy(cht_t *h)
{
        cht_destroy_unsafe(h);

        /* You must clear the table of items. Otherwise cht_destroy will leak. */
        assert(atomic_load(&h->item_cnt) == 0);
}

/** Destroys a successfully created CHT but does no error checking. */
void cht_destroy_unsafe(cht_t *h)
{
        /* Wait for resize to complete. */
        while (0 < atomic_load(&h->resize_reqs)) {
                rcu_barrier();
        }

        /* Wait for all remove_callback()s to complete. */
        rcu_barrier();

        free(h->b);
        h->b = NULL;
}

/** Returns the first item equal to the search key or NULL if not found.
 *
 * The call must be enclosed in a rcu_read_lock()/unlock() pair. The
 * returned item is guaranteed to be allocated until rcu_read_unlock(),
 * although the item may be concurrently removed from the table by another
 * cpu.
 *
 * Further items matching the key may be retrieved via cht_find_next().
 *
 * cht_find() sees the effects of any completed cht_remove(), cht_insert().
 * If a concurrent remove or insert had not yet completed cht_find() may
 * or may not see the effects of it (eg it may find an item being removed).
 *
 * @param h   CHT to operate on.
 * @param key Search key as defined by cht_ops_t.key_equal() and .key_hash().
 * @return First item equal to the key or NULL if such an item does not exist.
 */
cht_link_t *cht_find(cht_t *h, void *key)
{
        /* Make the most recent changes to the table visible. */
        read_barrier();
        return cht_find_lazy(h, key);
}

/** Returns the first item equal to the search key or NULL if not found.
 *
 * Unlike cht_find(), cht_find_lazy() may not see the effects of
 * cht_remove() or cht_insert() even though they have already completed.
 * It may take a couple of milliseconds for those changes to propagate
 * and become visible to cht_find_lazy(). On the other hand, cht_find_lazy()
 * operates a bit faster than cht_find().
 *
 * See cht_find() for more details.
 */
cht_link_t *cht_find_lazy(cht_t *h, void *key)
{
        return find_lazy(h, key);
}

/** Finds the first item equal to the search key. */
static inline cht_link_t *find_lazy(cht_t *h, void *key)
{
        assert(h);
        /* See docs to cht_find() and cht_find_lazy(). */
        assert(rcu_read_locked());

        size_t hash = calc_key_hash(h, key);

        cht_buckets_t *b = rcu_access(h->b);
        size_t idx = calc_bucket_idx(hash, b->order);
        /*
         * No need for access_once. b->head[idx] will point to an allocated node
         * even if marked invalid until we exit the rcu read section.
         */
        marked_ptr_t head = b->head[idx];

        /* Undergoing a resize - take the slow path. */
        if (N_INVALID == get_mark(head))
                return find_resizing(h, key, hash, head, idx);

        return search_bucket(h, head, key, hash);
}

/** Returns the next item matching \a item.
 *
 * Must be enclosed in a rcu_read_lock()/unlock() pair. Effects of
 * completed cht_remove(), cht_insert() are guaranteed to be visible
 * to cht_find_next().
 *
 * See cht_find() for more details.
 */
cht_link_t *cht_find_next(cht_t *h, const cht_link_t *item)
{
        /* Make the most recent changes to the table visible. */
        read_barrier();
        return cht_find_next_lazy(h, item);
}

/** Returns the next item matching \a item.
 *
 * Must be enclosed in a rcu_read_lock()/unlock() pair. Effects of
 * completed cht_remove(), cht_insert() may or may not be visible
 * to cht_find_next_lazy().
 *
 * See cht_find_lazy() for more details.
 */
cht_link_t *cht_find_next_lazy(cht_t *h, const cht_link_t *item)
{
        assert(h);
        assert(rcu_read_locked());
        assert(item);

        return find_duplicate(h, item, calc_node_hash(h, item), get_next(item->link));
}

/** Searches the bucket at head for key using search_hash. */
static inline cht_link_t *search_bucket(cht_t *h, marked_ptr_t head, void *key,
    size_t search_hash)
{
        /*
         * It is safe to access nodes even outside of this bucket (eg when
         * splitting the bucket). The resizer makes sure that any node we
         * may find by following the next pointers is allocated.
         */

        cht_link_t *cur = NULL;
        marked_ptr_t prev = head;

try_again:
        /* Filter out items with different hashes. */
        do {
                cur = get_next(prev);
                assert(cur);
                prev = cur->link;
        } while (node_hash(h, cur) < search_hash);

        /*
         * Only search for an item with an equal key if cur is not the sentinel
         * node or a node with a different hash.
         */
        while (node_hash(h, cur) == search_hash) {
                if (h->op->key_equal(key, cur)) {
                        if (!(N_DELETED & get_mark(cur->link)))
                                return cur;
                }

                cur = get_next(cur->link);
                assert(cur);
        }

        /*
         * In the unlikely case that we have encountered a node whose cached
         * hash has been overwritten due to a pending rcu_call for it, skip
         * the node and try again.
         */
        if (node_hash(h, cur) == h->invalid_hash) {
                prev = cur->link;
                goto try_again;
        }

        return NULL;
}

/** Searches for the key while the table is undergoing a resize. */
static cht_link_t *find_resizing(cht_t *h, void *key, size_t hash,
    marked_ptr_t old_head, size_t old_idx)
{
        assert(N_INVALID == get_mark(old_head));
        assert(h->new_b);

        size_t new_idx = calc_bucket_idx(hash, h->new_b->order);
        marked_ptr_t new_head = h->new_b->head[new_idx];
        marked_ptr_t search_head = new_head;

        /* Growing. */
        if (h->b->order < h->new_b->order) {
                /*
                 * Old bucket head is invalid, so it must have been already
                 * moved. Make the new head visible if still not visible, ie
                 * invalid.
                 */
                if (N_INVALID == get_mark(new_head)) {
                        /*
                         * We should be searching a newly added bucket but the old
                         * moved bucket has not yet been split (it's marked invalid)
                         * or we have not yet seen the split.
                         */
                        if (grow_idx(old_idx) != new_idx) {
                                /*
                                 * Search the moved bucket. It is guaranteed to contain
                                 * items of the newly added bucket that were present
                                 * before the moved bucket was split.
                                 */
                                new_head = h->new_b->head[grow_idx(old_idx)];
                        }

                        /* new_head is now the moved bucket, either valid or invalid. */

                        /*
                         * The old bucket was definitely moved to new_head but the
                         * change of new_head had not yet propagated to this cpu.
                         */
                        if (N_INVALID == get_mark(new_head)) {
                                /*
                                 * We could issue a read_barrier() and make the now valid
                                 * moved bucket head new_head visible, but instead fall back
                                 * on using the old bucket. Although the old bucket head is
                                 * invalid, it points to a node that is allocated and in the
                                 * right bucket. Before the node can be freed, it must be
                                 * unlinked from the head (or another item after that item
                                 * modified the new_head) and a grace period must elapse.
                                 * As a result had the node been already freed the grace
                                 * period preceding the free() would make the unlink and
                                 * any changes to new_head visible. Therefore, it is safe
                                 * to use the node pointed to from the old bucket head.
                                 */

                                search_head = old_head;
                        } else {
                                search_head = new_head;
                        }
                }

                return search_bucket(h, search_head, key, hash);
        } else if (h->b->order > h->new_b->order) {
                /* Shrinking. */

                /* Index of the bucket in the old table that was moved. */
                size_t move_src_idx = grow_idx(new_idx);
                marked_ptr_t moved_old_head = h->b->head[move_src_idx];

                /*
                 * h->b->head[move_src_idx] had already been moved to new_head
                 * but the change to new_head had not yet propagated to us.
                 */
                if (N_INVALID == get_mark(new_head)) {
                        /*
                         * new_head is definitely valid and we could make it visible
                         * to this cpu with a read_barrier(). Instead, use the bucket
                         * in the old table that was moved even though it is now marked
                         * as invalid. The node it points to must be allocated because
                         * a grace period would have to elapse before it could be freed;
                         * and the grace period would make the now valid new_head
                         * visible to all cpus.
                         *
                         * Note that move_src_idx may not be the same as old_idx.
                         * If move_src_idx != old_idx then old_idx is the bucket
                         * in the old table that is not moved but instead it is
                         * appended to the moved bucket, ie it is added at the tail
                         * of new_head. In that case an invalid old_head notes that
                         * it had already been merged into (the moved) new_head.
                         * We will try to search that bucket first because it
                         * may contain some newly added nodes after the bucket
                         * join. Moreover, the bucket joining link may already be
                         * visible even if new_head is not. Therefore, if we're
                         * lucky we'll find the item via moved_old_head. In any
                         * case, we'll retry in proper old_head if not found.
                         */
                        search_head = moved_old_head;
                }

                cht_link_t *ret = search_bucket(h, search_head, key, hash);

                if (ret)
                        return ret;
                /*
                 * Bucket old_head was already joined with moved_old_head
                 * in the new table but we have not yet seen change of the
                 * joining link (or the item is not in the table).
                 */
                if (move_src_idx != old_idx && get_next(old_head) != &sentinel) {
                        /*
                         * Note that old_head (the bucket to be merged into new_head)
                         * points to an allocated join node (if non-null) even if marked
                         * invalid. Before the resizer lets join nodes be unlinked
                         * (and freed) it sets old_head to NULL and waits for a grace period.
                         * So either the invalid old_head points to a join node; or old_head
                         * is null and we would have seen a completed bucket join while
                         * traversing search_head.
                         */
                        assert(N_JOIN & get_mark(get_next(old_head)->link));
                        return search_bucket(h, old_head, key, hash);
                }

                return NULL;
        } else {
                /*
                 * Resize is almost done. The resizer is waiting to make
                 * sure all cpus see that the new table replaced the old one.
                 */
                assert(h->b->order == h->new_b->order);
                /*
                 * The resizer must ensure all new bucket heads are visible before
                 * replacing the old table.
                 */
                assert(N_NORMAL == get_mark(new_head));
                return search_bucket(h, new_head, key, hash);
        }
}

/** Inserts an item. Succeeds even if an equal item is already present. */
void cht_insert(cht_t *h, cht_link_t *item)
{
        insert_impl(h, item, NULL);
}

/** Inserts a unique item. Returns false if an equal item was already present.
 *
 * Use this function to atomically check if an equal/duplicate item had
 * not yet been inserted into the table and to insert this item into the
 * table.
 *
 * The following is @e NOT thread-safe, so do not use:
 * @code
 * if (!cht_find(h, key)) {
 *     // A concurrent insert here may go unnoticed by cht_find() above.
 *     item = malloc(..);
 *     cht_insert(h, item);
 *     // Now we may have two items with equal search keys.
 * }
 * @endcode
 *
 * Replace such code with:
 * @code
 * item = malloc(..);
 * if (!cht_insert_unique(h, item, &dup_item)) {
 *     // Whoops, someone beat us to it - an equal item 'dup_item'
 *     // had already been inserted.
 *     free(item);
 * } else {
 *     // Successfully inserted the item and we are guaranteed that
 *     // there are no other equal items.
 * }
 * @endcode
 *
 */
bool cht_insert_unique(cht_t *h, cht_link_t *item, cht_link_t **dup_item)
{
        assert(rcu_read_locked());
        assert(dup_item);
        return insert_impl(h, item, dup_item);
}

/** Inserts the item into the table and checks for duplicates if dup_item. */
static bool insert_impl(cht_t *h, cht_link_t *item, cht_link_t **dup_item)
{
        rcu_read_lock();

        cht_buckets_t *b = rcu_access(h->b);
        memoize_node_hash(h, item);
        size_t hash = node_hash(h, item);
        size_t idx = calc_bucket_idx(hash, b->order);
        marked_ptr_t *phead = &b->head[idx];

        bool resizing = false;
        bool inserted = false;

        do {
                walk_mode_t walk_mode = WM_NORMAL;
                bool join_finishing;

                resizing = resizing || (N_NORMAL != get_mark(*phead));

                /* The table is resizing. Get the correct bucket head. */
                if (resizing) {
                        upd_resizing_head(h, hash, &phead, &join_finishing, &walk_mode);
                }

                wnd_t wnd = {
                        .ppred = phead,
                        .cur = get_next(*phead),
                        .last = NULL
                };

                if (!find_wnd_and_gc(h, hash, walk_mode, &wnd, &resizing)) {
                        /* Could not GC a node; or detected an unexpected resize. */
                        continue;
                }

                if (dup_item && has_duplicate(h, item, hash, wnd.cur, dup_item)) {
                        rcu_read_unlock();
                        return false;
                }

                inserted = insert_at(item, &wnd, walk_mode, &resizing);
        } while (!inserted);

        rcu_read_unlock();

        item_inserted(h);
        return true;
}

/** Inserts item between wnd.ppred and wnd.cur.
 *
 * @param item      Item to link to wnd.ppred and wnd.cur.
 * @param wnd       The item will be inserted before wnd.cur. Wnd.ppred
 *                  must be N_NORMAL.
 * @param walk_mode Bucket chain walk mode.
 * @param resizing  Set to true only if the table is undergoing resize
 *                  and it was not expected (ie walk_mode == WM_NORMAL).
 * @return True if the item was successfully linked to wnd.ppred. False
 *         if the whole insert operation must be retried because the
 *         predecessor of wnd.cur has changed.
 */
inline static bool insert_at(cht_link_t *item, const wnd_t *wnd,
    walk_mode_t walk_mode, bool *resizing)
{
        marked_ptr_t ret;

        if (walk_mode == WM_NORMAL) {
                item->link = make_link(wnd->cur, N_NORMAL);
                /* Initialize the item before adding it to a bucket. */
                memory_barrier();

                /* Link a clean/normal predecessor to the item. */
                ret = cas_link(wnd->ppred, wnd->cur, N_NORMAL, item, N_NORMAL);

                if (ret == make_link(wnd->cur, N_NORMAL)) {
                        return true;
                } else {
                        /* This includes an invalid head but not a const head. */
                        *resizing = ((N_JOIN_FOLLOWS | N_JOIN) & get_mark(ret));
                        return false;
                }
        } else if (walk_mode == WM_MOVE_JOIN_FOLLOWS) {
                /* Move JOIN_FOLLOWS mark but filter out the DELETED mark. */
                mark_t jf_mark = get_mark(*wnd->ppred) & N_JOIN_FOLLOWS;
                item->link = make_link(wnd->cur, jf_mark);
                /* Initialize the item before adding it to a bucket. */
                memory_barrier();

                /* Link the not-deleted predecessor to the item. Move its JF mark. */
                ret = cas_link(wnd->ppred, wnd->cur, jf_mark, item, N_NORMAL);

                return ret == make_link(wnd->cur, jf_mark);
        } else {
                assert(walk_mode == WM_LEAVE_JOIN);

                item->link = make_link(wnd->cur, N_NORMAL);
                /* Initialize the item before adding it to a bucket. */
                memory_barrier();

                mark_t pred_mark = get_mark(*wnd->ppred);
                /* If the predecessor is a join node it may be marked deleted. */
                mark_t exp_pred_mark = (N_JOIN & pred_mark) ? pred_mark : N_NORMAL;

                ret = cas_link(wnd->ppred, wnd->cur, exp_pred_mark, item, exp_pred_mark);
                return ret == make_link(wnd->cur, exp_pred_mark);
        }
}

/** Returns true if the chain starting at cur has an item equal to \a item.
 *
 * @param h         CHT to operate on.
 * @param item      Item whose duplicates the function looks for.
 * @param hash      Hash of \a item.
 * @param[in] cur   The first node with a hash greater than or equal to item's hash.
 * @param[out] dup_item The first duplicate item encountered.
 * @return True if a non-deleted item equal to \a item exists in the table.
 */
static inline bool has_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
    cht_link_t *cur, cht_link_t **dup_item)
{
        assert(cur);
        assert(cur == &sentinel || hash <= node_hash(h, cur) ||
            node_hash(h, cur) == h->invalid_hash);

        /* hash < node_hash(h, cur) */
        if (hash != node_hash(h, cur) && h->invalid_hash != node_hash(h, cur))
                return false;

        /*
         * Load the most recent node marks. Otherwise we might pronounce a
         * logically deleted node a duplicate of the item just because
         * the deleted node's DEL mark had not yet propagated to this cpu.
         */
        read_barrier();

        *dup_item = find_duplicate(h, item, hash, cur);
        return NULL != *dup_item;
}

/** Returns an item that is equal to \a item starting in a chain at \a start. */
static cht_link_t *find_duplicate(cht_t *h, const cht_link_t *item, size_t hash,
    cht_link_t *start)
{
        assert(hash <= node_hash(h, start) || h->invalid_hash == node_hash(h, start));

        cht_link_t *cur = start;

try_again:
        assert(cur);

        while (node_hash(h, cur) == hash) {
                assert(cur != &sentinel);

                bool deleted = (N_DELETED & get_mark(cur->link));

                /* Skip logically deleted nodes. */
                if (!deleted && h->op->equal(item, cur))
                        return cur;

                cur = get_next(cur->link);
                assert(cur);
        }

        /* Skip logically deleted nodes with rcu_call() in progress. */
        if (h->invalid_hash == node_hash(h, cur)) {
                cur = get_next(cur->link);
                goto try_again;
        }

        return NULL;
}

/** Removes all items matching the search key. Returns the number of items removed. */
size_t cht_remove_key(cht_t *h, void *key)
{
        assert(h);

        size_t hash = calc_key_hash(h, key);
        size_t removed = 0;

        while (remove_pred(h, hash, h->op->key_equal, key))
                ++removed;

        return removed;
}

/** Removes a specific item from the table.
 *
 * The caller must hold the rcu read lock.
 *
 * @param item Item presumably present in the table and to be removed.
 * @return True if the item was removed successfully; or false if it had
 *         already been deleted.
 */
bool cht_remove_item(cht_t *h, cht_link_t *item)
{
        assert(h);
        assert(item);
        /* Otherwise a concurrent cht_remove_key might free the item. */
        assert(rcu_read_locked());

        /*
         * Even though we know the node we want to delete we must unlink it
         * from the correct bucket and from a clean/normal predecessor. Therefore,
         * we search for it again from the beginning of the correct bucket.
         */
        size_t hash = calc_node_hash(h, item);
        return remove_pred(h, hash, same_node_pred, item);
}

/** Removes an item equal to pred_arg according to the predicate pred. */
static bool remove_pred(cht_t *h, size_t hash, equal_pred_t pred, void *pred_arg)
{
        rcu_read_lock();

        bool resizing = false;
        bool deleted = false;
        bool deleted_but_gc = false;

        cht_buckets_t *b = rcu_access(h->b);
        size_t idx = calc_bucket_idx(hash, b->order);
        marked_ptr_t *phead = &b->head[idx];

        do {
                walk_mode_t walk_mode = WM_NORMAL;
                bool join_finishing = false;

                resizing = resizing || (N_NORMAL != get_mark(*phead));

                /* The table is resizing. Get the correct bucket head. */
                if (resizing) {
                        upd_resizing_head(h, hash, &phead, &join_finishing, &walk_mode);
                }

                wnd_t wnd = {
                        .ppred = phead,
                        .cur = get_next(*phead),
                        .last = NULL
                };

                if (!find_wnd_and_gc_pred(
                    h, hash, walk_mode, pred, pred_arg, &wnd, &resizing)) {
                        /* Could not GC a node; or detected an unexpected resize. */
                        continue;
                }

                /*
                 * The item lookup is affected by a bucket join but effects of
                 * the bucket join have not been seen while searching for the item.
                 */
                if (join_finishing && !join_completed(h, &wnd)) {
                        /*
                         * Bucket was appended at the end of another but the next
                         * ptr linking them together was not visible on this cpu.
                         * join_completed() makes this appended bucket visible.
                         */
                        continue;
                }

                /* Already deleted, but delete_at() requested one GC pass. */
                if (deleted_but_gc)
                        break;

                bool found = (wnd.cur != &sentinel && pred(pred_arg, wnd.cur));

                if (!found) {
                        rcu_read_unlock();
                        return false;
                }

                deleted = delete_at(h, &wnd, walk_mode, &deleted_but_gc, &resizing);
        } while (!deleted || deleted_but_gc);

        rcu_read_unlock();
        return true;
}

/** Unlinks wnd.cur from wnd.ppred and schedules a deferred free for the item.
 *
 * Ignores nodes marked N_JOIN if walk mode is WM_LEAVE_JOIN.
 *
 * @param h         CHT to operate on.
 * @param wnd       Points to the item to delete and its N_NORMAL predecessor.
 * @param walk_mode Bucket chain walk mode.
 * @param deleted_but_gc Set to true if the item had been logically deleted,
 *                  but a garbage collecting walk of the bucket is in order for
 *                  it to be fully unlinked.
 * @param resizing  Set to true if the table is undergoing an unexpected
 *                  resize (ie walk_mode == WM_NORMAL).
 * @return False if the wnd.ppred changed in the meantime and the whole
 *         delete operation must be retried.
 */
static inline bool delete_at(cht_t *h, wnd_t *wnd, walk_mode_t walk_mode,
    bool *deleted_but_gc, bool *resizing)
{
        assert(wnd->cur && wnd->cur != &sentinel);

        *deleted_but_gc = false;

        if (!mark_deleted(wnd->cur, walk_mode, resizing)) {
                /* Already deleted, or unexpectedly marked as JOIN/JOIN_FOLLOWS. */
                return false;
        }

        /* Marked deleted. Unlink from the bucket. */

        /* Never unlink join nodes. */
        if (walk_mode == WM_LEAVE_JOIN && (N_JOIN & get_mark(wnd->cur->link)))
                return true;

        cas_order_barrier();

        if (unlink_from_pred(wnd, walk_mode, resizing)) {
                free_later(h, wnd->cur);
        } else {
                *deleted_but_gc = true;
        }

        return true;
}

/** Marks cur logically deleted. Returns false to request a retry. */
static inline bool mark_deleted(cht_link_t *cur, walk_mode_t walk_mode,
    bool *resizing)
{
        assert(cur && cur != &sentinel);

        /*
         * Btw, we could loop here if the cas fails but let's not complicate
         * things and let's retry from the head of the bucket.
         */

        cht_link_t *next = get_next(cur->link);

        if (walk_mode == WM_NORMAL) {
                /* Only mark clean/normal nodes - JF/JN is used only during resize. */
                marked_ptr_t ret = cas_link(&cur->link, next, N_NORMAL, next, N_DELETED);

                if (ret != make_link(next, N_NORMAL)) {
                        *resizing = (N_JOIN | N_JOIN_FOLLOWS) & get_mark(ret);
                        return false;
                }
        } else {
                static_assert(N_JOIN == N_JOIN_FOLLOWS, "");

                /* Keep the N_JOIN/N_JOIN_FOLLOWS mark but strip N_DELETED. */
                mark_t cur_mark = get_mark(cur->link) & N_JOIN_FOLLOWS;

                marked_ptr_t ret =
                    cas_link(&cur->link, next, cur_mark, next, cur_mark | N_DELETED);

                if (ret != make_link(next, cur_mark))
                        return false;
        }

        return true;
}

/** Unlinks wnd.cur from wnd.ppred. Returns false if it should be retried. */
static inline bool unlink_from_pred(wnd_t *wnd, walk_mode_t walk_mode,
    bool *resizing)
{
        assert(wnd->cur != &sentinel);
        assert(wnd->cur && (N_DELETED & get_mark(wnd->cur->link)));

        cht_link_t *next = get_next(wnd->cur->link);

        if (walk_mode == WM_LEAVE_JOIN) {
                /* Never try to unlink join nodes. */
                assert(!(N_JOIN & get_mark(wnd->cur->link)));

                mark_t pred_mark = get_mark(*wnd->ppred);
                /* Succeed only if the predecessor is clean/normal or a join node. */
                mark_t exp_pred_mark = (N_JOIN & pred_mark) ? pred_mark : N_NORMAL;

                marked_ptr_t pred_link = make_link(wnd->cur, exp_pred_mark);
                marked_ptr_t next_link = make_link(next, exp_pred_mark);

                if (pred_link != _cas_link(wnd->ppred, pred_link, next_link))
                        return false;
        } else {
                assert(walk_mode == WM_MOVE_JOIN_FOLLOWS || walk_mode == WM_NORMAL);
                /* Move the JF mark if set. Clear DEL mark. */
                mark_t cur_mark = N_JOIN_FOLLOWS & get_mark(wnd->cur->link);

                /* The predecessor must be clean/normal. */
                marked_ptr_t pred_link = make_link(wnd->cur, N_NORMAL);
                /* Link to cur's successor keeping/copying cur's JF mark. */
                marked_ptr_t next_link = make_link(next, cur_mark);

                marked_ptr_t ret = _cas_link(wnd->ppred, pred_link, next_link);

                if (pred_link != ret) {
                        /* If we're not resizing the table there are no JF/JN nodes. */
                        *resizing = (walk_mode == WM_NORMAL) &&
                            (N_JOIN_FOLLOWS & get_mark(ret));
                        return false;
                }
        }

        return true;
}

/** Finds the first non-deleted item equal to \a pred_arg according to \a pred.
 *
 * The function returns the candidate item in \a wnd. Logically deleted
 * nodes are garbage collected so the predecessor will most likely not
 * be marked as deleted.
 *
 * Unlike find_wnd_and_gc(), this function never returns a node that
 * is known to have already been marked N_DELETED.
 *
 * Any logically deleted nodes (ie those marked N_DELETED) are garbage
 * collected, ie freed in the background via rcu_call (except for join-nodes
 * if walk_mode == WM_LEAVE_JOIN).
 *
 * @param h         CHT to operate on.
 * @param hash      Hash to search for.
 * @param walk_mode Bucket chain walk mode.
 * @param pred      Predicate used to find an item equal to pred_arg.
 * @param pred_arg  Argument to pass to the equality predicate \a pred.
 * @param[in,out] wnd The search starts with wnd.cur. If the desired
 *                  item is found wnd.cur will point to it.
 * @param resizing  Set to true if the table is resizing but it was not
 *                  expected (ie walk_mode == WM_NORMAL).
 * @return False if the operation has to be retried. True otherwise
 *         (even if an equal item had not been found).
 */
static bool find_wnd_and_gc_pred(cht_t *h, size_t hash, walk_mode_t walk_mode,
    equal_pred_t pred, void *pred_arg, wnd_t *wnd, bool *resizing)
{
        assert(wnd->cur);

        if (wnd->cur == &sentinel)
                return true;

        /*
         * A read barrier is not needed here to bring up the most recent
         * node marks (esp the N_DELETED). At worst we'll try to delete
         * an already deleted node; fail in delete_at(); and retry.
         */

        size_t cur_hash;

try_again:
        cur_hash = node_hash(h, wnd->cur);

        while (cur_hash <= hash) {
                assert(wnd->cur && wnd->cur != &sentinel);

                /* GC any deleted nodes on the way. */
                if (N_DELETED & get_mark(wnd->cur->link)) {
                        if (!gc_deleted_node(h, walk_mode, wnd, resizing)) {
                                /* Retry from the head of a bucket. */
                                return false;
                        }
                } else {
                        /* Is this the node we were looking for? */
                        if (cur_hash == hash && pred(pred_arg, wnd->cur))
                                return true;

                        next_wnd(wnd);
                }

                cur_hash = node_hash(h, wnd->cur);
        }

        if (cur_hash == h->invalid_hash) {
                next_wnd(wnd);
                assert(wnd->cur);
                goto try_again;
        }

        /* The searched for node is not in the current bucket. */
        return true;
}

/** Finds the first item (deleted or not) with a hash greater than or equal to \a hash.
 *
 * The function returns the first item with a hash that is greater than or
 * equal to \a hash in \a wnd. Moreover it garbage collects logically
 * deleted nodes that have not yet been unlinked and freed. Therefore,
 * the returned node's predecessor will most likely be N_NORMAL.
 *
 * Unlike find_wnd_and_gc_pred(), this function may return a node
 * that is known to have been marked N_DELETED.
 *
 * @param h         CHT to operate on.
 * @param hash      Hash of the item to find.
 * @param walk_mode Bucket chain walk mode.
 * @param[in,out] wnd wnd.cur denotes the first node of the chain. If the
 *                  operation is successful, \a wnd points to the desired
 *                  item.
 * @param resizing  Set to true if a table resize was detected but walk_mode
 *                  suggested the table was not undergoing a resize.
 * @return False indicates the operation must be retried. True otherwise
 *         (even if an item with exactly the same hash was not found).
 */
static bool find_wnd_and_gc(cht_t *h, size_t hash, walk_mode_t walk_mode,
    wnd_t *wnd, bool *resizing)
{
try_again:
        assert(wnd->cur);

        while (node_hash(h, wnd->cur) < hash) {
                /* GC any deleted nodes along the way to our desired node. */
                if (N_DELETED & get_mark(wnd->cur->link)) {
                        if (!gc_deleted_node(h, walk_mode, wnd, resizing)) {
                                /* Failed to remove the garbage node. Retry. */
                                return false;
                        }
                } else {
                        next_wnd(wnd);
                }

                assert(wnd->cur);
        }

        if (node_hash(h, wnd->cur) == h->invalid_hash) {
                next_wnd(wnd);
                goto try_again;
        }

        /* wnd->cur may be NULL or even marked N_DELETED. */
        return true;
}

/** Garbage collects the N_DELETED node at \a wnd skipping join nodes. */
static bool gc_deleted_node(cht_t *h, walk_mode_t walk_mode, wnd_t *wnd,
    bool *resizing)
{
        assert(N_DELETED & get_mark(wnd->cur->link));

        /* Skip deleted JOIN nodes. */
        if (walk_mode == WM_LEAVE_JOIN && (N_JOIN & get_mark(wnd->cur->link))) {
                next_wnd(wnd);
        } else {
                /* Ordinary deleted node or a deleted JOIN_FOLLOWS. */
                assert(walk_mode != WM_LEAVE_JOIN ||
                    !((N_JOIN | N_JOIN_FOLLOWS) & get_mark(wnd->cur->link)));

                /* Unlink an ordinary deleted node, move JOIN_FOLLOWS mark. */
                if (!unlink_from_pred(wnd, walk_mode, resizing)) {
                        /* Retry. The predecessor was deleted, invalid, const, join_follows. */
                        return false;
                }

                free_later(h, wnd->cur);

                /* Leave ppred as is. */
                wnd->last = wnd->cur;
                wnd->cur = get_next(wnd->cur->link);
        }

        return true;
}
1518
1519/** Returns true if a bucket join had already completed.
1520 *
1521 * May only be called if upd_resizing_head() indicates a bucket join
1522 * may be in progress.
1523 *
1524 * If it returns false, the search must be retried in order to guarantee
1525 * all items that should have been encountered have been seen.
1526 */
1527static bool join_completed(cht_t *h, const wnd_t *wnd)
1528{
1529 /*
1530 * The table is shrinking and the searched for item is in a bucket
1531 * appended to another. Check that the link joining these two buckets
1532 * is visible and if not, make it visible to this cpu.
1533 */
1534
1535 /*
1536 * Resizer ensures h->b->order stays the same for the duration of this
1537 * func. We got here because there was an alternative head to search.
1538 * The resizer waits for all preexisting readers to finish after it
1539 * makes each resize step visible, so h->b stays valid throughout.
1540 */
1541 assert(h->b->order > h->new_b->order);
1542 assert(wnd->cur);
1543
1544 /* Either we did not need the joining link or we have already followed it.*/
1545 if (wnd->cur != &sentinel)
1546 return true;
1547
1548 /* We have reached the end of a bucket. */
1549
1550 if (wnd->last != &sentinel) {
1551 size_t last_seen_hash = node_hash(h, wnd->last);
1552
1553 if (last_seen_hash == h->invalid_hash) {
1554 last_seen_hash = calc_node_hash(h, wnd->last);
1555 }
1556
1557 size_t last_old_idx = calc_bucket_idx(last_seen_hash, h->b->order);
1558 size_t move_src_idx = grow_idx(shrink_idx(last_old_idx));
1559
1560 /*
1561 * Last node seen was in the joining bucket - if the searched
1562 * for node is there we will find it.
1563 */
1564 if (move_src_idx != last_old_idx)
1565 return true;
1566 }
1567
1568 /*
1569 * Reached the end of the bucket but no nodes from the joining bucket
1570 * were seen. There should at least have been a JOIN node, so we have
1571 * definitely not seen (and followed) the joining link. Make the link
1572 * visible and retry.
1573 */
1574 read_barrier();
1575 return false;
1576}
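
/*
 * Worked example of the index arithmetic above (illustrative numbers):
 * when shrinking from order 3 (8 buckets) to order 2 (4 buckets), a
 * node last seen in old bucket 5 gives last_old_idx == 5 and
 * move_src_idx == grow_idx(shrink_idx(5)) == grow_idx(2) == 4. Since
 * 4 != 5, the last node seen came from the joining (appended) bucket,
 * so the joining link must already have been followed.
 */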
1577
1578/** When resizing, returns in \a phead the bucket head to start the search with.
1579 *
1580 * If a resize had been detected (eg cht_t.b.head[idx] is marked immutable),
1581 * upd_resizing_head() moves the bucket for \a hash from the old head
1582 * to the new head. Moreover, it splits or joins buckets as necessary.
1583 *
1584 * @param h CHT to operate on.
1585 * @param hash Hash of an item whose chain we would like to traverse.
1586 * @param[out] phead Head of the bucket to search for \a hash.
1587 * @param[out] join_finishing Set to true if a bucket join might be
1588 * in progress and the bucket may have to be traversed again
1589 * as indicated by join_completed().
1590 * @param[out] walk_mode Specifies how to interpret node marks.
1591 */
1592static void upd_resizing_head(cht_t *h, size_t hash, marked_ptr_t **phead,
1593 bool *join_finishing, walk_mode_t *walk_mode)
1594{
1595 cht_buckets_t *b = rcu_access(h->b);
1596 size_t old_idx = calc_bucket_idx(hash, b->order);
1597 size_t new_idx = calc_bucket_idx(hash, h->new_b->order);
1598
1599 marked_ptr_t *pold_head = &b->head[old_idx];
1600 marked_ptr_t *pnew_head = &h->new_b->head[new_idx];
1601
1602 /* In any case, use the bucket in the new table. */
1603 *phead = pnew_head;
1604
1605 /* Growing the table. */
1606 if (b->order < h->new_b->order) {
1607 size_t move_dest_idx = grow_idx(old_idx);
1608 marked_ptr_t *pmoved_head = &h->new_b->head[move_dest_idx];
1609
1610 /* Complete moving the bucket from the old to the new table. */
1611 help_head_move(pold_head, pmoved_head);
1612
1613 /* The hash belongs to the moved bucket. */
1614 if (move_dest_idx == new_idx) {
1615 assert(pmoved_head == pnew_head);
1616 /*
1617 * move_head() makes the new head of the moved bucket visible.
1618 * The new head may be marked with a JOIN_FOLLOWS
1619 */
1620 assert(!(N_CONST & get_mark(*pmoved_head)));
1621 *walk_mode = WM_MOVE_JOIN_FOLLOWS;
1622 } else {
1623 assert(pmoved_head != pnew_head);
1624 /*
1625 * The hash belongs to the bucket that is the result of splitting
1626 * the old/moved bucket, ie the bucket that contains the second
1627 * half of the split/old/moved bucket.
1628 */
1629
1630 /* The moved bucket has not yet been split. */
1631 if (N_NORMAL != get_mark(*pnew_head)) {
1632 size_t split_hash = calc_split_hash(new_idx, h->new_b->order);
1633 split_bucket(h, pmoved_head, pnew_head, split_hash);
1634 /*
1635 * split_bucket() makes the new head visible. No
1636 * JOIN_FOLLOWS in this part of split bucket.
1637 */
1638 assert(N_NORMAL == get_mark(*pnew_head));
1639 }
1640
1641 *walk_mode = WM_LEAVE_JOIN;
1642 }
1643 } else if (h->new_b->order < b->order) {
1644 /* Shrinking the table. */
1645
1646 size_t move_src_idx = grow_idx(new_idx);
1647
1648 /*
1649 * Complete moving the bucket from the old to the new table.
1650 * Makes a valid pnew_head visible if already moved.
1651 */
1652 help_head_move(&b->head[move_src_idx], pnew_head);
1653
1654 /* Hash belongs to the bucket to be joined with the moved bucket. */
1655 if (move_src_idx != old_idx) {
1656 /* Bucket join not yet completed. */
1657 if (N_INVALID != get_mark(*pold_head)) {
1658 size_t split_hash = calc_split_hash(old_idx, b->order);
1659 join_buckets(h, pold_head, pnew_head, split_hash);
1660 }
1661
1662 /*
1663 * The resizer sets pold_head to &sentinel when all cpus are
1664 * guaranteed to see the bucket join.
1665 */
1666 *join_finishing = (&sentinel != get_next(*pold_head));
1667 }
1668
1669 /* move_head() or join_buckets() makes it so or makes the mark visible.*/
1670 assert(N_INVALID == get_mark(*pold_head));
1671 /* move_head() makes it visible. No JOIN_FOLLOWS used when shrinking. */
1672 assert(N_NORMAL == get_mark(*pnew_head));
1673
1674 *walk_mode = WM_LEAVE_JOIN;
1675 } else {
1676 /*
1677 * Final stage of resize. The resizer is waiting for all
1678 * readers to notice that the old table had been replaced.
1679 */
1680 assert(b == h->new_b);
1681 *walk_mode = WM_NORMAL;
1682 }
1683}
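
/*
 * Worked example (illustrative numbers): growing from order 2 to
 * order 3, old bucket 1 moves to new bucket grow_idx(1) == 2 and its
 * upper half splits off into bucket grow_to_split_idx(1) == 3. A hash
 * with new_idx == 2 walks the moved bucket with WM_MOVE_JOIN_FOLLOWS,
 * while a hash with new_idx == 3 walks the split-off half with
 * WM_LEAVE_JOIN.
 */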
1684
1685
1686#if 0
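/* Unused; kept to show the complete head move protocol in one place. */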
1687static void move_head(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head)
1688{
1689 start_head_move(psrc_head);
1690 cas_order_barrier();
1691 complete_head_move(psrc_head, pdest_head);
1692}
1693#endif
1694
1695/** Moves an immutable head \a psrc_head of cht_t.b to \a pdest_head of cht_t.new_b.
1696 *
1697 * The function guarantees the move will be visible on this cpu once
1698 * it completes. In particular, *pdest_head will not be N_INVALID.
1699 *
1700 * Unlike complete_head_move(), help_head_move() checks if the head had already
1701 * been moved and tries to avoid moving the bucket heads if possible.
1702 */
1703static inline void help_head_move(marked_ptr_t *psrc_head,
1704 marked_ptr_t *pdest_head)
1705{
1706 /* Head move has to be in progress already when calling this func. */
1707 assert(N_CONST & get_mark(*psrc_head));
1708
1709 /* Head already moved. */
1710 if (N_INVALID == get_mark(*psrc_head)) {
1711 /* Effects of the head move have not yet propagated to this cpu. */
1712 if (N_INVALID == get_mark(*pdest_head)) {
1713 /* Make the move visible on this cpu. */
1714 read_barrier();
1715 }
1716 } else {
1717 complete_head_move(psrc_head, pdest_head);
1718 }
1719
1720 assert(!(N_CONST & get_mark(*pdest_head)));
1721}
1722
1723/** Initiates the move of the old head \a psrc_head.
1724 *
1725 * The move may be completed with help_head_move().
1726 */
1727static void start_head_move(marked_ptr_t *psrc_head)
1728{
1729 /* Mark src head immutable. */
1730 mark_const(psrc_head);
1731}
1732
1733/** Marks the head immutable. */
1734static void mark_const(marked_ptr_t *psrc_head)
1735{
1736 marked_ptr_t ret, src_link;
1737
1738 /* Mark src head immutable. */
1739 do {
1740 cht_link_t *next = get_next(*psrc_head);
1741 src_link = make_link(next, N_NORMAL);
1742
1743 /* Mark the normal/clean src link immutable/const. */
1744 ret = cas_link(psrc_head, next, N_NORMAL, next, N_CONST);
1745 } while (ret != src_link && !(N_CONST & get_mark(ret)));
1746}
1747
1748/** Completes moving head psrc_head to pdest_head (started by start_head_move()).*/
1749static void complete_head_move(marked_ptr_t *psrc_head, marked_ptr_t *pdest_head)
1750{
1751 assert(N_JOIN_FOLLOWS != get_mark(*psrc_head));
1752 assert(N_CONST & get_mark(*psrc_head));
1753
1754 cht_link_t *next = get_next(*psrc_head);
1755
1756#ifdef CONFIG_DEBUG
1757 marked_ptr_t ret =
1758#endif
1759 cas_link(pdest_head, &sentinel, N_INVALID, next, N_NORMAL);
1760 assert(ret == make_link(&sentinel, N_INVALID) || (N_NORMAL == get_mark(ret)));
1761 cas_order_barrier();
1762
1763#ifdef CONFIG_DEBUG
1764 ret =
1765#endif
1766 cas_link(psrc_head, next, N_CONST, next, N_INVALID);
1767 assert(ret == make_link(next, N_CONST) || (N_INVALID == get_mark(ret)));
1768 cas_order_barrier();
1769}
1770
1771/** Splits the bucket at psrc_head and links to the remainder from pdest_head.
1772 *
1773 * Items with hashes greater or equal to \a split_hash are moved to bucket
1774 * with head at \a pdest_head.
1775 *
1776 * @param h CHT to operate on.
1777 * @param psrc_head Head of the bucket to split (in cht_t.new_b).
1778 * @param pdest_head Head of the bucket that points to the second part
1779 * of the split bucket in psrc_head. (in cht_t.new_b)
1780 * @param split_hash Hash of the first possible item in the remainder of
1781 * psrc_head, ie the smallest hash pdest_head is allowed
1782 * to point to..
1783 */
1784static void split_bucket(cht_t *h, marked_ptr_t *psrc_head,
1785 marked_ptr_t *pdest_head, size_t split_hash)
1786{
1787 /* Already split. */
1788 if (N_NORMAL == get_mark(*pdest_head))
1789 return;
1790
1791 /*
1792 * L == Last node of the first part of the split bucket. That part
1793 * remains in the original/src bucket.
1794 * F == First node of the second part of the split bucket. That part
1795 * will be referenced from the dest bucket head.
1796 *
1797 * We want to first mark a clean L as JF so that updaters unaware of
1798 * the split (or table resize):
1799 * - do not insert a new node between L and F
1800 * - do not unlink L (that is why it has to be clean/normal)
1801 * - do not unlink F
1802 *
1803 * Then we can safely mark F as JN even if it has been marked deleted.
1804 * Once F is marked as JN updaters aware of table resize will not
1805 * attempt to unlink it (JN will have two predecessors - we cannot
1806 * safely unlink from both at the same time). Updaters unaware of
1807 * ongoing resize can reach F only via L and that node is already
1808 * marked JF, so they won't unlink F.
1809 *
1810 * Last, link the new/dest head to F.
1811 *
1812 *
1813 * 0) ,-- split_hash, first hash of the dest bucket
1814 * v
1815 * [src_head | N] -> .. -> [L] -> [F]
1816 * [dest_head | Inv]
1817 *
1818 * 1) ,-- split_hash
1819 * v
1820 * [src_head | N] -> .. -> [JF] -> [F]
1821 * [dest_head | Inv]
1822 *
1823 * 2) ,-- split_hash
1824 * v
1825 * [src_head | N] -> .. -> [JF] -> [JN]
1826 * [dest_head | Inv]
1827 *
1828 * 3) ,-- split_hash
1829 * v
1830 * [src_head | N] -> .. -> [JF] -> [JN]
1831 * ^
1832 * [dest_head | N] -----------------'
1833 */
1834 wnd_t wnd;
1835
1836 rcu_read_lock();
1837
1838 /* Mark the last node of the first part of the split bucket as JF. */
1839 mark_join_follows(h, psrc_head, split_hash, &wnd);
1840 cas_order_barrier();
1841
1842 /* There are nodes in the dest bucket, ie the second part of the split. */
1843 if (wnd.cur != &sentinel) {
1844 /*
1845 * Mark the first node of the dest bucket as a join node so
1846 * updaters do not attempt to unlink it if it is deleted.
1847 */
1848 mark_join_node(wnd.cur);
1849 cas_order_barrier();
1850 } else {
1851 /*
1852 * Second part of the split bucket is empty. There are no nodes
1853 * to mark as JOIN nodes and there never will be.
1854 */
1855 }
1856
1857 /* Link the dest head to the second part of the split. */
1858#ifdef CONFIG_DEBUG
1859 marked_ptr_t ret =
1860#endif
1861 cas_link(pdest_head, &sentinel, N_INVALID, wnd.cur, N_NORMAL);
1862 assert(ret == make_link(&sentinel, N_INVALID) || (N_NORMAL == get_mark(ret)));
1863 cas_order_barrier();
1864
1865 rcu_read_unlock();
1866}
1867
1868/** Finds and marks the last node of psrc_head with a hash less than split_hash.
1869 *
1870 * Finds a node in psrc_head with the greatest hash that is strictly less
1871 * than split_hash and marks it with N_JOIN_FOLLOWS.
1872 *
1873 * Returns a window pointing to that node.
1874 *
1875 * Any logically deleted nodes along the way are
1876 * garbage collected; therefore, the predecessor node (if any) will most
1877 * likely not be marked N_DELETED.
1878 *
1879 * @param h CHT to operate on.
1880 * @param psrc_head Bucket head.
1881 * @param split_hash The smallest hash a join node (ie the node following
1882 * the desired join-follows node) may have.
1883 * @param[out] wnd Points to the node marked with N_JOIN_FOLLOWS.
1884 */
1885static void mark_join_follows(cht_t *h, marked_ptr_t *psrc_head,
1886 size_t split_hash, wnd_t *wnd)
1887{
1888 /* See comment in split_bucket(). */
1889
1890 bool done = false;
1891
1892 do {
1893 bool resizing = false;
1894 wnd->ppred = psrc_head;
1895 wnd->cur = get_next(*psrc_head);
1896
1897 /*
1898 * Find the split window, ie the last node of the first part of
1899 * the split bucket and its successor - the first node of
1900 * the second part of the split bucket. Retry if GC failed.
1901 */
1902 if (!find_wnd_and_gc(h, split_hash, WM_MOVE_JOIN_FOLLOWS, wnd, &resizing))
1903 continue;
1904
1905 /* Must not report that the table is resizing if WM_MOVE_JOIN_FOLLOWS.*/
1906 assert(!resizing);
1907 /*
1908 * Mark the last node of the first half of the split bucket
1909 * that a join node follows. It must be clean/normal.
1910 */
1911 marked_ptr_t ret =
1912 cas_link(wnd->ppred, wnd->cur, N_NORMAL, wnd->cur, N_JOIN_FOLLOWS);
1913
1914 /*
1915 * Successfully marked as a JF node or already marked that way (even
1916 * if also marked deleted - unlinking the node will move the JF mark).
1917 */
1918 done = (ret == make_link(wnd->cur, N_NORMAL)) ||
1919 (N_JOIN_FOLLOWS & get_mark(ret));
1920 } while (!done);
1921}
1922
1923/** Marks join_node with N_JOIN. */
1924static void mark_join_node(cht_link_t *join_node)
1925{
1926 /* See comment in split_bucket(). */
1927
1928 bool done;
1929 do {
1930 cht_link_t *next = get_next(join_node->link);
1931 mark_t mark = get_mark(join_node->link);
1932
1933 /*
1934 * May already be marked as deleted, but it won't be unlinked
1935 * because its predecessor is marked with JOIN_FOLLOWS or CONST.
1936 */
1937 marked_ptr_t ret =
1938 cas_link(&join_node->link, next, mark, next, mark | N_JOIN);
1939
1940 /* Successfully marked or already marked as a join node. */
1941 done = (ret == make_link(next, mark)) ||
1942 (N_JOIN & get_mark(ret));
1943 } while (!done);
1944}
1945
1946/** Appends the bucket at psrc_head to the bucket at pdest_head.
1947 *
1948 * @param h CHT to operate on.
1949 * @param psrc_head Bucket to merge with pdest_head.
1950 * @param pdest_head Bucket to be joined by psrc_head.
1951 * @param split_hash The smallest hash psrc_head may contain.
1952 */
1953static void join_buckets(cht_t *h, marked_ptr_t *psrc_head,
1954 marked_ptr_t *pdest_head, size_t split_hash)
1955{
1956 /* Buckets already joined. */
1957 if (N_INVALID == get_mark(*psrc_head))
1958 return;
1959 /*
1960 * F == First node of psrc_head, ie the bucket we want to append
1961 * to (ie join with) the bucket starting at pdest_head.
1962 * L == Last node of pdest_head, ie the bucket that psrc_head will
1963 * be appended to.
1964 *
1965 * (1) We first mark psrc_head immutable to signal that a join is
1966 * in progress and so that updaters unaware of the join (or table
1967 * resize):
1968 * - do not insert new nodes between the head psrc_head and F
1969 * - do not unlink F (it may already be marked deleted)
1970 *
1971 * (2) Next, F is marked as a join node. Updaters aware of table resize
1972 * will not attempt to unlink it. We cannot safely/atomically unlink
1973 * the join node because it will be pointed to from two different
1974 * buckets. Updaters unaware of resize will fail to unlink the join
1975 * node due to the head being marked immutable.
1976 *
1977 * (3) Then the tail of the bucket at pdest_head is linked to the join
1978 * node. From now on, nodes in both buckets can be found via pdest_head.
1979 *
1980 * (4) Last, mark immutable psrc_head as invalid. It signals updaters
1981 * that the join is complete and they can insert new nodes (originally
1982 * destined for psrc_head) into pdest_head.
1983 *
1984 * Note that pdest_head keeps pointing at the join node. This allows
1985 * lookups and updaters to determine if they should see a link between
1986 * the tail L and F when searching for nodes originally in psrc_head
1987 * via pdest_head. If they reach the tail of pdest_head without
1988 * encountering any nodes of psrc_head, either there were no nodes
1989 * in psrc_head to begin with or the link between L and F did not
1990 * yet propagate to their cpus. If psrc_head was empty, it remains
1991 * NULL. Otherwise psrc_head points to a join node (it will not be
1992 * unlinked until table resize completes) and updaters/lookups
1993 * should issue a read_barrier() to make the link [L]->[JN] visible.
1994 *
1995 * 0) ,-- split_hash, first hash of the src bucket
1996 * v
1997 * [dest_head | N]-> .. -> [L]
1998 * [src_head | N]--> [F] -> ..
1999 * ^
2000 * ` split_hash, first hash of the src bucket
2001 *
2002 * 1) ,-- split_hash
2003 * v
2004 * [dest_head | N]-> .. -> [L]
2005 * [src_head | C]--> [F] -> ..
2006 *
2007 * 2) ,-- split_hash
2008 * v
2009 * [dest_head | N]-> .. -> [L]
2010 * [src_head | C]--> [JN] -> ..
2011 *
2012 * 3) ,-- split_hash
2013 * v
2014 * [dest_head | N]-> .. -> [L] --+
2015 * v
2016 * [src_head | C]-------------> [JN] -> ..
2017 *
2018 * 4) ,-- split_hash
2019 * v
2020 * [dest_head | N]-> .. -> [L] --+
2021 * v
2022 * [src_head | Inv]-----------> [JN] -> ..
2023 */
2024
2025 rcu_read_lock();
2026
2027 /* Mark src_head immutable - signals updaters that bucket join started. */
2028 mark_const(psrc_head);
2029 cas_order_barrier();
2030
2031 cht_link_t *join_node = get_next(*psrc_head);
2032
2033 if (join_node != &sentinel) {
2034 mark_join_node(join_node);
2035 cas_order_barrier();
2036
2037 link_to_join_node(h, pdest_head, join_node, split_hash);
2038 cas_order_barrier();
2039 }
2040
2041#ifdef CONFIG_DEBUG
2042 marked_ptr_t ret =
2043#endif
2044 cas_link(psrc_head, join_node, N_CONST, join_node, N_INVALID);
2045 assert(ret == make_link(join_node, N_CONST) || (N_INVALID == get_mark(ret)));
2046 cas_order_barrier();
2047
2048 rcu_read_unlock();
2049}
2050
2051/** Links the tail of pdest_head to join_node.
2052 *
2053 * @param h CHT to operate on.
2054 * @param pdest_head Head of the bucket whose tail is to be linked to join_node.
2055 * @param join_node A node marked N_JOIN with a hash greater or equal to
2056 * split_hash.
2057 * @param split_hash The least hash that is greater than the hash of any items
2058 * (originally) in pdest_head.
2059 */
2060static void link_to_join_node(cht_t *h, marked_ptr_t *pdest_head,
2061 cht_link_t *join_node, size_t split_hash)
2062{
2063 bool done = false;
2064
2065 do {
2066 wnd_t wnd = {
2067 .ppred = pdest_head,
2068 .cur = get_next(*pdest_head)
2069 };
2070
2071 bool resizing = false;
2072
2073 if (!find_wnd_and_gc(h, split_hash, WM_LEAVE_JOIN, &wnd, &resizing))
2074 continue;
2075
2076 assert(!resizing);
2077
2078 if (wnd.cur != &sentinel) {
2079 /* Must be from the new appended bucket. */
2080 assert(split_hash <= node_hash(h, wnd.cur) ||
2081 h->invalid_hash == node_hash(h, wnd.cur));
2082 return;
2083 }
2084
2085 /* Reached the tail of pdest_head - link it to the join node. */
2086 marked_ptr_t ret =
2087 cas_link(wnd.ppred, &sentinel, N_NORMAL, join_node, N_NORMAL);
2088
2089 done = (ret == make_link(&sentinel, N_NORMAL));
2090 } while (!done);
2091}
2092
2093/** Instructs RCU to free the item once all preexisting references are dropped.
2094 *
2095 * The item is freed via op->remove_callback().
2096 */
2097static void free_later(cht_t *h, cht_link_t *item)
2098{
2099 assert(item != &sentinel);
2100
2101 /*
2102 * remove_callback only works as rcu_func_t because rcu_link is the first
2103 * field in cht_link_t.
2104 */
2105 rcu_call(&item->rcu_link, (rcu_func_t)h->op->remove_callback);
2106
2107 item_removed(h);
2108}
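
/*
 * Note on the cast above: because rcu_link is the first member of
 * cht_link_t, &item->rcu_link and item have the same address, so the
 * callback receives a pointer that is also a valid cht_link_t *.
 */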
2109
2110/** Notes that an item had been unlinked from the table and shrinks the table if needed.
2111 *
2112 * If the number of items in the table drops below 1/4 of the maximum
2113 * allowed load, the table is shrunk in the background.
2114 */
2115static inline void item_removed(cht_t *h)
2116{
2117 size_t items = (size_t) atomic_predec(&h->item_cnt);
2118 size_t bucket_cnt = (1 << h->b->order);
2119
2120 bool need_shrink = (items == h->max_load * bucket_cnt / 4);
2121 bool missed_shrink = (items == h->max_load * bucket_cnt / 8);
2122
2123 if ((need_shrink || missed_shrink) && h->b->order > h->min_order) {
2124 size_t resize_reqs = atomic_preinc(&h->resize_reqs);
2125 /* The first resize request. Start the resizer. */
2126 if (1 == resize_reqs) {
2127 workq_global_enqueue_noblock(&h->resize_work, resize_table);
2128 }
2129 }
2130}
2131
2132/** Notes an item had been inserted and grows the table if needed.
2133 *
2134 * The table is resized in the background.
2135 */
2136static inline void item_inserted(cht_t *h)
2137{
2138 size_t items = (size_t) atomic_preinc(&h->item_cnt);
2139 size_t bucket_cnt = (1 << h->b->order);
2140
2141 bool need_grow = (items == h->max_load * bucket_cnt);
2142 bool missed_grow = (items == 2 * h->max_load * bucket_cnt);
2143
2144 if ((need_grow || missed_grow) && h->b->order < CHT_MAX_ORDER) {
2145 size_t resize_reqs = atomic_preinc(&h->resize_reqs);
2146 /* The first resize request. Start the resizer. */
2147 if (1 == resize_reqs) {
2148 workq_global_enqueue_noblock(&h->resize_work, resize_table);
2149 }
2150 }
2151}
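
/*
 * Illustrative numbers: with max_load == 4 and 8 buckets, need_grow
 * fires when the count reaches exactly 32 items. If concurrent inserts
 * race past 32 without any thread observing that exact value,
 * missed_grow at 64 items still schedules the resizer. The equality
 * tests ensure only the thread whose increment hits a threshold
 * enqueues a resize request.
 */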
2152
2153/** Resize request handler. Invoked on the system work queue. */
2154static void resize_table(work_t *arg)
2155{
2156 cht_t *h = member_to_inst(arg, cht_t, resize_work);
2157
2158#ifdef CONFIG_DEBUG
2159 assert(h->b);
2160 /* Make resize_reqs visible. */
2161 read_barrier();
2162 assert(0 < atomic_load(&h->resize_reqs));
2163#endif
2164
2165 bool done = false;
2166
2167 do {
2168 /* Load the most recent h->item_cnt. */
2169 read_barrier();
2170 size_t cur_items = (size_t) atomic_load(&h->item_cnt);
2171 size_t bucket_cnt = (1 << h->b->order);
2172 size_t max_items = h->max_load * bucket_cnt;
2173
2174 if (cur_items >= max_items && h->b->order < CHT_MAX_ORDER) {
2175 grow_table(h);
2176 } else if (cur_items <= max_items / 4 && h->b->order > h->min_order) {
2177 shrink_table(h);
2178 } else {
2179 /* Table is just the right size. */
2180 size_t reqs = atomic_predec(&h->resize_reqs);
2181 done = (reqs == 0);
2182 }
2183 } while (!done);
2184}
2185
2186/** Increases the number of buckets two-fold. Blocks until done. */
2187static void grow_table(cht_t *h)
2188{
2189 if (h->b->order >= CHT_MAX_ORDER)
2190 return;
2191
2192 h->new_b = alloc_buckets(h->b->order + 1, true, false);
2193
2194 /* Failed to alloc a new table - try next time the resizer is run. */
2195 if (!h->new_b)
2196 return;
2197
2198 /* Wait for all readers and updaters to see the initialized new table. */
2199 rcu_synchronize();
2200 size_t old_bucket_cnt = (1 << h->b->order);
2201
2202 /*
2203 * Give updaters a chance to help out with the resize. Do the minimum
2204 * work needed to announce a resize is in progress, ie start moving heads.
2205 */
2206 for (size_t idx = 0; idx < old_bucket_cnt; ++idx) {
2207 start_head_move(&h->b->head[idx]);
2208 }
2209
2210 /* Order start_head_move() wrt complete_head_move(). */
2211 cas_order_barrier();
2212
2213 /* Complete moving heads and split any buckets not yet split by updaters. */
2214 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2215 marked_ptr_t *move_dest_head = &h->new_b->head[grow_idx(old_idx)];
2216 marked_ptr_t *move_src_head = &h->b->head[old_idx];
2217
2218 /* Head move not yet completed. */
2219 if (N_INVALID != get_mark(*move_src_head)) {
2220 complete_head_move(move_src_head, move_dest_head);
2221 }
2222
2223 size_t split_idx = grow_to_split_idx(old_idx);
2224 size_t split_hash = calc_split_hash(split_idx, h->new_b->order);
2225 marked_ptr_t *split_dest_head = &h->new_b->head[split_idx];
2226
2227 split_bucket(h, move_dest_head, split_dest_head, split_hash);
2228 }
2229
2230 /*
2231 * Wait for all updaters to notice the new heads. Once everyone sees
2232 * the invalid old bucket heads they will know a resize is in progress
2233 * and updaters will modify the correct new buckets.
2234 */
2235 rcu_synchronize();
2236
2237 /* Clear the JOIN_FOLLOWS mark and remove the link between the split buckets.*/
2238 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2239 size_t new_idx = grow_idx(old_idx);
2240
2241 cleanup_join_follows(h, &h->new_b->head[new_idx]);
2242 }
2243
2244 /*
2245 * Wait for everyone to notice that buckets were split, ie that the link
2246 * connecting the join follows and join node has been cut.
2247 */
2248 rcu_synchronize();
2249
2250 /* Clear the JOIN mark and GC any deleted join nodes. */
2251 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2252 size_t new_idx = grow_to_split_idx(old_idx);
2253
2254 cleanup_join_node(h, &h->new_b->head[new_idx]);
2255 }
2256
2257 /* Wait for everyone to see that the table is clear of any resize marks. */
2258 rcu_synchronize();
2259
2260 cht_buckets_t *old_b = h->b;
2261 rcu_assign(h->b, h->new_b);
2262
2263 /* Wait for everyone to start using the new table. */
2264 rcu_synchronize();
2265
2266 free(old_b);
2267
2268 /* Not needed; just for increased readability. */
2269 h->new_b = NULL;
2270}
2271
2272/** Halves the number of buckets. Blocks until done. */
2273static void shrink_table(cht_t *h)
2274{
2275 if (h->b->order <= h->min_order)
2276 return;
2277
2278 h->new_b = alloc_buckets(h->b->order - 1, true, false);
2279
2280 /* Failed to alloc a new table - try next time the resizer is run. */
2281 if (!h->new_b)
2282 return;
2283
2284 /* Wait for all readers and updaters to see the initialized new table. */
2285 rcu_synchronize();
2286
2287 size_t old_bucket_cnt = (1 << h->b->order);
2288
2289 /*
2290 * Give updaters a chance to help out with the resize. Do the minimum
2291 * work needed to announce a resize is in progress, ie start moving heads.
2292 */
2293 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2294 size_t new_idx = shrink_idx(old_idx);
2295
2296 /* This bucket should be moved. */
2297 if (grow_idx(new_idx) == old_idx) {
2298 start_head_move(&h->b->head[old_idx]);
2299 } else {
2300 /* This bucket should join the moved bucket once the move is done.*/
2301 }
2302 }
2303
2304 /* Order start_head_move() wrt complete_head_move(). */
2305 cas_order_barrier();
2306
2307 /* Complete moving heads and join buckets with the moved buckets. */
2308 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2309 size_t new_idx = shrink_idx(old_idx);
2310 size_t move_src_idx = grow_idx(new_idx);
2311
2312 /* This bucket should be moved. */
2313 if (move_src_idx == old_idx) {
2314 /* Head move not yet completed. */
2315 if (N_INVALID != get_mark(h->b->head[old_idx])) {
2316 complete_head_move(&h->b->head[old_idx], &h->new_b->head[new_idx]);
2317 }
2318 } else {
2319 /* This bucket should join the moved bucket. */
2320 size_t split_hash = calc_split_hash(old_idx, h->b->order);
2321 join_buckets(h, &h->b->head[old_idx], &h->new_b->head[new_idx],
2322 split_hash);
2323 }
2324 }
2325
2326 /*
2327 * Wait for all updaters to notice the new heads. Once everyone sees
2328 * the invalid old bucket heads they will know a resize is in progress
2329 * and updaters will modify the correct new buckets.
2330 */
2331 rcu_synchronize();
2332
2333 /* Let everyone know joins are complete and fully visible. */
2334 for (size_t old_idx = 0; old_idx < old_bucket_cnt; ++old_idx) {
2335 size_t move_src_idx = grow_idx(shrink_idx(old_idx));
2336
2337 /* Reset the invalid joinee head to point at the sentinel. */
2338 if (old_idx != move_src_idx) {
2339 assert(N_INVALID == get_mark(h->b->head[old_idx]));
2340
2341 if (&sentinel != get_next(h->b->head[old_idx]))
2342 h->b->head[old_idx] = make_link(&sentinel, N_INVALID);
2343 }
2344 }
2345
2346 /*
 * Wait for all cpus to see the joinee heads reset to the sentinel
 * before the JOIN marks are cleared below. Until every cpu has
 * noticed the join completed, a deleted join node must not be
 * unlinked - it is still reachable from two buckets.
 */
2347 rcu_synchronize();
2348
2349 size_t new_bucket_cnt = (1 << h->new_b->order);
2350
2351 /* Clear the JOIN mark and GC any deleted join nodes. */
2352 for (size_t new_idx = 0; new_idx < new_bucket_cnt; ++new_idx) {
2353 cleanup_join_node(h, &h->new_b->head[new_idx]);
2354 }
2355
2356 /* Wait for everyone to see that the table is clear of any resize marks. */
2357 rcu_synchronize();
2358
2359 cht_buckets_t *old_b = h->b;
2360 rcu_assign(h->b, h->new_b);
2361
2362 /* Wait for everyone to start using the new table. */
2363 rcu_synchronize();
2364
2365 free(old_b);
2366
2367 /* Not needed; just for increased readability. */
2368 h->new_b = NULL;
2369}
2370
2371/** Finds and clears the N_JOIN mark from a node in new_head (if present). */
2372static void cleanup_join_node(cht_t *h, marked_ptr_t *new_head)
2373{
2374 rcu_read_lock();
2375
2376 cht_link_t *cur = get_next(*new_head);
2377
2378 while (cur != &sentinel) {
2379 /* Clear the join node's JN mark - even if it is marked as deleted. */
2380 if (N_JOIN & get_mark(cur->link)) {
2381 clear_join_and_gc(h, cur, new_head);
2382 break;
2383 }
2384
2385 cur = get_next(cur->link);
2386 }
2387
2388 rcu_read_unlock();
2389}
2390
2391/** Clears the join_node's N_JOIN mark and frees it if marked N_DELETED as well. */
2392static void clear_join_and_gc(cht_t *h, cht_link_t *join_node,
2393 marked_ptr_t *new_head)
2394{
2395 assert(join_node != &sentinel);
2396 assert(join_node && (N_JOIN & get_mark(join_node->link)));
2397
2398 bool done;
2399
2400 /* Clear the JN mark. */
2401 do {
2402 marked_ptr_t jn_link = join_node->link;
2403 cht_link_t *next = get_next(jn_link);
2404 /* Clear the JOIN mark but keep the DEL mark if present. */
2405 mark_t cleared_mark = get_mark(jn_link) & N_DELETED;
2406
2407 marked_ptr_t ret =
2408 _cas_link(&join_node->link, jn_link, make_link(next, cleared_mark));
2409
2410 /* Done if the mark was cleared. Retry if a new node was inserted. */
2411 done = (ret == jn_link);
2412 assert(ret == jn_link || (get_mark(ret) & N_JOIN));
2413 } while (!done);
2414
2415 if (!(N_DELETED & get_mark(join_node->link)))
2416 return;
2417
2418 /* The join node had been marked as deleted - GC it. */
2419
2420 /* Clear the JOIN mark before trying to unlink the deleted join node.*/
2421 cas_order_barrier();
2422
2423 size_t jn_hash = node_hash(h, join_node);
2424 do {
2425 bool resizing = false;
2426
2427 wnd_t wnd = {
2428 .ppred = new_head,
2429 .cur = get_next(*new_head)
2430 };
2431
2432 done = find_wnd_and_gc_pred(h, jn_hash, WM_NORMAL, same_node_pred,
2433 join_node, &wnd, &resizing);
2434
2435 assert(!resizing);
2436 } while (!done);
2437}
2438
2439/** Finds a non-deleted node with N_JOIN_FOLLOWS and clears the mark. */
2440static void cleanup_join_follows(cht_t *h, marked_ptr_t *new_head)
2441{
2442 assert(new_head);
2443
2444 rcu_read_lock();
2445
2446 wnd_t wnd = {
2447 .ppred = NULL,
2448 .cur = NULL
2449 };
2450 marked_ptr_t *cur_link = new_head;
2451
2452 /*
2453 * Find the non-deleted node with a JF mark and clear the JF mark.
2454 * The JF node may be deleted and/or the mark moved to its neighbors
2455 * at any time. Therefore, we GC deleted nodes until we find the JF
2456 * node in order to remove stale/deleted JF nodes left behind eg by
2457 * delayed threads that did not yet get a chance to unlink the deleted
2458 * JF node and move its mark.
2459 *
2460 * Note that the head may be marked JF (but never DELETED).
2461 */
2462 while (true) {
2463 bool is_jf_node = N_JOIN_FOLLOWS & get_mark(*cur_link);
2464
2465 /* GC any deleted nodes on the way - even deleted JOIN_FOLLOWS. */
2466 if (N_DELETED & get_mark(*cur_link)) {
2467 assert(cur_link != new_head);
2468 assert(wnd.ppred && wnd.cur && wnd.cur != &sentinel);
2469 assert(cur_link == &wnd.cur->link);
2470
2471 bool dummy;
2472 bool deleted = gc_deleted_node(h, WM_MOVE_JOIN_FOLLOWS, &wnd, &dummy);
2473
2474 /* Failed to GC or collected a deleted JOIN_FOLLOWS. */
2475 if (!deleted || is_jf_node) {
2476 /* Retry from the head of the bucket. */
2477 cur_link = new_head;
2478 continue;
2479 }
2480 } else {
2481 /* Found a non-deleted JF. Clear its JF mark. */
2482 if (is_jf_node) {
2483 cht_link_t *next = get_next(*cur_link);
2484 marked_ptr_t ret =
2485 cas_link(cur_link, next, N_JOIN_FOLLOWS, &sentinel, N_NORMAL);
2486
2487 assert(next == &sentinel ||
2488 ((N_JOIN | N_JOIN_FOLLOWS) & get_mark(ret)));
2489
2490 /* Successfully cleared the JF mark of a non-deleted node. */
2491 if (ret == make_link(next, N_JOIN_FOLLOWS)) {
2492 break;
2493 } else {
2494 /*
2495 * The JF node had been deleted or a new node inserted
2496 * right after it. Retry from the head.
2497 */
2498 cur_link = new_head;
2499 continue;
2500 }
2501 } else {
2502 wnd.ppred = cur_link;
2503 wnd.cur = get_next(*cur_link);
2504 }
2505 }
2506
2507 /* We must encounter a JF node before we reach the end of the bucket. */
2508 assert(wnd.cur && wnd.cur != &sentinel);
2509 cur_link = &wnd.cur->link;
2510 }
2511
2512 rcu_read_unlock();
2513}
2514
2515/** Returns the first possible hash following a bucket split point.
2516 *
2517 * In other words the returned hash is the smallest possible hash
2518 * the remainder of the split bucket may contain.
2519 */
2520static inline size_t calc_split_hash(size_t split_idx, size_t order)
2521{
2522 assert(1 <= order && order <= 8 * sizeof(size_t));
2523 return split_idx << (8 * sizeof(size_t) - order);
2524}
2525
2526/** Returns the bucket head index given the table size order and item hash. */
2527static inline size_t calc_bucket_idx(size_t hash, size_t order)
2528{
2529 assert(1 <= order && order <= 8 * sizeof(size_t));
2530 return hash >> (8 * sizeof(size_t) - order);
2531}
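
/*
 * Illustrative bit arithmetic, assuming a 64-bit size_t: with
 * order == 3 the bucket index is the top three bits of the hash, so
 * calc_bucket_idx(0xE000000000000000, 3) == 7, and
 * calc_split_hash(5, 3) == 0xA000000000000000 is the smallest hash
 * bucket 5 may contain.
 */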
2532
2533/** Returns the destination index of the split-off half of a bucket when growing. */
2534static inline size_t grow_to_split_idx(size_t old_idx)
2535{
2536 return grow_idx(old_idx) | 1;
2537}
2538
2539/** Returns the destination index of a bucket head when the table is growing. */
2540static inline size_t grow_idx(size_t idx)
2541{
2542 return idx << 1;
2543}
2544
2545/** Returns the destination index of a bucket head when the table is shrinking.*/
2546static inline size_t shrink_idx(size_t idx)
2547{
2548 return idx >> 1;
2549}
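
/*
 * Example (illustrative): grow_idx(5) == 10 and grow_to_split_idx(5)
 * == 11, while shrink_idx(10) == shrink_idx(11) == 5. Growing doubles
 * each bucket into an even/odd pair; shrinking joins the pair back
 * into a single bucket.
 */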
2550
2551/** Returns a mixed hash of the search key.*/
2552static inline size_t calc_key_hash(cht_t *h, void *key)
2553{
2554 /* Mimic calc_node_hash. */
2555 return hash_mix(h->op->key_hash(key)) & ~(size_t)1;
2556}
2557
2558/** Returns a memoized mixed hash of the item. */
2559static inline size_t node_hash(cht_t *h, const cht_link_t *item)
2560{
2561 assert(item->hash == h->invalid_hash ||
2562 item->hash == sentinel.hash ||
2563 item->hash == calc_node_hash(h, item));
2564
2565 return item->hash;
2566}
2567
2568/** Calculates and mixes the hash of the item. */
2569static inline size_t calc_node_hash(cht_t *h, const cht_link_t *item)
2570{
2571 assert(item != &sentinel);
2572 /*
2573 * Clear the lowest order bit in order for sentinel's node hash
2574 * to be the greatest possible.
2575 */
2576 return hash_mix(h->op->hash(item)) & ~(size_t)1;
2577}
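
/*
 * Because node and key hashes always have the lowest bit cleared, the
 * sentinel's hash can be strictly greater than any real hash, letting
 * search loops terminate at the sentinel without a separate
 * end-of-bucket test.
 */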
2578
2579/** Computes and memoizes the hash of the item. */
2580static inline void memoize_node_hash(cht_t *h, cht_link_t *item)
2581{
2582 item->hash = calc_node_hash(h, item);
2583}
2584
2585/** Packs the next pointer address and the mark into a single pointer. */
2586static inline marked_ptr_t make_link(const cht_link_t *next, mark_t mark)
2587{
2588 marked_ptr_t ptr = (marked_ptr_t) next;
2589
2590 assert(!(ptr & N_MARK_MASK));
2591 assert(!((unsigned)mark & ~N_MARK_MASK));
2592
2593 return ptr | mark;
2594}
2595
2596/** Strips any marks from the next item link and returns the next item's address.*/
2597static inline cht_link_t *get_next(marked_ptr_t link)
2598{
2599 return (cht_link_t *)(link & ~N_MARK_MASK);
2600}
2601
2602/** Returns the current node's mark stored in the next item link. */
2603static inline mark_t get_mark(marked_ptr_t link)
2604{
2605 return (mark_t)(link & N_MARK_MASK);
2606}
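
/*
 * Illustrative packing, assuming N_MARK_MASK covers the low alignment
 * bits of a cht_link_t address: a node at 0x1000 marked N_DELETED is
 * stored as (0x1000 | N_DELETED); get_next() masks the mark off and
 * get_mark() masks the address off.
 */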
2607
2608/** Moves the window by one item so that it points to the next item. */
2609static inline void next_wnd(wnd_t *wnd)
2610{
2611 assert(wnd);
2612 assert(wnd->cur);
2613
2614 wnd->last = wnd->cur;
2615 wnd->ppred = &wnd->cur->link;
2616 wnd->cur = get_next(wnd->cur->link);
2617}
2618
2619/** Predicate that matches only exactly the same node. */
2620static bool same_node_pred(void *node, const cht_link_t *item2)
2621{
2622 const cht_link_t *item1 = (const cht_link_t *) node;
2623 return item1 == item2;
2624}
2625
2626/** Compare-and-swaps a next item link. */
2627static inline marked_ptr_t cas_link(marked_ptr_t *link, const cht_link_t *cur_next,
2628 mark_t cur_mark, const cht_link_t *new_next, mark_t new_mark)
2629{
2630 return _cas_link(link, make_link(cur_next, cur_mark),
2631 make_link(new_next, new_mark));
2632}
2633
2634/** Compare-and-swaps a next item link. */
2635static inline marked_ptr_t _cas_link(marked_ptr_t *link, marked_ptr_t cur,
2636 marked_ptr_t new)
2637{
2638 assert(link != &sentinel.link);
2639 /*
2640 * cas(x) operations on the same location x on one cpu must be ordered,
2641 * but do not have to be ordered wrt other cas(y) to a different location y
2642 * on the same cpu.
2643 *
2644 * cas(x) must act as a write barrier on x, ie if cas(x) succeeds
2645 * and is observed by another cpu, then all cpus must be able to
2646 * make the effects of cas(x) visible just by issuing a load barrier.
2647 * For example:
2648 * cpu1 cpu2 cpu3
2649 * cas(x, 0 -> 1), succeeds
2650 * cas(x, 0 -> 1), fails
2651 * MB, to order load of x in cas and store to y
2652 * y = 7
2653 * sees y == 7
2654 * loadMB must be enough to make cas(x) on cpu3 visible to cpu1, ie x == 1.
2655 *
2656 * If cas() did not work this way:
2657 * a) our head move protocol would not be correct.
2658 * b) freeing an item linked to a moved head after another item was
2659 * inserted in front of it, would require more than one grace period.
2660 *
2661 * Ad (a): In the following example, cpu1 starts moving old_head
2662 * to new_head, cpu2 completes the move and cpu3 notices cpu2
2663 * completed the move before cpu1 gets a chance to notice cpu2
2664 * had already completed the move. Our requirements for cas()
2665 * assume cpu3 will see a valid and mutable value in new_head
2666 * after issuing a load memory barrier once it has determined
2667 * the old_head's value had been successfully moved to new_head
2668 * (because it sees old_head marked invalid).
2669 *
2670 * cpu1 cpu2 cpu3
2671 * cas(old_head, <addr, N>, <addr, Const>), succeeds
2672 * cas-order-barrier
2673 * // Move from old_head to new_head started, now the interesting stuff:
2674 * cas(new_head, <0, Inv>, <addr, N>), succeeds
2675 *
2676 * cas(new_head, <0, Inv>, <addr, N>), but fails
2677 * cas-order-barrier
2678 * cas(old_head, <addr, Const>, <addr, Inv>), succeeds
2679 *
2680 * Sees old_head marked Inv (by cpu2)
2681 * load-MB
2682 * assert(new_head == <addr, N>)
2683 *
2684 * cas-order-barrier
2685 *
2686 * Even though cpu1 did not yet issue a cas-order-barrier, cpu1's store
2687 * to new_head (successful cas()) must be made visible to cpu3 with
2688 * a load memory barrier if cpu1's store to new_head is visible
2689 * on another cpu (cpu2) and that cpu's (cpu2's) store to old_head
2690 * is already visible to cpu3.
2691 */
2692 void *expected = (void *)cur;
2693
2694 /*
2695 * Use the acquire-release model, although we could probably
2696 * get away even with the relaxed memory model due to our use
2697 * of explicit memory barriers.
2698 */
2699 __atomic_compare_exchange_n((void **)link, &expected, (void *)new, false,
2700 __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
2701
2702 return (marked_ptr_t) expected;
2703}
2704
2705/** Orders compare-and-swaps to different memory locations. */
2706static inline void cas_order_barrier(void)
2707{
2708 /* Make sure CAS to different memory locations are ordered. */
2709 write_barrier();
2710}
2711
2712
2713/** @}
2714 */