Userspace RCU Atomic Operations API
by Mathieu Desnoyers and Paul E. McKenney


This document describes the <urcu/uatomic.h> API. These are the atomic
operations provided by the Userspace RCU library. The general rule
regarding memory barriers is that only uatomic_xchg(),
uatomic_cmpxchg(), uatomic_add_return(), and uatomic_sub_return() imply
full memory barriers before and after the atomic operation. The other
primitives don't guarantee any memory barrier.

Only atomic operations performed on integers ("int" and "long", signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations. Those architectures
define UATOMIC_HAS_ATOMIC_BYTE and UATOMIC_HAS_ATOMIC_SHORT,
respectively, when uatomic.h is included. Performing an atomic write
to a type size not supported by the architecture will trigger an
illegal instruction.

In the description below, "type" is a type that can be atomically
written to by the architecture. It needs to be at most word-sized, and
its alignment needs to be greater than or equal to its size.

void uatomic_set(type *addr, type v)

	Atomically write @v into @addr. By "atomically", we mean that no
	concurrent operation that reads from @addr will see partial
	effects of uatomic_set().

type uatomic_read(type *addr)

	Atomically read the value contained in @addr. By "atomically",
	we mean that uatomic_read() cannot see a partial effect of any
	concurrent uatomic update.

type uatomic_cmpxchg(type *addr, type old, type new)

	An atomic read-modify-write operation that performs this
	sequence of operations atomically: check if @addr contains @old.
	If true, then replace the content of @addr by @new. Return the
	value previously contained by @addr. This function implies a
	full memory barrier before and after the atomic operation.

type uatomic_xchg(type *addr, type new)

	An atomic read-modify-write operation that performs this sequence
	of operations atomically: replace the content of @addr by @new,
	and return the value previously contained by @addr. This
	function implies a full memory barrier before and after the
	atomic operation.

type uatomic_add_return(type *addr, type v)
type uatomic_sub_return(type *addr, type v)

	An atomic read-modify-write operation that performs this
	sequence of operations atomically: increment/decrement the
	content of @addr by @v, and return the resulting value. These
	functions imply a full memory barrier before and after the
	atomic operation.

void uatomic_and(type *addr, type mask)
void uatomic_or(type *addr, type mask)

	Atomically write the result of bitwise "and"/"or" between the
	content of @addr and @mask into @addr.
	These operations do not necessarily imply memory barriers.
	If memory barriers are needed, they may be provided by
	explicitly using
	cmm_smp_mb__before_uatomic_and(),
	cmm_smp_mb__after_uatomic_and(),
	cmm_smp_mb__before_uatomic_or(), and
	cmm_smp_mb__after_uatomic_or(). These explicit barriers are
	no-ops on architectures in which the underlying atomic
	instructions implicitly supply the needed memory barriers.

void uatomic_add(type *addr, type v)
void uatomic_sub(type *addr, type v)

	Atomically increment/decrement the content of @addr by @v.
	These operations do not necessarily imply memory barriers.
	If memory barriers are needed, they may be provided by
	explicitly using
	cmm_smp_mb__before_uatomic_add(),
	cmm_smp_mb__after_uatomic_add(),
	cmm_smp_mb__before_uatomic_sub(), and
	cmm_smp_mb__after_uatomic_sub(). These explicit barriers are
	no-ops on architectures in which the underlying atomic
	instructions implicitly supply the needed memory barriers.

void uatomic_inc(type *addr)
void uatomic_dec(type *addr)

	Atomically increment/decrement the content of @addr by 1.
	These operations do not necessarily imply memory barriers.
	If memory barriers are needed, they may be provided by
	explicitly using
	cmm_smp_mb__before_uatomic_inc(),
	cmm_smp_mb__after_uatomic_inc(),
	cmm_smp_mb__before_uatomic_dec(), and
	cmm_smp_mb__after_uatomic_dec(). These explicit barriers are
	no-ops on architectures in which the underlying atomic
	instructions implicitly supply the needed memory barriers.