Userspace RCU Atomic Operations API
===================================

by Mathieu Desnoyers and Paul E. McKenney
This document describes the `<urcu/uatomic.h>` API. These are the atomic
operations provided by the Userspace RCU library. The general rule
regarding memory barriers is that only `uatomic_xchg()`,
`uatomic_cmpxchg()`, `uatomic_add_return()`, and `uatomic_sub_return()` imply
full memory barriers before and after the atomic operation. The other
primitives do not guarantee any memory barrier.
Only atomic operations performed on integers (`int` and `long`, signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations. Those architectures
respectively have `UATOMIC_HAS_ATOMIC_BYTE` and `UATOMIC_HAS_ATOMIC_SHORT`
defined when `uatomic.h` is included. An attempt to perform an atomic write
to a type whose size is not supported by the architecture will trigger an
illegal instruction.
In the description below, `type` is a type that can be atomically
written to by the architecture. It needs to be at most word-sized, and
its alignment needs to be greater than or equal to its size.
```c
void uatomic_set(type *addr, type v)
```
Atomically write `v` into `addr`. By "atomically", we mean that no
concurrent operation that reads from `addr` will see partial
effects of `uatomic_set()`.
```c
type uatomic_read(type *addr)
```
Atomically read the value stored at `addr`. By "atomically", we mean that
`uatomic_read()` cannot see a partial effect of any concurrent
uatomic update.
```c
type uatomic_cmpxchg(type *addr, type old, type new)
```
An atomic read-modify-write operation that performs this
sequence of operations atomically: check if `addr` contains `old`.
If true, then replace the content of `addr` by `new`. Return the
value previously contained by `addr`. This function implies a full
memory barrier before and after the atomic operation.
```c
type uatomic_xchg(type *addr, type new)
```
An atomic read-modify-write operation that performs this sequence
of operations atomically: replace the content of `addr` by `new`,
and return the value previously contained by `addr`. This
function implies a full memory barrier before and after the atomic
operation.
```c
type uatomic_add_return(type *addr, type v)
type uatomic_sub_return(type *addr, type v)
```
An atomic read-modify-write operation that performs this
sequence of operations atomically: increment/decrement the
content of `addr` by `v`, and return the resulting value. These
functions imply a full memory barrier before and after the atomic
operation.
```c
void uatomic_and(type *addr, type mask)
void uatomic_or(type *addr, type mask)
```
Atomically write the result of bitwise "and"/"or" between the
content of `addr` and `mask` into `addr`.

These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by explicitly using
`cmm_smp_mb__before_uatomic_and()`, `cmm_smp_mb__after_uatomic_and()`,
`cmm_smp_mb__before_uatomic_or()`, and `cmm_smp_mb__after_uatomic_or()`.
These explicit barriers are no-ops on architectures in which the underlying
atomic instructions implicitly supply the needed memory barriers.
```c
void uatomic_add(type *addr, type v)
void uatomic_sub(type *addr, type v)
```
Atomically increment/decrement the content of `addr` by `v`.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_add()`,
`cmm_smp_mb__after_uatomic_add()`, `cmm_smp_mb__before_uatomic_sub()`, and
`cmm_smp_mb__after_uatomic_sub()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.
```c
void uatomic_inc(type *addr)
void uatomic_dec(type *addr)
```
Atomically increment/decrement the content of `addr` by 1.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_inc()`,
`cmm_smp_mb__after_uatomic_inc()`, `cmm_smp_mb__before_uatomic_dec()`,
and `cmm_smp_mb__after_uatomic_dec()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.