Userspace RCU Atomic Operations API
by Mathieu Desnoyers and Paul E. McKenney


This document describes the <urcu/uatomic.h> API. These are the atomic
operations provided by the Userspace RCU library. The general rule
regarding memory barriers is that only uatomic_xchg(),
uatomic_cmpxchg(), uatomic_add_return(), and uatomic_sub_return() imply
full memory barriers before and after the atomic operation. Other
primitives don't guarantee any memory barrier.

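For example, a minimal sketch contrasting the two classes of primitives
(the counter variable is illustrative):

    #include <urcu/uatomic.h>

    static unsigned long count;

    static void example(void)
    {
            /* Implies a full memory barrier before and after. */
            unsigned long snapshot = uatomic_add_return(&count, 1);

            /* Implies no memory barrier: surrounding accesses may be
             * reordered around the increment. */
            uatomic_add(&count, 1);
            (void) snapshot;
    }
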
Only atomic operations performed on integers ("int" and "long", signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations. Those respectively
have UATOMIC_HAS_ATOMIC_BYTE and UATOMIC_HAS_ATOMIC_SHORT defined when
uatomic.h is included. Attempting an atomic write to a type size not
supported by the architecture will trigger an illegal instruction.

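Whether the smaller sizes are available can therefore be checked at
compile time; for example (a sketch, with an illustrative flag
variable):

    #include <urcu/uatomic.h>

    #ifdef UATOMIC_HAS_ATOMIC_BYTE
    static unsigned char flag;      /* 1-byte atomics are available. */
    #else
    static unsigned int flag;       /* Fall back to a full word. */
    #endif

    static void set_flag(void)
    {
            uatomic_set(&flag, 1);
    }
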
In the description below, "type" is a type that can be atomically
written to by the architecture. It needs to be at most word-sized, and
its alignment needs to be greater than or equal to its size.

void uatomic_set(type *addr, type v)

    Atomically write @v into @addr. By "atomically", we mean that no
    concurrent operation that reads from @addr will see partial
    effects of uatomic_set().

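    For example, a minimal publisher sketch (the shared variable is
    illustrative):

        #include <urcu/uatomic.h>

        static unsigned long shared;

        static void publish_value(unsigned long v)
        {
                /* The store happens in one piece: concurrent readers
                 * never observe a half-written value. */
                uatomic_set(&shared, v);
        }
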
type uatomic_read(type *addr)

    Atomically read the content of @addr. By "atomically", we mean
    that uatomic_read() cannot see a partial effect of any concurrent
    uatomic update.

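    A matching reader sketch (using the same illustrative variable as
    the uatomic_set() example above):

        static unsigned long read_value(void)
        {
                /* Reads the whole value in one piece; pairs with
                 * uatomic_set() on the writer side. */
                return uatomic_read(&shared);
        }
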
type uatomic_cmpxchg(type *addr, type old, type new)

    An atomic read-modify-write operation that performs this
    sequence of operations atomically: check if @addr contains @old.
    If true, then replace the content of @addr by @new. Return the
    value previously contained by @addr. This function implies a full
    memory barrier before and after the atomic operation.

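    A common idiom is the compare-and-swap retry loop; for example, a
    sketch of a saturating increment (illustrative helper, not part of
    the API):

        #include <urcu/uatomic.h>

        /* Atomically increment *addr, but never past @max. */
        static unsigned long add_saturate(unsigned long *addr,
                                          unsigned long max)
        {
                unsigned long old, new;

                do {
                        old = uatomic_read(addr);
                        if (old == max)
                                return old;     /* Saturated. */
                        new = old + 1;
                } while (uatomic_cmpxchg(addr, old, new) != old);

                return new;
        }
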
type uatomic_xchg(type *addr, type new)

    An atomic read-modify-write operation that performs this sequence
    of operations atomically: replace the content of @addr by @new,
    and return the value previously contained by @addr. This
    function implies a full memory barrier before and after the
    atomic operation.

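    For example, a sketch that atomically steals a pending item and
    leaves NULL behind (the slot and its type are illustrative):

        #include <urcu/uatomic.h>
        #include <stddef.h>

        struct work;                    /* Illustrative item type. */

        static struct work *pending;    /* Illustrative shared slot. */

        static struct work *take_pending(void)
        {
                /* Full memory barriers before and after the swap. */
                return uatomic_xchg(&pending, NULL);
        }
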
type uatomic_add_return(type *addr, type v)
type uatomic_sub_return(type *addr, type v)

    An atomic read-modify-write operation that performs this
    sequence of operations atomically: increment/decrement the
    content of @addr by @v, and return the resulting value. These
    functions imply a full memory barrier before and after the
    atomic operation.

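    For example, a reference-count release sketch (free_object() is a
    hypothetical cleanup function, not part of the library):

        #include <urcu/uatomic.h>

        static void free_object(void);  /* Hypothetical cleanup. */

        static long refcount = 1;

        static void put_ref(void)
        {
                /* The implied barriers order all prior accesses to
                 * the object before the decrement. */
                if (uatomic_sub_return(&refcount, 1) == 0)
                        free_object();
        }
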
void uatomic_and(type *addr, type mask)
void uatomic_or(type *addr, type mask)

    Atomically write the result of bitwise "and"/"or" between the
    content of @addr and @mask into @addr.
    These operations do not necessarily imply memory barriers.
    If memory barriers are needed, they may be provided by
    explicitly using
    cmm_smp_mb__before_uatomic_and(),
    cmm_smp_mb__after_uatomic_and(),
    cmm_smp_mb__before_uatomic_or(), and
    cmm_smp_mb__after_uatomic_or(). These explicit barriers are
    no-ops on architectures in which the underlying atomic
    instructions implicitly supply the needed memory barriers.

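    For example, a sketch that sets a flag bit and orders it after
    prior stores (the flag word and bit are illustrative):

        #include <urcu/uatomic.h>

        #define FLAG_READY      (1UL << 0)

        static unsigned long flags;

        static void set_ready(void)
        {
                /* Order prior stores before the flag update; a no-op
                 * on architectures whose atomic "or" instruction
                 * already implies the barrier. */
                cmm_smp_mb__before_uatomic_or();
                uatomic_or(&flags, FLAG_READY);
        }
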
void uatomic_add(type *addr, type v)
void uatomic_sub(type *addr, type v)

    Atomically increment/decrement the content of @addr by @v.
    These operations do not necessarily imply memory barriers.
    If memory barriers are needed, they may be provided by
    explicitly using
    cmm_smp_mb__before_uatomic_add(),
    cmm_smp_mb__after_uatomic_add(),
    cmm_smp_mb__before_uatomic_sub(), and
    cmm_smp_mb__after_uatomic_sub(). These explicit barriers are
    no-ops on architectures in which the underlying atomic
    instructions implicitly supply the needed memory barriers.

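    For example, a relaxed statistics counter sketch (no ordering is
    needed, so no barrier helpers are used):

        #include <urcu/uatomic.h>

        static unsigned long nr_events;

        static void count_event(void)
        {
                /* Unordered increment; adequate for statistics that
                 * are only ever read approximately. */
                uatomic_add(&nr_events, 1);
        }
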
void uatomic_inc(type *addr)
void uatomic_dec(type *addr)

    Atomically increment/decrement the content of @addr by 1.
    These operations do not necessarily imply memory barriers.
    If memory barriers are needed, they may be provided by
    explicitly using
    cmm_smp_mb__before_uatomic_inc(),
    cmm_smp_mb__after_uatomic_inc(),
    cmm_smp_mb__before_uatomic_dec(), and
    cmm_smp_mb__after_uatomic_dec(). These explicit barriers are
    no-ops on architectures in which the underlying atomic
    instructions implicitly supply the needed memory barriers.

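    For example, a sketch that makes an increment visible before
    subsequent accesses (the counter is illustrative):

        #include <urcu/uatomic.h>

        static unsigned long users;

        static void get_user(void)
        {
                uatomic_inc(&users);
                /* Order the increment before later accesses; a no-op
                 * on architectures whose atomic increment already
                 * implies the barrier. */
                cmm_smp_mb__after_uatomic_inc();
        }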