<!--
SPDX-FileCopyrightText: 2023 EfficiOS Inc.

SPDX-License-Identifier: CC-BY-4.0
-->

Userspace RCU Atomic Operations API
===================================

by Mathieu Desnoyers and Paul E. McKenney

This document describes the `<urcu/uatomic.h>` API. Those are the atomic
operations provided by the Userspace RCU library. The general rule
regarding memory barriers is that only `uatomic_xchg()`,
`uatomic_cmpxchg()`, `uatomic_add_return()`, and `uatomic_sub_return()` imply
full memory barriers before and after the atomic operation. Other
primitives don't guarantee any memory barrier.

Only atomic operations performed on integers (`int` and `long`, signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations. Those respectively
have `UATOMIC_HAS_ATOMIC_BYTE` and `UATOMIC_HAS_ATOMIC_SHORT` defined when
`uatomic.h` is included. Attempting an atomic write to a type size not
supported by the architecture will trigger an illegal instruction.

In the description below, `type` is a type that can be atomically
written to by the architecture. It needs to be at most word-sized, and
its alignment needs to be greater than or equal to its size.

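For instance, code that needs a 1-byte atomic flag can test the feature
macro at compile time. A minimal sketch (the `ready_flag` variable and
`set_ready()` helper are hypothetical):

```c
#include <stdint.h>
#include <urcu/uatomic.h>

#ifdef UATOMIC_HAS_ATOMIC_BYTE
static uint8_t ready_flag;		/* 1-byte atomics are available */
#else
static unsigned long ready_flag;	/* fall back to a word-sized type */
#endif

static void set_ready(void)
{
	uatomic_set(&ready_flag, 1);
}
```
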

API
---

```c
void uatomic_set(type *addr, type v);
```

Atomically write `v` into `addr`. By "atomically", we mean that no
concurrent operation that reads from `addr` will see partial
effects of `uatomic_set()`.

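
For example, a writer can publish a new value of a shared variable
without locking. A minimal sketch, with a hypothetical `config_version`
variable:

```c
#include <urcu/uatomic.h>

static unsigned long config_version;

static void publish_version(unsigned long v)
{
	/*
	 * Concurrent readers see either the old or the new value,
	 * never a torn write. No memory barrier is implied.
	 */
	uatomic_set(&config_version, v);
}
```
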
```c
type uatomic_read(type *addr);
```

Atomically read the content of `addr`. By "atomically", we mean that
`uatomic_read()` cannot see a partial effect of any concurrent
`uatomic` update.

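
A reader can poll such a variable until it reaches a desired value. A
minimal sketch, assuming the hypothetical `config_version` variable
above and using liburcu's `caa_cpu_relax()` busy-wait hint from
`<urcu/arch.h>`:

```c
#include <urcu/arch.h>		/* caa_cpu_relax() */
#include <urcu/uatomic.h>

static unsigned long config_version;

static unsigned long wait_for_version(unsigned long min)
{
	unsigned long v;

	/*
	 * Each read is atomic, but uatomic_read() by itself implies
	 * no ordering against surrounding memory accesses.
	 */
	while ((v = uatomic_read(&config_version)) < min)
		caa_cpu_relax();
	return v;
}
```
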
```c
type uatomic_cmpxchg(type *addr, type old, type new);
```

An atomic read-modify-write operation that performs the following
sequence atomically: check whether `addr` contains `old`; if so,
replace the content of `addr` with `new`. Return the value previously
contained in `addr`. This function implies a full memory barrier
before and after the atomic operation on success. On failure, no
memory ordering is guaranteed.

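
The typical pattern is a compare-and-swap retry loop. A minimal sketch
of a hypothetical saturating counter increment:

```c
#include <urcu/uatomic.h>

/* Increment *ctr, but never past max; returns the previous value. */
static unsigned long saturating_inc(unsigned long *ctr, unsigned long max)
{
	unsigned long old, ret;

	old = uatomic_read(ctr);
	for (;;) {
		if (old >= max)
			return old;	/* saturated, leave unchanged */
		ret = uatomic_cmpxchg(ctr, old, old + 1);
		if (ret == old)
			return old;	/* success: full barriers implied */
		old = ret;		/* lost a race, retry with new value */
	}
}
```
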
```c
type uatomic_xchg(type *addr, type new);
```

An atomic read-modify-write operation that performs this sequence
of operations atomically: replace the content of `addr` by `new`,
and return the value previously contained by `addr`. This
function implies a full memory barrier before and after the atomic
operation.

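
For illustration, `uatomic_xchg()` can build a rudimentary test-and-set
lock. This is only a sketch; real code should use proper locking
primitives:

```c
#include <urcu/arch.h>		/* caa_cpu_relax() */
#include <urcu/uatomic.h>

static int lock_word;		/* 0 = unlocked, 1 = locked */

static void toy_lock(void)
{
	/* The implied full barrier keeps the critical section's
	 * accesses after the acquisition. */
	while (uatomic_xchg(&lock_word, 1))
		caa_cpu_relax();
}

static void toy_unlock(void)
{
	/* Also a full barrier, so the critical section completes
	 * before the lock is released. */
	(void) uatomic_xchg(&lock_word, 0);
}
```
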
```c
type uatomic_add_return(type *addr, type v);
type uatomic_sub_return(type *addr, type v);
```

An atomic read-modify-write operation that performs this
sequence of operations atomically: increment/decrement the
content of `addr` by `v`, and return the resulting value. These
functions imply a full memory barrier before and after the atomic
operation.

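
A common use is reference counting, where the implied barriers order an
object's last accesses before it is freed. A minimal sketch with a
hypothetical `struct obj`:

```c
#include <stdlib.h>
#include <urcu/uatomic.h>

struct obj {
	long refcount;
	/* ... payload ... */
};

static void obj_get(struct obj *o)
{
	(void) uatomic_add_return(&o->refcount, 1);
}

static void obj_put(struct obj *o)
{
	/*
	 * The implied full barrier orders all prior accesses to *o
	 * before the count drops, so the final free is safe.
	 */
	if (uatomic_sub_return(&o->refcount, 1) == 0)
		free(o);
}
```
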
```c
void uatomic_and(type *addr, type mask);
void uatomic_or(type *addr, type mask);
```

Atomically write the result of bitwise "and"/"or" between the
content of `addr` and `mask` into `addr`.

These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by explicitly using
`cmm_smp_mb__before_uatomic_and()`, `cmm_smp_mb__after_uatomic_and()`,
`cmm_smp_mb__before_uatomic_or()`, and `cmm_smp_mb__after_uatomic_or()`.
These explicit barriers are no-ops on architectures in which the underlying
atomic instructions implicitly supply the needed memory barriers.

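
For example, flag bits can be set and cleared atomically, with the
explicit barrier helpers added only where ordering matters. A minimal
sketch with hypothetical flag values:

```c
#include <urcu/uatomic.h>

#define FLAG_READY	(1UL << 0)
#define FLAG_DIRTY	(1UL << 1)

static unsigned long obj_flags;

static void mark_ready(void)
{
	uatomic_or(&obj_flags, FLAG_READY);
	/*
	 * Order the flag update before later accesses; a no-op on
	 * architectures whose atomic "or" already implies a barrier.
	 */
	cmm_smp_mb__after_uatomic_or();
}

static void clear_dirty(void)
{
	cmm_smp_mb__before_uatomic_and();
	uatomic_and(&obj_flags, ~FLAG_DIRTY);
}
```
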
```c
void uatomic_add(type *addr, type v);
void uatomic_sub(type *addr, type v);
```

Atomically increment/decrement the content of `addr` by `v`.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_add()`,
`cmm_smp_mb__after_uatomic_add()`, `cmm_smp_mb__before_uatomic_sub()`, and
`cmm_smp_mb__after_uatomic_sub()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.

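
These fit statistics counters, where updates need atomicity but no
ordering. A minimal sketch with a hypothetical `bytes_sent` counter:

```c
#include <urcu/uatomic.h>

static unsigned long bytes_sent;

static void account_send(unsigned long nbytes)
{
	/* No barrier needed: the counter is only read for reporting. */
	uatomic_add(&bytes_sent, nbytes);
}
```
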
```c
void uatomic_inc(type *addr);
void uatomic_dec(type *addr);
```

Atomically increment/decrement the content of `addr` by 1.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_inc()`,
`cmm_smp_mb__after_uatomic_inc()`, `cmm_smp_mb__before_uatomic_dec()`,
and `cmm_smp_mb__after_uatomic_dec()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.
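
For instance, a gauge of threads currently inside a code section, when
no ordering is required. A minimal sketch with hypothetical helpers:

```c
#include <urcu/uatomic.h>

static long nr_active;		/* threads currently in the section */

static void enter_section(void)
{
	uatomic_inc(&nr_active);
	/*
	 * Add cmm_smp_mb__after_uatomic_inc() here if the increment
	 * must be ordered before later accesses.
	 */
}

static void exit_section(void)
{
	uatomic_dec(&nr_active);
}
```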