Performance: mark ring buffer do_copy callers always inline
author    Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
          Sun, 25 Sep 2016 14:50:22 +0000 (10:50 -0400)
committer Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
          Sun, 25 Sep 2016 14:50:22 +0000 (10:50 -0400)
commit    00d0f8eb40e77bb8915be29c527f48fb7e006b61
tree      6ba54f62c4266968362bbf8d1f0a81ea5167391f
parent    a3492932cffa2c9dfbc9416792b20ce763708fc1
Performance: mark ring buffer do_copy callers always inline

The underlying copy operation is more efficient when the size is a
compile-time constant, which only happens if this function is inlined
into its caller. Otherwise, we end up issuing a memcpy call for each
field.

Force inlining for performance reasons for:
  - lib_ring_buffer_write,
  - lib_ring_buffer_do_strcpy,
  - lib_ring_buffer_strcpy.
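The effect can be illustrated with a simplified sketch (hypothetical
names, not the actual lttng code): once the copy helper is forced
inline, the compiler sees a constant size at each call site and can
lower memcpy() to a few move instructions instead of a per-field
library call.

```c
#include <stddef.h>
#include <string.h>

/*
 * Simplified sketch: always_inline guarantees the helper body is
 * expanded at the call site, so "len" becomes a compile-time constant
 * there and memcpy() can be optimized away into direct stores.
 */
static inline __attribute__((always_inline))
void rb_write(char *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);	/* len is constant once inlined */
}

struct event_fields {
	unsigned int id;
	unsigned long timestamp;
};

void serialize(char *buf, const struct event_fields *f)
{
	/* Each call inlines with a known size (sizeof of each field). */
	rb_write(buf, &f->id, sizeof(f->id));
	rb_write(buf + sizeof(f->id), &f->timestamp, sizeof(f->timestamp));
}
```

Without the always_inline attribute, the compiler may keep rb_write()
out of line, in which case every field serialization pays for a full
memcpy call with a runtime size argument.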

Note that in lttng-ust, the probe provider serialization functions need
to call the lttng_event_write() client callback, which falls back to
the memcpy operation.

Inlining these functions also benefits the event header code, which can
now inline them as well.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
libringbuffer/backend.h