The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
7 include::../common/copyright.txt[]
10 include::../common/welcome.txt[]
13 include::../common/audience.txt[]
17 === What's in this documentation?
19 The LTTng Documentation is divided into the following sections:
* ``**<<nuts-and-bolts,Nuts and bolts>>**'' explains the
rudiments of software tracing and the rationale behind the
LTTng project.
+
Skip this section if you're familiar with software tracing and with the
LTTng project.
* ``**<<installing-lttng,Installation>>**'' describes the steps to
install the LTTng packages on common Linux distributions and from
their source.
+
Skip this section if you already properly installed LTTng on your target
system.
* ``**<<getting-started,Quick start>>**'' is a concise guide to
get started quickly with LTTng kernel and user space tracing.
+
We recommend this section if you're new to LTTng or to software tracing
in general.
+
Skip this section if you're not new to LTTng.
* ``**<<core-concepts,Core concepts>>**'' explains the concepts at
the heart of LTTng.
+
It's a good idea to become familiar with the core concepts
before attempting to use the toolkit.
49 * ``**<<plumbing,Components of LTTng>>**'' describes the various
50 components of the LTTng machinery, like the daemons, the libraries,
51 and the command-line interface.
* ``**<<instrumenting,Instrumentation>>**'' shows different ways to
instrument user applications and the Linux kernel for LTTng tracing.
+
Instrumenting source code is essential to provide a meaningful
source of events.
+
Skip this section if you don't have a programming background.
61 * ``**<<controlling-tracing,Tracing control>>**'' is divided into topics
62 which demonstrate how to use the vast array of features that
63 LTTng{nbsp}{revision} offers.
65 * ``**<<reference,Reference>>**'' contains API reference tables.
67 * ``**<<glossary,Glossary>>**'' is a specialized dictionary of terms
68 related to LTTng or to the field of software tracing.
71 include::../common/convention.txt[]
74 include::../common/acknowledgements.txt[]
78 == What's new in LTTng{nbsp}{revision}?
80 LTTng{nbsp}{revision} bears the name _Nordicité_, the product of a
81 collaboration between https://champlibre.co/[Champ Libre] and
82 https://champlibre.co/[Boréale]. This farmhouse IPA is brewed with
83 https://en.wikipedia.org/wiki/Kveik[Kveik] yeast and Québec-grown
84 barley, oats, and juniper branches. The result is a remarkable, fruity,
hazy golden IPA that offers a balanced touch of resinous and woodsy
notes.
88 New features and changes in LTTng{nbsp}{revision}:
92 * The LTTng trigger API of <<liblttng-ctl-lttng,`liblttng-ctl`>> now
93 offers the ``__event rule matches__'' condition (an <<event-rule,event
94 rule>> matches an event) as well as the following new actions:
97 * <<basic-tracing-session-control,Start or stop>> a recording session.
98 * <<session-rotation,Archive the current trace chunk>> of a
99 recording session (rotate).
100 * <<taking-a-snapshot,Take a snapshot>> of a recording session.
As a reminder, a <<trigger,trigger>> is a condition-actions pair. When
the condition of a trigger is satisfied, LTTng attempts to execute its
actions.
107 This feature is also available with the new man:lttng-add-trigger(1),
108 man:lttng-remove-trigger(1), and man:lttng-list-triggers(1)
109 <<lttng-cli,cmd:lttng>> commands.
Starting from LTTng{nbsp}{revision}, a trigger may have more than one
action.
114 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
115 trigger to a session daemon>>” to learn more.
117 * The LTTng <<lttng-ust,user space>> and <<lttng-modules,kernel>>
118 tracers offer the new namespace context field `time_ns`, which is the
119 inode number, in the proc file system, of the current clock namespace.
121 See man:lttng-add-context(1), man:lttng-ust(3), and
122 man:time_namespaces(7).
124 * The link:/man[manual pages] of LTTng-tools now have a terminology and
125 style which match the LTTng Documentation, many fixes, more internal
126 and manual page links, clearer lists and procedures, superior
127 consistency, and usage examples.
129 The new man:lttng-event-rule(7) manual page explains the new, common
130 way to specify an event rule on the command line.
The new man:lttng-concepts(7) manual page explains the core concepts of
LTTng. Its contents are essentially the ``<<core-concepts,Core
concepts>>'' section of this documentation, adapted to the manual
page format.
141 The major version part of the `liblttng-ust`
142 https://en.wikipedia.org/wiki/Soname[soname] is bumped, which means you
143 **must recompile** your instrumented applications/libraries and
144 <<tracepoint-provider,tracepoint provider packages>> to use
145 LTTng-UST{nbsp}{revision}.
147 This change became a necessity to clean up the library and for
148 `liblttng-ust` to stop exporting private symbols.
150 Also, LTTng{nbsp}{revision} prepends the `lttng_ust_` and `LTTNG_UST_`
151 prefix to all public macro/definition/function names to offer a
152 consistent API namespace. The LTTng{nbsp}2.12 API is still available;
see the ``Compatibility with previous APIs'' section of
man:lttng-ust(3).
157 Other notable changes:
159 * The `liblttng-ust` C{nbsp}API offers the new man:lttng_ust_vtracef(3)
160 and man:lttng_ust_vtracelog(3) macros which are to
161 man:lttng_ust_tracef(3) and man:lttng_ust_tracelog(3) what
162 man:vprintf(3) is to man:printf(3).
164 * LTTng-UST now only depends on https://liburcu.org/[`liburcu`] at build
165 time, not at run time.
169 * The preferred display base of event record integer fields which
170 contain memory addresses is now hexadecimal instead of decimal.
172 * The `pid` field is removed from `lttng_statedump_file_descriptor`
173 event records and the `file_table_address` field is added.
175 This new field is the address of the `files_struct` structure which
176 contains the file descriptor.
See the related change:
``https://github.com/lttng/lttng-modules/commit/e7a0ca7205fd4be7c829d171baa8823fe4784c90[statedump: introduce `file_table_address`]''
182 * The `flags` field of `syscall_entry_clone` event records is now a
183 structure containing two enumerations (exit signal and options).
185 This change makes the flag values more readable and meaningful.
See the related change:
``https://github.com/lttng/lttng-modules/commit/d775625e2ba4825b73b5897e7701ad6e2bdba115[syscalls: Make `clone()`'s `flags` field a 2 enum struct]''
191 * The memory footprint of the kernel tracer is improved: the latter only
192 generates metadata for the specific system call recording event rules
193 that you <<enabling-disabling-events,create>>.
[[nuts-and-bolts]]
== Nuts and bolts

What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
generation_ is a modern toolkit for tracing Linux systems and
applications. So your first question might be: what is tracing?
208 As the history of software engineering progressed and led to what
209 we now take for granted--complex, numerous and
210 interdependent software applications running in parallel on
211 sophisticated operating systems like Linux--the authors of such
212 components, software developers, began feeling a natural
213 urge to have tools that would ensure the robustness and good performance
214 of their masterpieces.
216 One major achievement in this field is, inarguably, the
217 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
218 an essential tool for developers to find and fix bugs. But even the best
219 debugger won't help make your software run faster, and nowadays, faster
220 software means either more work done by the same hardware, or cheaper
221 hardware for the same work.
223 A _profiler_ is often the tool of choice to identify performance
224 bottlenecks. Profiling is suitable to identify _where_ performance is
225 lost in a given piece of software. The profiler outputs a profile, a
226 statistical summary of observed events, which you may use to discover
227 which functions took the most time to execute. However, a profiler won't
228 report _why_ some identified functions are the bottleneck. Bottlenecks
229 might only occur when specific conditions are met, conditions that are
230 sometimes impossible to capture by a statistical profiler, or impossible
231 to reproduce with an application altered by the overhead of an
232 event-based profiler. For a thorough investigation of software
233 performance issues, a history of execution is essential, with the
234 recorded values of variables and context fields you choose, and with as
235 little influence as possible on the instrumented application. This is
236 where tracing comes in handy.
238 _Tracing_ is a technique used to understand what goes on in a running
239 software system. The piece of software used for tracing is called a
240 _tracer_, which is conceptually similar to a tape recorder. When
241 recording, specific instrumentation points placed in the software source
242 code generate events that are saved on a giant tape: a _trace_ file. You
243 can record user application and operating system events at the same
244 time, opening the possibility of resolving a wide range of problems that
245 would otherwise be extremely challenging.
247 Tracing is often compared to _logging_. However, tracers and loggers are
248 two different tools, serving two different purposes. Tracers are
249 designed to record much lower-level events that occur much more
250 frequently than log messages, often in the range of thousands per
251 second, with very little execution overhead. Logging is more appropriate
252 for a very high-level analysis of less frequent events: user accesses,
253 exceptional conditions (errors and warnings, for example), database
254 transactions, instant messaging communications, and such. Simply put,
255 logging is one of the many use cases that can be satisfied with tracing.
257 The list of recorded events inside a trace file can be read manually
258 like a log file for the maximum level of detail, but it's generally
259 much more interesting to perform application-specific analyses to
260 produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do this.
264 In the end, this is what LTTng is: a powerful, open source set of
265 tools to trace the Linux kernel and user applications at the same time.
266 LTTng is composed of several components actively maintained and
267 developed by its link:/community/#where[community].
270 [[lttng-alternatives]]
271 === Alternatives to noch:{LTTng}
Excluding proprietary solutions, a few competing software tracers
exist for Linux:
276 https://github.com/dtrace4linux/linux[dtrace4linux]::
277 A port of Sun Microsystems' DTrace to Linux.
The cmd:dtrace tool interprets user scripts and is responsible for
loading code into the Linux kernel for further execution and collecting
the output of said programs.
283 https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF]::
284 A subsystem in the Linux kernel in which a virtual machine can
285 execute programs passed from the user space to the kernel.
You can attach such programs to tracepoints and kprobes thanks to a
system call, and they can output data to the user space when executed
thanks to different mechanisms (pipe, VM register values, and eBPF maps,
to name a few).
292 https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]::
293 The de facto function tracer of the Linux kernel.
295 Its user interface is a set of special files in sysfs.
297 https://perf.wiki.kernel.org/[perf]::
A performance analysis tool for Linux which supports hardware
performance counters, tracepoints, as well as other counters and
types of probes.
+
The controlling utility of perf is the cmd:perf command line/text UI
tool.
305 https://linux.die.net/man/1/strace[strace]::
A command-line utility which records system calls made by a
user process, as well as signal deliveries and changes of process
state.
310 strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace] to
311 fulfill its function.
313 https://www.sysdig.org/[sysdig]::
314 Like SystemTap, uses scripts to analyze Linux kernel events.
316 You write scripts, or _chisels_ in the jargon of sysdig, in Lua and
317 sysdig executes them while it traces the system or afterwards. The
318 interface of sysdig is the cmd:sysdig command-line tool as well as the
319 text UI-based cmd:csysdig tool.
321 https://sourceware.org/systemtap/[SystemTap]::
322 A Linux kernel and user space tracer which uses custom user scripts
323 to produce plain text traces.
325 SystemTap converts the scripts to the C language, and then compiles them
326 as Linux kernel modules which are loaded to produce trace data. The
327 primary user interface of SystemTap is the cmd:stap command-line tool.
The main distinctive features of LTTng are that it produces correlated
kernel and user space traces and that it does so with the lowest
overhead among comparable solutions. It produces trace files in the
https://diamon.org/ctf[CTF] format, a file format optimized
for the production and analyses of multi-gigabyte data.
335 LTTng is the result of more than 10{nbsp}years of active open source
336 development by a community of passionate developers. LTTng is currently
337 available on major desktop and server Linux distributions.
339 The main interface for tracing control is a single command-line tool
340 named cmd:lttng. The latter can create several recording sessions, enable
341 and disable recording event rules on the fly, filter events efficiently
342 with custom user expressions, start and stop tracing, and much more.
343 LTTng can write the traces on the file system or send them over the
344 network, and keep them totally or partially. You can make LTTng execute
345 user-defined actions when LTTng emits an event. You can view the traces
346 once tracing becomes inactive or as LTTng records events.
348 <<installing-lttng,Install LTTng now>> and
349 <<getting-started,start tracing>>!
[[installing-lttng]]
== Installation

**LTTng** is a set of software <<plumbing,components>> which interact to
<<instrumenting,instrument>> the Linux kernel and user applications, and
to <<controlling-tracing,control tracing>> (start and stop
recording, create recording event rules, and the rest). Those
components are bundled into the following packages:
LTTng-tools::
    Libraries and command-line interface to control tracing.
LTTng-modules::
    Linux kernel modules to instrument and trace the kernel.
LTTng-UST::
    Libraries and Java/Python packages to instrument and trace user
    applications.
Most distributions mark the LTTng-modules and LTTng-UST packages as
optional when installing LTTng-tools (which is always required). In the
following sections, we always provide the steps to install all three,
but note that:

* You only need to install LTTng-modules if you intend to use
the Linux kernel LTTng tracer.

* You only need to install LTTng-UST if you intend to use the user
space LTTng tracer.
384 As of 10{nbsp}June{nbsp}2021, LTTng{nbsp}{revision} is not yet available
385 in any major non-enterprise Linux distribution.
For https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
packages, see https://packages.efficios.com/[EfficiOS Enterprise
Packages].

For other distributions, <<building-from-source,build LTTng from
source>>.
396 [[building-from-source]]
397 === Build from source
399 To build and install LTTng{nbsp}{revision} from source:
401 . Using the package manager of your distribution, or from source,
402 install the following dependencies of LTTng-tools and LTTng-UST:
405 * https://sourceforge.net/projects/libuuid/[libuuid]
406 * https://directory.fsf.org/wiki/Popt[popt]
407 * https://liburcu.org/[Userspace RCU]
408 * http://www.xmlsoft.org/[libxml2]
409 * **Optional**: https://github.com/numactl/numactl[numactl]
. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
+
[role="term"]
----
$ wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
tar -xf lttng-modules-latest-2.13.tar.bz2 &&
cd lttng-modules-2.13.* &&
make &&
sudo make modules_install &&
sudo depmod --all
----
. Download, build, and install the latest LTTng-UST{nbsp}{revision}:
+
[role="term"]
----
$ wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
tar -xf lttng-ust-latest-2.13.tar.bz2 &&
cd lttng-ust-2.13.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
443 Add `--disable-numa` to `./configure` if you don't have
444 https://github.com/numactl/numactl[numactl].
448 .Java and Python application tracing
450 If you need to instrument and have LTTng trace <<java-application,Java
451 applications>>, pass the `--enable-java-agent-jul`,
452 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
453 `configure` script, depending on which Java logging framework you use.
455 If you need to instrument and have LTTng trace
456 <<python-application,Python applications>>, pass the
457 `--enable-python-agent` option to the `configure` script. You can set
458 the env:PYTHON environment variable to the path to the Python interpreter
459 for which to install the LTTng-UST Python agent package.
466 By default, LTTng-UST libraries are installed to
467 dir:{/usr/local/lib}, which is the de facto directory in which to
468 keep self-compiled and third-party libraries.
470 When <<building-tracepoint-providers-and-user-application,linking an
471 instrumented user application with `liblttng-ust`>>:
* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
variable.
476 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
477 man:gcc(1), man:g++(1), or man:clang(1).
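For instance (a sketch; `app.o` and `tp.o` are placeholder file names):

```shell
# Option 1: have the dynamic linker search /usr/local/lib at run time.
export LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Option 2: embed the search path at link time instead:
#
#     gcc -o app app.o tp.o -L/usr/local/lib -Wl,-rpath,/usr/local/lib \
#         -llttng-ust -ldl
```

Option 2 avoids having to set the environment variable for every user
of the application.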
. Download, build, and install the latest LTTng-tools{nbsp}{revision}:
+
[role="term"]
----
$ wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
tar -xf lttng-tools-latest-2.13.tar.bz2 &&
cd lttng-tools-2.13.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
497 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
498 previous steps automatically for a given version of LTTng and confine
499 the installed files to a specific directory. This can be useful to try
500 LTTng without installing it on your system.
[[getting-started]]
== Quick start

This is a short guide to get started quickly with LTTng kernel and user
space tracing.

Before you follow this guide, make sure to <<installing-lttng,install>>
LTTng.
512 This tutorial walks you through the steps to:
514 . <<tracing-the-linux-kernel,Record Linux kernel events>>.
516 . <<tracing-your-own-user-application,Record the events of a user
517 application>> written in C.
. <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>>.
523 [[tracing-the-linux-kernel]]
524 === Record Linux kernel events
526 NOTE: The following command lines start with the `#` prompt because you
527 need root privileges to control the Linux kernel LTTng tracer. You can
528 also control the kernel tracer as a regular user if your Unix user is a
529 member of the <<tracing-group,tracing group>>.
. Create a <<tracing-session,recording session>> to write LTTng traces
to dir:{/tmp/my-kernel-trace}:
+
[role="term"]
----
# lttng create my-kernel-session --output=/tmp/my-kernel-trace
----
. List the available kernel tracepoints and system calls:
+
[role="term"]
----
# lttng list --kernel
# lttng list --kernel --syscall
----
. Create <<event,recording event rules>> which match events having
the desired names, for example the `sched_switch` and
`sched_process_fork` tracepoints, and the man:open(2) and man:close(2)
system calls:
+
[role="term"]
----
# lttng enable-event --kernel sched_switch,sched_process_fork
# lttng enable-event --kernel --syscall open,close
----
Create a recording event rule which matches _all_ the Linux kernel
tracepoint events with the opt:lttng-enable-event(1):--all option
(recording with such a recording event rule generates a lot of data):

[role="term"]
----
# lttng enable-event --kernel --all
----
. <<basic-tracing-session-control,Start recording>>:
+
[role="term"]
----
# lttng start
----
584 . Do some operation on your system for a few seconds. For example,
585 load a website, or list the files of a directory.
. <<creating-destroying-tracing-sessions,Destroy>> the current
recording session:
+
[role="term"]
----
# lttng destroy
----
597 The man:lttng-destroy(1) command doesn't destroy the trace data; it
598 only destroys the state of the recording session.
600 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
601 implicitly (see ``<<basic-tracing-session-control,Start and stop a
602 recording session>>''). You need to stop recording to make LTTng flush
603 the remaining trace data and make the trace readable.
. For the sake of this example, make the recorded trace accessible to
the current Unix user:
+
[role="term"]
----
# chown -R $(whoami) /tmp/my-kernel-trace
----
615 See ``<<viewing-and-analyzing-your-traces,View and analyze the
616 recorded events>>'' to view the recorded events.
619 [[tracing-your-own-user-application]]
620 === Record user application events
622 This section walks you through a simple example to record the events of
623 a _Hello world_ program written in{nbsp}C.
625 To create the traceable user application:
. Create the tracepoint provider header file, which defines the
tracepoints and the events they can generate:
+
[source,c]
.path:{hello-tp.h}
----
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER hello_world

#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

LTTNG_UST_TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    LTTNG_UST_TP_ARGS(
        int, my_integer_arg,
        char *, my_string_arg
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(my_string_field, my_string_arg)
        lttng_ust_field_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
. Create the tracepoint provider package source file:
+
[source,c]
.path:{hello-tp.c}
----
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES
#define LTTNG_UST_TRACEPOINT_DEFINE

#include "hello-tp.h"
----
. Build the tracepoint provider package:
+
[role="term"]
----
$ gcc -c -I. hello-tp.c
----
. Create the _Hello World_ application source file:
+
[source,c]
.path:{hello.c}
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    int i;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call only exists for the purpose of this
     * demonstration, to pause the application in order for you to have
     * time to list its tracepoints. You don't need it otherwise.
     */
    getchar();

    /*
     * An lttng_ust_tracepoint() call.
     *
     * Arguments, as defined in `hello-tp.h`:
     *
     * 1. Tracepoint provider name (required)
     * 2. Tracepoint name (required)
     * 3. `my_integer_arg` (first user-defined argument)
     * 4. `my_string_arg` (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * C identifiers, NOT strings: they're in fact parts of variables
     * that the macros in `hello-tp.h` create.
     */
    lttng_ust_tracepoint(hello_world, my_first_tracepoint, 23,
                         "hi there!");

    for (i = 0; i < argc; i++) {
        lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                             i, argv[i]);
    }

    puts("Quitting now!");
    lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                         i * i, "i^2");

    return 0;
}
----
. Build the application:
+
[role="term"]
----
$ gcc -c hello.c
----

. Link the application with the tracepoint provider package,
`liblttng-ust`, and `libdl`:
+
[role="term"]
----
$ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
----
757 Here's the whole build process:
760 .Build steps of the user space tracing tutorial.
761 image::ust-flow.png[]
763 To record the events of the user application:
. Run the application with a few arguments:
+
[role="term"]
----
$ ./hello world and beyond
----
+
You see:
+
----
Hello, World!
Press Enter to continue...
----
. Start an LTTng <<lttng-sessiond,session daemon>>:
+
[role="term"]
----
$ lttng-sessiond --daemonize
----
792 NOTE: A session daemon might already be running, for example as a
793 service that the service manager of your distribution started.
. List the available user space tracepoints:
+
[role="term"]
----
$ lttng list --userspace
----
+
You see the `hello_world:my_first_tracepoint` tracepoint listed
under the `./hello` process.
. Create a <<tracing-session,recording session>>:
+
[role="term"]
----
$ lttng create my-user-space-session
----
. Create a <<event,recording event rule>> which matches user space
tracepoint events named `hello_world:my_first_tracepoint`:
+
[role="term"]
----
$ lttng enable-event --userspace hello_world:my_first_tracepoint
----
. <<basic-tracing-session-control,Start recording>>:
+
[role="term"]
----
$ lttng start
----
835 . Go back to the running `hello` application and press **Enter**.
+
The program executes all `lttng_ust_tracepoint()` instrumentation
points, emitting events as the event rule you created in step{nbsp}5
matches them.

. <<creating-destroying-tracing-sessions,Destroy>> the current
recording session:
+
[role="term"]
----
$ lttng destroy
----
852 The man:lttng-destroy(1) command doesn't destroy the trace data; it
853 only destroys the state of the recording session.
855 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
856 implicitly (see ``<<basic-tracing-session-control,Start and stop a
857 recording session>>''). You need to stop recording to make LTTng flush
858 the remaining trace data and make the trace readable.
860 By default, LTTng saves the traces to the
861 +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+ directory, where
862 +__NAME__+ is the recording session name. The env:LTTNG_HOME environment
863 variable defaults to `$HOME` if not set.
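The default output path rule above can be sketched in Python (the exact
`DATE-TIME` stamp format is an assumption for illustration; the
`LTTNG_HOME` fallback is as described above):

```python
import os
import datetime


def default_trace_dir(session_name: str, now: datetime.datetime) -> str:
    # $LTTNG_HOME falls back to $HOME when not set.
    lttng_home = os.environ.get('LTTNG_HOME', os.path.expanduser('~'))

    # LTTng appends a `NAME-DATE-TIME` component, for example
    # `my-session-20210610-142301` (stamp format assumed here).
    stamp = now.strftime('%Y%m%d-%H%M%S')
    return os.path.join(lttng_home, 'lttng-traces',
                        '{}-{}'.format(session_name, stamp))
```

This is only a reading aid for the naming rule, not part of the LTTng
API.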
866 [[viewing-and-analyzing-your-traces]]
867 === View and analyze the recorded events
869 Once you have completed the <<tracing-the-linux-kernel,Record Linux
870 kernel events>> and <<tracing-your-own-user-application,Record user
871 application events>> tutorials, you can inspect the recorded events.
873 There are many tools you can use to read LTTng traces:
875 https://babeltrace.org/[Babeltrace{nbsp}2]::
876 A rich, flexible trace manipulation toolkit which includes
877 a versatile command-line interface
878 (man:babeltrace2(1)),
879 a https://babeltrace.org/docs/v2.0/libbabeltrace2/[C{nbsp}library],
880 and https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
881 so that you can easily process or convert an LTTng trace with
884 The Babeltrace{nbsp}2 project ships with a plugin
885 (man:babeltrace2-plugin-ctf(7)) which supports the format of the traces
886 which LTTng produces, https://diamon.org/ctf/[CTF].
888 http://tracecompass.org/[Trace Compass]::
889 A graphical user interface for viewing and analyzing any type of
890 logs or traces, including those of LTTng.
892 https://github.com/lttng/lttng-analyses[LTTng analyses]::
893 An experimental project which includes many high-level analyses of
894 LTTng kernel traces, like scheduling statistics, interrupt
895 frequency distribution, top CPU usage, and more.
897 NOTE: This section assumes that LTTng wrote the traces it recorded
898 during the previous tutorials to their default location, in the
899 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
900 environment variable defaults to `$HOME` if not set.
903 [[viewing-and-analyzing-your-traces-bt]]
904 ==== Use the cmd:babeltrace2 command-line tool
906 The simplest way to list all the recorded events of an LTTng trace is to
907 pass its path to man:babeltrace2(1), without options:
[role="term"]
----
$ babeltrace2 ~/lttng-traces/my-user-space-session*
----
914 The cmd:babeltrace2 command finds all traces recursively within the
915 given path and prints all their events, sorting them chronologically.
Pipe the output of cmd:babeltrace2 into a tool like man:grep(1) for
further filtering:

[role="term"]
----
$ babeltrace2 /tmp/my-kernel-trace | grep _switch
----
Pipe the output of cmd:babeltrace2 into a tool like man:wc(1) to count
the recorded events:

[role="term"]
----
$ babeltrace2 /tmp/my-kernel-trace | grep _open | wc --lines
----
934 [[viewing-and-analyzing-your-traces-bt-python]]
935 ==== Use the Babeltrace{nbsp}2 Python bindings
937 The <<viewing-and-analyzing-your-traces-bt,text output of
938 cmd:babeltrace2>> is useful to isolate event records by simple matching
939 using man:grep(1) and similar utilities. However, more elaborate
940 filters, such as keeping only event records with a field value falling
941 within a specific range, are not trivial to write using a shell.
942 Moreover, reductions and even the most basic computations involving
943 multiple event records are virtually impossible to implement.
945 Fortunately, Babeltrace{nbsp}2 ships with
946 https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
947 which make it easy to read the event records of an LTTng trace
948 sequentially and compute the desired information.
The following script accepts an LTTng Linux kernel trace path as its
first argument and prints the short names of the top five running
processes on CPU{nbsp}0 during the whole trace:

[source,python]
.path:{top5proc.py}
----
import bt2
import sys
import collections


def top5proc():
    # Get the trace path from the first command-line argument
    it = bt2.TraceCollectionMessageIterator(sys.argv[1])

    # This counter dictionary will hold execution times:
    #
    #     Task command name -> Total execution time (ns)
    exec_times = collections.Counter()

    # This holds the last `sched_switch` timestamp
    last_ts = None

    for msg in it:
        # We only care about event messages
        if type(msg) is not bt2._EventMessageConst:
            continue

        # Event of the event message
        event = msg.event

        # Keep only `sched_switch` events
        if event.cls.name != 'sched_switch':
            continue

        # Keep only records of events which LTTng emitted from CPU 0
        if event.packet.context_field['cpu_id'] != 0:
            continue

        # Event timestamp (ns)
        cur_ts = msg.default_clock_snapshot.ns_from_origin

        if last_ts is None:
            # Start here
            last_ts = cur_ts

        # (Short) name of the previous task command
        prev_comm = str(event.payload_field['prev_comm'])

        # Initialize an entry in our dictionary if not done yet
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Print top 5
    for name, ns in exec_times.most_common(5):
        print('{:20}{} s'.format(name, ns / 1e9))


if __name__ == '__main__':
    top5proc()
----
Run this script:

[role="term"]
----
$ python3 top5proc.py /tmp/my-kernel-trace/kernel
----

Output example:

----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
----
1039 Note that `swapper/0` is the ``idle'' process of CPU{nbsp}0 on Linux;
1040 since we weren't using the CPU that much when recording, its first
1041 position in the list makes sense.
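The accumulation idea of the script above, namely attributing the time
between two consecutive `sched_switch` events to the task being
switched out, can be checked in isolation with synthetic records (the
task names and timestamps below are made up):

```python
import collections


def accumulate(records):
    """Sum per-task run times from an iterable of
    (timestamp_ns, prev_comm) pairs taken from consecutive
    sched_switch events on a single CPU."""
    exec_times = collections.Counter()
    last_ts = None

    for cur_ts, prev_comm in records:
        if last_ts is None:
            last_ts = cur_ts

        # The task named `prev_comm` ran from `last_ts` to `cur_ts`.
        exec_times[prev_comm] += cur_ts - last_ts
        last_ts = cur_ts

    return exec_times


# Synthetic sched_switch records: (timestamp in ns, outgoing task name)
records = [(0, 'idle'), (500, 'worker'), (800, 'idle'), (2000, 'worker')]
times = accumulate(records)
```

Here `worker` accumulates the 0-500 and 800-2000 intervals, and
`idle` the 500-800 interval.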
[[core-concepts]]
== [[understanding-lttng]]Core concepts
1047 From a user's perspective, the LTTng system is built on a few concepts,
1048 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1049 operates by sending commands to the <<lttng-sessiond,session daemon>>
1050 (through <<liblttng-ctl-lttng,`liblttng-ctl`>>).
Understanding how those objects relate to each other is key to
mastering LTTng.
1055 The core concepts of LTTng are:
1057 * <<"event-rule","Instrumentation point, event rule, and event">>
1058 * <<trigger,Trigger>>
1059 * <<tracing-session,Recording session>>
1060 * <<domain,Tracing domain>>
1061 * <<channel,Channel and ring buffer>>
1062 * <<event,Recording event rule and event record>>
1064 NOTE: The man:lttng-concepts(7) manual page also documents the core
1065 concepts of LTTng, with more links to other LTTng-tools manual pages.
[[event-rule]]
=== Instrumentation point, event rule, and event
1071 An _instrumentation point_ is a point, within a piece of software,
1072 which, when executed, creates an LTTng _event_.
1074 LTTng offers various <<instrumentation-point-types,types of
1077 An _event rule_ is a set of conditions to match a set of events.
1079 When LTTng creates an event{nbsp}__E__, an event rule{nbsp}__ER__ is
1080 said to __match__{nbsp}__E__ when{nbsp}__E__ satisfies _all_ the
1081 conditions of{nbsp}__ER__. This concept is similar to a
1082 https://en.wikipedia.org/wiki/Regular_expression[regular expression]
1083 which matches a set of strings.
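As an illustration of the analogy, here's how a globbing-style name
pattern (LTTng event rule name conditions accept such patterns, with
`*` matching any sequence of characters) partitions a set of event
names, using Python's standard `fnmatch` module as a stand-in matcher:

```python
from fnmatch import fnmatchcase

# An event rule with the name condition `sched_*` matches any event
# whose name starts with `sched_`, much like a pattern matches a set
# of strings.
names = ['sched_switch', 'sched_process_fork', 'syscall_entry_open']
matches = [n for n in names if fnmatchcase(n, 'sched_*')]
```

The name condition is only one of the conditions an event must satisfy
for the rule to match it.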
1085 When an event rule matches an event, LTTng _emits_ the event, therefore
1086 attempting to execute one or more actions.
1090 [[event-creation-emission-opti]]The event creation and emission
1091 processes are documentation concepts to help understand the journey from
1092 an instrumentation point to the execution of actions.
1094 The actual creation of an event can be costly because LTTng needs to
1095 evaluate the arguments of the instrumentation point.
1097 In practice, LTTng implements various optimizations for the Linux kernel
1098 and user space <<domain,tracing domains>> to avoid actually creating an
1099 event when the tracer knows, thanks to properties which are independent
1100 from the event payload and current context, that it would never emit
1101 such an event. Those properties are:
1103 * The <<instrumentation-point-types,instrumentation point type>>.
1105 * The instrumentation point name.
1107 * The instrumentation point log level.
1109 * For a <<event,recording event rule>>:
1110 ** The status of the rule itself.
1111 ** The status of the <<channel,channel>>.
1112 ** The activity of the <<tracing-session,recording session>>.
1113 ** Whether or not the process for which LTTng would create the event is
1114 <<pid-tracking,allowed to record events>>.
1116 In other words: if, for a given instrumentation point{nbsp}__IP__, the
1117 LTTng tracer knows that it would never emit an event,
1118 executing{nbsp}__IP__ represents a simple boolean variable check and,
1119 for a Linux kernel recording event rule, a few process attribute checks.
As of LTTng{nbsp}{revision}, there are two places where you can find an
event rule:
1125 <<event,Recording event rule>>::
1126 A specific type of event rule of which the action is to record the
1127 matched event as an event record.
1129 See ``<<enabling-disabling-events,Create and enable a recording event
1130 rule>>'' to learn more.
1132 ``Event rule matches'' <<trigger,trigger>> condition (since LTTng{nbsp}2.13)::
1133 When the event rule of the trigger condition matches an event, LTTng
can execute user-defined actions such as sending an LTTng
<<trigger-event-notif,notification>>,
<<basic-tracing-session-control,starting a recording session>>,
and more.
1139 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
1140 trigger to a session daemon>>” to learn more.
1142 For LTTng to emit an event{nbsp}__E__,{nbsp}__E__ must satisfy _all_ the
1143 basic conditions of an event rule{nbsp}__ER__, that is:
1145 * The instrumentation point from which LTTng
1146 creates{nbsp}__E__ has a specific
1147 <<instrumentation-point-types,type>>.
* A pattern matches the name of{nbsp}__E__ while another pattern
doesn't.
1152 * The log level of the instrumentation point from which LTTng
creates{nbsp}__E__ is at least as severe as some value, or is exactly
some value.
1156 * The fields of the payload of{nbsp}__E__ and the current context fields
1157 satisfy a filter expression.
A <<event,recording event rule>> has additional, implicit conditions to
satisfy.
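
The basic conditions above map to options of the man:lttng-enable-event(1)
command. As a sketch (the `my_app` provider, the event name pattern, and
the `status` field are hypothetical), the following command creates a
recording event rule with a name pattern, a log level condition, and a
filter expression:

[role="term"]
----
$ lttng enable-event --userspace 'my_app:request_*' \
                     --loglevel=INFO \
                     --filter='status >= 400'
----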
1163 [[instrumentation-point-types]]
1164 ==== Instrumentation point types
1166 As of LTTng{nbsp}{revision}, the available instrumentation point
1167 types are, depending on the <<domain,tracing domain>>:

Linux kernel::
LTTng tracepoint:::
A statically defined point in the source code of the kernel
1172 image or of a kernel module using the
1173 <<lttng-modules,LTTng-modules>> macros.
1175 Linux kernel system call:::
1176 Entry, exit, or both of a Linux kernel system call.
1178 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kprobe]:::
1179 A single probe dynamically placed in the compiled kernel code.
1181 When you create such an instrumentation point, you set its memory
1182 address or symbol name.
1184 Linux user space probe:::
1185 A single probe dynamically placed at the entry of a compiled
1186 user space application/library function through the kernel.
1188 When you create such an instrumentation point, you set:
1191 With the ELF method::
1192 Its application/library path and its symbol name.
1194 With the USDT method::
1195 Its application/library path, its provider name, and its probe name.
1197 ``USDT'' stands for _SystemTap User-level Statically Defined Tracing_,
1198 a http://dtrace.org/blogs/about/[DTrace]-style marker.
1201 As of LTTng{nbsp}{revision}, LTTng only supports USDT probes which
1202 are _not_ reference-counted.
1204 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kretprobe]:::
1205 Entry, exit, or both of a Linux kernel function.
1207 When you create such an instrumentation point, you set the memory
1208 address or symbol name of its function.

User space::
LTTng tracepoint:::
A statically defined point in the source code of a C/$$C++$$
1213 application/library using the
1214 <<lttng-ust,LTTng-UST>> macros.
1216 `java.util.logging`, Apache log4j, and Python::
1217 Java or Python logging statement:::
A method call on a Java or Python logger attached to an
LTTng-UST handler.
1221 See ``<<list-instrumentation-points,List the available instrumentation
1222 points>>'' to learn how to list available Linux kernel, user space, and
1223 logging instrumentation points.
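
For instance, the following man:lttng-list(1) invocations (a sketch;
the actual output depends on your system and on the running
instrumented applications) list instrumentation points per type:

[role="term"]
----
$ lttng list --kernel
$ lttng list --kernel --syscall
$ lttng list --userspace
$ lttng list --jul
----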

[[trigger]]
=== Trigger

A _trigger_ associates a condition to one or more actions.
When the condition of a trigger is satisfied, LTTng attempts to execute
its actions.
As of LTTng{nbsp}{revision}, the available trigger conditions and
actions are:

Conditions::
1239 * The consumed buffer size of a given <<tracing-session,recording
1240 session>> becomes greater than some value.
1242 * The buffer usage of a given <<channel,channel>> becomes greater than
1245 * The buffer usage of a given channel becomes less than some value.
1247 * There's an ongoing <<session-rotation,recording session rotation>>.
* A recording session rotation completes.
1251 * An <<add-event-rule-matches-trigger,event rule matches>> an event.

Actions::
* <<trigger-event-notif,Send a notification>> to a user application.
1256 * <<basic-tracing-session-control,Start>> a given recording session.
1257 * <<basic-tracing-session-control,Stop>> a given recording session.
1258 * <<session-rotation,Archive the current trace chunk>> of a given
1259 recording session (rotate).
1260 * <<taking-a-snapshot,Take a snapshot>> of a given recording session.
1262 A trigger belongs to a <<lttng-sessiond,session daemon>>, not to a
1263 specific recording session. For a given session daemon, each Unix user has
1264 its own, private triggers. Note, however, that the `root` Unix user may,
1265 for the root session daemon:
1267 * Add a trigger as another Unix user.
1269 * List all the triggers, regardless of their owner.
1271 * Remove a trigger which belongs to another Unix user.
1273 For a given session daemon and Unix user, a trigger has a unique name.
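
As an illustration, the following man:lttng-add-trigger(1) command
(LTTng{nbsp}2.13; the `my_app:*` name pattern is hypothetical) registers
a trigger with an ``event rule matches'' condition and a notification
action:

[role="term"]
----
$ lttng add-trigger --condition=event-rule-matches \
                    --type=user --name='my_app:*' \
                    --action=notify
----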

[[tracing-session]]
=== Recording session
1279 A _recording session_ (named ``tracing session'' prior to
1280 LTTng{nbsp}2.13) is a stateful dialogue between you and a
1281 <<lttng-sessiond,session daemon>> for everything related to
1282 <<event,event recording>>.
1284 Everything that you do when you control LTTng tracers to record events
1285 happens within a recording session. In particular, a recording session:
1287 * Has its own name, unique for a given session daemon.
1289 * Has its own set of trace files, if any.
1291 * Has its own state of activity (started or stopped).
An active recording session is an implicit <<event,recording event rule>>
condition.
* Has its own <<tracing-session-mode,mode>> (local, network streaming,
snapshot, or live).
1299 * Has its own <<channel,channels>> to which are attached their own
1300 recording event rules.
1302 * Has its own <<pid-tracking,process attribute inclusion sets>>.
1305 .A _recording session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,recording event rules>>.
1306 image::concepts.png[]
Those attributes and objects are completely isolated between different
recording sessions.
1311 A recording session is like an
1312 https://en.wikipedia.org/wiki/Automated_teller_machine[ATM] session: the
1313 operations you do on the banking system through the ATM don't alter the
1314 data of other users of the same system. In the case of the ATM, a
1315 session lasts as long as your bank card is inside. In the case of LTTng,
1316 a recording session lasts from the man:lttng-create(1) command to the
1317 man:lttng-destroy(1) command.
1320 .Each Unix user has its own set of recording sessions.
1321 image::many-sessions.png[]
1323 A recording session belongs to a <<lttng-sessiond,session daemon>>. For a
1324 given session daemon, each Unix user has its own, private recording
1325 sessions. Note, however, that the `root` Unix user may operate on or
1326 destroy another user's recording session.
1329 [[tracing-session-mode]]
1330 ==== Recording session mode
1332 LTTng offers four recording session modes:
1334 [[local-mode]]Local mode::
1335 Write the trace data to the local file system.
1337 [[net-streaming-mode]]Network streaming mode::
1338 Send the trace data over the network to a listening
1339 <<lttng-relayd,relay daemon>>.
1341 [[snapshot-mode]]Snapshot mode::
1342 Only write the trace data to the local file system or send it to a
listening relay daemon when LTTng <<taking-a-snapshot,takes a
snapshot>>.
+
LTTng forces all the <<channel,channels>> of such a recording session
to be configured snapshot-ready.
1349 LTTng takes a snapshot of such a recording session when:
1352 * You run the man:lttng-snapshot(1) command.
1354 * LTTng executes a `snapshot-session` <<trigger,trigger>> action.
1357 [[live-mode]]Live mode::
1358 Send the trace data over the network to a listening relay daemon
1359 for <<lttng-live,live reading>>.
An LTTng live reader (for example, man:babeltrace2(1)) can connect to
the same relay daemon to receive trace data while the recording session
is active.
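
For reference, here's how you could create a recording session in each
mode with man:lttng-create(1) (the session names, the output path, and
the relay daemon host are placeholders):

[role="term"]
----
$ lttng create my-local-session --output=/tmp/my-traces
$ lttng create my-streaming-session --set-url=net://relayd-host
$ lttng create my-snapshot-session --snapshot
$ lttng create my-live-session --live
----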

[[domain]]
=== Tracing domain

A _tracing domain_ identifies a type of LTTng tracer.
1371 A tracing domain has its own properties and features.
1373 There are currently five available tracing domains:
* Linux kernel
* User space
* `java.util.logging` (JUL)
* log4j
* Python
1381 You must specify a tracing domain to target a type of LTTng tracer when
1382 using some <<lttng-cli,cmd:lttng>> commands to avoid ambiguity. For
1383 example, because the Linux kernel and user space tracing domains support
1384 named tracepoints as <<event-rule,instrumentation points>>, you need to
1385 specify a tracing domain when you <<enabling-disabling-events,create
1386 an event rule>> because both tracing domains could have tracepoints
1387 sharing the same name.
You can create <<channel,channels>> in the Linux kernel and user space
tracing domains. The other tracing domains have a single, default
channel.
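
Concretely, you select the tracing domain with a dedicated option of the
relevant cmd:lttng commands, for example (the user space, JUL, and
Python instrumentation point names are hypothetical):

[role="term"]
----
$ lttng enable-event --kernel sched_switch
$ lttng enable-event --userspace my_app:my_tracepoint
$ lttng enable-event --jul my_logger
$ lttng enable-event --python my.logger.name
----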

[[channel]]
=== Channel and ring buffer
A _channel_ is an object which is responsible for a set of
_ring buffers_.
1400 Each ring buffer is divided into multiple _sub-buffers_. When a
1401 <<event,recording event rule>>
matches an event, LTTng can record it to one or more sub-buffers of one
or more specific channels.
1405 When you <<enabling-disabling-channels,create a channel>>, you set its
1406 final attributes, that is:
1408 * Its <<channel-buffering-schemes,buffering scheme>>.
1410 * What to do <<channel-overwrite-mode-vs-discard-mode,when there's no
1411 space left>> for a new event record because all sub-buffers are full.
1413 * The <<channel-subbuf-size-vs-subbuf-count,size of each ring buffer and
1414 how many sub-buffers>> a ring buffer has.
1416 * The <<tracefile-rotation,size of each trace file LTTng writes for this
1417 channel and the maximum count>> of trace files.
1419 * The periods of its <<channel-read-timer,read>>,
1420 <<channel-switch-timer,switch>>, and <<channel-monitor-timer,monitor>>
1423 * For a Linux kernel channel: its output type.
1425 See the opt:lttng-enable-channel(1):--output option of the
1426 man:lttng-enable-channel(1) command.
1428 * For a user space channel: the value of its
1429 <<blocking-timeout-example,blocking timeout>>.
1431 A channel is always associated to a <<domain,tracing domain>>. The
1432 `java.util.logging` (JUL), log4j, and Python tracing domains each have a
1433 default channel which you can't configure.
1435 A channel owns <<event,recording event rules>>.
1438 [[channel-buffering-schemes]]
1439 ==== Buffering scheme
1441 A channel has at least one ring buffer _per CPU_. LTTng always records
1442 an event to the ring buffer dedicated to the CPU which emits it.
1444 The buffering scheme of a user space channel determines what has its own
1445 set of per-CPU ring buffers:
1447 Per-user buffering::
1448 Allocate one set of ring buffers--one per CPU--shared by all the
1449 instrumented processes of:
1450 If your Unix user is `root`:::
1455 .Per-user buffering scheme (recording session belongs to the `root` Unix user).
1456 image::per-user-buffering-root.png[]
1464 .Per-user buffering scheme (recording session belongs to the `Bob` Unix user).
1465 image::per-user-buffering.png[]
1468 Per-process buffering::
1469 Allocate one set of ring buffers--one per CPU--for each
1470 instrumented process of:
1471 If your Unix user is `root`:::
1476 .Per-process buffering scheme (recording session belongs to the `root` Unix user).
1477 image::per-process-buffering-root.png[]
1485 .Per-process buffering scheme (recording session belongs to the `Bob` Unix user).
1486 image::per-process-buffering.png[]
1489 The per-process buffering scheme tends to consume more memory than the
1490 per-user option because systems generally have more instrumented
1491 processes than Unix users running instrumented processes. However, the
1492 per-process buffering scheme ensures that one process having a high
event throughput won't fill all the shared sub-buffers of the same Unix
user.
1496 The buffering scheme of a Linux kernel channel is always to allocate a
1497 single set of ring buffers for the whole system. This scheme is similar
to the per-user option, but with a single, global user ``running'' the
instrumented processes.
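
Select the buffering scheme of a user space channel when you create it,
for example (the channel names are arbitrary):

[role="term"]
----
$ lttng enable-channel --userspace --buffers-uid my-per-user-channel
$ lttng enable-channel --userspace --buffers-pid my-per-process-channel
----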
1502 [[channel-overwrite-mode-vs-discard-mode]]
1503 ==== Event record loss mode
1505 When LTTng emits an event, LTTng can record it to a specific, available
1506 sub-buffer within the ring buffers of specific channels. When there's no
1507 space left in a sub-buffer, the tracer marks it as consumable and
1508 another, available sub-buffer starts receiving the following event
1509 records. An LTTng <<lttng-consumerd,consumer daemon>> eventually
1510 consumes the marked sub-buffer, which returns to the available state.
1518 In an ideal world, sub-buffers are consumed faster than they're filled.
1519 In the real world, however, all sub-buffers can be full at some point,
1520 leaving no space to record the following events.
1527 By default, <<lttng-modules,LTTng-modules>> and <<lttng-ust,LTTng-UST>>
1528 are _non-blocking_ tracers: when there's no available sub-buffer to
1529 record an event, it's acceptable to lose event records when the
1530 alternative would be to cause substantial delays in the execution of the
1531 instrumented application. LTTng privileges performance over integrity;
1532 it aims at perturbing the instrumented application as little as possible
1533 in order to make the detection of subtle race conditions and rare
1534 interrupt cascades possible.
1536 Since LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST, supports
1537 a _blocking mode_. See the <<blocking-timeout-example,blocking timeout
1538 example>> to learn how to use the blocking mode.
1540 When it comes to losing event records because there's no available
1541 sub-buffer, or because the blocking timeout of
1542 the channel is reached, the _event record loss mode_ of the channel
1543 determines what to do. The available event record loss modes are:
1545 [[discard-mode]]Discard mode::
1546 Drop the newest event records until a sub-buffer becomes available.
1548 This is the only available mode when you specify a blocking timeout.
1550 With this mode, LTTng increments a count of lost event records when an
1551 event record is lost and saves this count to the trace. A trace reader
1552 can use the saved discarded event record count of the trace to decide
whether or not to perform some analysis even if trace data is known to
be missing.
1556 [[overwrite-mode]]Overwrite mode::
1557 Clear the sub-buffer containing the oldest event records and start
1558 writing the newest event records there.
1560 This mode is sometimes called _flight recorder mode_ because it's
1561 similar to a https://en.wikipedia.org/wiki/Flight_recorder[flight
1562 recorder]: always keep a fixed amount of the latest data. It's also
1563 similar to the roll mode of an oscilloscope.
1565 Since LTTng{nbsp}2.8, with this mode, LTTng writes to a given sub-buffer
1566 its sequence number within its data stream. With a <<local-mode,local>>,
1567 <<net-streaming-mode,network streaming>>, or <<live-mode,live>> recording
1568 session, a trace reader can use such sequence numbers to report lost
1569 packets. A trace reader can use the saved discarded sub-buffer (packet)
1570 count of the trace to decide whether or not to perform some analysis
1571 even if trace data is known to be missing.
1573 With this mode, LTTng doesn't write to the trace the exact number of
1574 lost event records in the lost sub-buffers.
Which mechanism you should choose depends on your context: do you
prioritize the newest or the oldest event records in the ring buffer?
Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.
1584 There are a few ways to decrease your probability of losing event
1585 records. The ``<<channel-subbuf-size-vs-subbuf-count,Sub-buffer size and
1586 count>>'' section shows how to fine-tune the sub-buffer size and count
1587 of a channel to virtually stop losing event records, though at the cost
1588 of greater memory usage.
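
Set the event record loss mode of a channel at creation time, for
example (the channel names are arbitrary, and the blocking timeout
value, in microseconds, is only an illustration):

[role="term"]
----
$ lttng enable-channel --userspace --discard my-discard-channel
$ lttng enable-channel --userspace --overwrite my-flight-recorder-channel
$ lttng enable-channel --userspace --blocking-timeout=200 my-blocking-channel
----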
1591 [[channel-subbuf-size-vs-subbuf-count]]
1592 ==== Sub-buffer size and count
A channel has one or more ring buffers for each CPU of the target
system.
1596 See the ``<<channel-buffering-schemes,Buffering scheme>>'' section to
1597 learn how many ring buffers of a given channel are dedicated to each CPU
1598 depending on its buffering scheme.
Set the size of the sub-buffers that the ring buffers of a channel
contain, and how many of them there are, when you
<<enabling-disabling-channels,create it>>.
1604 Note that LTTng switching the current sub-buffer of a ring buffer
1605 (marking a full one as consumable and switching to an available one for
1606 LTTng to record the next events) introduces noticeable CPU overhead.
1607 Knowing this, the following list presents a few practical situations
1608 along with how to configure the sub-buffer size and count for them:
1610 High event throughput::
In general, prefer large sub-buffers to lower the risk of losing
event records.
Having larger sub-buffers also ensures a lower sub-buffer switching
frequency.
1617 The sub-buffer count is only meaningful if you create the channel in
1618 <<overwrite-mode,overwrite mode>>: in this case, if LTTng overwrites a
1619 sub-buffer, then the other sub-buffers are left unaltered.
1621 Low event throughput::
1622 In general, prefer smaller sub-buffers since the risk of losing
1623 event records is low.
1625 Because LTTng emits events less frequently, the sub-buffer switching
1626 frequency should remain low and therefore the overhead of the tracer
1627 shouldn't be a problem.
Low memory system::
If your target system has a low memory limit, prefer fewer first,
then smaller sub-buffers.
1633 Even if the system is limited in memory, you want to keep the
1634 sub-buffers as large as possible to avoid a high sub-buffer switching
1637 Note that LTTng uses https://diamon.org/ctf/[CTF] as its trace format,
1638 which means event record data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Therefore, a
1640 sub-buffer size of 1{nbsp}MiB is considered large.
1642 The previous scenarios highlight the major trade-off between a few large
1643 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1644 frequency vs. how many event records are lost in overwrite mode.
1645 Assuming a constant event throughput and using the overwrite mode, the
1646 two following configurations have the same ring buffer total size:
1654 Two sub-buffers of 4{nbsp}MiB each::
1655 Expect a very low sub-buffer switching frequency, but if LTTng
1656 ever needs to overwrite a sub-buffer, half of the event records so
1657 far (4{nbsp}MiB) are definitely lost.
1659 Eight sub-buffers of 1{nbsp}MiB each::
1660 Expect four times the tracer overhead of the configuration above,
1661 but if LTTng needs to overwrite a sub-buffer, only the eighth of
1662 event records so far (1{nbsp}MiB) are definitely lost.
1664 In <<discard-mode,discard mode>>, the sub-buffer count parameter is
pointless: use two sub-buffers and set their size according to your
requirements.
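
The two overwrite-mode configurations above could be created like this
(the channel names are arbitrary):

[role="term"]
----
$ lttng enable-channel --userspace --overwrite \
                       --subbuf-size=4M --num-subbuf=2 chan-few-large
$ lttng enable-channel --userspace --overwrite \
                       --subbuf-size=1M --num-subbuf=8 chan-many-small
----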
1669 [[tracefile-rotation]]
1670 ==== Maximum trace file size and count (trace file rotation)
1672 By default, trace files can grow as large as needed.
Set the maximum size of each trace file that LTTng writes for a given
channel when you <<enabling-disabling-channels,create it>>.
1677 When the size of a trace file reaches the fixed maximum size of the
1678 channel, LTTng creates another file to contain the next event records.
1679 LTTng appends a file count to each trace file name in this case.
1681 If you set the trace file size attribute when you create a channel, the
1682 maximum number of trace files that LTTng creates is _unlimited_ by
1683 default. To limit them, set a maximum number of trace files. When the
1684 number of trace files reaches the fixed maximum count of the channel,
LTTng overwrites the oldest trace file. This mechanism is called _trace
file rotation_.
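
For example, to keep at most eight trace files of 16{nbsp}MiB each
(a sketch; the values and the channel name are arbitrary):

[role="term"]
----
$ lttng enable-channel --kernel \
                       --tracefile-size=16M --tracefile-count=8 my-channel
----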
1690 Even if you don't limit the trace file count, always assume that LTTng
1691 manages all the trace files of the recording session.
1693 In other words, there's no safe way to know if LTTng still holds a given
1694 trace file open with the trace file rotation feature.
1696 The only way to obtain an unmanaged, self-contained LTTng trace before
1697 you <<creating-destroying-tracing-sessions,destroy the recording session>>
1698 is with the <<session-rotation,recording session rotation>> feature, which
1699 is available since LTTng{nbsp}2.11.

[[channel-timers]]
==== Timers

Each channel can have up to three optional timers:
1708 [[channel-switch-timer]]Switch timer::
1709 When this timer expires, a sub-buffer switch happens: for each ring
1710 buffer of the channel, LTTng marks the current sub-buffer as
consumable and _switches_ to an available one to record the next
event records.
1720 A switch timer is useful to ensure that LTTng consumes and commits trace
1721 data to trace files or to a distant <<lttng-relayd,relay daemon>>
1722 periodically in case of a low event throughput.
1724 Such a timer is also convenient when you use large
1725 <<channel-subbuf-size-vs-subbuf-count,sub-buffers>> to cope with a
1726 sporadic high event throughput, even if the throughput is otherwise low.
1728 Set the period of the switch timer of a channel when you
1729 <<enabling-disabling-channels,create it>> with
1730 the opt:lttng-enable-channel(1):--switch-timer option.
1732 [[channel-read-timer]]Read timer::
When this timer expires, LTTng checks for full, consumable
sub-buffers.
1736 By default, the LTTng tracers use an asynchronous message mechanism to
signal a full sub-buffer so that a <<lttng-consumerd,consumer daemon>>
can consume it.
1740 When such messages must be avoided, for example in real-time
1741 applications, use this timer instead.
1743 Set the period of the read timer of a channel when you
1744 <<enabling-disabling-channels,create it>> with the
1745 opt:lttng-enable-channel(1):--read-timer option.
1747 [[channel-monitor-timer]]Monitor timer::
1748 When this timer expires, the consumer daemon samples some channel
statistics to evaluate the following <<trigger,trigger>>
conditions:
1753 . The consumed buffer size of a given <<tracing-session,recording
1754 session>> becomes greater than some value.
1755 . The buffer usage of a given channel becomes greater than some value.
1756 . The buffer usage of a given channel becomes less than some value.
1759 If you disable the monitor timer of a channel{nbsp}__C__:
1762 * The consumed buffer size value of the recording session of{nbsp}__C__
1763 could be wrong for trigger condition type{nbsp}1: the consumed buffer
1764 size of{nbsp}__C__ won't be part of the grand total.
1766 * The buffer usage trigger conditions (types{nbsp}2 and{nbsp}3)
1767 for{nbsp}__C__ will never be satisfied.
1770 Set the period of the monitor timer of a channel when you
1771 <<enabling-disabling-channels,create it>> with the
1772 opt:lttng-enable-channel(1):--monitor-timer option.
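
Set the three timer periods at channel creation, for example (the
periods are in microseconds and the values are only illustrative):

[role="term"]
----
$ lttng enable-channel --userspace my-channel \
                       --switch-timer=1000000 \
                       --read-timer=200000 \
                       --monitor-timer=1000000
----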

[[event]]
=== Recording event rule and event record
1778 A _recording event rule_ is a specific type of <<event-rule,event rule>>
of which the action is to serialize and record the matched event as an
_event record_.
1782 Set the explicit conditions of a recording event rule when you
1783 <<enabling-disabling-events,create it>>. A recording event rule also has
1784 the following implicit conditions:
1786 * The recording event rule itself is enabled.
1788 A recording event rule is enabled on creation.
* The <<channel,channel>> to which the recording event rule is attached
is enabled.
1793 A channel is enabled on creation.
1795 * The <<tracing-session,recording session>> of the recording event rule is
1796 <<basic-tracing-session-control,active>> (started).
1798 A recording session is inactive (stopped) on creation.
1800 * The process for which LTTng creates an event to match is
1801 <<pid-tracking,allowed to record events>>.
All processes are allowed to record events on recording session
creation.
When you create a recording event rule, you always attach it to a
channel, which belongs to a recording session.
1809 When a recording event rule{nbsp}__ER__ matches an event{nbsp}__E__,
1810 LTTng attempts to serialize and record{nbsp}__E__ to one of the
available sub-buffers of the channel to which{nbsp}__ER__ is attached.
1813 When multiple matching recording event rules are attached to the same
1814 channel, LTTng attempts to serialize and record the matched event
1815 _once_. In the following example, the second recording event rule is
1816 redundant when both are enabled:

[role="term"]
----
$ lttng enable-event --userspace hello:world
$ lttng enable-event --userspace hello:world --loglevel=INFO
----
1825 .Logical path from an instrumentation point to an event record.
1826 image::event-rule.png[]
1828 As of LTTng{nbsp}{revision}, you cannot remove a recording event
1829 rule: it exists as long as its recording session exists.
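
While you can't remove a recording event rule, you can disable it with
man:lttng-disable-event(1), which clears its implicit ``enabled''
condition (the `hello:world` name is an example):

[role="term"]
----
$ lttng disable-event --userspace hello:world
----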

[[plumbing]]
== Components of noch:{LTTng}
1835 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1836 to call LTTng a simple _tool_ since it's composed of multiple
1837 interacting components.
1839 This section describes those components, explains their respective
1840 roles, and shows how they connect together to form the LTTng ecosystem.
1842 The following diagram shows how the most important components of LTTng
1843 interact with user applications, the Linux kernel, and you:
1846 .Control and trace data paths between LTTng components.
1847 image::plumbing.png[]
1849 The LTTng project integrates:

LTTng-tools::
Libraries and command-line interface to control recording sessions:
1854 * <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1855 * <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1856 * <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1857 * <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1858 * <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1859 * <<persistent-memory-file-systems,`lttng-crash` command-line tool>>
1860 (man:lttng-crash(1)).

LTTng-UST::
Libraries and Java/Python packages to instrument and trace user
applications:
1866 * <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1867 headers to instrument and trace any native user application.
1868 * <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1869 ** `liblttng-ust-libc-wrapper`
1870 ** `liblttng-ust-pthread-wrapper`
1871 ** `liblttng-ust-cyg-profile`
1872 ** `liblttng-ust-cyg-profile-fast`
1873 ** `liblttng-ust-dl`
1874 * <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1875 Java applications using `java.util.logging` or
1876 Apache log4j{nbsp}1.2 logging.
1877 * <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1878 Python applications using the standard `logging` package.

LTTng-modules::
<<lttng-modules,Linux kernel modules>> to instrument and trace the
Linux kernel:
1884 * LTTng kernel tracer module.
1885 * Recording ring buffer kernel modules.
1886 * Probe kernel modules.
1887 * LTTng logger kernel module.

[[lttng-cli]]
=== Tracing control command-line interface
1893 The _man:lttng(1) command-line tool_ is the standard user interface to
1894 control LTTng <<tracing-session,recording sessions>>.
1896 The cmd:lttng tool is part of LTTng-tools.
1898 The cmd:lttng tool is linked with
1899 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1900 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1902 The cmd:lttng tool has a Git-like interface:

[role="term"]
----
$ lttng [GENERAL OPTIONS] <COMMAND> [COMMAND OPTIONS]
----
1909 The ``<<controlling-tracing,Tracing control>>'' section explores the
1910 available features of LTTng through its cmd:lttng tool.
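
A minimal end-to-end terminal session looks like this (the session name
is an example; the kernel tracepoint requires root privileges):

[role="term"]
----
$ lttng create my-session
$ lttng enable-event --kernel sched_switch
$ lttng start
$ lttng stop
$ lttng destroy
----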
1913 [[liblttng-ctl-lttng]]
1914 === Tracing control library
1917 .The tracing control library.
1918 image::plumbing-liblttng-ctl.png[]
1920 The _LTTng control library_, `liblttng-ctl`, is used to communicate with
1921 a <<lttng-sessiond,session daemon>> using a C{nbsp}API that hides the
1922 underlying details of the protocol.
1924 `liblttng-ctl` is part of LTTng-tools.
The <<lttng-cli,cmd:lttng command-line tool>> is linked with
`liblttng-ctl`.
Use `liblttng-ctl` in C or $$C++$$ source code by including its
``master'' header:

[source,c]
----
#include <lttng/lttng.h>
----
1937 As of LTTng{nbsp}{revision}, the best available developer documentation
1938 for `liblttng-ctl` is its installed header files. Functions and
1939 structures are documented with header comments.
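
To build a program against `liblttng-ctl`, link with the library.
Assuming your distribution installs its pkg-config file under the name
`lttng-ctl` (an assumption; your packaging may differ), a build command
could look like:

[role="term"]
----
$ cc -o my-controller my-controller.c $(pkg-config --cflags --libs lttng-ctl)
----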

[[lttng-ust]]
=== User space tracing library
1946 .The user space tracing library.
1947 image::plumbing-liblttng-ust.png[]
1949 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1950 is the LTTng user space tracer.
1952 `liblttng-ust` receives commands from a <<lttng-sessiond,session
1953 daemon>>, for example to allow specific instrumentation points to emit
1954 LTTng <<event-rule,events>>, and writes event records to <<channel,ring
1955 buffers>> shared with a <<lttng-consumerd,consumer daemon>>.
1957 `liblttng-ust` is part of LTTng-UST.
1959 `liblttng-ust` can also send asynchronous messages to the session daemon
1960 when it emits an event. This supports the ``event rule matches''
1961 <<trigger,trigger>> condition feature (see
1962 “<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
1963 to a session daemon>>”).
1965 Public C{nbsp}header files are installed beside `liblttng-ust` to
1966 instrument any <<c-application,C or $$C++$$ application>>.
1968 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1969 packages, use their own <<tracepoint-provider,tracepoint provider
1970 package>> which is linked with `liblttng-ust`.
1972 An application or library doesn't have to initialize `liblttng-ust`
1973 manually: its constructor does the necessary tasks to register the
1974 application to a session daemon. The initialization phase also
1975 configures instrumentation points depending on the <<event-rule,event
1976 rules>> that you already created.
1979 [[lttng-ust-agents]]
1980 === User space tracing agents
1983 .The user space tracing agents.
1984 image::plumbing-lttng-ust-agents.png[]
The _LTTng-UST Java and Python agents_ are regular Java and Python
packages which add LTTng tracing capabilities to the
native logging frameworks.

The LTTng-UST agents are part of LTTng-UST.

In the case of Java, the
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
core logging facilities] and
https://logging.apache.org/log4j/1.2/[Apache log4j{nbsp}1.2] are
supported. Note that Apache Log4j{nbsp}2 isn't supported.

In the case of Python, the standard
https://docs.python.org/3/library/logging.html[`logging`] package
is supported. Both Python{nbsp}2 and Python{nbsp}3 modules can import
the LTTng-UST Python agent package.

The applications using the LTTng-UST agents are in the
`java.util.logging` (JUL), log4j, and Python <<domain,tracing domains>>.
Both agents use the same mechanism to convert log statements to LTTng
events. When an agent initializes, it creates a log handler that
attaches to the root logger. The agent also registers to a
<<lttng-sessiond,session daemon>>. When the user application executes a
log statement, the root logger passes it to the log handler of the
agent. The custom log handler of the agent calls a native function in a
tracepoint provider package shared library linked with
<<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
other fields, like its logger name and its log level. This native
function contains a user space instrumentation point, therefore tracing
the log statement.
The log level condition of a <<event,recording event rule>> is
considered when tracing a Java or a Python application, and it's
compatible with the standard `java.util.logging`, log4j, and Python log
levels.
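For instance, here's a minimal sketch of how an application ends up in
the Python tracing domain, assuming the LTTng-UST Python agent package,
`lttngust`, is installed (the logger name and message are arbitrary):

```python
import lttngust  # registers to the session daemon and installs the
                 # LTTng log handler on the root logger
import logging

logging.basicConfig()
logger = logging.getLogger('my-logger')

# This log statement can become an LTTng event in the Python
# tracing domain.
logger.info('hello from the Python agent')
```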
[[lttng-modules]]
=== LTTng kernel modules

.The LTTng kernel modules.
image::plumbing-lttng-modules.png[]
The _LTTng kernel modules_ are a set of Linux kernel modules
which implement the kernel tracer of the LTTng project.

The LTTng kernel modules are part of LTTng-modules.

The LTTng kernel modules include:

* A set of _probe_ modules.
+
Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points.
+
There are also modules to attach to the entry and return points of the
Linux system call functions.
* _Ring buffer_ modules.
+
A ring buffer implementation is provided as kernel modules. The LTTng
kernel tracer writes to ring buffers; a
<<lttng-consumerd,consumer daemon>> reads from ring buffers.

* The _LTTng kernel tracer_ module.
* The <<proc-lttng-logger-abi,_LTTng logger_>> module.
+
The LTTng logger module implements the special path:{/proc/lttng-logger}
(and path:{/dev/lttng-logger}, since LTTng{nbsp}2.11) files so that any
executable can generate LTTng events by opening those files and
writing to them.
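For example, here's a sketch of emitting such an event from a shell,
assuming LTTng-modules is loaded and the current user may write to the
device:

```shell
# Each write becomes one event of the `lttng_logger` kernel
# tracepoint; the written string ends up in the event's `msg` field.
echo 'hello from the shell' > /dev/lttng-logger
```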
The LTTng kernel tracer can also send asynchronous messages to the
<<lttng-sessiond,session daemon>> when it emits an event.
This supports the ``event rule matches''
<<trigger,trigger>> condition feature (see
“<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
to a session daemon>>”).

Generally, you don't have to load the LTTng kernel modules manually
(using man:modprobe(8), for example): a root session daemon loads the
necessary modules when starting. If you have extra probe modules, you
can ask the session daemon to load them on the command line
(see the opt:lttng-sessiond(8):--extra-kmod-probes option).

The LTTng kernel modules are installed in
+/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
the kernel release (output of `uname --kernel-release`).
[[lttng-sessiond]]
=== Session daemon

.The session daemon.
image::plumbing-sessiond.png[]

The _session daemon_, man:lttng-sessiond(8), is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which:

* Manages <<tracing-session,recording sessions>>.
* Controls the various components (like tracers and
<<lttng-consumerd,consumer daemons>>) of LTTng.
* Sends <<notif-trigger-api,asynchronous notifications>> to user
applications.

The session daemon is part of LTTng-tools.
The session daemon sends control requests to and receives control
responses from:

* The <<lttng-ust,user space tracing library>>.
+
--
Any instance of the user space tracing library first registers to
a session daemon. Then, the session daemon can send requests to
this instance, such as:

** Get the list of tracepoints.
** Share a <<event,recording event rule>> so that the user space tracing
library can decide whether or not a given tracepoint can emit events.
Amongst the possible conditions of a recording event rule is a filter
expression which `liblttng-ust` evaluates before it emits an event.
** Share <<channel,channel>> attributes and ring buffer locations.

The session daemon and the user space tracing library use a Unix
domain socket to communicate.
--

* The <<lttng-ust-agents,user space tracing agents>>.
+
--
Any instance of a user space tracing agent first registers to
a session daemon. Then, the session daemon can send requests to
this instance, such as:

** Get the list of loggers.
** Enable or disable a specific logger.

The session daemon and the user space tracing agent use a TCP connection
to communicate.
--
* The <<lttng-modules,LTTng kernel tracer>>.
* The <<lttng-consumerd,consumer daemon>>.
+
The session daemon sends requests to the consumer daemon to instruct
it where to send the trace data streams, amongst other information.

* The <<lttng-relayd,relay daemon>>.

The session daemon receives commands from the
<<liblttng-ctl-lttng,tracing control library>>.

The session daemon can receive asynchronous messages from the
<<lttng-ust,user space>> and <<lttng-modules,kernel>> tracers
when they emit events. This supports the ``event rule matches''
<<trigger,trigger>> condition feature (see
“<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
to a session daemon>>”).
The root session daemon loads the appropriate
<<lttng-modules,LTTng kernel modules>> on startup. It also spawns
one or more <<lttng-consumerd,consumer daemons>> as soon as you create
a <<event,recording event rule>>.

The session daemon doesn't send and receive trace data: this is the
role of the <<lttng-consumerd,consumer daemon>> and
<<lttng-relayd,relay daemon>>. It does, however, generate the
https://diamon.org/ctf/[CTF] metadata stream.
Each Unix user can have its own session daemon instance. The
recording sessions which different session daemons manage are completely
independent.

The root user's session daemon is the only one which is
allowed to control the LTTng kernel tracer, and its spawned consumer
daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user which is a member
of the <<tracing-group,tracing group>> is allowed
to create <<channel,channels>> in the
Linux kernel <<domain,tracing domain>>, and therefore to use the Linux
kernel LTTng tracer.
The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
session daemon when using its `create` command if none is currently
running. You can also start the session daemon manually.
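For example, the following sketch relies on that behavior
(`my-session` and `my_provider:my_tracepoint` are placeholder names):

```shell
# No session daemon is running yet: `lttng create` starts one
# automatically for the current user, then creates the
# recording session.
lttng create my-session
lttng enable-event --userspace 'my_provider:my_tracepoint'
lttng start
```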
[[lttng-consumerd]]
=== Consumer daemon

.The consumer daemon.
image::plumbing-consumerd.png[]

The _consumer daemon_, cmd:lttng-consumerd, is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which shares
ring buffers with user applications or with the LTTng kernel modules to
collect trace data and send it to some location (the local file system
or a <<lttng-relayd,relay daemon>> over the network).
The consumer daemon is part of LTTng-tools.

You don't start a consumer daemon manually: a consumer daemon is always
spawned by a <<lttng-sessiond,session daemon>> as soon as you create a
<<event,recording event rule>>, that is, before you start recording. When
you kill its owner session daemon, the consumer daemon also exits
because it's the child process of the session daemon. Command-line
options of man:lttng-sessiond(8) target the consumer daemon process.

There are up to two running consumer daemons per Unix user, whereas only
one session daemon can run per user. This is because each process can be
either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
and 64-bit processes, it's more efficient to have separate
corresponding 32-bit and 64-bit consumer daemons. The root user is an
exception: it can have up to _three_ running consumer daemons: 32-bit
and 64-bit instances for its user applications, and one more
reserved for collecting kernel trace data.
[[lttng-relayd]]
=== Relay daemon

.The relay daemon.
image::plumbing-relayd.png[]

The _relay daemon_, man:lttng-relayd(8), is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] acting as a bridge
between remote session and consumer daemons, local trace files, and a
remote live trace reader.
The relay daemon is part of LTTng-tools.

The main purpose of the relay daemon is to implement a receiver of
<<sending-trace-data-over-the-network,trace data over the network>>.
This is useful when the target system doesn't have much file system
space to write trace files locally.

The relay daemon is also a server to which a
<<lttng-live,live trace reader>> can
connect. The live trace reader sends requests to the relay daemon to
receive trace data as the target system records events. The
communication protocol is named _LTTng live_; it's used over TCP
connections.

Note that you can start the relay daemon on the target system directly.
This is the setup of choice when the use case is to view/analyze events
as the target system records them, without the need for a remote system.
== [[using-lttng]]Instrumentation

There are many examples of tracing and monitoring in our everyday life:

* You have access to real-time and historical weather reports and
forecasts thanks to weather stations installed around the country.
* You know your heart is safe thanks to an electrocardiogram.
* You make sure not to drive your car too fast and to have enough fuel
to reach your destination thanks to gauges visible on your dashboard.

All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to the surface of your
skin, cardiac monitoring is futile.
LTTng, as a tracer, is no different from those real life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you better have **instrumentation points** in the
subject you're tracing, that is, the actual software system.

<<instrumentation-point-types,Various ways>> were developed to
instrument a piece of software for LTTng tracing. The most
straightforward one is to manually place static instrumentation points,
called _tracepoints_, in the source code of the application. The Linux
kernel <<domain,tracing domain>> also makes it possible to dynamically
add instrumentation points.
If you're only interested in tracing the Linux kernel, your
instrumentation needs are probably already covered by the built-in
<<lttng-modules,Linux kernel instrumentation points>> of LTTng. You may
also wish to have LTTng trace a user application which is already
instrumented for LTTng tracing. In such cases, skip this whole section
and read the topics of the ``<<controlling-tracing,Tracing control>>''
section.

Many methods are available to instrument a piece of software for LTTng
tracing:

* <<c-application,Instrument a C/$$C++$$ user application>>.
* <<prebuilt-ust-helpers,Load a prebuilt user space tracing helper>>.
* <<java-application,Instrument a Java application>>.
* <<python-application,Instrument a Python application>>.
* <<proc-lttng-logger-abi,Use the LTTng logger>>.
* <<instrumenting-linux-kernel,Instrument a Linux kernel image or module>>.
=== [[c-application]]Instrument a C/$$C++$$ user application

The high level procedure to instrument a C or $$C++$$ user application
with the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`,
is:

. <<tracepoint-provider,Create the source files of a tracepoint provider
package>>.

. <<probing-the-application-source-code,Add tracepoints to
the source code of the application>>.

. <<building-tracepoint-providers-and-user-application,Build and link
a tracepoint provider package and the user application>>.

If you need quick, man:printf(3)-like instrumentation, skip those steps
and use <<tracef,`lttng_ust_tracef()`>> or
<<tracelog,`lttng_ust_tracelog()`>> instead.
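For instance, here's a minimal sketch using `lttng_ust_tracef()`
(LTTng-UST{nbsp}2.13 naming; the function name, format string, and
values below are illustrative):

```c
#include <lttng/tracef.h>

void report_progress(int done, int total)
{
    /*
     * No tracepoint provider package needed: the message is
     * recorded as a single string field of an LTTng-UST event.
     */
    lttng_ust_tracef("progress: %d/%d items processed", done, total);
}
```

Build with `-llttng-ust`, like any other instrumented application.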
IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
instrument a user application with `liblttng-ust`.
[[tracepoint-provider]]
==== Create the source files of a tracepoint provider package

A _tracepoint provider_ is a set of compiled functions which provide
**tracepoints** to an application, the type of instrumentation point
which LTTng-UST provides.

Those functions can make LTTng emit events with user-defined fields and
serialize those events as event records to one or more LTTng-UST
<<channel,channel>> sub-buffers. The `lttng_ust_tracepoint()` macro,
which you <<probing-the-application-source-code,insert in the source
code of a user application>>, calls those functions.
A _tracepoint provider package_ is an object file (`.o`) or a shared
library (`.so`) which contains one or more tracepoint providers. Its
source files are:

* One or more <<tpp-header,tracepoint provider header files>> (`.h`).
* A <<tpp-source,tracepoint provider package source file>> (`.c`).

A tracepoint provider package is dynamically linked with `liblttng-ust`,
the LTTng user space tracer, at run time.

.User application linked with `liblttng-ust` and containing a tracepoint provider.
image::ust-app.png[]

NOTE: If you need quick, man:printf(3)-like instrumentation, skip
creating and using a tracepoint provider and use
<<tracef,`lttng_ust_tracef()`>> or <<tracelog,`lttng_ust_tracelog()`>>
instead.
[[tpp-header]]
===== Create a tracepoint provider header file template

A _tracepoint provider header file_ contains the tracepoint definitions
of a tracepoint provider.

To create a tracepoint provider header file:

. Start from this template:
+
[source,c]
.Tracepoint provider header file template (`.h` file extension).
----
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER provider_name

#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

/*
 * Use LTTNG_UST_TRACEPOINT_EVENT(), LTTNG_UST_TRACEPOINT_EVENT_CLASS(),
 * LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(), and
 * LTTNG_UST_TRACEPOINT_LOGLEVEL() here.
 */

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>
----

. Replace:
+
* +__provider_name__+ with the name of your tracepoint provider.
* `"tp.h"` with the name of your tracepoint provider header file.

. Below the `#include <lttng/tracepoint.h>` line, put your
<<defining-tracepoints,tracepoint definitions>>.

Your tracepoint provider name must be unique amongst all the possible
tracepoint provider names used on the same target system. We suggest
including the name of your project or company in the name, for example,
`org_lttng_my_project_tpp`.
[[defining-tracepoints]]
===== Create a tracepoint definition

A _tracepoint definition_ defines, for a given tracepoint:

* Its **input arguments**.
+
They're the macro parameters that the `lttng_ust_tracepoint()` macro
accepts for this particular tracepoint in the source code of the user
application.

* Its **output event fields**.
+
They're the sources of event fields that form the payload of any event
that the execution of the `lttng_ust_tracepoint()` macro emits for this
particular tracepoint.

Create a tracepoint definition with the
`LTTNG_UST_TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
line in the
<<tpp-header,tracepoint provider header file template>>.

The syntax of the `LTTNG_UST_TRACEPOINT_EVENT()` macro is:
[source,c]
.`LTTNG_UST_TRACEPOINT_EVENT()` macro syntax.
----
LTTNG_UST_TRACEPOINT_EVENT(
    /* Tracepoint provider name */
    provider_name,

    /* Tracepoint name */
    tracepoint_name,

    /* Input arguments */
    LTTNG_UST_TP_ARGS(
        arguments
    ),

    /* Output event fields */
    LTTNG_UST_TP_FIELDS(
        fields
    )
)
----

Replace:

* +__provider_name__+ with your tracepoint provider name.
* +__tracepoint_name__+ with your tracepoint name.
* +__arguments__+ with the <<tpp-def-input-args,input arguments>>.
* +__fields__+ with the <<tpp-def-output-fields,output event field>>
definitions.

The full name of this tracepoint is `provider_name:tracepoint_name`.
[IMPORTANT]
.Event name length limitation
====
The concatenation of the tracepoint provider name and the tracepoint
name must not exceed **254{nbsp}characters**. If it does, the
instrumented application compiles and runs, but LTTng throws multiple
warnings and you could experience serious issues.
====
[[tpp-def-input-args]]The syntax of the `LTTNG_UST_TP_ARGS()` macro is:

[source,c]
.`LTTNG_UST_TP_ARGS()` macro syntax.
----
LTTNG_UST_TP_ARGS(
    type, arg_name
)
----

Replace:

* +__type__+ with the C{nbsp}type of the argument.
* +__arg_name__+ with the argument name.

You can repeat +__type__+ and +__arg_name__+ up to 10{nbsp}times to have
more than one argument.

[source,c]
.`LTTNG_UST_TP_ARGS()` usage with three arguments.
----
LTTNG_UST_TP_ARGS(
    int, count,
    float, ratio,
    const char *, query
)
----

The `LTTNG_UST_TP_ARGS()` and `LTTNG_UST_TP_ARGS(void)` forms are valid
to create a tracepoint definition with no input arguments.
[[tpp-def-output-fields]]The `LTTNG_UST_TP_FIELDS()` macro contains a
list of `lttng_ust_field_*()` macros. Each `lttng_ust_field_*()` macro
defines one event field. See man:lttng-ust(3) for a complete description
of the available `lttng_ust_field_*()` macros. A `lttng_ust_field_*()`
macro specifies the type, size, and byte order of one event field.

Each `lttng_ust_field_*()` macro takes an _argument expression_
parameter. This is a C{nbsp}expression that the tracer evaluates at the
`lttng_ust_tracepoint()` macro site in the source code of the
application. This expression provides the source of data of a field. The
argument expression can include input argument names listed in the
`LTTNG_UST_TP_ARGS()` macro.

Each `lttng_ust_field_*()` macro also takes a _field name_ parameter.
Field names must be unique within a given tracepoint definition.
Here's a complete tracepoint definition example:

.Tracepoint definition.
The following tracepoint definition defines a tracepoint which takes
three input arguments and has four output event fields.

[source,c]
----
#include "my-custom-structure.h"

LTTNG_UST_TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    LTTNG_UST_TP_ARGS(
        const struct my_custom_structure *, my_custom_structure,
        double, ratio,
        const char *, query
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(query_field, query)
        lttng_ust_field_float(double, ratio_field, ratio)
        lttng_ust_field_integer(int, recv_size,
                                my_custom_structure->recv_size)
        lttng_ust_field_integer(int, send_size,
                                my_custom_structure->send_size)
    )
)
----

Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
macro in the source code of your application like this:

[source,c]
----
lttng_ust_tracepoint(my_provider, my_tracepoint,
                     my_structure, some_ratio, the_query);
----

NOTE: The LTTng-UST tracer only evaluates the arguments of a tracepoint
at run time when such a tracepoint _could_ emit an event. See
<<event-creation-emission-opti,this note>> to learn more.
[[using-tracepoint-classes]]
===== Use a tracepoint class

A _tracepoint class_ is a class of tracepoints which share the same
output event field definitions. A _tracepoint instance_ is one
instance of such a defined tracepoint class, with its own tracepoint
name.

The <<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()` macro>> is
actually a shorthand which defines both a tracepoint class and a
tracepoint instance at the same time.

When you build a tracepoint provider package, the C or $$C++$$ compiler
creates one serialization function for each **tracepoint class**. A
serialization function is responsible for serializing the event fields
of a tracepoint to a sub-buffer when recording.

For various performance reasons, when your situation requires multiple
tracepoint definitions with different names, but with the same event
fields, we recommend that you manually create a tracepoint class and
instantiate as many tracepoint instances as needed. One positive effect
of such a design, amongst other advantages, is that all tracepoint
instances of the same tracepoint class reuse the same serialization
function, thus reducing
https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
.Use a tracepoint class and tracepoint instances.
Consider the following three tracepoint definitions:

[source,c]
----
LTTNG_UST_TRACEPOINT_EVENT(
    my_app,
    get_account,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)

LTTNG_UST_TRACEPOINT_EVENT(
    my_app,
    get_settings,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)

LTTNG_UST_TRACEPOINT_EVENT(
    my_app,
    get_transaction,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)
----

In this case, we create three tracepoint classes, with one implicit
tracepoint instance for each of them: `get_account`, `get_settings`, and
`get_transaction`. However, they all share the same event field names
and types. Hence three identical, yet independent serialization
functions are created when you build the tracepoint provider package.
A better design choice is to define a single tracepoint class and three
tracepoint instances:

[source,c]
----
/* The tracepoint class */
LTTNG_UST_TRACEPOINT_EVENT_CLASS(
    /* Tracepoint class provider name */
    my_app,

    /* Tracepoint class name */
    my_class,

    /* Input arguments */
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    ),

    /* Output event fields */
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)

/* The tracepoint instances */
LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
    /* Tracepoint class provider name */
    my_app,

    /* Tracepoint class name */
    my_class,

    /* Instance provider name */
    my_app,

    /* Tracepoint name */
    get_account,

    /* Input arguments */
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    )
)

LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    my_app,
    get_settings,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    )
)

LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    my_app,
    get_transaction,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    )
)
----

The tracepoint class and instance provider names must be the same if the
`LTTNG_UST_TRACEPOINT_EVENT_CLASS()` and
`LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()` expansions are part of the same
translation unit. See man:lttng-ust(3) to learn more.
[[assigning-log-levels]]
===== Assign a log level to a tracepoint definition

Assign a _log level_ to a <<defining-tracepoints,tracepoint definition>>
with the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro.

Assigning different levels of severity to tracepoint definitions can be
useful: when you <<enabling-disabling-events,create a recording event
rule>>, you can target tracepoints having a log level at least as severe
as a specific value.

The concept of LTTng-UST log levels is similar to the levels found
in typical logging frameworks:

* In a logging framework, the log level is given by the function
or method name you use at the log statement site: `debug()`,
`info()`, `warn()`, `error()`, and so on.
* In LTTng-UST, you statically assign the log level to a tracepoint
definition; any `lttng_ust_tracepoint()` macro invocation which refers
to this definition has this log level.

You must use `LTTNG_UST_TRACEPOINT_LOGLEVEL()` _after_ the
<<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()`>>
macro for a given tracepoint.

The syntax of the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro is:

[source,c]
.`LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro syntax.
----
LTTNG_UST_TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
----

Replace:

* +__provider_name__+ with the tracepoint provider name.
* +__tracepoint_name__+ with the tracepoint name.
* +__log_level__+ with the log level to assign to the tracepoint
definition named +__tracepoint_name__+ in the +__provider_name__+
tracepoint provider.

See man:lttng-ust(3) for a list of available log level names.
.Assign the `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT` log level to a tracepoint definition.
[source,c]
----
/* Tracepoint definition */
LTTNG_UST_TRACEPOINT_EVENT(
    my_app,
    get_transaction,
    LTTNG_UST_TP_ARGS(
        int, userid,
        size_t, len
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)

/* Log level assignment */
LTTNG_UST_TRACEPOINT_LOGLEVEL(my_app, get_transaction,
                              LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT)
----
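Once a log level is assigned, a recording event rule can filter on it.
Here's a sketch; the exact log level names the command-line tool
accepts are listed in man:lttng-enable-event(1) and may differ across
LTTng versions:

```shell
# Record events of my_app tracepoints whose log level is at least
# as severe as "debug unit" (log level name assumed; see
# lttng-enable-event(1) for your version).
lttng enable-event --userspace 'my_app:*' --loglevel=TRACE_DEBUG_UNIT
```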
[[tpp-source]]
===== Create a tracepoint provider package source file

A _tracepoint provider package source file_ is a C source file which
includes a <<tpp-header,tracepoint provider header file>> to expand its
macros into event serialization and other functions.

Use the following tracepoint provider package source file template:

[source,c]
.Tracepoint provider package source file template.
----
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES

#include "tp.h"
----

Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You may also include more than one tracepoint
provider header file here to create a tracepoint provider package
holding more than one tracepoint provider.
[[probing-the-application-source-code]]
==== Add tracepoints to the source code of an application

Once you <<tpp-header,create a tracepoint provider header file>>, use
the `lttng_ust_tracepoint()` macro in the source code of your
application to insert the tracepoints that this header
<<defining-tracepoints,defines>>.

The `lttng_ust_tracepoint()` macro takes at least two parameters: the
tracepoint provider name and the tracepoint name. The corresponding
tracepoint definition defines the other parameters.
.`lttng_ust_tracepoint()` usage.
The following <<defining-tracepoints,tracepoint definition>> defines a
tracepoint which takes two input arguments and has two output event
fields.

[source,c]
.Tracepoint provider header file.
----
#include "my-custom-structure.h"

LTTNG_UST_TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    LTTNG_UST_TP_ARGS(
        int, argc,
        const char *, cmd_name
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(cmd_name, cmd_name)
        lttng_ust_field_integer(int, number_of_args, argc)
    )
)
----

Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
macro in the source code of your application like this:

[source,c]
.Application source file.
----
#include "tp.h"

int main(int argc, char *argv[])
{
    lttng_ust_tracepoint(my_provider, my_tracepoint, argc, argv[0]);

    return 0;
}
----

Note how the source code of the application includes
the tracepoint provider header file containing the tracepoint
definitions to use, path:{tp.h}.
.`lttng_ust_tracepoint()` usage with a complex tracepoint definition.
Consider this complex tracepoint definition, where multiple event
fields refer to the same input arguments in their argument expression
parameter:

[source,c]
.Tracepoint provider header file.
----
/* For `struct stat` */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

LTTNG_UST_TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    LTTNG_UST_TP_ARGS(
        int, my_int_arg,
        char *, my_str_arg,
        struct stat *, st
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, my_constant_field, 23 + 17)
        lttng_ust_field_integer(int, my_int_arg_field, my_int_arg)
        lttng_ust_field_integer(int, my_int_arg_field2,
                                my_int_arg * my_int_arg)
        lttng_ust_field_integer(int, sum4_field,
                                my_str_arg[0] + my_str_arg[1] +
                                my_str_arg[2] + my_str_arg[3])
        lttng_ust_field_string(my_str_arg_field, my_str_arg)
        lttng_ust_field_integer_hex(off_t, size_field, st->st_size)
        lttng_ust_field_float(double, size_dbl_field, (double) st->st_size)
        lttng_ust_field_sequence_text(char, half_my_str_arg_field,
                                      my_str_arg, size_t,
                                      strlen(my_str_arg) / 2)
    )
)
----

Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
macro in the source code of your application like this:

[source,c]
.Application source file.
----
#define LTTNG_UST_TRACEPOINT_DEFINE
#include "tp.h"

#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat s;

    stat("/etc/fstab", &s);
    lttng_ust_tracepoint(my_provider, my_tracepoint, 23,
                         "Hello, World!", &s);

    return 0;
}
----

If you look at the event record that LTTng writes when recording this
program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
it should look like this:
.Event record fields
|====
|Field name |Field value

|`my_constant_field` |40
|`my_int_arg_field` |23
|`my_int_arg_field2` |529
|`sum4_field` |389
|`my_str_arg_field` |`Hello, World!`
|`size_field` |0x12d
|`size_dbl_field` |301.0
|`half_my_str_arg_field` |`Hello,`
|====
Sometimes, the arguments you pass to `lttng_ust_tracepoint()` are
expensive to evaluate--they use the call stack, for example. To avoid
this computation when LTTng wouldn't emit any event anyway, use the
`lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros.

The syntax of the `lttng_ust_tracepoint_enabled()` and
`lttng_ust_do_tracepoint()` macros is:

[source,c]
.`lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros syntax.
----
lttng_ust_tracepoint_enabled(provider_name, tracepoint_name)

lttng_ust_do_tracepoint(provider_name, tracepoint_name, ...)
----
Replace:

* +__provider_name__+ with the tracepoint provider name.
* +__tracepoint_name__+ with the tracepoint name.

`lttng_ust_tracepoint_enabled()` returns a non-zero value if executing
the tracepoint named `tracepoint_name` from the provider named
`provider_name` _could_ make LTTng emit an event, depending on the
payload of said event.
`lttng_ust_do_tracepoint()` is like `lttng_ust_tracepoint()`, except
that it doesn't check what `lttng_ust_tracepoint_enabled()` checks.
Using `lttng_ust_tracepoint()` with `lttng_ust_tracepoint_enabled()` is
dangerous because `lttng_ust_tracepoint()` also contains the
`lttng_ust_tracepoint_enabled()` check; therefore, a race condition is
possible in this situation:

[source,c]
.Possible race condition when using `lttng_ust_tracepoint_enabled()` with `lttng_ust_tracepoint()`.
----
if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
}

lttng_ust_tracepoint(my_provider, my_tracepoint, stuff);
----

If `lttng_ust_tracepoint_enabled()` is false, but would be true after
the conditional block, then `stuff` isn't prepared: the emitted event
will either contain wrong data, or the whole application could crash
(with a segmentation fault, for example).
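A pattern which avoids this race (a sketch reusing the hypothetical
`my_provider:my_tracepoint` and `prepare_stuff()` names above) is to
prepare the payload and trace within a single conditional block, using
`lttng_ust_do_tracepoint()` to skip the redundant check:

```c
if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
    /*
     * Prepare the expensive payload only when an event could be
     * emitted, then trace without checking a second time.
     */
    stuff = prepare_stuff();
    lttng_ust_do_tracepoint(my_provider, my_tracepoint, stuff);
}
```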
NOTE: Neither `lttng_ust_tracepoint_enabled()` nor
`lttng_ust_do_tracepoint()` has an `STAP_PROBEV()` call. If you need
it, you must emit this call yourself.
[[building-tracepoint-providers-and-user-application]]
==== Build and link a tracepoint provider package and an application

Once you have one or more <<tpp-header,tracepoint provider header
files>> and a <<tpp-source,tracepoint provider package source file>>,
create the tracepoint provider package by compiling its source
file. From here, multiple build and run scenarios are possible. The
following table shows common application and library configurations
along with the required command lines to achieve them.

In the following diagrams, we use the following file names:

path:{app}::
    Executable application.

path:{app.o}::
    Application object file.

path:{tpp.o}::
    Tracepoint provider package object file.

path:{tpp.a}::
    Tracepoint provider package archive file.

path:{libtpp.so}::
    Tracepoint provider package shared object file.

path:{emon.o}::
    User library object file.

path:{libemon.so}::
    User library shared object file.

We use the following symbols in the diagrams of the table below:

.Symbols used in the build scenario diagrams.
image::ust-sit-symbols.png[]

We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
variable in the following instructions.
3031 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
3032 .Common tracepoint provider package scenarios.
3034 |Scenario |Instructions
3037 The instrumented application is statically linked with
3038 the tracepoint provider package object.
3040 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
3043 include::../common/ust-sit-step-tp-o.txt[]
3045 To build the instrumented application:
3047 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3052 #define LTTNG_UST_TRACEPOINT_DEFINE
3056 . Compile the application source file:
3065 . Build the application:
3070 $ gcc -o app app.o tpp.o -llttng-ust -ldl
3074 To run the instrumented application:
3076 * Start the application:
3086 The instrumented application is statically linked with the
3087 tracepoint provider package archive file.
3089 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
3092 To create the tracepoint provider package archive file:
3094 . Compile the <<tpp-source,tracepoint provider package source file>>:
3103 . Create the tracepoint provider package archive file:
3108 $ ar rcs tpp.a tpp.o
3112 To build the instrumented application:
3114 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3119 #define LTTNG_UST_TRACEPOINT_DEFINE
3123 . Compile the application source file:
3132 . Build the application:
3137 $ gcc -o app app.o tpp.a -llttng-ust -ldl
3141 To run the instrumented application:
3143 * Start the application:
3153 The instrumented application is linked with the tracepoint provider
3154 package shared object.
3156 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3159 include::../common/ust-sit-step-tp-so.txt[]
3161 To build the instrumented application:
3163 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3168 #define LTTNG_UST_TRACEPOINT_DEFINE
3172 . Compile the application source file:
3181 . Build the application:
3186 $ gcc -o app app.o -ldl -L. -ltpp
3190 To run the instrumented application:
3192 * Start the application:
3202 The tracepoint provider package shared object is preloaded before the
3203 instrumented application starts.
3205 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3208 include::../common/ust-sit-step-tp-so.txt[]
3210 To build the instrumented application:
3212 . In path:{app.c}, before including path:{tpp.h}, add the
3218 #define LTTNG_UST_TRACEPOINT_DEFINE
3219 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3223 . Compile the application source file:
3232 . Build the application:
3237 $ gcc -o app app.o -ldl
3241 To run the instrumented application with tracing support:
3243 * Preload the tracepoint provider package shared object and
3244 start the application:
3249 $ LD_PRELOAD=./libtpp.so ./app
3253 To run the instrumented application without tracing support:
3255 * Start the application:
3265 The instrumented application dynamically loads the tracepoint provider
3266 package shared object.
3268 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3271 include::../common/ust-sit-step-tp-so.txt[]
3273 To build the instrumented application:
3275 . In path:{app.c}, before including path:{tpp.h}, add the
3281 #define LTTNG_UST_TRACEPOINT_DEFINE
3282 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3286 . Compile the application source file:
3295 . Build the application:
3300 $ gcc -o app app.o -ldl
3304 To run the instrumented application:
3306 * Start the application:
3316 The application is linked with the instrumented user library.
3318 The instrumented user library is statically linked with the tracepoint
3319 provider package object file.
3321 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3324 include::../common/ust-sit-step-tp-o-fpic.txt[]
3326 To build the instrumented user library:
3328 . In path:{emon.c}, before including path:{tpp.h}, add the
3334 #define LTTNG_UST_TRACEPOINT_DEFINE
3338 . Compile the user library source file:
3343 $ gcc -I. -fpic -c emon.c
3347 . Build the user library shared object:
3352 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3356 To build the application:
3358 . Compile the application source file:
3367 . Build the application:
3372 $ gcc -o app app.o -L. -lemon
3376 To run the application:
3378 * Start the application:
3388 The application is linked with the instrumented user library.
3390 The instrumented user library is linked with the tracepoint provider
3391 package shared object.
3393 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3396 include::../common/ust-sit-step-tp-so.txt[]
3398 To build the instrumented user library:
3400 . In path:{emon.c}, before including path:{tpp.h}, add the
3406 #define LTTNG_UST_TRACEPOINT_DEFINE
3410 . Compile the user library source file:
3415 $ gcc -I. -fpic -c emon.c
3419 . Build the user library shared object:
3424 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3428 To build the application:
3430 . Compile the application source file:
3439 . Build the application:
3444 $ gcc -o app app.o -L. -lemon
3448 To run the application:
3450 * Start the application:
3460 The tracepoint provider package shared object is preloaded before the
3463 The application is linked with the instrumented user library.
3465 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3468 include::../common/ust-sit-step-tp-so.txt[]
3470 To build the instrumented user library:
3472 . In path:{emon.c}, before including path:{tpp.h}, add the
3478 #define LTTNG_UST_TRACEPOINT_DEFINE
3479 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3483 . Compile the user library source file:
3488 $ gcc -I. -fpic -c emon.c
3492 . Build the user library shared object:
3497 $ gcc -shared -o libemon.so emon.o -ldl
3501 To build the application:
3503 . Compile the application source file:
3512 . Build the application:
3517 $ gcc -o app app.o -L. -lemon
3521 To run the application with tracing support:
3523 * Preload the tracepoint provider package shared object and
3524 start the application:
3529 $ LD_PRELOAD=./libtpp.so ./app
3533 To run the application without tracing support:
3535 * Start the application:
3545 The application is linked with the instrumented user library.
3547 The instrumented user library dynamically loads the tracepoint provider
3548 package shared object.
3550 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3553 include::../common/ust-sit-step-tp-so.txt[]
3555 To build the instrumented user library:
3557 . In path:{emon.c}, before including path:{tpp.h}, add the
3563 #define LTTNG_UST_TRACEPOINT_DEFINE
3564 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3568 . Compile the user library source file:
3573 $ gcc -I. -fpic -c emon.c
3577 . Build the user library shared object:
3582 $ gcc -shared -o libemon.so emon.o -ldl
3586 To build the application:
3588 . Compile the application source file:
3597 . Build the application:
3602 $ gcc -o app app.o -L. -lemon
3606 To run the application:
3608 * Start the application:
3618 The application dynamically loads the instrumented user library.
3620 The instrumented user library is linked with the tracepoint provider
3621 package shared object.
3623 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3626 include::../common/ust-sit-step-tp-so.txt[]
3628 To build the instrumented user library:
3630 . In path:{emon.c}, before including path:{tpp.h}, add the
3636 #define LTTNG_UST_TRACEPOINT_DEFINE
3640 . Compile the user library source file:
3645 $ gcc -I. -fpic -c emon.c
3649 . Build the user library shared object:
3654 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3658 To build the application:
3660 . Compile the application source file:
3669 . Build the application:
3674 $ gcc -o app app.o -ldl -L. -lemon
3678 To run the application:
3680 * Start the application:
3690 The application dynamically loads the instrumented user library.
3692 The instrumented user library dynamically loads the tracepoint provider
3693 package shared object.
3695 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3698 include::../common/ust-sit-step-tp-so.txt[]
3700 To build the instrumented user library:
3702 . In path:{emon.c}, before including path:{tpp.h}, add the
3708 #define LTTNG_UST_TRACEPOINT_DEFINE
3709 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3713 . Compile the user library source file:
3718 $ gcc -I. -fpic -c emon.c
3722 . Build the user library shared object:
3727 $ gcc -shared -o libemon.so emon.o -ldl
3731 To build the application:
3733 . Compile the application source file:
3742 . Build the application:
3747 $ gcc -o app app.o -ldl -L. -lemon
3751 To run the application:
3753 * Start the application:
3763 The tracepoint provider package shared object is preloaded before the
3766 The application dynamically loads the instrumented user library.
3768 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3771 include::../common/ust-sit-step-tp-so.txt[]
3773 To build the instrumented user library:
3775 . In path:{emon.c}, before including path:{tpp.h}, add the
3781 #define LTTNG_UST_TRACEPOINT_DEFINE
3782 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3786 . Compile the user library source file:
3791 $ gcc -I. -fpic -c emon.c
3795 . Build the user library shared object:
3800 $ gcc -shared -o libemon.so emon.o -ldl
3804 To build the application:
3806 . Compile the application source file:
3815 . Build the application:
3820 $ gcc -o app app.o -L. -lemon
3824 To run the application with tracing support:
3826 * Preload the tracepoint provider package shared object and
3827 start the application:
3832 $ LD_PRELOAD=./libtpp.so ./app
3836 To run the application without tracing support:
3838 * Start the application:
3848 The application is statically linked with the tracepoint provider
3849 package object file.
3851 The application is linked with the instrumented user library.
3853 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3856 include::../common/ust-sit-step-tp-o.txt[]
3858 To build the instrumented user library:
3860 . In path:{emon.c}, before including path:{tpp.h}, add the
3866 #define LTTNG_UST_TRACEPOINT_DEFINE
3870 . Compile the user library source file:
3875 $ gcc -I. -fpic -c emon.c
3879 . Build the user library shared object:
3884 $ gcc -shared -o libemon.so emon.o
3888 To build the application:
3890 . Compile the application source file:
3899 . Build the application:
3904 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3908 To run the instrumented application:
3910 * Start the application:
3920 The application is statically linked with the tracepoint provider
3921 package object file.
3923 The application dynamically loads the instrumented user library.
3925 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3928 include::../common/ust-sit-step-tp-o.txt[]
3930 To build the application:
3932 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3937 #define LTTNG_UST_TRACEPOINT_DEFINE
3941 . Compile the application source file:
3950 . Build the application:
3955 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3960 The `--export-dynamic` option passed to the linker is necessary for the
3961 dynamically loaded library to ``see'' the tracepoint symbols defined in the application.
3964 To build the instrumented user library:
3966 . Compile the user library source file:
3971 $ gcc -I. -fpic -c emon.c
3975 . Build the user library shared object:
3980 $ gcc -shared -o libemon.so emon.o
3984 To run the application:
3986 * Start the application:
3997 [[using-lttng-ust-with-daemons]]
3998 ===== Use noch:{LTTng-UST} with daemons
4000 If your instrumented application calls man:fork(2), man:clone(2),
4001 or BSD's man:rfork(2), without a following man:exec(3)-family
4002 system call, you must preload the path:{liblttng-ust-fork.so} shared
4003 object when you start the application.
4007 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
4010 If your tracepoint provider package is
4011 a shared library which you also preload, you must put both
4012 shared objects in env:LD_PRELOAD:
4016 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
4022 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
4024 If your instrumented application closes one or more file descriptors
4025 which it did not open itself, you must preload the
4026 path:{liblttng-ust-fd.so} shared object when you start the application:
4030 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
4033 Typical use cases include closing all the file descriptors after
4034 man:fork(2) or man:rfork(2) and buggy applications doing ``double closes''.
4038 [[lttng-ust-pkg-config]]
4039 ===== Use noch:{pkg-config}
4041 On some distributions, LTTng-UST ships with a
4042 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
4043 metadata file. If this is your case, then use cmd:pkg-config to
4044 build an application on the command line:
4048 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
4052 [[instrumenting-32-bit-app-on-64-bit-system]]
4053 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
4055 To trace a 32-bit application running on a 64-bit system,
4056 LTTng must use a dedicated 32-bit
4057 <<lttng-consumerd,consumer daemon>>.
4059 The following steps show how to build and install a 32-bit consumer
4060 daemon, which is _not_ part of the default 64-bit LTTng build, how to
4061 build and install the 32-bit LTTng-UST libraries, and how to build and
4062 link an instrumented 32-bit application in that context.
4064 To build a 32-bit instrumented application for a 64-bit target system,
4065 assuming you have a fresh target system with no installed Userspace RCU or LTTng packages:
4068 . Download, build, and install a 32-bit version of Userspace RCU:
4073 $ cd $(mktemp -d) &&
4074 wget https://lttng.org/files/urcu/userspace-rcu-latest-0.13.tar.bz2 &&
4075 tar -xf userspace-rcu-latest-0.13.tar.bz2 &&
4076 cd userspace-rcu-0.13.* &&
4077 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
4079 sudo make install &&
4084 . Using the package manager of your distribution, or from source,
4085 install the 32-bit versions of the following dependencies of
4086 LTTng-tools and LTTng-UST:
4089 * https://sourceforge.net/projects/libuuid/[libuuid]
4090 * https://directory.fsf.org/wiki/Popt[popt]
4091 * https://www.xmlsoft.org/[libxml2]
4092 * **Optional**: https://github.com/numactl/numactl[numactl]
4095 . Download, build, and install a 32-bit version of the latest
4096 LTTng-UST{nbsp}{revision}:
4101 $ cd $(mktemp -d) &&
4102 wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
4103 tar -xf lttng-ust-latest-2.13.tar.bz2 &&
4104 cd lttng-ust-2.13.* &&
4105 ./configure --libdir=/usr/local/lib32 \
4106 CFLAGS=-m32 CXXFLAGS=-m32 \
4107 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4109 sudo make install &&
4114 Add `--disable-numa` to `./configure` if you don't have
4115 https://github.com/numactl/numactl[numactl].
4119 Depending on your distribution, 32-bit libraries could be installed at a
4120 different location than `/usr/lib32`. For example, Debian is known to
4121 install some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4123 In this case, make sure to set `LDFLAGS` to all the
4124 relevant 32-bit library paths, for example:
4128 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4132 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4133 the 32-bit consumer daemon:
4138 $ cd $(mktemp -d) &&
4139 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4140 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4141 cd lttng-tools-2.13.* &&
4142 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
4143 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
4144 --disable-bin-lttng --disable-bin-lttng-crash \
4145 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
4147 cd src/bin/lttng-consumerd &&
4148 sudo make install &&
4153 . From your distribution or from source, <<installing-lttng,install>>
4154 the 64-bit versions of LTTng-UST and Userspace RCU.
4156 . Download, build, and install the 64-bit version of the
4157 latest LTTng-tools{nbsp}{revision}:
4162 $ cd $(mktemp -d) &&
4163 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4164 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4165 cd lttng-tools-2.13.* &&
4166 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4167 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4169 sudo make install &&
4174 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4175 when linking your 32-bit application:
4178 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4179 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4182 For example, let's rebuild the quick start example in
4183 ``<<tracing-your-own-user-application,Record user application events>>''
4184 as an instrumented 32-bit application:
4189 $ gcc -m32 -c -I. hello-tp.c
4190 $ gcc -m32 -c hello.c
4191 $ gcc -m32 -o hello hello.o hello-tp.o \
4192 -L/usr/lib32 -L/usr/local/lib32 \
4193 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4198 No special action is required to execute the 32-bit application and
4199 for LTTng to trace it: use the command-line man:lttng(1) tool as usual.
4204 ==== Use `lttng_ust_tracef()`
4206 man:lttng_ust_tracef(3) is a small LTTng-UST API designed for quick,
4207 man:printf(3)-like instrumentation without the burden of
4208 <<tracepoint-provider,creating>> and
4209 <<building-tracepoint-providers-and-user-application,building>>
4210 a tracepoint provider package.
4212 To use `lttng_ust_tracef()` in your application:
4214 . In the C or $$C++$$ source files where you need to use
4215 `lttng_ust_tracef()`, include `<lttng/tracef.h>`:
4220 #include <lttng/tracef.h>
4224 . In the source code of the application, use `lttng_ust_tracef()` like
4225 you would use man:printf(3):
4232 lttng_ust_tracef("my message: %d (%s)", my_integer, my_string);
4238 . Link your application with `liblttng-ust`:
4243 $ gcc -o app app.c -llttng-ust
4247 To record the events that `lttng_ust_tracef()` calls emit:
4249 * <<enabling-disabling-events,Create a recording event rule>> which
4250 matches user space events named `lttng_ust_tracef:*`:
4255 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
4260 .Limitations of `lttng_ust_tracef()`
4262 The `lttng_ust_tracef()` utility function was developed to make user
4263 space tracing super simple, albeit with notable disadvantages compared
4264 to <<defining-tracepoints,user-defined tracepoints>>:
4266 * All the created events have the same tracepoint provider and
4267 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4268 * There's no static type checking.
4269 * The only event record field you actually get, named `msg`, is a string
4270 potentially containing the values you passed to `lttng_ust_tracef()`
4271 using your own format string. This also means that you can't filter
4272 events with a custom expression at run time because there are no
4274 * Since `lttng_ust_tracef()` uses the man:vasprintf(3) function of the
4275 C{nbsp}standard library behind the scenes to format the strings at run
4276 time, its expected performance is lower than with user-defined
4277 tracepoints, which don't require a conversion to a string.
4279 Taking this into consideration, `lttng_ust_tracef()` is useful for some
4280 quick prototyping and debugging, but you shouldn't consider it for any
4281 permanent and serious applicative instrumentation.
4287 ==== Use `lttng_ust_tracelog()`
4289 The man:lttng_ust_tracelog(3) API is very similar to
4290 <<tracef,`lttng_ust_tracef()`>>, with the difference that it accepts an
4291 additional log level parameter.
4293 The goal of `lttng_ust_tracelog()` is to ease the migration from logging to tracing.
4296 To use `lttng_ust_tracelog()` in your application:
4298 . In the C or $$C++$$ source files where you need to use `lttng_ust_tracelog()`,
4299 include `<lttng/tracelog.h>`:
4304 #include <lttng/tracelog.h>
4308 . In the source code of the application, use `lttng_ust_tracelog()` like
4309 you would use man:printf(3), except for the first parameter which is the log level:
4317 lttng_ust_tracelog(LTTNG_UST_TRACEPOINT_LOGLEVEL_WARNING,
4318     "my message: %d (%s)", my_integer, my_string);
4324 See man:lttng-ust(3) for a list of available log level names.
4326 . Link your application with `liblttng-ust`:
4331 $ gcc -o app app.c -llttng-ust
4335 To record the events that `lttng_ust_tracelog()` calls emit with a log
4336 level _at least as severe as_ a specific log level:
4338 * <<enabling-disabling-events,Create a recording event rule>> which
4339 matches user space tracepoint events named `lttng_ust_tracelog:*` and
4340 with some minimum level of severity:
4345 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4350 To record the events that `lttng_ust_tracelog()` calls emit with a
4351 _specific log level_:
4353 * Create a recording event rule which matches tracepoint events named
4354 `lttng_ust_tracelog:*` and with a specific log level:
4359 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4360 --loglevel-only=INFO
4365 [[prebuilt-ust-helpers]]
4366 === Load a prebuilt user space tracing helper
4368 The LTTng-UST package provides a few helpers in the form of preloadable
4369 shared objects which automatically instrument system functions and calls.
4372 The helper shared objects are normally found in dir:{/usr/lib}. If you
4373 built LTTng-UST <<building-from-source,from source>>, they're probably
4374 located in dir:{/usr/local/lib}.
4376 The installed user space tracing helpers in LTTng-UST{nbsp}{revision} are:
4379 path:{liblttng-ust-libc-wrapper.so}::
4380 path:{liblttng-ust-pthread-wrapper.so}::
4381 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4382 memory and POSIX threads function tracing>>.
4384 path:{liblttng-ust-cyg-profile.so}::
4385 path:{liblttng-ust-cyg-profile-fast.so}::
4386 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4388 path:{liblttng-ust-dl.so}::
4389 <<liblttng-ust-dl,Dynamic linker tracing>>.
4391 To use a user space tracing helper with any user application:
4393 * Preload the helper shared object when you start the application:
4398 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4402 You can preload more than one helper:
4407 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4413 [[liblttng-ust-libc-pthread-wrapper]]
4414 ==== Instrument C standard library memory and POSIX threads functions
4416 The path:{liblttng-ust-libc-wrapper.so} and
4417 path:{liblttng-ust-pthread-wrapper.so} helpers
4418 add instrumentation to some C standard library and POSIX
4422 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4424 |TP provider name |TP name |Instrumented function
4426 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4427 |`calloc` |man:calloc(3)
4428 |`realloc` |man:realloc(3)
4429 |`free` |man:free(3)
4430 |`memalign` |man:memalign(3)
4431 |`posix_memalign` |man:posix_memalign(3)
4435 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4437 |TP provider name |TP name |Instrumented function
4439 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4440 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4441 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4442 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4445 When you preload the shared object, it replaces the functions listed
4446 in the previous tables with wrappers which contain tracepoints and call
4447 the replaced functions.
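Such wrappers follow the classic `LD_PRELOAD` interposition pattern: define a function with the same name as the one to intercept, resolve the real implementation with `dlsym(RTLD_NEXT, ...)`, and call through. A minimal sketch of the idea (our illustration, not the actual helper code):

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>

/* Interpose malloc(): the dynamic linker resolves our definition first,
   and dlsym(RTLD_NEXT, "malloc") finds the next (libc) definition.
   The real liblttng-ust-libc-wrapper.so records a tracepoint where the
   fprintf() call stands here.
   Build as a preloadable object: gcc -shared -fpic -o libwrap.so wrap.c */
void *malloc(size_t size)
{
    static void *(*real_malloc)(size_t);

    if (real_malloc == NULL)
        real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");

    fprintf(stderr, "malloc(%zu)\n", size); /* tracepoint goes here */
    return real_malloc(size);
}
```

Because the wrapper delegates to the original function, the traced application behaves exactly as before, with one event emitted per intercepted call.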
4450 [[liblttng-ust-cyg-profile]]
4451 ==== Instrument function entry and exit
4453 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4454 to the entry and exit points of functions.
4456 man:gcc(1) and man:clang(1) have an option named
4457 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4458 which generates instrumentation calls for entry and exit to functions.
4459 The LTTng-UST function tracing helpers,
4460 path:{liblttng-ust-cyg-profile.so} and
4461 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
4462 to add tracepoints to the two generated functions (which contain
4463 `cyg_profile` in their names, hence the name of the helper).
4465 To use the LTTng-UST function tracing helper, the source files to
4466 instrument must be built using the `-finstrument-functions` compiler flag.
4469 There are two versions of the LTTng-UST function tracing helper:
4471 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4472 that you should only use when it can be _guaranteed_ that the
4473 complete event stream is recorded without any lost event record.
4474 Any kind of duplicate information is left out.
4476 Assuming no event record is lost, having only the function addresses on
4477 entry is enough to create a call graph, since an event record always
4478 contains the ID of the CPU that generated it.
4480 Use a tool like man:addr2line(1) to convert function addresses back to
4481 source file names and line numbers.
4483 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4484 which also works in use cases where event records might get discarded or
4485 not recorded from application startup.
4486 In these cases, the trace analyzer needs more information to be
4487 able to reconstruct the program flow.
4489 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4490 points of this helper.
4492 All the tracepoints that this helper provides have the log level
4493 `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4495 TIP: It's sometimes a good idea to limit the number of source files that
4496 you compile with the `-finstrument-functions` option to prevent LTTng
4497 from writing an excessive amount of trace data at run time. When using
4499 `-finstrument-functions-exclude-function-list` option to avoid
4500 instrumenting the entries and exits of specific functions.
4505 ==== Instrument the dynamic linker
4507 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4508 man:dlopen(3) and man:dlclose(3) function calls.
4510 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4515 [[java-application]]
4516 === Instrument a Java application
4518 You can instrument any Java application which uses one of the following
4521 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4522 (JUL) core logging facilities.
4524 * https://logging.apache.org/log4j/1.2/[**Apache log4j{nbsp}1.2**], since
4525 LTTng{nbsp}2.6. Note that Apache log4j{nbsp}2 isn't supported.
4528 .LTTng-UST Java agent imported by a Java application.
4529 image::java-app.png[]
4531 Note that the methods described below are new in LTTng{nbsp}2.8.
4532 Previous LTTng versions use another technique.
4534 NOTE: We use https://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4535 and https://ci.lttng.org/[continuous integration], thus this version is
4536 directly supported. However, the LTTng-UST Java agent is also tested
4537 with OpenJDK{nbsp}7.
4542 ==== Use the LTTng-UST Java agent for `java.util.logging`
4544 To use the LTTng-UST Java agent in a Java application which uses
4545 `java.util.logging` (JUL):
4547 . In the source code of the Java application, import the LTTng-UST log
4548 handler package for `java.util.logging`:
4553 import org.lttng.ust.agent.jul.LttngLogHandler;
4557 . Create an LTTng-UST `java.util.logging` log handler:
4562 Handler lttngUstLogHandler = new LttngLogHandler();
4566 . Add this handler to the `java.util.logging` loggers which should emit
4572 Logger myLogger = Logger.getLogger("some-logger");
4574 myLogger.addHandler(lttngUstLogHandler);
4578 . Use `java.util.logging` log statements and configuration as usual.
4579 The loggers with an attached LTTng-UST log handler can emit
4582 . Before exiting the application, remove the LTTng-UST log handler from
4583 the loggers attached to it and call its `close()` method:
4588 myLogger.removeHandler(lttngUstLogHandler);
4589 lttngUstLogHandler.close();
4593 This isn't strictly necessary, but it's recommended for a clean
4594 disposal of the resources of the handler.
4596 . Include the common and JUL-specific JAR files of the LTTng-UST Java agent,
4597 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4599 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4600 path] when you build the Java application.
4602 The JAR files are typically located in dir:{/usr/share/java}.
4604 IMPORTANT: The LTTng-UST Java agent must be
4605 <<installing-lttng,installed>> for the logging framework your application uses.
4608 .Use the LTTng-UST Java agent for `java.util.logging`.
4613 import java.io.IOException;
4614 import java.util.logging.Handler;
4615 import java.util.logging.Logger;
4616 import org.lttng.ust.agent.jul.LttngLogHandler;
4620 private static final int answer = 42;
4622 public static void main(String[] argv) throws Exception
4625 Logger logger = Logger.getLogger("jello");
4627 // Create an LTTng-UST log handler
4628 Handler lttngUstLogHandler = new LttngLogHandler();
4630 // Add the LTTng-UST log handler to our logger
4631 logger.addHandler(lttngUstLogHandler);
4634 logger.info("some info");
4635 logger.warning("some warning");
4637 logger.finer("finer information; the answer is " + answer);
4639 logger.severe("error!");
4641 // Not mandatory, but cleaner
4642 logger.removeHandler(lttngUstLogHandler);
4643 lttngUstLogHandler.close();
4652 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4655 <<creating-destroying-tracing-sessions,Create a recording session>>,
4656 <<enabling-disabling-events,create a recording event rule>> matching JUL
4657 events named `jello`, and <<basic-tracing-session-control,start
4663 $ lttng enable-event --jul jello
4667 Run the compiled class:
4671 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4674 <<basic-tracing-session-control,Stop recording>> and inspect the
4684 In the resulting trace, an <<event,event record>> which a Java
4685 application using `java.util.logging` generated is named
4686 `lttng_jul:event` and has the following fields:
4695 Name of the class in which the log statement was executed.
4698 Name of the method in which the log statement was executed.
4701 Logging time (timestamp in milliseconds).
4704 Log level integer value.
4707 ID of the thread in which the log statement was executed.
4709 Use the opt:lttng-enable-event(1):--loglevel or
4710 opt:lttng-enable-event(1):--loglevel-only option of the
4711 man:lttng-enable-event(1) command to target a range of
4712 `java.util.logging` log levels or a specific `java.util.logging` log
[[log4j]]
==== Use the LTTng-UST Java agent for Apache log4j

To use the LTTng-UST Java agent in a Java application which uses
Apache log4j{nbsp}1.2:

. In the source code of the Java application, import the LTTng-UST log
appender package for Apache log4j:
+
[source,java]
----
import org.lttng.ust.agent.log4j.LttngLogAppender;
----

. Create an LTTng-UST log4j log appender:
+
[source,java]
----
Appender lttngUstLogAppender = new LttngLogAppender();
----

. Add this appender to the log4j loggers which should emit LTTng events:
+
[source,java]
----
Logger myLogger = Logger.getLogger("some-logger");

myLogger.addAppender(lttngUstLogAppender);
----

. Use Apache log4j log statements and configuration as usual. The
loggers with an attached LTTng-UST log appender can emit LTTng events.

. Before exiting the application, remove the LTTng-UST log appender from
the loggers attached to it and call its `close()` method:
+
[source,java]
----
myLogger.removeAppender(lttngUstLogAppender);
lttngUstLogAppender.close();
----
+
This isn't strictly necessary, but it's recommended for a clean
disposal of the resources of the appender.

. Include the common and log4j-specific JAR
files of the LTTng-UST Java agent, path:{lttng-ust-agent-common.jar} and
path:{lttng-ust-agent-log4j.jar}, in the
https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
path] when you build the Java application.
+
The JAR files are typically located in dir:{/usr/share/java}.
+
IMPORTANT: The LTTng-UST Java agent must be
<<installing-lttng,installed>> for the logging framework your
application uses.
.Use the LTTng-UST Java agent for Apache log4j.
====
[source,java]
.path:{Test.java}
----
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.lttng.ust.agent.log4j.LttngLogAppender;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log appender
        Appender lttngUstLogAppender = new LttngLogAppender();

        // Add the LTTng-UST log appender to our logger
        logger.addAppender(lttngUstLogAppender);

        // Log at will!
        logger.info("some info");
        logger.warn("some warning");
        logger.debug("debug information; the answer is " + answer);
        logger.fatal("error!");

        // Not mandatory, but cleaner
        logger.removeAppender(lttngUstLogAppender);
        lttngUstLogAppender.close();
    }
}
----
====

Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
file):

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
----
<<creating-destroying-tracing-sessions,Create a recording session>>,
<<enabling-disabling-events,create a recording event rule>> matching
log4j events named `jello`, and
<<basic-tracing-session-control,start recording>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --log4j jello
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
----

<<basic-tracing-session-control,Stop recording>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
In the resulting trace, an <<event,event record>> which a Java
application using log4j generated is named `lttng_log4j:event` and
has the following fields:

`msg`::
    Log record message.

`logger_name`::
    Logger name.

`class_name`::
    Name of the class in which the log statement was executed.

`method_name`::
    Name of the method in which the log statement was executed.

`filename`::
    Name of the file in which the executed log statement is located.

`line_number`::
    Line number at which the log statement was executed.

`timestamp`::
    Logging timestamp.

`int_loglevel`::
    Log level integer value.

`thread_name`::
    Name of the Java thread in which the log statement was executed.
Use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of Apache log4j
log levels or a specific log4j log level.
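For example, a hypothetical rule (the logger name `my_logger` and the `LOG4J_WARN` level name, as listed in man:lttng-enable-event(1), are assumptions) matching only log4j events at the `WARN` level or more severe:

[role="term"]
----
$ lttng enable-event --log4j my_logger --loglevel=LOG4J_WARN
----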
[[java-application-context]]
==== Provide application-specific context fields in a Java application

A Java application-specific context field is a piece of state which
the Java application provides. You can <<adding-context,add>> such
a context field to be recorded, using the man:lttng-add-context(1)
command, to each <<event,event record>> which the log statements of
this application produce.

For example, a given object might have a current request ID variable.
You can create a context information retriever for this object and
assign a name to this current request ID. You can then, using the
man:lttng-add-context(1) command, add this context field by name so that
LTTng writes it to the event records of a given `java.util.logging` or
log4j <<channel,channel>>.
To provide application-specific context fields in a Java application:

. In the source code of the Java application, import the LTTng-UST
Java agent context classes and interfaces:
+
[source,java]
----
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;
----

. Create a context information retriever class, that is, a class which
implements the `IContextInfoRetriever` interface:
+
[source,java]
----
class MyContextInfoRetriever implements IContextInfoRetriever
{
    @Override
    public Object retrieveContextInfo(String key)
    {
        if (key.equals("intCtx")) {
            return (short) 17;
        } else if (key.equals("strContext")) {
            return "context value!";
        } else {
            return null;
        }
    }
}
----
+
This `retrieveContextInfo()` method is the only member of the
`IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields and which state variables they return depend on your
needs.
+
All primitive types and objects are supported as context fields.
When `retrieveContextInfo()` returns an object, the context field
serializer calls its `toString()` method to add a string field to
event records. The method can also return `null`, which means that
no context field is available for the required name.

. Register an instance of your context information retriever class to
the context information manager singleton:
+
[source,java]
----
IContextInfoRetriever cir = new MyContextInfoRetriever();
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.registerContextInfoRetriever("retrieverName", cir);
----

. Before exiting the application, remove your context information
retriever from the context information manager singleton:
+
[source,java]
----
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.unregisterContextInfoRetriever("retrieverName");
----
+
This isn't strictly necessary, but it's recommended for a clean
disposal of some resources of the manager.

. Build your Java application with LTTng-UST Java agent support as
usual, following the procedure for either the
<<jul,`java.util.logging`>> or <<log4j,Apache log4j>> framework.
.Provide application-specific context fields in a Java application.
====
[source,java]
.path:{Test.java}
----
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;

public class Test
{
    // Our context information retriever class
    private static class MyContextInfoRetriever
    implements IContextInfoRetriever
    {
        @Override
        public Object retrieveContextInfo(String key) {
            if (key.equals("intCtx")) {
                return (short) 17;
            } else if (key.equals("strContext")) {
                return "context value!";
            } else {
                return null;
            }
        }
    }

    private static final int answer = 42;

    public static void main(String args[]) throws Exception
    {
        // Get the context information manager instance
        ContextInfoManager cim = ContextInfoManager.getInstance();

        // Create and register our context information retriever
        IContextInfoRetriever cir = new MyContextInfoRetriever();
        cim.registerContextInfoRetriever("myRetriever", cir);

        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        logger.finer("finer information; the answer is " + answer);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
        cim.unregisterContextInfoRetriever("myRetriever");
    }
}
----
====
Build this example:

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
----

<<creating-destroying-tracing-sessions,Create a recording session>> and
<<enabling-disabling-events,create a recording event rule>> matching
`java.util.logging` events named `jello`:

[role="term"]
----
$ lttng create
$ lttng enable-event --jul jello
----

<<adding-context,Add the application-specific context fields>> to be
recorded to the event records of the `java.util.logging` channel:

[role="term"]
----
$ lttng add-context --jul --type='$app.myRetriever:intCtx'
$ lttng add-context --jul --type='$app.myRetriever:strContext'
----

<<basic-tracing-session-control,Start recording>>:

[role="term"]
----
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
----

<<basic-tracing-session-control,Stop recording>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
[[python-application]]
=== Instrument a Python application

You can instrument a Python{nbsp}2 or Python{nbsp}3 application which
uses the standard
https://docs.python.org/3/library/logging.html[`logging`] package.

Each log statement creates an LTTng event once the application module
imports the <<lttng-ust-agents,LTTng-UST Python agent>> package.

.A Python application importing the LTTng-UST Python agent.
image::python-app.png[]

To use the LTTng-UST Python agent:

. In the source code of the Python application, import the LTTng-UST
Python agent:
+
[source,python]
----
import lttngust
----
+
The LTTng-UST Python agent automatically adds its logging handler to the
root logger at import time.
+
A log statement that the application executes before this import doesn't
create an LTTng event.
+
IMPORTANT: The LTTng-UST Python agent must be
<<installing-lttng,installed>>.
. Use log statements and logging configuration as usual.
Since the LTTng-UST Python agent adds a handler to the _root_
logger, any log statement from any logger can emit an LTTng event.
.Use the LTTng-UST Python agent.
====
[source,python]
.path:{test.py}
----
import lttngust
import logging
import time


def example():
    logging.basicConfig()
    logger = logging.getLogger('my-logger')

    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warn('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(0.5)


if __name__ == '__main__':
    example()
----
====

NOTE: `logging.basicConfig()`, which adds to the root logger a basic
logging handler which prints to the standard error stream, isn't
strictly required for LTTng-UST tracing to work, but in versions of
Python preceding{nbsp}3.2, you could see a warning message which
indicates that no handler exists for the logger `my-logger`.
<<creating-destroying-tracing-sessions,Create a recording session>>,
<<enabling-disabling-events,create a recording event rule>> matching
Python logging events named `my-logger`, and
<<basic-tracing-session-control,start recording>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --python my-logger
$ lttng start
----

Run the Python script:

[role="term"]
----
$ python test.py
----

<<basic-tracing-session-control,Stop recording>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
In the resulting trace, an <<event,event record>> which a Python
application generated is named `lttng_python:event` and has the
following fields:

`asctime`::
    Logging time (string).

`msg`::
    Log record message.

`logger_name`::
    Logger name.

`funcName`::
    Name of the function in which the log statement was executed.

`lineno`::
    Line number at which the log statement was executed.

`int_loglevel`::
    Log level integer value.

`thread`::
    ID of the Python thread in which the log statement was executed.

`threadName`::
    Name of the Python thread in which the log statement was executed.
Use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of Python log levels
or a specific Python log level.
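For example, assuming a logger named `my-logger` and the `PYTHON_WARNING` level name as listed in man:lttng-enable-event(1), the following rule matches only its events at the `WARNING` level or more severe:

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----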
When an application imports the LTTng-UST Python agent, the agent tries
to register to a <<lttng-sessiond,session daemon>>. Note that you must
<<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
to it for five seconds, after which the application continues
without LTTng tracing support. Override this timeout value with
the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
(milliseconds).

If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent tries to reconnect and
register to a session daemon every three seconds. Override this
delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
variable.
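For example, to lengthen the registration timeout and shorten the retry delay when launching the script (the values and their units are assumptions; check the agent documentation for the exact units):

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=10000 \
  LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY=1000 \
  python test.py
----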
[[proc-lttng-logger-abi]]
=== Use the LTTng logger

The `lttng-tracer` Linux kernel module, part of
<<lttng-modules,LTTng-modules>>, creates the special LTTng logger files
path:{/proc/lttng-logger} and path:{/dev/lttng-logger} (since
LTTng{nbsp}2.11) when it's loaded. Any application can write text data
to either of those files to create one or more LTTng events.

.An application writes to the LTTng logger file to create one or more LTTng events.
image::lttng-logger.png[]

The LTTng logger is the quickest method--not the most efficient,
however--to add instrumentation to an application. It's designed
mostly to instrument shell scripts:

[role="term"]
----
$ echo "Some message, some $variable" > /dev/lttng-logger
----
Any event that the LTTng logger creates is named `lttng_logger` and
belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
other instrumentation points in the kernel tracing domain, **any Unix
user** can <<enabling-disabling-events,create a recording event rule>>
which matches events named `lttng_logger`, not only the root user or
users in the <<tracing-group,tracing group>>.
To use the LTTng logger:

* From any application, write text data to the path:{/dev/lttng-logger}
file.

The `msg` field of `lttng_logger` event records contains the
written data.

NOTE: The maximum message length of an LTTng logger event is
1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
than one event to contain the remaining data.
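For example, a single write of a message longer than 1024{nbsp}bytes results in more than one `lttng_logger` event record (sketch, assuming a Bash-like shell; `printf 'a%.0s' {1..2000}` writes 2000 characters):

[role="term"]
----
$ printf 'a%.0s' {1..2000} > /dev/lttng-logger
----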
You shouldn't use the LTTng logger to trace a user application which you
can instrument in a more efficient way, namely:

* <<c-application,C and $$C++$$ applications>>.
* <<java-application,Java applications>>.
* <<python-application,Python applications>>.

.Use the LTTng logger.
====
[source,bash]
.path:{test.bash}
----
#!/bin/bash

echo 'Hello, World!' > /dev/lttng-logger
df --human-readable --print-type / > /dev/lttng-logger
----
====
<<creating-destroying-tracing-sessions,Create a recording session>>,
<<enabling-disabling-events,create a recording event rule>> matching
Linux kernel tracepoint events named `lttng_logger`, and
<<basic-tracing-session-control,start recording>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --kernel lttng_logger
$ lttng start
----

Run the Bash script:

[role="term"]
----
$ bash test.bash
----

<<basic-tracing-session-control,Stop recording>> and inspect the recorded
events:

[role="term"]
----
$ lttng stop
$ lttng view
----
[[instrumenting-linux-kernel]]
=== Instrument a Linux kernel image or module

NOTE: This section shows how to _add_ instrumentation points to the
Linux kernel. The subsystems of the kernel are already thoroughly
instrumented at strategic points for LTTng when you
<<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
package.

[[linux-add-lttng-layer]]
==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
This section shows how to add an LTTng layer to existing ftrace
instrumentation using the `TRACE_EVENT()` API.

This section doesn't document the `TRACE_EVENT()` macro. Read the
following articles to learn more about this API:

* https://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part{nbsp}1)]
* https://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part{nbsp}2)]
* https://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part{nbsp}3)]

The following procedure assumes that your ftrace tracepoints are
correctly defined in their own header and that they're created in
one source file using the `CREATE_TRACE_POINTS` definition.
To add an LTTng layer over an existing ftrace tracepoint:

. Make sure the following kernel configuration options are
enabled:
+
* `CONFIG_MODULES`
* `CONFIG_KALLSYMS`
* `CONFIG_HIGH_RES_TIMERS`
* `CONFIG_TRACEPOINTS`

. Build the Linux source tree with your custom ftrace tracepoints.
. Boot the resulting Linux image on your target system.
+
Confirm that the tracepoints exist by looking for their names in the
dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
is your subsystem name.
. Get a copy of the latest LTTng-modules{nbsp}{revision}:
+
[role="term"]
----
$ cd $(mktemp -d) &&
    wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
    tar -xf lttng-modules-latest-2.13.tar.bz2 &&
    cd lttng-modules-2.13.*
----
. In dir:{instrumentation/events/lttng-module}, relative to the root
of the LTTng-modules source tree, create a header file named
+__subsys__.h+ for your custom subsystem +__subsys__+ and write your
LTTng-modules tracepoint definitions using the LTTng-modules
macros.
+
Start with this template:
+
[source,c]
.path:{instrumentation/events/lttng-module/my_subsys.h}
----
#undef TRACE_SYSTEM
#define TRACE_SYSTEM my_subsys

#if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _LTTNG_MY_SUBSYS_H

#include "../../../probes/lttng-tracepoint-event.h"
#include <linux/tracepoint.h>

LTTNG_TRACEPOINT_EVENT(
    /*
     * Format is identical to the TRACE_EVENT() version for the three
     * following macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* LTTng-modules specific macros */
    TP_FIELDS(
        ctf_integer(int, my_int_field, my_int)
        ctf_string(my_string_field, my_string)
    )
)

#endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */

#include "../../../probes/define_trace.h"
----
+
The entries in the `TP_FIELDS()` section are the list of fields for the
LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
the `TRACE_EVENT()` ftrace macro.
+
See ``<<lttng-modules-tp-fields,Tracepoint fields macros>>'' for a
complete description of the available `ctf_*()` macros.
. Create the kernel module C{nbsp}source file of the LTTng-modules
probe, +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
subsystem name:
+
[source,c]
.path:{probes/lttng-probe-my-subsys.c}
----
#include <linux/module.h>
#include "../lttng-tracer.h"

/*
 * Build-time verification of mismatch between mainline
 * TRACE_EVENT() arguments and the LTTng-modules adaptation
 * layer LTTNG_TRACEPOINT_EVENT() arguments.
 */
#include <trace/events/my_subsys.h>

/* Create LTTng tracepoint probes */
#define LTTNG_PACKAGE_BUILD
#define CREATE_TRACE_POINTS
#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module

#include "../instrumentation/events/lttng-module/my_subsys.h"

MODULE_LICENSE("GPL and additional rights");
MODULE_AUTHOR("Your name <your-email>");
MODULE_DESCRIPTION("LTTng my_subsys probes");
MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
    __stringify(LTTNG_MODULES_MINOR_VERSION) "."
    __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
    LTTNG_MODULES_EXTRAVERSION);
----
. Edit path:{probes/KBuild} and add your new kernel module object
next to the existing ones:
+
[source,make]
.path:{probes/KBuild}
----
# ...

obj-m += lttng-probe-module.o
obj-m += lttng-probe-power.o

obj-m += lttng-probe-my-subsys.o

# ...
----
. Build and install the LTTng kernel modules:
+
[role="term"]
----
$ make KERNELDIR=/path/to/linux
# make modules_install && depmod -a
----
+
Replace `/path/to/linux` with the path to the Linux source tree where
you defined and used tracepoints with the `TRACE_EVENT()` ftrace macro.

Note that you can also use the
<<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
C{nbsp}code that need to be executed before LTTng records the event
fields.
The best way to learn how to use the previous LTTng-modules macros is to
inspect the existing LTTng-modules tracepoint definitions in the
dir:{instrumentation/events/lttng-module} header files. Compare them
with the Linux kernel mainline versions in the
dir:{include/trace/events} directory of the Linux source tree.
[[lttng-tracepoint-event-code]]
===== Use custom C code to access the data for tracepoint fields

Although we recommend that you always use the
<<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
the arguments and fields of an LTTng-modules tracepoint when possible,
sometimes you need a more complex process to access the data that the
tracer records as event record fields. In other words, you need local
variables and multiple C{nbsp}statements instead of simple
argument-based expressions that you pass to the
<<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.

Use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
`LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
a block of C{nbsp}code to be executed before LTTng records the fields.
The structure of this macro is:
.`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
[source,c]
----
LTTNG_TRACEPOINT_EVENT_CODE(
    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for the following three macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* Declarations of custom local variables */
    TP_locvar(
        int a = 0;
        unsigned long b = 0;
        const char *name = "(undefined)";
        struct my_struct *my_struct;
    ),

    /*
     * Custom code which uses both tracepoint arguments
     * (in TP_ARGS()) and local variables (in TP_locvar()).
     *
     * Local variables are actually members of a structure pointed
     * to by the special variable tp_locvar.
     */
    TP_code(
        tp_locvar->a = my_int + 17;
        tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
        tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
        tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
        put_my_struct(tp_locvar->my_struct);
    ),

    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for this, except that tp_locvar members can be
     * used in the argument expression parameters of
     * the ctf_*() macros.
     */
    TP_FIELDS(
        ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
        ctf_integer(int, my_struct_a, tp_locvar->a)
        ctf_string(my_string_field, my_string)
        ctf_string(my_struct_name, tp_locvar->name)
    )
)
----
IMPORTANT: The C code defined in `TP_code()` must not have any side
effects when executed. In particular, the code must not allocate
memory or get resources without deallocating this memory or putting
those resources afterwards.
[[instrumenting-linux-kernel-tracing]]
==== Load and unload a custom probe kernel module

You must load a <<lttng-adaptation-layer,created LTTng-modules probe
kernel module>> in the kernel before it can emit LTTng events.

To load the default probe kernel modules and a custom probe kernel
module:

* Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
probe modules to load when starting a root <<lttng-sessiond,session
daemon>>:
+
.Load the `my_subsys`, `usb`, and the default probe modules.
====
[role="term"]
----
# lttng-sessiond --extra-kmod-probes=my_subsys,usb
----
====
+
You only need to pass the subsystem name, not the whole kernel module
name.

To load _only_ a given custom probe kernel module:

* Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
modules to load when starting a root session daemon:
+
.Load only the `my_subsys` and `usb` probe modules.
====
[role="term"]
----
# lttng-sessiond --kmod-probes=my_subsys,usb
----
====

To confirm that a probe module is loaded:

* Use man:lsmod(8):
+
[role="term"]
----
$ lsmod | grep lttng_probe_usb
----

To unload the loaded probe modules:

* Kill the session daemon with `SIGTERM`:
+
[role="term"]
----
# pkill lttng-sessiond
----

You can also use the `--remove` option of man:modprobe(8) if the session
daemon terminates abnormally.
[[controlling-tracing]]
== Tracing control

Once an application or a Linux kernel is <<instrumenting,instrumented>>
for LTTng tracing, you can _trace_ it.

In the LTTng context, _tracing_ means making sure that LTTng attempts to
execute some action(s) when a CPU executes an instrumentation point.

This section is divided into topics on how to use the various
<<plumbing,components of LTTng>>, in particular the
<<lttng-cli,cmd:lttng command-line tool>>, to _control_ the LTTng
daemons and tracers.

NOTE: In the following subsections, we refer to an man:lttng(1) command
using its man page name. For example, instead of ``Run the `create`
command to'', we write ``Run the man:lttng-create(1) command to''.
[[start-sessiond]]
=== Start a session daemon

In some situations, you need to run a <<lttng-sessiond,session daemon>>
(man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
command-line tool.

You will see the following error when you run a command while no session
daemon is running:

----
Error: No session daemon is available
----
The only command that automatically runs a session daemon is
man:lttng-create(1), which you use to
<<creating-destroying-tracing-sessions,create a recording session>>.
While this is often your first operation, sometimes it's not. Some
commands you might want to run first are:

* <<list-instrumentation-points,List the available instrumentation points>>.
* <<saving-loading-tracing-session,Load a recording session configuration>>.
* <<add-event-rule-matches-trigger,Add a trigger>>.

None of the commands above require a recording session to operate on.
[[tracing-group]] Each Unix user can have its own running session daemon
to use the user space LTTng tracer. The session daemon that the `root`
user starts is the only one allowed to control the LTTng kernel tracer.
Members of the Unix _tracing group_ may connect to and control the root
session daemon, even for user space tracing. See the ``Session daemon
connection'' section of man:lttng(1) to learn more about the Unix
tracing group.
To start a user session daemon:

* Run man:lttng-sessiond(8):
+
[role="term"]
----
$ lttng-sessiond --daemonize
----

To start the root session daemon:

* Run man:lttng-sessiond(8) as the `root` user:
+
[role="term"]
----
# lttng-sessiond --daemonize
----

In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
start the session daemon in foreground mode.

To stop a session daemon, kill its process (see man:kill(1)) with the
standard `TERM` signal.

Note that some Linux distributions could manage the LTTng session daemon
as a service. In this case, we suggest that you use the service manager
to start, restart, and stop session daemons.
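For example, on a distribution which ships a systemd unit for the root session daemon (the unit name `lttng-sessiond.service` is an assumption; check your distribution's packaging):

[role="term"]
----
# systemctl start lttng-sessiond.service
# systemctl status lttng-sessiond.service
----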
[[creating-destroying-tracing-sessions]]
=== Create and destroy a recording session

Many LTTng control operations happen in the scope of a
<<tracing-session,recording session>>, which is the dialogue between the
<<lttng-sessiond,session daemon>> and you for everything related to
<<event,event recording>>.

To create a recording session with a generated name:

* Use the man:lttng-create(1) command:
+
[role="term"]
----
$ lttng create
----

The name of the created recording session is `auto` followed by the
creation date and time.
To create a recording session with a specific name:

* Use the optional argument of the man:lttng-create(1) command:
+
[role="term"]
----
$ lttng create SESSION
----
+
Replace +__SESSION__+ with your specific recording session name.

In <<local-mode,local mode>>, LTTng writes the traces of a recording
session to the +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+
directory by default, where +__NAME__+ is the name of the recording
session. Note that the env:LTTNG_HOME environment variable defaults to
`$HOME` when not set.
To output LTTng traces to a non-default location:

* Use the opt:lttng-create(1):--output option of the man:lttng-create(1)
command:
+
[role="term"]
----
$ lttng create my-session --output=/tmp/some-directory
----

You may create as many recording sessions as you wish.

To list all the existing recording sessions for your Unix user, or for
all users if your Unix user is `root`:

* Use the man:lttng-list(1) command:
+
[role="term"]
----
$ lttng list
----
[[cur-tracing-session]]When you create a recording session, the
man:lttng-create(1) command sets it as the _current recording session_.
The following man:lttng(1) commands operate on the current recording
session when you don't specify one:

[role="list-3-cols"]
* man:lttng-add-context(1)
* man:lttng-clear(1)
* man:lttng-destroy(1)
* man:lttng-disable-channel(1)
* man:lttng-disable-event(1)
* man:lttng-disable-rotation(1)
* man:lttng-enable-channel(1)
* man:lttng-enable-event(1)
* man:lttng-enable-rotation(1)
* man:lttng-regenerate(1)
* man:lttng-rotate(1)
* man:lttng-save(1)
* man:lttng-snapshot(1)
* man:lttng-start(1)
* man:lttng-status(1)
* man:lttng-stop(1)
* man:lttng-track(1)
* man:lttng-untrack(1)
* man:lttng-view(1)
To change the current recording session:

* Use the man:lttng-set-session(1) command:
+
[role="term"]
----
$ lttng set-session SESSION
----
+
Replace +__SESSION__+ with the name of the new current recording
session.

When you're done recording in a given recording session, destroy it.
This operation frees the resources taken by the recording session to
destroy; it doesn't destroy the trace data that LTTng wrote for this
recording session (see ``<<clear,Clear a recording session>>'' for one
way to delete the trace data).
To destroy the current recording session:

* Use the man:lttng-destroy(1) command:
+
[role="term"]
----
$ lttng destroy
----

The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
implicitly (see ``<<basic-tracing-session-control,Start and stop a
recording session>>''). You need to stop recording to make LTTng flush the
remaining trace data and make the trace readable.
[[list-instrumentation-points]]
=== List the available instrumentation points

The <<lttng-sessiond,session daemon>> can query the running instrumented
user applications and the Linux kernel to get a list of available
instrumentation points:

* LTTng tracepoints and system calls for the Linux kernel
<<domain,tracing domain>>.

* LTTng tracepoints for the user space tracing domain.

To list the available instrumentation points:

. <<start-sessiond,Make sure>> there's a running
<<lttng-sessiond,session daemon>> to which your Unix user can
connect.

. Use the man:lttng-list(1) command with the option of the requested
tracing domain amongst:
opt:lttng-list(1):--kernel::
    Linux kernel tracepoints.
+
Your Unix user must be `root`, or it must be a member of the Unix
<<tracing-group,tracing group>>.

opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall::
    Linux kernel system calls.
+
Your Unix user must be `root`, or it must be a member of the Unix
<<tracing-group,tracing group>>.

opt:lttng-list(1):--userspace::
    User space tracepoints.

opt:lttng-list(1):--jul::
    `java.util.logging` loggers.

opt:lttng-list(1):--log4j::
    Apache log4j loggers.

opt:lttng-list(1):--python::
    Python loggers.
.List the available user space tracepoints.
====
[role="term"]
----
$ lttng list --userspace
----
====

.List the available Linux kernel system calls.
====
[role="term"]
----
$ lttng list --kernel --syscall
----
====
5961 [[enabling-disabling-events]]
5962 === Create and enable a recording event rule
5964 Once you <<creating-destroying-tracing-sessions,create a recording
5965 session>>, you can create <<event,recording event rules>> with the
5966 man:lttng-enable-event(1) command.
5968 The man:lttng-enable-event(1) command always attaches an event rule to a
5969 <<channel,channel>> on creation. The command can create a _default
5970 channel_, named `channel0`, for you. The man:lttng-enable-event(1)
5971 command reuses the default channel each time you run it for the same
5972 tracing domain and session.
5974 A recording event rule is always enabled at creation time.
5976 The following examples show how to combine the command-line arguments of
5977 the man:lttng-enable-event(1) command to create simple to more complex
recording event rules within the <<cur-tracing-session,current recording
session>>.

.Create a recording event rule matching specific Linux kernel tracepoint events (default channel).
[role="term"]
----
# lttng enable-event --kernel sched_switch
----

.Create a recording event rule matching Linux kernel system call events with four specific names (default channel).
[role="term"]
----
# lttng enable-event --kernel --syscall open,write,read,close
----
.Create recording event rules matching tracepoint events which satisfy filter expressions (default channel).
[role="term"]
----
# lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
----

[role="term"]
----
# lttng enable-event --kernel --all \
        --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
----

[role="term"]
----
$ lttng enable-event --jul my_logger \
        --filter='$app.retriever:cur_msg_id > 3'
----
6016 IMPORTANT: Make sure to always single-quote the filter string when you
6017 run man:lttng(1) from a shell.
6019 See also ``<<pid-tracking,Allow specific processes to record events>>''
6020 which offers another, more efficient filtering mechanism for process ID,
6021 user ID, and group ID attributes.
.Create a recording event rule matching any user space event from the `my_app` tracepoint provider and with a log level range (default channel).
[role="term"]
----
$ lttng enable-event --userspace my_app:'*' --loglevel=INFO
----
6031 IMPORTANT: Make sure to always single-quote the wildcard character when
6032 you run man:lttng(1) from a shell.
.Create a recording event rule matching user space events named specifically, but with name exclusions (default channel).
[role="term"]
----
$ lttng enable-event --userspace my_app:'*' \
        --exclude=my_app:set_user,my_app:handle_sig
----
.Create a recording event rule matching any Apache log4j event with a specific log level (default channel).
[role="term"]
----
$ lttng enable-event --log4j --all --loglevel-only=WARN
----
.Create a recording event rule, attached to a specific channel, and matching user space tracepoint events named `my_app:my_tracepoint`.
[role="term"]
----
$ lttng enable-event --userspace my_app:my_tracepoint \
        --channel=my-channel
----
.Create a recording event rule matching user space probe events for the `malloc` function entry in path:{/usr/lib/libc.so.6}:
[role="term"]
----
# lttng enable-event --kernel \
        --userspace-probe=/usr/lib/libc.so.6:malloc \
        libc_malloc
----
.Create a recording event rule matching user space probe events for the `server`/`accept_request` https://www.sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps[USDT probe] in path:{/usr/bin/serv}:
[role="term"]
----
# lttng enable-event --kernel \
        --userspace-probe=sdt:serv:server:accept_request \
        server_accept_request
----
The recording event rules of a given channel form a whitelist: as soon
as an event rule matches an event, LTTng emits it _once_ and therefore
<<channel-overwrite-mode-vs-discard-mode,can>> record it. For example,
the following rules both match user space tracepoint events named
`my_app:my_tracepoint` with an `INFO` log level:

[role="term"]
----
$ lttng enable-event --userspace my_app:my_tracepoint
$ lttng enable-event --userspace my_app:my_tracepoint \
        --loglevel=INFO
----

The second recording event rule is redundant: the first one includes the
second one.
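The whitelist semantics can be sketched as follows. This Python snippet is a conceptual model of the behavior described above (assumed semantics, not LTTng source): an event which one or more rules of a channel match is emitted exactly once, so the narrower rule adds nothing.

```python
# Conceptual model of per-channel whitelist matching; not LTTng code.
LEVELS = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}

def matches(rule, event):
    # A rule matches on the event name and, optionally, on a
    # minimum log level (like `--loglevel=INFO`).
    if rule["name"] != event["name"]:
        return False
    min_level = rule.get("min_level")
    return min_level is None or LEVELS[event["level"]] >= LEVELS[min_level]

def record_count(rules, event):
    # Whitelist semantics: LTTng emits the event _once_ as soon as
    # any recording event rule of the channel matches it.
    return 1 if any(matches(r, event) for r in rules) else 0

event = {"name": "my_app:my_tracepoint", "level": "INFO"}
rules = [{"name": "my_app:my_tracepoint"}]
assert record_count(rules, event) == 1

# Adding the narrower `--loglevel=INFO` rule is redundant:
# the event is still recorded exactly once.
rules.append({"name": "my_app:my_tracepoint", "min_level": "INFO"})
assert record_count(rules, event) == 1
```
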
6098 [[disable-event-rule]]
6099 === Disable a recording event rule
6101 To disable a <<event,recording event rule>> that you
6102 <<enabling-disabling-events,created>> previously, use the
6103 man:lttng-disable-event(1) command.
6105 man:lttng-disable-event(1) can only find recording event rules to
6106 disable by their <<instrumentation-point-types,instrumentation point
6107 type>> and event name conditions. Therefore, you cannot disable
6108 recording event rules having a specific instrumentation point log level
6109 condition, for example.
6111 LTTng doesn't emit (and, therefore, won't record) an event which only
6112 _disabled_ recording event rules match.
.Disable event rules matching Python logging events from the `my-logger` logger (default <<channel,channel>>, <<cur-tracing-session,current recording session>>).
[role="term"]
----
$ lttng disable-event --python my-logger
----

.Disable event rules matching all `java.util.logging` events (default channel, recording session `my-session`).
[role="term"]
----
$ lttng disable-event --jul --session=my-session '*'
----
.Disable _all_ the Linux kernel recording event rules (channel `my-chan`, current recording session).
The opt:lttng-disable-event(1):--all-events option isn't, like the
opt:lttng-enable-event(1):--all option of the man:lttng-enable-event(1)
command, an alias for the event name globbing pattern `*`: it disables
_all_ the recording event rules of a given channel.

[role="term"]
----
# lttng disable-event --kernel --channel=my-chan --all-events
----
6143 NOTE: You can't _remove_ a recording event rule once you create it.
6147 === Get the status of a recording session
6149 To get the status of the <<cur-tracing-session,current recording
6150 session>>, that is, its parameters, its channels, recording event rules,
6151 and their attributes:
* Use the man:lttng-status(1) command:
+
[role="term"]
----
$ lttng status
----
To get the status of any recording session:

* Use the man:lttng-list(1) command with the name of the recording
session:
+
[role="term"]
----
$ lttng list SESSION
----

Replace +__SESSION__+ with the recording session name.
6177 [[basic-tracing-session-control]]
6178 === Start and stop a recording session
Once you <<creating-destroying-tracing-sessions,create a recording
session>> and <<enabling-disabling-events,create one or more recording
event rules>>, you can start and stop the tracers for this recording
session.

To start the <<cur-tracing-session,current recording session>>:

* Use the man:lttng-start(1) command:
+
[role="term"]
----
$ lttng start
----
LTTng is flexible: you can launch user applications before or after you
start the tracers. An LTTng tracer only <<event,records an event>> if a
recording event rule matches it and the tracer is active.
The `start-session` <<trigger,trigger>> action can also start a recording
session.

To stop the current recording session:

* Use the man:lttng-stop(1) command:
+
[role="term"]
----
$ lttng stop
----
If there were <<channel-overwrite-mode-vs-discard-mode,lost event
records>> or lost sub-buffers since the last time you ran
man:lttng-start(1), the man:lttng-stop(1) command prints corresponding
warnings.
6219 IMPORTANT: You need to stop recording to make LTTng flush the remaining
6220 trace data and make the trace readable. Note that the
6221 man:lttng-destroy(1) command (see
6222 ``<<creating-destroying-tracing-sessions,Create and destroy a recording
6223 session>>'') also runs the man:lttng-stop(1) command implicitly.
The `stop-session` <<trigger,trigger>> action can also stop a recording
session.
6230 === Clear a recording session
6232 You might need to remove all the current tracing data of one or more
6233 <<tracing-session,recording sessions>> between multiple attempts to
6234 reproduce a problem without interrupting the LTTng recording activity.
6236 To clear the tracing data of the
6237 <<cur-tracing-session,current recording session>>:
* Use the man:lttng-clear(1) command:
+
[role="term"]
----
$ lttng clear
----
To clear the tracing data of all the recording sessions:

* Use the `lttng clear` command with its opt:lttng-clear(1):--all
option:
+
[role="term"]
----
$ lttng clear --all
----
6261 [[enabling-disabling-channels]]
6262 === Create a channel
6264 Once you <<creating-destroying-tracing-sessions,create a recording
6265 session>>, you can create a <<channel,channel>> with the
6266 man:lttng-enable-channel(1) command.
6268 Note that LTTng can automatically create a default channel when you
6269 <<enabling-disabling-events,create a recording event rule>>.
Therefore, you only need to create a channel when you need non-default
attributes.
6273 Specify each non-default channel attribute with a command-line
6274 option when you run the man:lttng-enable-channel(1) command.
6276 You can only create a custom channel in the Linux kernel and user space
6277 <<domain,tracing domains>>: the Java/Python logging tracing domains have
6278 their own default channel which LTTng automatically creates when you
6279 <<enabling-disabling-events,create a recording event rule>>.
6283 As of LTTng{nbsp}{revision}, you may _not_ perform the
6284 following operations with the man:lttng-enable-channel(1) command:
6286 * Change an attribute of an existing channel.
6288 * Enable a disabled channel once its recording session has been
6289 <<basic-tracing-session-control,active>> at least once.
6291 * Create a channel once its recording session has been active at
* Create a user space channel with a given
<<channel-buffering-schemes,buffering scheme>> and create a second
user space channel with a different buffering scheme in the same
recording session.
6300 The following examples show how to combine the command-line options of
6301 the man:lttng-enable-channel(1) command to create simple to more complex
6302 channels within the <<cur-tracing-session,current recording session>>.
.Create a Linux kernel channel with default attributes.
[role="term"]
----
# lttng enable-channel --kernel my-channel
----
.Create a user space channel with four sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
[role="term"]
----
$ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
        --buffers-pid my-channel
----
.[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
<<creating-destroying-tracing-sessions,Create a recording session>>,
create the channel, <<enabling-disabling-events,create a recording event
rule>>, and <<basic-tracing-session-control,start recording>>:

[role="term"]
----
$ lttng create blocking-session
$ lttng enable-channel --userspace --blocking-timeout=inf blocking-chan
$ lttng enable-event --userspace --channel=blocking-chan --all
$ lttng start
----

Run an application instrumented with LTTng-UST tracepoints and allow it
to block:

[role="term"]
----
$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
----
.Create a Linux kernel channel which rotates eight trace files of 4{nbsp}MiB each for each stream.
[role="term"]
----
# lttng enable-channel --kernel --tracefile-count=8 \
        --tracefile-size=4194304 my-channel
----
.Create a user space channel in <<overwrite-mode,overwrite>> (or ``flight recorder'') mode.
[role="term"]
----
$ lttng enable-channel --userspace --overwrite my-channel
----
.<<enabling-disabling-events,Create>> the same <<event,recording event rule>> attached to two different channels.
[role="term"]
----
$ lttng enable-event --userspace --channel=my-channel app:tp
$ lttng enable-event --userspace --channel=other-channel app:tp
----
6369 When a CPU executes the `app:tp` <<c-application,user space
6370 tracepoint>>, the two recording event rules above match the created
6371 event, making LTTng emit the event. Because the recording event rules
6372 are not attached to the same channel, LTTng records the event twice.
6377 === Disable a channel
6379 To disable a specific channel that you
6380 <<enabling-disabling-channels,created>> previously, use the
6381 man:lttng-disable-channel(1) command.
.Disable a specific Linux kernel channel (<<cur-tracing-session,current recording session>>).
[role="term"]
----
# lttng disable-channel --kernel my-channel
----
An enabled channel is an implicit <<event,recording event rule>>
condition.
6394 NOTE: As of LTTng{nbsp}{revision}, you may _not_ enable a disabled
6395 channel once its recording session has been
6396 <<basic-tracing-session-control,started>> at least once.
6400 === Add context fields to be recorded to the event records of a channel
6402 <<event,Event record>> fields in trace files provide important
6403 information about previously emitted events, but sometimes some external
6404 context may help you solve a problem faster.
6406 Examples of context fields are:
6408 * The **process ID**, **thread ID**, **process name**, and
6409 **process priority** of the thread from which LTTng emits the event.
6411 * The **hostname** of the system on which LTTng emits the event.
6413 * The Linux kernel and user call stacks (since LTTng{nbsp}2.11).
* The current values of many possible **performance counters** using
performance monitoring units (PMUs), for example:
6420 ** Branch instructions, misses, and loads.
6423 * Any state defined at the application level (supported for the
6424 `java.util.logging` and Apache log4j <<domain,tracing domains>>).
6426 To get the full list of available context fields:
6428 * Use the opt:lttng-add-context(1):--list option of the
6429 man:lttng-add-context(1) command:
+
[role="term"]
----
$ lttng add-context --list
----
.Add context fields to be recorded to the event records of all the <<channel,channels>> of the <<cur-tracing-session,current recording session>>.
The following command line adds the virtual process identifier and the
per-thread CPU cycles count fields to all the user space channels of the
current recording session.

[role="term"]
----
$ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
----
.Add performance counter context fields by raw ID
See man:lttng-add-context(1) for the exact format of the context field
type, which is partly compatible with the format used in
man:perf-record(1).

[role="term"]
----
# lttng add-context --userspace --type=perf:thread:raw:r0110:test
# lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
----
.Add context fields to be recorded to the event records of a specific channel.
The following command line adds the thread identifier and user call
stack context fields to the Linux kernel channel named `my-channel` of
the <<cur-tracing-session,current recording session>>.

[role="term"]
----
# lttng add-context --kernel --channel=my-channel \
        --type=tid --type=callstack-user
----
.Add an <<java-application-context,application-specific context field>> to be recorded to the event records of a specific channel.
The following command line makes sure LTTng writes the `cur_msg_id`
context field of the `retriever` context retriever to all the Java
logging <<event,event records>> of the channel named `my-channel`:

[role="term"]
----
$ lttng add-context --jul --channel=my-channel \
        --type='$app.retriever:cur_msg_id'
----
6486 IMPORTANT: Make sure to always single-quote the `$` character when you
6487 run man:lttng-add-context(1) from a shell.
6490 NOTE: You can't undo what the man:lttng-add-context(1) command does.
6495 === Allow specific processes to record events
6497 It's often useful to only allow processes with specific attributes to
6498 record events. For example, you may wish to record all the system calls
6499 which a given process makes (à la man:strace(1)).
6501 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6502 purpose. Both commands operate on _inclusion sets_ of process
6503 attributes. The available process attribute types are:
Linux kernel <<domain,tracing domain>>::
+
* Process ID (PID).

* Virtual process ID (VPID).
6511 This is the PID as seen by the application.
6513 * Unix user ID (UID).
6515 * Virtual Unix user ID (VUID).
6517 This is the UID as seen by the application.
6519 * Unix group ID (GID).
6521 * Virtual Unix group ID (VGID).
6523 This is the GID as seen by the application.
User space tracing domain::
+
* Virtual process ID (VPID).

* Virtual Unix user ID (VUID).

* Virtual Unix group ID (VGID).
6531 A <<tracing-session,recording session>> has nine process
6532 attribute inclusion sets: six for the Linux kernel <<domain,tracing domain>>
6533 and three for the user space tracing domain.
For a given recording session, a process{nbsp}__P__ is allowed to record
LTTng events for a given <<domain,tracing domain>>{nbsp}__D__ if _all_
the attributes of{nbsp}__P__ are part of the inclusion sets
of{nbsp}__D__.
6540 Whether a process is allowed or not to record LTTng events is an
6541 implicit condition of all <<event,recording event rules>>. Therefore, if
6542 LTTng creates an event{nbsp}__E__ for a given process, but this process
6543 may not record events, then no recording event rule matches{nbsp}__E__,
6544 which means LTTng won't emit and record{nbsp}__E__.
When you <<creating-destroying-tracing-sessions,create a recording
session>>, all its process attribute inclusion sets contain all the
possible values. In other words, all processes are allowed to record
events.
6551 Add values to an inclusion set with the man:lttng-track(1) command and
6552 remove values with the man:lttng-untrack(1) command.
[IMPORTANT]
====
The process attribute values are _numeric_.

Should a process with a given ID (part of an inclusion set), for
example, exit, and then a new process be given this same ID, then the
latter would also be allowed to record events.
====
6562 With the man:lttng-track(1) command, you can add Unix user and group
6563 _names_ to the user and group inclusion sets: the
6564 <<lttng-sessiond,session daemon>> finds the corresponding UID, VUID,
6565 GID, or VGID once on _addition_ to the inclusion set. This means that if
6566 you rename the user or group after you run the man:lttng-track(1)
6567 command, its user/group ID remains part of the inclusion sets.
6570 .Allow processes to record events based on their virtual process ID (VPID).
6572 For the sake of the following example, assume the target system has
6573 16{nbsp}possible VPIDs.
When you
<<creating-destroying-tracing-sessions,create a recording session>>,
the user space VPID inclusion set contains _all_ the possible VPIDs:
6580 .The VPID inclusion set is full.
6581 image::track-all.png[]
6583 When the inclusion set is full and you run the man:lttng-track(1)
6584 command to specify some VPIDs, LTTng:
6586 . Clears the inclusion set.
6587 . Adds the specific VPIDs to the inclusion set.
After:

[role="term"]
----
$ lttng track --userspace --vpid=3,4,7,10,13
----

the VPID inclusion set is:
6599 .The VPID inclusion set contains the VPIDs 3, 4, 7, 10, and 13.
6600 image::track-3-4-7-10-13.png[]
Add more VPIDs to the inclusion set afterwards:

[role="term"]
----
$ lttng track --userspace --vpid=1,15,16
----

The result is:
6612 .VPIDs 1, 15, and 16 are added to the inclusion set.
6613 image::track-1-3-4-7-10-13-15-16.png[]
6615 The man:lttng-untrack(1) command removes entries from process attribute
inclusion sets. Given the previous example, the following command:

[role="term"]
----
$ lttng untrack --userspace --vpid=3,7,10,13
----

leads to this VPID inclusion set:
6626 .VPIDs 3, 7, 10, and 13 are removed from the inclusion set.
6627 image::track-1-4-15-16.png[]
6629 You can make the VPID inclusion set full again with the
opt:lttng-track(1):--all option:

[role="term"]
----
$ lttng track --userspace --vpid --all
----
6637 The result is, again:
6640 .The VPID inclusion set is full.
6641 image::track-all.png[]
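The track/untrack behavior shown above can be condensed into a small model. The following Python snippet is an illustrative sketch only (the class is invented, not the LTTng implementation): a new inclusion set is full; the first specific value you track clears it first; `--all` makes it full again.

```python
# Illustrative model of a VPID inclusion set; not LTTng code.
ALL_VPIDS = set(range(1, 17))  # assume 16 possible VPIDs, as above

class InclusionSet:
    def __init__(self):
        # A new recording session starts with a full inclusion set.
        self.values = set(ALL_VPIDS)
        self.full = True

    def track(self, vpids=None):
        if vpids is None:        # like `lttng track --vpid --all`
            self.values = set(ALL_VPIDS)
            self.full = True
            return
        if self.full:            # first specific value: clear first
            self.values.clear()
            self.full = False
        self.values.update(vpids)

    def untrack(self, vpids=None):
        if vpids is None:        # like `lttng untrack --vpid --all`
            self.values.clear()
        else:
            self.values.difference_update(vpids)
        self.full = False

s = InclusionSet()
s.track({3, 4, 7, 10, 13})
s.track({1, 15, 16})
s.untrack({3, 7, 10, 13})
print(sorted(s.values))  # [1, 4, 15, 16], as in the figures above
s.track()
assert s.values == ALL_VPIDS
```
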
6644 .Allow specific processes to record events based on their user ID (UID).
6646 A typical use case with process attribute inclusion sets is to start
6647 with an empty inclusion set, then <<basic-tracing-session-control,start
6648 the tracers>>, and finally add values manually while the tracers are
6651 Use the opt:lttng-untrack(1):--all option of the
6652 man:lttng-untrack(1) command to clear the inclusion set after you
6653 <<creating-destroying-tracing-sessions,create a recording session>>, for
example (with UIDs):

[role="term"]
----
# lttng untrack --kernel --uid --all
----

The result is:
6664 .The UID inclusion set is empty.
6665 image::untrack-all.png[]
If the LTTng tracer runs with this inclusion set configuration, it
records no events within the <<cur-tracing-session,current recording
session>> because no process is allowed to do so. Use the
man:lttng-track(1) command as usual to add specific values to the UID
inclusion set when you need to, for example:

[role="term"]
----
# lttng track --kernel --uid=http,11
----

The result is:
6681 .UIDs 6 (`http`) and 11 are part of the UID inclusion set.
6682 image::track-6-11.png[]
6687 [[saving-loading-tracing-session]]
6688 === Save and load recording session configurations
6690 Configuring a <<tracing-session,recording session>> can be long. Some of
6691 the tasks involved are:
6693 * <<enabling-disabling-channels,Create channels>> with
6694 specific attributes.
6696 * <<adding-context,Add context fields>> to be recorded to the
6697 <<event,event records>> of specific channels.
6699 * <<enabling-disabling-events,Create recording event rules>> with
6700 specific log level, filter, and other conditions.
6702 If you use LTTng to solve real world problems, chances are you have to
6703 record events using the same recording session setup over and over,
6704 modifying a few variables each time in your instrumented program or
6707 To avoid constant recording session reconfiguration, the man:lttng(1)
6708 command-line tool can save and load recording session configurations
6711 To save a given recording session configuration:
* Use the man:lttng-save(1) command:
+
[role="term"]
----
$ lttng save SESSION
----

Replace +__SESSION__+ with the name of the recording session to save.
LTTng saves recording session configurations to
dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
env:LTTNG_HOME environment variable defaults to `$HOME` if not set. See
man:lttng-save(1) to learn more about the recording session configuration
output path.
6730 LTTng saves all configuration parameters, for example:
6732 * The recording session name.
6733 * The trace data output path.
6734 * The <<channel,channels>>, with their state and all their attributes.
6735 * The context fields you added to channels.
6736 * The <<event,recording event rules>> with their state and conditions.
6738 To load a recording session:
* Use the man:lttng-load(1) command:
+
[role="term"]
----
$ lttng load SESSION
----

Replace +__SESSION__+ with the name of the recording session to load.
6751 When LTTng loads a configuration, it restores your saved recording session
6752 as if you just configured it manually.
6754 You can also save and load many sessions at a time; see
6755 man:lttng-save(1) and man:lttng-load(1) to learn more.
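The default output location described above can be computed with a short helper. This Python sketch only models the path rule stated in this section (`$LTTNG_HOME/.lttng/sessions`, with `LTTNG_HOME` falling back to `$HOME`); the `.lttng` file extension and the function name are assumptions for illustration.

```python
# Hedged sketch of the default recording session configuration path;
# the `.lttng` extension is an assumption, not taken from this text.
import os

def default_session_config_path(session_name, env):
    # LTTNG_HOME defaults to $HOME when unset.
    home = env.get("LTTNG_HOME") or env.get("HOME")
    return os.path.join(home, ".lttng", "sessions", session_name + ".lttng")

print(default_session_config_path("my-session", {"HOME": "/home/alice"}))
# /home/alice/.lttng/sessions/my-session.lttng
print(default_session_config_path("my-session",
                                  {"HOME": "/home/alice",
                                   "LTTNG_HOME": "/tmp/lttng-home"}))
# /tmp/lttng-home/.lttng/sessions/my-session.lttng
```
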
6758 [[sending-trace-data-over-the-network]]
6759 === Send trace data over the network
6761 LTTng can send the recorded trace data of a <<tracing-session,recording
6762 session>> to a remote system over the network instead of writing it to
6763 the local file system.
6765 To send the trace data over the network:
. On the _remote_ system (which can also be the target system),
start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
+
[role="term"]
----
$ lttng-relayd
----
. On the _target_ system, create a recording session
<<net-streaming-mode,configured>> to send trace data over the network:
+
[role="term"]
----
$ lttng create my-session --set-url=net://remote-system
----
6787 Replace +__remote-system__+ with the host name or IP address of the
6788 remote system. See man:lttng-create(1) for the exact URL format.
6790 . On the target system, use the man:lttng(1) command-line tool as usual.
6792 When recording is <<basic-tracing-session-control,active>>, the
6793 <<lttng-consumerd,consumer daemon>> of the target sends the contents of
6794 <<channel,sub-buffers>> to the remote relay daemon instead of flushing
6795 them to the local file system. The relay daemon writes the received
6796 packets to its local file system.
6798 See the ``Output directory'' section of man:lttng-relayd(8) to learn
6799 where a relay daemon writes its received trace data.
6804 === View events as LTTng records them (noch:{LTTng} live)
6806 _LTTng live_ is a network protocol implemented by the
6807 <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)) to allow compatible
6808 trace readers to display or analyze <<event,event records>> as LTTng
6809 records events on the target system while recording is
6810 <<basic-tracing-session-control,active>>.
6812 The relay daemon creates a _tee_: it forwards the trace data to both the
6813 local file system and to connected live readers:
.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live reader.
image::live.png[]
. On the _target system_, create a <<tracing-session,recording session>>
in _live mode_:
+
[role="term"]
----
$ lttng create my-session --live
----
6831 This operation spawns a local relay daemon.
6833 . Start the live reader and configure it to connect to the relay daemon.
For example, with man:babeltrace2(1):
+
[role="term"]
----
$ babeltrace2 net://localhost/host/HOSTNAME/my-session
----
6844 Replace +__HOSTNAME__+ with the host name of the target system.
6846 . Configure the recording session as usual with the man:lttng(1)
6847 command-line tool, and <<basic-tracing-session-control,start recording>>.
List the available live recording sessions with man:babeltrace2(1):

[role="term"]
----
$ babeltrace2 net://localhost
----
6856 You can start the relay daemon on another system. In this case, you need
6857 to specify the URL of the relay daemon when you
6858 <<creating-destroying-tracing-sessions,create the recording session>> with
6859 the opt:lttng-create(1):--set-url option of the man:lttng-create(1)
6860 command. You also need to replace +__localhost__+ in the procedure above
6861 with the host name of the system on which the relay daemon runs.
6865 [[taking-a-snapshot]]
6866 === Take a snapshot of the current sub-buffers of a recording session
6868 The normal behavior of LTTng is to append full sub-buffers to growing
6869 trace data files. This is ideal to keep a full history of the events
6870 which the target system emitted, but it can represent too much data in
6873 For example, you may wish to have LTTng record your application
6874 continuously until some critical situation happens, in which case you
6875 only need the latest few recorded events to perform the desired
6876 analysis, not multi-gigabyte trace files.
6878 With the man:lttng-snapshot(1) command, you can take a _snapshot_ of the
6879 current <<channel,sub-buffers>> of a given <<tracing-session,recording
6880 session>>. LTTng can write the snapshot to the local file system or send
6881 it over the network.
6884 .A snapshot is a copy of the current sub-buffers, which LTTng does _not_ clear after the operation.
6885 image::snapshot.png[]
The snapshot feature of LTTng is similar to how a
https://en.wikipedia.org/wiki/Flight_recorder[flight recorder] or the
``roll'' mode of an oscilloscope works.
6891 TIP: If you wish to create unmanaged, self-contained, non-overlapping
6892 trace chunk archives instead of a simple copy of the current
6893 sub-buffers, see the <<session-rotation,recording session rotation>>
6894 feature (available since LTTng{nbsp}2.11).
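The copy-without-clearing behavior can be sketched with a bounded buffer. This Python snippet is a toy model of the semantics described above (not LTTng code): a snapshot copies the current sub-buffer contents without clearing them, so two consecutive snapshots can overlap.

```python
# Toy model: a sub-buffer ring in overwrite mode, snapshotted twice.
from collections import deque

ring = deque(maxlen=4)          # ring holding the last 4 event records
for i in range(10):
    ring.append(f"event-{i}")   # overwrite mode: oldest records are lost

snapshot_a = list(ring)         # a snapshot copies, but does NOT clear
ring.append("event-10")
snapshot_b = list(ring)

print(snapshot_a)  # ['event-6', 'event-7', 'event-8', 'event-9']
print(snapshot_b)  # overlaps snapshot_a: the ring was never cleared
```
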
To take a snapshot of the <<cur-tracing-session,current recording
session>>:

. Create a recording session in <<snapshot-mode,snapshot mode>>:
+
[role="term"]
----
$ lttng create my-session --snapshot
----
6908 The <<channel-overwrite-mode-vs-discard-mode,event record loss mode>> of
6909 <<channel,channels>> created in this mode is automatically set to
6910 <<overwrite-mode,_overwrite_>>.
. Configure the recording session as usual with the man:lttng(1)
command-line tool, and <<basic-tracing-session-control,start
recording>>.
6916 . **Optional**: When you need to take a snapshot,
6917 <<basic-tracing-session-control,stop recording>>.
6919 You can take a snapshot when the tracers are active, but if you stop
6920 them first, you're guaranteed that the trace data in the sub-buffers
doesn't change before you actually take the snapshot.

. Take a snapshot:
+
[role="term"]
----
$ lttng snapshot record --name=my-first-snapshot
----
6932 LTTng writes the current sub-buffers of all the channels of the
6933 <<cur-tracing-session,current recording session>> to
6934 trace files on the local file system. Those trace files have
6935 `my-first-snapshot` in their name.
6937 There's no difference between the format of a normal trace file and the
6938 format of a snapshot: LTTng trace readers also support LTTng snapshots.
By default, LTTng writes snapshot files to the path shown by:

[role="term"]
----
$ lttng snapshot list-output
----

You can change this path or decide to send snapshots over the network
using either:
6950 . An output path or URL that you specify when you
6951 <<creating-destroying-tracing-sessions,create the recording session>>.
6953 . A snapshot output path or URL that you add using the
6954 `add-output` action of the man:lttng-snapshot(1) command.
6956 . An output path or URL that you provide directly to the
6957 `record` action of the man:lttng-snapshot(1) command.
Method{nbsp}3 overrides method{nbsp}2, which overrides method{nbsp}1. When
6960 you specify a URL, a <<lttng-relayd,relay daemon>> must listen on a
6961 remote system (see ``<<sending-trace-data-over-the-network,Send trace
6962 data over the network>>'').
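The precedence of the three methods above can be expressed as a one-line resolution rule. This Python sketch only illustrates the override order stated in this section; the function and parameter names are invented, not an LTTng API.

```python
# Illustrative sketch of snapshot output resolution; names invented.
def resolve_snapshot_output(session_url=None, added_output=None,
                            record_arg=None):
    # Method 3 (`record` argument) overrides method 2 (`add-output`),
    # which overrides method 1 (session creation output).
    for output in (record_arg, added_output, session_url):
        if output is not None:
            return output
    return None

assert resolve_snapshot_output(session_url="/traces") == "/traces"
assert resolve_snapshot_output(session_url="/traces",
                               added_output="net://relay") == "net://relay"
assert resolve_snapshot_output(session_url="/traces",
                               added_output="net://relay",
                               record_arg="/tmp/snap") == "/tmp/snap"
```
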
6964 The `snapshot-session` <<trigger,trigger>> action can also take
6965 a recording session snapshot.
6969 [[session-rotation]]
6970 === Archive the current trace chunk (rotate a recording session)
6972 The <<taking-a-snapshot,snapshot user guide>> shows how to dump the
6973 current sub-buffers of a recording session to the file system or send them
6974 over the network. When you take a snapshot, LTTng doesn't clear the ring
6975 buffers of the recording session: if you take another snapshot immediately
6976 after, both snapshots could contain overlapping trace data.
6978 Inspired by https://en.wikipedia.org/wiki/Log_rotation[log rotation],
6979 _recording session rotation_ is a feature which appends the content of the
6980 ring buffers to what's already on the file system or sent over the
6981 network since the creation of the recording session or since the last
6982 rotation, and then clears those ring buffers to avoid trace data
6985 What LTTng is about to write when performing a recording session rotation
6986 is called the _current trace chunk_. When LTTng writes or sends over the
6987 network this current trace chunk, it becomes a _trace chunk archive_.
6988 Therefore, a recording session rotation operation _archives_ the current
6992 .A recording session rotation operation _archives_ the current trace chunk.
6993 image::rotation.png[]
6995 A trace chunk archive is a self-contained LTTng trace which LTTng
6996 doesn't manage anymore: you can read it, modify it, move it, or remove
6999 As of LTTng{nbsp}{revision}, there are three methods to perform a
7000 recording session rotation:
7002 * <<immediate-rotation,Immediately>>.
7004 * With a <<rotation-schedule,rotation schedule>>.
* Through the execution of a `rotate-session` <<trigger,trigger>>
action.
7009 [[immediate-rotation]]To perform an immediate rotation of the
7010 <<cur-tracing-session,current recording session>>:
. <<creating-destroying-tracing-sessions,Create a recording session>> in
<<local-mode,local mode>> or <<net-streaming-mode,network streaming
mode>> (only those two recording session modes support recording session
rotation):
+
[role="term"]
----
# lttng create my-session
----
. <<enabling-disabling-events,Create one or more recording event rules>>
and <<basic-tracing-session-control,start recording>>:
+
[role="term"]
----
# lttng enable-event --kernel sched_'*'
# lttng start
----
. When needed, immediately rotate the current recording session:
+
[role="term"]
----
# lttng rotate
----
7044 The man:lttng-rotate(1) command prints the path to the created trace
7045 chunk archive. See its manual page to learn about the format of trace
7046 chunk archive directory names.
7048 Perform other immediate rotations while the recording session is active.
7049 It's guaranteed that all the trace chunk archives don't contain
7050 overlapping trace data. You can also perform an immediate rotation once
7051 you have <<basic-tracing-session-control,stopped>> the recording session.
. When you're done recording,
<<creating-destroying-tracing-sessions,destroy the current recording
session>>:
+
[role="term"]
----
# lttng destroy
----
7064 The recording session destruction operation creates one last trace chunk
7065 archive from the current trace chunk.
7067 [[rotation-schedule]]A recording session rotation schedule is a planned
7068 rotation which LTTng performs automatically based on one of the
7069 following conditions:
7071 * A timer with a configured period expires.
7073 * The total size of the _flushed_ part of the current trace chunk
7074 becomes greater than or equal to a configured value.
7076 To schedule a rotation of the <<cur-tracing-session,current recording
7077 session>>, set a _rotation schedule_:
. <<creating-destroying-tracing-sessions,Create a recording session>> in
<<local-mode,local mode>> or <<net-streaming-mode,network streaming
mode>> (only those two creation modes support recording session
rotation):
+
[role="term"]
----
# lttng create my-session
----
7091 . <<enabling-disabling-events,Create one or more recording event rules>>:
7096 # lttng enable-event --kernel sched_'*'
7100 . Set a recording session rotation schedule:
7105 # lttng enable-rotation --timer=10s
7109 In this example, we set a rotation schedule so that LTTng performs a
7110 recording session rotation every ten seconds.
7112 See man:lttng-enable-rotation(1) to learn more about other ways to set a
7115 . <<basic-tracing-session-control,Start recording>>:
7124 LTTng performs recording session rotations automatically while the
7125 recording session is active thanks to the rotation schedule.
7127 . When you're done recording,
7128 <<creating-destroying-tracing-sessions,destroy the current recording
7138 The recording session destruction operation creates one last trace chunk
7139 archive from the current trace chunk.
7141 Unset a recording session rotation schedule with the
7142 man:lttng-disable-rotation(1) command.
7146 [[add-event-rule-matches-trigger]]
7147 === Add an ``event rule matches'' trigger to a session daemon
7149 With the man:lttng-add-trigger(1) command, you can add a
7150 <<trigger,trigger>> to a <<lttng-sessiond,session daemon>>.
7152 A trigger associates an LTTng tracing condition to one or more actions:
7153 when the condition is satisfied, LTTng attempts to execute the actions.
7155 A trigger doesn't need any <<tracing-session,recording session>> to exist:
7156 it belongs to a session daemon.
7158 As of LTTng{nbsp}{revision}, many condition types are available through
7159 the <<liblttng-ctl-lttng,`liblttng-ctl`>> C{nbsp}API, but the
7160 man:lttng-add-trigger(1) command only accepts the ``event rule matches'' condition type.
7163 An ``event rule matches'' condition is satisfied when its event rule matches an event.
7166 Unlike a <<event,recording event rule>>, the event rule of an
7167 ``event rule matches'' trigger condition has no implicit conditions:
7170 * It has no enabled/disabled state.
7171 * It has no attached <<channel,channel>>.
7172 * It doesn't belong to a <<tracing-session,recording session>>.
7174 Both the man:lttng-add-trigger(1) and man:lttng-enable-event(1) commands
7175 accept command-line arguments to specify an <<event-rule,event rule>>.
7176 That being said, the former is a more recent command and therefore
7177 follows the common event rule specification format (see
7178 man:lttng-event-rule(7)).
7180 .Start a <<tracing-session,recording session>> when an event rule matches.
7182 This example shows how to add the following trigger to the root
7183 <<lttng-sessiond,session daemon>>:
7186 An event rule matches a Linux kernel system call event of which the
7187 name starts with `exec` and `*/ls` matches the `filename` payload field.
7190 With such an event rule, LTTng emits an event when the cmd:ls program runs.
7194 <<basic-tracing-session-control,Start the recording session>>
7197 To add such a trigger to the root session daemon:
7199 . **If there's no currently running LTTng root session daemon**, start
7204 # lttng-sessiond --daemonize
7207 . <<creating-destroying-tracing-sessions,Create a recording session>>
7209 <<enabling-disabling-events,create a recording event rule>> matching
7210 all the system call events:
7214 # lttng create pitou
7215 # lttng enable-event --kernel --syscall --all
7218 . Add the trigger to the root session daemon:
7222 # lttng add-trigger --condition=event-rule-matches \
7223 --type=syscall --name='exec*' \
7224 --filter='filename == "*/ls"' \
7225 --action=start-session pitou
7228 Confirm that the trigger exists with the man:lttng-list-triggers(1)
7233 # lttng list-triggers
7236 . Make sure the `pitou` recording session is still inactive (stopped):
7243 The first line should be something like:
7246 Recording session pitou: [inactive]
7249 Run the cmd:ls program to fire the LTTng trigger above:
7256 At this point, the `pitou` recording session should be active
7257 (started). Confirm this with the man:lttng-list(1) command again:
7264 The first line should now look like:
7267 Recording session pitou: [active]
7270 This line confirms that the LTTng trigger you added fired, therefore
7271 starting the `pitou` recording session.
7274 .[[trigger-event-notif]]Send a notification to a user application when an event rule matches.
7276 This example shows how to add the following trigger to the root
7277 <<lttng-sessiond,session daemon>>:
7280 An event rule matches a Linux kernel tracepoint event named
7281 `sched_switch` and of which the value of the `next_comm` payload
7284 With such an event rule, LTTng emits an event when Linux gives access to
7285 the processor to a process named `bash`.
7288 Send an LTTng notification to a user application.
7290 Moreover, we'll specify a _capture descriptor_ with the
7291 `event-rule-matches` trigger condition so that the user application can
7292 get the value of a specific `sched_switch` event payload field.
7294 First, write and build the user application:
7296 . Create the C{nbsp}source file of the application:
7304 #include <stdbool.h>
7307 #include <lttng/lttng.h>
7310 * Subscribes to notifications, through the notification channel
7311 * `notification_channel`, which match the condition of the trigger
7312 * named `trigger_name`.
7314 * Returns `true` on success.
7316 static bool subscribe(struct lttng_notification_channel *notification_channel,
7317 const char *trigger_name)
7319 const struct lttng_condition *condition = NULL;
7320 struct lttng_triggers *triggers = NULL;
7321 unsigned int trigger_count;
7323 enum lttng_error_code error_code;
7324 enum lttng_trigger_status trigger_status;
7327 /* Get all LTTng triggers */
7328 error_code = lttng_list_triggers(&triggers);
7329 assert(error_code == LTTNG_OK);
7331 /* Get the number of triggers */
7332 trigger_status = lttng_triggers_get_count(triggers, &trigger_count);
7333 assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);
7335 /* Find the trigger named `trigger_name` */
7336 for (i = 0; i < trigger_count; i++) {
7337 const struct lttng_trigger *trigger;
7338 const char *this_trigger_name;
7340 trigger = lttng_triggers_get_at_index(triggers, i);
7341 trigger_status = lttng_trigger_get_name(trigger, &this_trigger_name);
7342 assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);
7344 if (strcmp(this_trigger_name, trigger_name) == 0) {
7345 /* Trigger found: subscribe with its condition */
7346 enum lttng_notification_channel_status notification_channel_status;
7348 notification_channel_status = lttng_notification_channel_subscribe(
7349 notification_channel,
7350 lttng_trigger_get_const_condition(trigger));
7351 assert(notification_channel_status ==
7352 LTTNG_NOTIFICATION_CHANNEL_STATUS_OK);
7358 lttng_triggers_destroy(triggers);
7363 * Handles the evaluation `evaluation` of a single notification.
7365 static void handle_evaluation(const struct lttng_evaluation *evaluation)
7367 enum lttng_evaluation_status evaluation_status;
7368 const struct lttng_event_field_value *array_field_value;
7369 const struct lttng_event_field_value *string_field_value;
7370 enum lttng_event_field_value_status event_field_value_status;
7371 const char *string_field_string_value;
7373 /* Get the value of the first captured (string) field */
7374 evaluation_status = lttng_evaluation_event_rule_matches_get_captured_values(
7375 evaluation, &array_field_value);
7376 assert(evaluation_status == LTTNG_EVALUATION_STATUS_OK);
7377 event_field_value_status =
7378 lttng_event_field_value_array_get_element_at_index(
7379 array_field_value, 0, &string_field_value);
7380 assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
7381 assert(lttng_event_field_value_get_type(string_field_value) ==
7382 LTTNG_EVENT_FIELD_VALUE_TYPE_STRING);
7383 event_field_value_status = lttng_event_field_value_string_get_value(
7384 string_field_value, &string_field_string_value);
7385 assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
7387 /* Print the string value of the field */
7388 puts(string_field_string_value);
7391 int main(int argc, char *argv[])
7393 int exit_status = EXIT_SUCCESS;
7394 struct lttng_notification_channel *notification_channel;
7395 enum lttng_notification_channel_status notification_channel_status;
7396 const struct lttng_condition *condition;
7397 const char *trigger_name;
7401 trigger_name = argv[1];
7404 * Create a notification channel.
7406 * A notification channel connects the user application to the LTTng session daemon.
7409 * You can use this notification channel to listen to various types of notifications.
7412 notification_channel = lttng_notification_channel_create(
7413 lttng_session_daemon_notification_endpoint);
7414 assert(notification_channel);
7417 * Subscribe to notifications which match the condition of the
7418 * trigger named `trigger_name`.
7420 if (!subscribe(notification_channel, trigger_name)) {
7422 "Error: Failed to subscribe to notifications (trigger `%s`).\n",
7424 exit_status = EXIT_FAILURE;
7429 * Notification loop.
7431 * Put this in a dedicated thread to avoid blocking the main thread.
7434 struct lttng_notification *notification;
7435 enum lttng_notification_channel_status status;
7436 const struct lttng_evaluation *notification_evaluation;
7438 /* Receive the next notification */
7439 status = lttng_notification_channel_get_next_notification(
7440 notification_channel, &notification);
7443 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
7445 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
7447 * The session daemon can drop notifications if a receiving
7448 * application doesn't consume the notifications fast enough.
7452 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
7454 * The session daemon closed the notification channel.
7456 * This is typically caused by a session daemon shutting down.
7461 /* Unhandled conditions or errors */
7462 exit_status = EXIT_FAILURE;
7467 * Handle the condition evaluation.
7469 * A notification provides, amongst other things:
7471 * * The condition that caused LTTng to send this notification.
7473 * * The condition evaluation, which provides more specific
7474 * information on the evaluation of the condition.
7476 handle_evaluation(lttng_notification_get_evaluation(notification));
7478 /* Destroy the notification object */
7479 lttng_notification_destroy(notification);
7483 lttng_notification_channel_destroy(notification_channel);
7489 This application prints the first captured string field value of the
7490 condition evaluation of each LTTng notification it receives.
7492 . Build the `notif-app` application,
7493 using https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
7494 to provide the right compiler and linker flags:
7499 $ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
7503 Now, to add the trigger to the root session daemon:
7506 . **If there's no currently running LTTng root session daemon**, start
7511 # lttng-sessiond --daemonize
7514 . Add the trigger, naming it `sched-switch-notif`, to the root
7519 # lttng add-trigger --name=sched-switch-notif \
7520 --condition=event-rule-matches \
7521 --type=kernel --name=sched_switch \
7522 --filter='next_comm == "bash"' --capture=prev_comm \
7526 Confirm that the `sched-switch-notif` trigger exists with the
7527 man:lttng-list-triggers(1) command:
7531 # lttng list-triggers
7534 Run the cmd:notif-app application, passing the name of the trigger
7535 whose notifications to watch:
7539 # ./notif-app sched-switch-notif
7542 Now, in an interactive Bash, type a few keys to fire the
7543 `sched-switch-notif` trigger. Watch the `notif-app` application print
7544 the previous process names.
7549 === Use the machine interface
7551 With any command of the man:lttng(1) command-line tool, set the
7552 opt:lttng(1):--mi option to `xml` (before the command name) to get an
7553 XML machine interface output, for example:
7557 $ lttng --mi=xml list my-session
7560 A schema definition (XSD) is
7561 https://github.com/lttng/lttng-tools/blob/stable-{revision}/src/common/mi-lttng-4.0.xsd[available]
7562 to ease the integration with external tools as much as possible.
7566 [[metadata-regenerate]]
7567 === Regenerate the metadata of an LTTng trace
7569 An LTTng trace, which is a https://diamon.org/ctf[CTF] trace, has both
7570 data stream files and a metadata stream file. This metadata file
7571 contains, amongst other things, information about the offset of the
7572 clock sources which LTTng uses to assign timestamps to <<event,event
7573 records>> when recording.
7575 If, once a <<tracing-session,recording session>> is
7576 <<basic-tracing-session-control,started>>, a major
7577 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
7578 happens, the clock offset of the trace also needs to be updated. Use
7579 the `metadata` item of the man:lttng-regenerate(1) command to do so.
7581 The main use case of this command is to allow a system to boot with
7582 an incorrect wall time and have LTTng trace it before its wall time
7583 is corrected. Once the system is known to be in a state where its
7584 wall time is correct, you can run `lttng regenerate metadata`.
7586 To regenerate the metadata stream files of the
7587 <<cur-tracing-session,current recording session>>:
7589 * Use the `metadata` item of the man:lttng-regenerate(1) command:
7594 $ lttng regenerate metadata
7600 [[regenerate-statedump]]
7601 === Regenerate the state dump event records of a recording session
7603 The LTTng kernel and user space tracers generate state dump
7604 <<event,event records>> when the application starts or when you
7605 <<basic-tracing-session-control,start a recording session>>.
7607 An analysis can use the state dump event records to set an initial state
7608 before it builds the rest of the state from the subsequent event
7609 records. http://tracecompass.org/[Trace Compass] and
7610 https://github.com/lttng/lttng-analyses[LTTng analyses] are notable
7611 examples of applications which use the state dump of an LTTng trace.
7613 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
7614 state dump event records aren't included in the snapshot trace files
7615 because they were recorded to a <<channel,sub-buffer>> that has been
7616 consumed or <<overwrite-mode,overwritten>> already.
7618 Use the `statedump` item of the man:lttng-regenerate(1) command to emit
7619 and record the state dump events again.
7621 To regenerate the state dump of the <<cur-tracing-session,current
7622 recording session>>, provided you created it in <<snapshot-mode,snapshot
7623 mode>>, before you take a snapshot:
7625 . Use the `statedump` item of the man:lttng-regenerate(1) command:
7630 $ lttng regenerate statedump
7634 . <<basic-tracing-session-control,Stop the recording session>>:
7643 . <<taking-a-snapshot,Take a snapshot>>:
7648 $ lttng snapshot record --name=my-snapshot
7652 Depending on the event throughput, you should run steps{nbsp}1
7653 and{nbsp}2 as closely together as possible.
7657 To record the state dump events, you need to
7658 <<enabling-disabling-events,create recording event rules>> which enable them:
7661 * The names of LTTng-UST state dump tracepoints start with
7662 `lttng_ust_statedump:`.
7664 * The names of LTTng-modules state dump tracepoints start with
7670 [[persistent-memory-file-systems]]
7671 === Record trace data on persistent memory file systems
7673 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile
7674 random-access memory] (NVRAM) is random-access memory that retains its
7675 information when power is turned off (non-volatile). Systems with such
7676 memory can store data structures in RAM and retrieve them after a
7677 reboot, without flushing to typical _storage_.
7679 Linux supports NVRAM file systems thanks to either
7680 http://pramfs.sourceforge.net/[PRAMFS] or
7681 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
7682 (requires Linux{nbsp}4.1+).
7684 This section doesn't describe how to operate such file systems; we
7685 assume that you have a working persistent memory file system.
7687 When you <<creating-destroying-tracing-sessions,create a recording
7688 session>>, you can specify the path of the shared memory holding the
7689 sub-buffers. If you specify a location on an NVRAM file system, then you
7690 can retrieve the latest recorded trace data when the system reboots after a crash.
7693 To record trace data on a persistent memory file system and retrieve the
7694 trace data after a system crash:
7696 . Create a recording session with a <<channel,sub-buffer>> shared memory
7697 path located on an NVRAM file system:
7702 $ lttng create my-session --shm-path=/path/to/shm/on/nvram
7706 . Configure the recording session as usual with the man:lttng(1)
7707 command-line tool, and <<basic-tracing-session-control,start
7710 . After a system crash, use the man:lttng-crash(1) command-line tool to
7711 read the trace data recorded on the NVRAM file system:
7716 $ lttng-crash /path/to/shm/on/nvram
7720 The binary layout of the ring buffer files isn't exactly the same as
7721 the layout of the trace files. This is why you need to use man:lttng-crash(1)
7722 instead of a standard LTTng trace reader.
7724 To convert the ring buffer files to LTTng trace files:
7726 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
7731 $ lttng-crash --extract=/path/to/trace /path/to/shm/on/nvram
7737 [[notif-trigger-api]]
7738 === Get notified when the buffer usage of a channel is too high or too low
7740 With the notification and <<trigger,trigger>> C{nbsp}API of
7741 <<liblttng-ctl-lttng,`liblttng-ctl`>>, LTTng can notify your user
7742 application when the buffer usage of one or more <<channel,channels>>
7743 becomes too low or too high.
7745 Use this API and enable or disable <<event,recording event rules>> while
7746 a recording session <<basic-tracing-session-control,is active>> to avoid
7747 <<channel-overwrite-mode-vs-discard-mode,discarded event records>>, for example.
7750 .Send a notification to a user application when the buffer usage of an LTTng channel is too high.
7752 In this example, we create and build an application which gets notified
7753 when the buffer usage of a specific LTTng channel is higher than
7756 In this example, we only print a message when this happens, but we could
7757 also use the `liblttng-ctl` C{nbsp}API to <<enabling-disabling-events,disable
7758 recording event rules>>, for example.
7760 . Create the C{nbsp}source file of the application:
7769 #include <lttng/lttng.h>
7771 int main(int argc, char *argv[])
7773 int exit_status = EXIT_SUCCESS;
7774 struct lttng_notification_channel *notification_channel;
7775 struct lttng_condition *condition;
7776 struct lttng_action *action;
7777 struct lttng_trigger *trigger;
7778 const char *recording_session_name;
7779 const char *channel_name;
7782 recording_session_name = argv[1];
7783 channel_name = argv[2];
7786 * Create a notification channel.
7788 * A notification channel connects the user application to the LTTng session daemon.
7791 * You can use this notification channel to listen to various types of notifications.
7794 notification_channel = lttng_notification_channel_create(
7795 lttng_session_daemon_notification_endpoint);
7798 * Create a "buffer usage becomes greater than" condition.
7800 * In this case, the condition is satisfied when the buffer usage
7801 * becomes greater than or equal to 75 %.
7803 * We create the condition for a specific recording session name,
7804 * channel name, and for the user space tracing domain.
7806 * The following condition types also exist:
7808 * * The buffer usage of a channel becomes less than a given value.
7810 * * The consumed data size of a recording session becomes greater
7811 * than a given value.
7813 * * A recording session rotation becomes ongoing.
7815 * * A recording session rotation becomes completed.
7817 * * A given event rule matches an event.
7819 condition = lttng_condition_buffer_usage_high_create();
7820 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
7821 lttng_condition_buffer_usage_set_session_name(condition,
7822 recording_session_name);
7823 lttng_condition_buffer_usage_set_channel_name(condition,
7825 lttng_condition_buffer_usage_set_domain_type(condition,
7829 * Create an action (receive a notification) to execute when the
7830 * condition created above is satisfied.
7832 action = lttng_action_notify_create();
7837 * A trigger associates a condition to an action: LTTng executes
7838 * the action when the condition is satisfied.
7840 trigger = lttng_trigger_create(condition, action);
7842 /* Register the trigger to the LTTng session daemon. */
7843 lttng_register_trigger(trigger);
7846 * Now that we have registered a trigger, LTTng will send a
7847 * notification every time its condition is met through a
7848 * notification channel.
7850 * To receive this notification, we must subscribe to notifications
7851 * which match the same condition.
7853 lttng_notification_channel_subscribe(notification_channel,
7857 * Notification loop.
7859 * Put this in a dedicated thread to avoid blocking the main thread.
7862 struct lttng_notification *notification;
7863 enum lttng_notification_channel_status status;
7864 const struct lttng_evaluation *notification_evaluation;
7865 const struct lttng_condition *notification_condition;
7866 double buffer_usage;
7868 /* Receive the next notification. */
7869 status = lttng_notification_channel_get_next_notification(
7870 notification_channel, &notification);
7873 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
7875 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
7877 * The session daemon can drop notifications if a monitoring
7878 * application isn't consuming the notifications fast enough.
7882 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
7884 * The session daemon closed the notification channel.
7886 * This is typically caused by a session daemon shutting down.
7891 /* Unhandled conditions or errors. */
7892 exit_status = EXIT_FAILURE;
7897 * A notification provides, amongst other things:
7899 * * The condition that caused LTTng to send this notification.
7901 * * The condition evaluation, which provides more specific
7902 * information on the evaluation of the condition.
7904 * The condition evaluation provides the buffer usage
7905 * value at the moment the condition was satisfied.
7907 notification_condition = lttng_notification_get_condition(
7909 notification_evaluation = lttng_notification_get_evaluation(
7912 /* We're subscribed to only one condition. */
7913 assert(lttng_condition_get_type(notification_condition) ==
7914 LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);
7917 * Get the exact sampled buffer usage from the condition evaluation.
7920 lttng_evaluation_buffer_usage_get_usage_ratio(
7921 notification_evaluation, &buffer_usage);
7924 * At this point, instead of printing a message, we could do
7925 * something to reduce the buffer usage of the channel, like
7926 * disable specific events, for example.
7928 printf("Buffer usage is %f %% in recording session \"%s\", "
7929 "user space channel \"%s\".\n", buffer_usage * 100,
7930 recording_session_name, channel_name);
7932 /* Destroy the notification object. */
7933 lttng_notification_destroy(notification);
7937 lttng_action_destroy(action);
7938 lttng_condition_destroy(condition);
7939 lttng_trigger_destroy(trigger);
7940 lttng_notification_channel_destroy(notification_channel);
7946 . Build the `notif-app` application, linking it with `liblttng-ctl`:
7951 $ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
7955 . <<creating-destroying-tracing-sessions,Create a recording session>>,
7956 <<enabling-disabling-events,create a recording event rule>> matching
7957 all the user space tracepoint events, and
7958 <<basic-tracing-session-control,start recording>>:
7963 $ lttng create my-session
7964 $ lttng enable-event --userspace --all
7969 If you create the channel manually with the man:lttng-enable-channel(1)
7970 command, you can set its <<channel-monitor-timer,monitor timer>> to
7971 control how frequently LTTng samples the current values of the channel
7972 properties to evaluate user conditions.
7974 . Run the `notif-app` application.
7976 This program accepts the <<tracing-session,recording session>> and
7977 user space channel names as its first two arguments. The channel
7978 which LTTng automatically creates with the man:lttng-enable-event(1)
7979 command above is named `channel0`:
7984 $ ./notif-app my-session channel0
7988 . In another terminal, run an application with a very high event
7989 throughput so that the 75{nbsp}% buffer usage condition is reached.
7991 In the first terminal, the application should print lines like this:
7994 Buffer usage is 81.45197 % in recording session "my-session", user space
7998 If you don't see anything, try to make the threshold of the condition in
7999 path:{notif-app.c} lower (0.1{nbsp}%, for example), and then rebuild the
8000 `notif-app` application (step{nbsp}2) and run it again (step{nbsp}4).
8007 [[lttng-modules-ref]]
8008 === noch:{LTTng-modules}
8012 [[lttng-tracepoint-enum]]
8013 ==== `LTTNG_TRACEPOINT_ENUM()` usage
8015 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
8019 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
8024 * `name` with the name of the enumeration (C identifier, unique
8025 amongst all the defined enumerations).
8026 * `entries` with a list of enumeration entries.
8028 The available enumeration entry macros are:
8030 +ctf_enum_value(__name__, __value__)+::
8031 Entry named +__name__+ mapped to the integral value +__value__+.
8033 +ctf_enum_range(__name__, __begin__, __end__)+::
8034 Entry named +__name__+ mapped to the range of integral values between
8035 +__begin__+ (included) and +__end__+ (included).
8037 +ctf_enum_auto(__name__)+::
8038 Entry named +__name__+ mapped to the integral value following the
8041 The last value of a `ctf_enum_value()` entry is its +__value__+ parameter.
8044 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
8046 If `ctf_enum_auto()` is the first entry in the list, its integral value is{nbsp}0.
8049 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
8050 to use a defined enumeration as a tracepoint field.
8052 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
8056 LTTNG_TRACEPOINT_ENUM(
8059 ctf_enum_auto("AUTO: EXPECT 0")
8060 ctf_enum_value("VALUE: 23", 23)
8061 ctf_enum_value("VALUE: 27", 27)
8062 ctf_enum_auto("AUTO: EXPECT 28")
8063 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
8064 ctf_enum_auto("AUTO: EXPECT 304")
8072 [[lttng-modules-tp-fields]]
8073 ==== Tracepoint fields macros (for `TP_FIELDS()`)
8075 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
8076 tracepoint fields, which must be listed within `TP_FIELDS()` in
8077 `LTTNG_TRACEPOINT_EVENT()`, are:
8079 [role="func-desc growable",cols="asciidoc,asciidoc"]
8080 .Available macros to define LTTng-modules tracepoint fields
8082 |Macro |Description and parameters
8085 +ctf_integer(__t__, __n__, __e__)+
8087 +ctf_integer_nowrite(__t__, __n__, __e__)+
8089 +ctf_user_integer(__t__, __n__, __e__)+
8091 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
8093 Standard integer, displayed in base{nbsp}10.
8096 Integer C type (`int`, `long`, `size_t`, ...).
8102 Argument expression.
8105 +ctf_integer_hex(__t__, __n__, __e__)+
8107 +ctf_user_integer_hex(__t__, __n__, __e__)+
8109 Standard integer, displayed in base{nbsp}16.
8118 Argument expression.
8120 |+ctf_integer_oct(__t__, __n__, __e__)+
8122 Standard integer, displayed in base{nbsp}8.
8131 Argument expression.
8134 +ctf_integer_network(__t__, __n__, __e__)+
8136 +ctf_user_integer_network(__t__, __n__, __e__)+
8138 Integer in network byte order (big-endian), displayed in base{nbsp}10.
8147 Argument expression.
8150 +ctf_integer_network_hex(__t__, __n__, __e__)+
8152 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
8154 Integer in network byte order, displayed in base{nbsp}16.
8163 Argument expression.
8166 +ctf_enum(__N__, __t__, __n__, __e__)+
8168 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
8170 +ctf_user_enum(__N__, __t__, __n__, __e__)+
8172 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
8177 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
8180 Integer C type (`int`, `long`, `size_t`, ...).
8186 Argument expression.
8189 +ctf_string(__n__, __e__)+
8191 +ctf_string_nowrite(__n__, __e__)+
8193 +ctf_user_string(__n__, __e__)+
8195 +ctf_user_string_nowrite(__n__, __e__)+
8197 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
8203 Argument expression.
8206 +ctf_array(__t__, __n__, __e__, __s__)+
8208 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
8210 +ctf_user_array(__t__, __n__, __e__, __s__)+
8212 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
8214 Statically-sized array of integers.
8217 Array element C type.
8223 Argument expression.
8229 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
8231 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
8233 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
8235 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
8237 Statically-sized array of bits.
8239 The type of +__e__+ must be an integer type. +__s__+ is the number
8240 of elements of such type in +__e__+, not the number of bits.
8243 Array element C type.
8249 Argument expression.
8255 +ctf_array_text(__t__, __n__, __e__, __s__)+
8257 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
8259 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
8261 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
8263 Statically-sized array, printed as text.
8265 The string doesn't need to be null-terminated.
8268 Array element C type (always `char`).
8274 Argument expression.
8280 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
8282 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
8284 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
8286 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
8288 Dynamically-sized array of integers.
8290 The type of +__E__+ must be unsigned.
8293 Array element C type.
8299 Argument expression.
8302 Length expression C type.
8308 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
8310 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
8312 Dynamically-sized array of integers, displayed in base{nbsp}16.
8314 The type of +__E__+ must be unsigned.
8317 Array element C type.
8323 Argument expression.
8326 Length expression C type.
8331 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
8333 Dynamically-sized array of integers in network byte order (big-endian),
8334 displayed in base{nbsp}10.
8336 The type of +__E__+ must be unsigned.
8339 Array element C type.
8345 Argument expression.
8348 Length expression C type.
8354 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
8356 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
8358 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
8360 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
8362 Dynamically-sized array of bits.
8364 The type of +__e__+ must be an integer type. +__E__+ is the number
8365 of elements of such type in +__e__+, not the number of bits.
8367 The type of +__E__+ must be unsigned.
8370 Array element C type.
8376 Argument expression.
8379 Length expression C type.
8385 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
8387 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
8389 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
8391 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
8393 Dynamically-sized array, displayed as text.
8395 The string doesn't need to be null-terminated.
8397 The type of +__E__+ must be unsigned.
8399 The behavior is undefined if +__e__+ is `NULL`.
8402 Sequence element C type (always `char`).
8408 Argument expression.
8411 Length expression C type.
Use the `_user` versions when the argument expression, `e`, is
a user space address. In the cases of `ctf_user_integer*()` and
`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
be addressable.
The `_nowrite` versions omit themselves from the trace data, but are
otherwise identical. This means LTTng won't write the `_nowrite` fields
to the recorded trace. Their primary purpose is to make some of the
event context available to the <<enabling-disabling-events,recording
event rule filters>> without having to commit the data to
<<channel,sub-buffers>>.
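As an illustration of the sequence and `_nowrite` macros above, here's a
minimal tracepoint provider sketch. It assumes LTTng-UST is installed;
the provider, event, and field names (`my_provider`, `my_event`,
`message`, `internal_id`) are made up for the example:

```c
/* tp.h -- hypothetical tracepoint provider header (requires LTTng-UST). */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_provider

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    my_provider,
    my_event,
    TP_ARGS(
        const char *, msg,
        size_t, msg_len,
        int, internal_id
    ),
    TP_FIELDS(
        /* Dynamically-sized text field: `msg` doesn't need to be
         * null-terminated; `msg_len` is its length expression. */
        ctf_sequence_text(char, message, msg, size_t, msg_len)

        /* Available to recording event rule filters, but never
         * written to the recorded trace. */
        ctf_integer_nowrite(int, internal_id, internal_id)
    )
)

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>
```

An application would then call
`tracepoint(my_provider, my_event, buf, len, id)`; a recording event
rule could still filter on `internal_id` (for example,
`--filter='internal_id > 42'`) even though that field is absent from
the trace.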
Terms related to LTTng and to tracing in general:

[[def-action]]action::
The part of a <<def-trigger,trigger>> which LTTng executes when the
trigger <<def-condition,condition>> is satisfied.
[[def-babeltrace]]Babeltrace::
The https://diamon.org/babeltrace[Babeltrace] project, which includes:
+
* The
https://babeltrace.org/docs/v2.0/man1/babeltrace2.1/[cmd:babeltrace2]
command-line interface.
* The libbabeltrace2 library which offers a
https://babeltrace.org/docs/v2.0/libbabeltrace2/[C API].
* https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings].

[[def-buffering-scheme]]<<channel-buffering-schemes,buffering scheme>>::
A layout of <<def-sub-buffer,sub-buffers>> applied to a given channel.
[[def-channel]]<<channel,channel>>::
An entity which is responsible for a set of
<<def-ring-buffer,ring buffers>>.
+
<<def-recording-event-rule,Recording event rules>> are always attached
to a specific channel.

[[def-clock]]clock::
A source of time for a <<def-tracer,tracer>>.

[[def-condition]]condition::
The part of a <<def-trigger,trigger>> which must be satisfied for
LTTng to attempt to execute the trigger <<def-action,actions>>.

[[def-consumer-daemon]]<<lttng-consumerd,consumer daemon>>::
A program which is responsible for consuming the full
<<def-sub-buffer,sub-buffers>> and writing them to a file system or
sending them over the network.
[[def-current-trace-chunk]]current trace chunk::
A <<def-trace-chunk,trace chunk>> which includes the current content
of all the <<def-sub-buffer,sub-buffers>> of the
<<def-tracing-session,recording session>> and the stream files
produced since the latest event amongst:
+
* The creation of the recording session.
* The last <<def-tracing-session-rotation,recording session rotation>>,
if any.

<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
The <<def-event-record-loss-mode,event record loss mode>> in which
the <<def-tracer,tracer>> _discards_ new <<def-event-record,event
records>> when there's no <<def-sub-buffer,sub-buffer>> space left to
store them.
[[def-event]]event::
The execution of an <<def-instrumentation-point,instrumentation
point>>, like a <<def-tracepoint,tracepoint>> that you manually place
in some source code, or a Linux kprobe.
+
When an instrumentation point is executed, LTTng creates an event.
+
When an <<def-event-rule,event rule>> matches the event,
<<def-lttng,LTTng>> executes some action, for example:
+
* Record its payload to a <<def-sub-buffer,sub-buffer>> as an
<<def-event-record,event record>>.
* Attempt to execute the user-defined actions of a
<<def-trigger,trigger>> with an
<<add-event-rule-matches-trigger,``event rule matches''>> condition.

[[def-event-name]]event name::
The name of an <<def-event,event>>, which is also the name of the
<<def-event-record,event record>>.
+
This is also called the _instrumentation point name_.

[[def-event-record]]event record::
A record (binary serialization), in a <<def-trace,trace>>, of the
payload of an <<def-event,event>>.
+
The payload of an event record has zero or more _fields_.

[[def-event-record-loss-mode]]<<channel-overwrite-mode-vs-discard-mode,event record loss mode>>::
The mechanism by which event records of a given
<<def-channel,channel>> are lost (not recorded) when there's no
<<def-sub-buffer,sub-buffer>> space left to store them.
[[def-event-rule]]<<event-rule,event rule>>::
Set of conditions which an <<def-event,event>> must satisfy
for LTTng to execute some action.
+
An event rule is said to _match_ events, like a
https://en.wikipedia.org/wiki/Regular_expression[regular expression]
matches strings.
+
A <<def-recording-event-rule,recording event rule>> is a specific type
of event rule of which the action is to <<def-record,record>> the event
to a <<def-sub-buffer,sub-buffer>>.
[[def-incl-set]]inclusion set::
In the <<pid-tracking,process attribute inclusion set>> context: a
set of <<def-proc-attr,process attributes>> of a given type.

<<instrumenting,instrumentation>>::
The use of <<def-lttng,LTTng>> probes to make a kernel or
<<def-user-application,user application>> traceable.

[[def-instrumentation-point]]instrumentation point::
A point in the execution path of a kernel or
<<def-user-application,user application>> which, when executed,
creates an <<def-event,event>>.

instrumentation point name::
See _<<def-event-name,event name>>_.
`java.util.logging`::
The
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities]
of the Java platform.

log4j::
A https://logging.apache.org/log4j/1.2/[logging library] for Java
developed by the Apache Software Foundation.

log level::
Level of severity of a log statement or user space
<<def-instrumentation-point,instrumentation point>>.

[[def-lttng]]LTTng::
The _Linux Trace Toolkit: next generation_ project.
<<lttng-cli,cmd:lttng>>::
A command-line tool provided by the <<def-lttng-tools,LTTng-tools>>
project which you can use to send and receive control messages to and
from a <<def-session-daemon,session daemon>>.

LTTng analyses::
The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
a set of analysis programs that you can use to obtain a higher-level
view of an <<def-lttng,LTTng>> <<def-trace,trace>>.

cmd:lttng-consumerd::
The name of the <<def-consumer-daemon,consumer daemon>> program.

cmd:lttng-crash::
A utility provided by the <<def-lttng-tools,LTTng-tools>> project
which can convert <<def-ring-buffer,ring buffer>> files (usually
<<persistent-memory-file-systems,saved on a persistent memory file
system>>) to <<def-trace,trace>> files.
+
See man:lttng-crash(1).
LTTng Documentation::
This document.
<<lttng-live,LTTng live>>::
A communication protocol between the <<lttng-relayd,relay daemon>> and
live readers which makes it possible to show or analyze
<<def-event-record,event records>> ``live'', as they're received by
the <<def-relay-daemon,relay daemon>>.

<<lttng-modules,LTTng-modules>>::
The https://github.com/lttng/lttng-modules[LTTng-modules] project,
which contains the Linux kernel modules to make the Linux kernel
<<def-instrumentation-point,instrumentation points>> available for
<<def-lttng,LTTng>> tracing.

cmd:lttng-relayd::
The name of the <<def-relay-daemon,relay daemon>> program.

cmd:lttng-sessiond::
The name of the <<def-session-daemon,session daemon>> program.
[[def-lttng-tools]]LTTng-tools::
The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
contains the various programs and libraries used to
<<controlling-tracing,control tracing>>.

[[def-lttng-ust]]<<lttng-ust,LTTng-UST>>::
The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
contains libraries to instrument
<<def-user-application,user applications>>.

<<lttng-ust-agents,LTTng-UST Java agent>>::
A Java package provided by the <<def-lttng-ust,LTTng-UST>> project to
allow the LTTng instrumentation of `java.util.logging` and Apache
log4j{nbsp}1.2 logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
A Python package provided by the <<def-lttng-ust,LTTng-UST>> project
to allow the <<def-lttng,LTTng>> instrumentation of Python logging
statements.
<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
The <<def-event-record-loss-mode,event record loss mode>> in which new
<<def-event-record,event records>> _overwrite_ older event records
when there's no <<def-sub-buffer,sub-buffer>> space left to store
them.

<<channel-buffering-schemes,per-process buffering>>::
A <<def-buffering-scheme,buffering scheme>> in which each instrumented
process has its own <<def-sub-buffer,sub-buffers>> for a given user
space <<def-channel,channel>>.

<<channel-buffering-schemes,per-user buffering>>::
A <<def-buffering-scheme,buffering scheme>> in which all the processes
of a Unix user share the same <<def-sub-buffer,sub-buffers>> for a
given user space <<def-channel,channel>>.
[[def-proc-attr]]process attribute::
In the <<pid-tracking,process attribute inclusion set>> context:
+
* A process ID.
* A virtual process ID.
* A Unix user ID.
* A virtual Unix user ID.
* A Unix group ID.
* A virtual Unix group ID.

record (_noun_)::
See <<def-event-record,_event record_>>.

[[def-record]]record (_verb_)::
Serialize the binary payload of an <<def-event,event>> to a
<<def-sub-buffer,sub-buffer>>.
[[def-recording-event-rule]]<<event,recording event rule>>::
Specific type of <<def-event-rule,event rule>> of which the action is
to <<def-record,record>> the matched event to a
<<def-sub-buffer,sub-buffer>>.

[[def-tracing-session]][[def-recording-session]]<<tracing-session,recording session>>::
A stateful dialogue between you and a <<lttng-sessiond,session daemon>>.

[[def-tracing-session-rotation]]<<session-rotation,recording session rotation>>::
The action of archiving the
<<def-current-trace-chunk,current trace chunk>> of a
<<def-tracing-session,recording session>>.

[[def-relay-daemon]]<<lttng-relayd,relay daemon>>::
A process which is responsible for receiving the <<def-trace,trace>>
data which a distant <<def-consumer-daemon,consumer daemon>> sends.

[[def-ring-buffer]]ring buffer::
A set of <<def-sub-buffer,sub-buffers>>.

rotation::
See _<<def-tracing-session-rotation,recording session rotation>>_.
[[def-session-daemon]]<<lttng-sessiond,session daemon>>::
A process which receives control commands from you and orchestrates
the <<def-tracer,tracers>> and various <<def-lttng,LTTng>> daemons.

<<taking-a-snapshot,snapshot>>::
A copy of the current data of all the <<def-sub-buffer,sub-buffers>>
of a given <<def-tracing-session,recording session>>, saved as
<<def-trace,trace>> files.

[[def-sub-buffer]]sub-buffer::
One part of an <<def-lttng,LTTng>> <<def-ring-buffer,ring buffer>>
which contains <<def-event-record,event records>>.

[[def-timestamp]]timestamp::
The time information attached to an <<def-event,event>> when LTTng
emits it.
[[def-trace]]trace (_noun_)::
A set of:
+
* One https://diamon.org/ctf/[CTF] metadata stream file.
* One or more CTF data stream files which are the concatenations of one
or more flushed <<def-sub-buffer,sub-buffers>>.

[[def-trace-verb]]trace (_verb_)::
From the perspective of a <<def-tracer,tracer>>: attempt to execute
one or more actions when emitting an <<def-event,event>> in an
application or in a system.

[[def-trace-chunk]]trace chunk::
A self-contained <<def-trace,trace>> which is part of a
<<def-tracing-session,recording session>>. Each
<<def-tracing-session-rotation,recording session rotation>> produces a
<<def-trace-chunk-archive,trace chunk archive>>.
[[def-trace-chunk-archive]]trace chunk archive::
The result of a <<def-tracing-session-rotation,recording session
rotation>>.
+
<<def-lttng,LTTng>> doesn't manage any trace chunk archive, even if its
containing <<def-tracing-session,recording session>> is still active: you
are free to read it, modify it, move it, or remove it.

Trace Compass::
The http://tracecompass.org[Trace Compass] project and application.

[[def-tracepoint]]tracepoint::
An instrumentation point using the tracepoint mechanism of the Linux
kernel or of <<def-lttng-ust,LTTng-UST>>.

tracepoint definition::
The definition of a single <<def-tracepoint,tracepoint>>.

tracepoint name::
The name of a <<def-tracepoint,tracepoint>>.
[[def-tracepoint-provider]]tracepoint provider::
A set of functions providing <<def-tracepoint,tracepoints>> to an
instrumented <<def-user-application,user application>>.
+
Not to be confused with a <<def-tracepoint-provider-package,tracepoint
provider package>>: many tracepoint providers can exist within a
tracepoint provider package.

[[def-tracepoint-provider-package]]tracepoint provider package::
One or more <<def-tracepoint-provider,tracepoint providers>> compiled
as an https://en.wikipedia.org/wiki/Object_file[object file] or as a
link:https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries[shared
library].

[[def-tracer]]tracer::
A piece of software which executes some action when it emits
an <<def-event,event>>, like <<def-record,recording>> it to a
buffer.
<<domain,tracing domain>>::
A type of LTTng <<def-tracer,tracer>>.

<<tracing-group,tracing group>>::
The Unix group which a Unix user can be part of to be allowed to
control the Linux kernel LTTng <<def-tracer,tracer>>.

[[def-trigger]]<<trigger,trigger>>::
A <<def-condition,condition>>-<<def-action,actions>> pair; when the
condition of a trigger is satisfied, LTTng attempts to execute its
actions.

[[def-user-application]]user application::
An application (program or library) running in user space, as opposed
to a Linux kernel module, for example.