1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
7 include::../common/copyright.txt[]
10 include::../common/welcome.txt[]
13 include::../common/audience.txt[]
17 === What's in this documentation?
19 The LTTng Documentation is divided into the following sections:
21 * ``**<<nuts-and-bolts,Nuts and bolts>>**'' explains the
22 rudiments of software tracing and the rationale behind the
25 Skip this section if you’re familiar with software tracing and with the
28 * ``**<<installing-lttng,Installation>>**'' describes the steps to
29 install the LTTng packages on common Linux distributions and from
32 Skip this section if you already properly installed LTTng on your target
35 * ``**<<getting-started,Quick start>>**'' is a concise guide to
36 get started quickly with LTTng kernel and user space tracing.
38 We recommend this section if you're new to LTTng or to software tracing
41 Skip this section if you're not new to LTTng.
43 * ``**<<core-concepts,Core concepts>>**'' explains the concepts at
46 It's a good idea to become familiar with the core concepts
47 before attempting to use the toolkit.
49 * ``**<<plumbing,Components of LTTng>>**'' describes the various
50 components of the LTTng machinery, like the daemons, the libraries,
51 and the command-line interface.
53 * ``**<<instrumenting,Instrumentation>>**'' shows different ways to
54 instrument user applications and the Linux kernel for LTTng tracing.
56 Instrumenting source code is essential to provide a meaningful
59 Skip this section if you don't have a programming background.
61 * ``**<<controlling-tracing,Tracing control>>**'' is divided into topics
62 which demonstrate how to use the vast array of features that
63 LTTng{nbsp}{revision} offers.
65 * ``**<<reference,Reference>>**'' contains API reference tables.
67 * ``**<<glossary,Glossary>>**'' is a specialized dictionary of terms
68 related to LTTng or to the field of software tracing.
71 include::../common/convention.txt[]
74 include::../common/acknowledgements.txt[]
78 == What's new in LTTng{nbsp}{revision}?
80 LTTng{nbsp}{revision} bears the name _Nordicité_, the product of a
81 collaboration between https://champlibre.co/[Champ Libre] and
82 https://www.boreale.com/[Boréale]. This farmhouse IPA is brewed with
83 https://en.wikipedia.org/wiki/Kveik[Kveik] yeast and Québec-grown
84 barley, oats, and juniper branches. The result is a remarkable, fruity,
85 hazy golden IPA that offers a balanced touch of resinous and woodsy
88 New features and changes in LTTng{nbsp}{revision}:
92 * The LTTng trigger API of <<liblttng-ctl-lttng,`liblttng-ctl`>> now
93 offers the ``__event rule matches__'' condition (an <<event-rule,event
94 rule>> matches an event) as well as the following new actions:
97 * <<basic-tracing-session-control,Start or stop>> a recording session.
98 * <<session-rotation,Archive the current trace chunk>> of a
99 recording session (rotate).
100 * <<taking-a-snapshot,Take a snapshot>> of a recording session.
103 As a reminder, a <<trigger,trigger>> is a condition-actions pair. When
104 the condition of a trigger is satisfied, LTTng attempts to execute its
107 This feature is also available with the new man:lttng-add-trigger(1),
108 man:lttng-remove-trigger(1), and man:lttng-list-triggers(1)
109 <<lttng-cli,cmd:lttng>> commands.
111 Starting from LTTng{nbsp}{revision}, a trigger may have more than one
114 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
115 trigger to a session daemon>>” to learn more.
117 * The LTTng <<lttng-ust,user space>> and <<lttng-modules,kernel>>
118 tracers offer the new namespace context field `time_ns`, which is the
119 inode number, in the proc file system, of the current clock namespace.
121 See man:lttng-add-context(1), man:lttng-ust(3), and
122 man:time_namespaces(7).
124 * The link:/man[manual pages] of LTTng-tools now have a terminology and
125 style which match the LTTng Documentation, many fixes, more internal
126 and manual page links, clearer lists and procedures, superior
127 consistency, and usage examples.
129 The new man:lttng-event-rule(7) manual page explains the new, common
130 way to specify an event rule on the command line.
132 The new man:lttng-concepts(7) manual page explains the core concepts of
133 LTTng. Its contents is essentially the ``<<core-concepts,Core
134 concepts>>'' section of this documentation, but more adapted to the
141 The major version part of the `liblttng-ust`
142 https://en.wikipedia.org/wiki/Soname[soname] is bumped, which means you
143 **must recompile** your instrumented applications/libraries and
144 <<tracepoint-provider,tracepoint provider packages>> to use
145 LTTng-UST{nbsp}{revision}.
147 This change became a necessity to clean up the library and for
148 `liblttng-ust` to stop exporting private symbols.
150 Also, LTTng{nbsp}{revision} prepends the `lttng_ust_` and `LTTNG_UST_`
151 prefix to all public macro/definition/function names to offer a
152 consistent API namespace. The LTTng{nbsp}2.12 API is still available;
153 see the ``Compatibility with previous APIs'' section of
157 Other notable changes:
159 * The `liblttng-ust` C{nbsp}API offers the new man:lttng_ust_vtracef(3)
160 and man:lttng_ust_vtracelog(3) macros which are to
161 man:lttng_ust_tracef(3) and man:lttng_ust_tracelog(3) what
162 man:vprintf(3) is to man:printf(3).
164 * LTTng-UST now only depends on https://liburcu.org/[`liburcu`] at build
165 time, not at run time.
169 * The preferred display base of event record integer fields which
170 contain memory addresses is now hexadecimal instead of decimal.
172 * The `pid` field is removed from `lttng_statedump_file_descriptor`
173 event records and the `file_table_address` field is added.
175 This new field is the address of the `files_struct` structure which
176 contains the file descriptor.
179 ``https://github.com/lttng/lttng-modules/commit/e7a0ca7205fd4be7c829d171baa8823fe4784c90[statedump: introduce `file_table_address`]''
182 * The `flags` field of `syscall_entry_clone` event records is now a
183 structure containing two enumerations (exit signal and options).
185 This change makes the flag values more readable and meaningful.
188 ``https://github.com/lttng/lttng-modules/commit/d775625e2ba4825b73b5897e7701ad6e2bdba115[syscalls: Make `clone()`'s `flags` field a 2 enum struct]''
191 * The memory footprint of the kernel tracer is improved: the latter only
192 generates metadata for the specific system call recording event rules
193 that you <<enabling-disabling-events,create>>.
199 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
200 generation_ is a modern toolkit for tracing Linux systems and
201 applications. So your first question might be:
208 As the history of software engineering progressed and led to what
209 we now take for granted--complex, numerous and
210 interdependent software applications running in parallel on
211 sophisticated operating systems like Linux--the authors of such
212 components, software developers, began feeling a natural
213 urge to have tools that would ensure the robustness and good performance
214 of their masterpieces.
216 One major achievement in this field is, inarguably, the
217 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
218 an essential tool for developers to find and fix bugs. But even the best
219 debugger won't help make your software run faster, and nowadays, faster
220 software means either more work done by the same hardware, or cheaper
221 hardware for the same work.
223 A _profiler_ is often the tool of choice to identify performance
224 bottlenecks. Profiling is suitable to identify _where_ performance is
225 lost in a given piece of software. The profiler outputs a profile, a
226 statistical summary of observed events, which you may use to discover
227 which functions took the most time to execute. However, a profiler won't
228 report _why_ some identified functions are the bottleneck. Bottlenecks
229 might only occur when specific conditions are met, conditions that are
230 sometimes impossible to capture by a statistical profiler, or impossible
231 to reproduce with an application altered by the overhead of an
232 event-based profiler. For a thorough investigation of software
233 performance issues, a history of execution is essential, with the
234 recorded values of variables and context fields you choose, and with as
235 little influence as possible on the instrumented application. This is
236 where tracing comes in handy.
238 _Tracing_ is a technique used to understand what goes on in a running
239 software system. The piece of software used for tracing is called a
240 _tracer_, which is conceptually similar to a tape recorder. When
241 recording, specific instrumentation points placed in the software source
242 code generate events that are saved on a giant tape: a _trace_ file. You
243 can record user application and operating system events at the same
244 time, opening the possibility of resolving a wide range of problems that
245 would otherwise be extremely challenging.
247 Tracing is often compared to _logging_. However, tracers and loggers are
248 two different tools, serving two different purposes. Tracers are
249 designed to record much lower-level events that occur much more
250 frequently than log messages, often in the range of thousands per
251 second, with very little execution overhead. Logging is more appropriate
252 for a very high-level analysis of less frequent events: user accesses,
253 exceptional conditions (errors and warnings, for example), database
254 transactions, instant messaging communications, and such. Simply put,
255 logging is one of the many use cases that can be satisfied with tracing.
257 The list of recorded events inside a trace file can be read manually
258 like a log file for the maximum level of detail, but it's generally
259 much more interesting to perform application-specific analyses to
260 produce reduced statistics and graphs that are useful to resolve a
261 given problem. Trace viewers and analyzers are specialized tools
264 In the end, this is what LTTng is: a powerful, open source set of
265 tools to trace the Linux kernel and user applications at the same time.
266 LTTng is composed of several components actively maintained and
267 developed by its link:/community/#where[community].
270 [[lttng-alternatives]]
271 === Alternatives to noch:{LTTng}
273 Excluding proprietary solutions, a few competing software tracers
276 https://github.com/dtrace4linux/linux[dtrace4linux]::
277 A port of Sun Microsystems' DTrace to Linux.
279 The cmd:dtrace tool interprets user scripts and is responsible for
280 loading code into the Linux kernel for further execution and collecting
283 https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF]::
284 A subsystem in the Linux kernel in which a virtual machine can
285 execute programs passed from the user space to the kernel.
287 You can attach such programs to tracepoints and kprobes thanks to a
288 system call, and they can output data to the user space when executed
289 thanks to different mechanisms (pipe, VM register values, and eBPF maps,
292 https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]::
293 The de facto function tracer of the Linux kernel.
295 Its user interface is a set of special files in sysfs.
297 https://perf.wiki.kernel.org/[perf]::
298 A performance analysis tool for Linux which supports hardware
299 performance counters, tracepoints, as well as other counters and
302 The controlling utility of perf is the cmd:perf command line/text UI
305 https://linux.die.net/man/1/strace[strace]::
306 A command-line utility which records system calls made by a
307 user process, as well as signal deliveries and changes of process
310 strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace] to
311 fulfill its function.
313 https://www.sysdig.org/[sysdig]::
314 Like SystemTap, uses scripts to analyze Linux kernel events.
316 You write scripts, or _chisels_ in the jargon of sysdig, in Lua and
317 sysdig executes them while it traces the system or afterwards. The
318 interface of sysdig is the cmd:sysdig command-line tool as well as the
319 text UI-based cmd:csysdig tool.
321 https://sourceware.org/systemtap/[SystemTap]::
322 A Linux kernel and user space tracer which uses custom user scripts
323 to produce plain text traces.
325 SystemTap converts the scripts to the C language, and then compiles them
326 as Linux kernel modules which are loaded to produce trace data. The
327 primary user interface of SystemTap is the cmd:stap command-line tool.
329 The main distinctive features of LTTng is that it produces correlated
330 kernel and user space traces, as well as doing so with the lowest
331 overhead amongst other solutions. It produces trace files in the
332 https://diamon.org/ctf[CTF] format, a file format optimized
333 for the production and analyses of multi-gigabyte data.
335 LTTng is the result of more than 10{nbsp}years of active open source
336 development by a community of passionate developers. LTTng is currently
337 available on major desktop and server Linux distributions.
339 The main interface for tracing control is a single command-line tool
340 named cmd:lttng. The latter can create several recording sessions, enable
341 and disable recording event rules on the fly, filter events efficiently
342 with custom user expressions, start and stop tracing, and much more.
343 LTTng can write the traces on the file system or send them over the
344 network, and keep them totally or partially. You can make LTTng execute
345 user-defined actions when LTTng emits an event. You can view the traces
346 once tracing becomes inactive or as LTTng records events.
348 <<installing-lttng,Install LTTng now>> and
349 <<getting-started,start tracing>>!
355 **LTTng** is a set of software <<plumbing,components>> which interact to
356 <<instrumenting,instrument>> the Linux kernel and user applications, and
357 to <<controlling-tracing,control tracing>> (start and stop
358 recording, create recording event rules, and the rest). Those
359 components are bundled into the following packages:
362 Libraries and command-line interface to control tracing.
365 Linux kernel modules to instrument and trace the kernel.
368 Libraries and Java/Python packages to instrument and trace user
371 Most distributions mark the LTTng-modules and LTTng-UST packages as
372 optional when installing LTTng-tools (which is always required). In the
373 following sections, we always provide the steps to install all three,
376 * You only need to install LTTng-modules if you intend to use
377 the Linux kernel LTTng tracer.
379 * You only need to install LTTng-UST if you intend to use the user
384 As of 10{nbsp}June{nbsp}2021, LTTng{nbsp}{revision} is not yet available
385 in any major non-enterprise Linux distribution.
387 For https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
388 packages, see https://packages.efficios.com/[EfficiOS Enterprise
391 For other distributions, <<building-from-source,build LTTng from
396 [[building-from-source]]
397 === Build from source
399 To build and install LTTng{nbsp}{revision} from source:
401 . Using the package manager of your distribution, or from source,
402 install the following dependencies of LTTng-tools and LTTng-UST:
405 * https://sourceforge.net/projects/libuuid/[libuuid]
406 * https://directory.fsf.org/wiki/Popt[popt]
407 * https://liburcu.org/[Userspace RCU]
408 * http://www.xmlsoft.org/[libxml2]
409 * **Optional**: https://github.com/numactl/numactl[numactl]
412 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
418 wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
419 tar -xf lttng-modules-latest-2.13.tar.bz2 &&
420 cd lttng-modules-2.13.* &&
422 sudo make modules_install &&
427 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
433 wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
434 tar -xf lttng-ust-latest-2.13.tar.bz2 &&
435 cd lttng-ust-2.13.* &&
443 Add `--disable-numa` to `./configure` if you don't have
444 https://github.com/numactl/numactl[numactl].
448 .Java and Python application tracing
450 If you need to instrument and have LTTng trace <<java-application,Java
451 applications>>, pass the `--enable-java-agent-jul`,
452 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
453 `configure` script, depending on which Java logging framework you use.
455 If you need to instrument and have LTTng trace
456 <<python-application,Python applications>>, pass the
457 `--enable-python-agent` option to the `configure` script. You can set
458 the env:PYTHON environment variable to the path to the Python interpreter
459 for which to install the LTTng-UST Python agent package.
466 By default, LTTng-UST libraries are installed to
467 dir:{/usr/local/lib}, which is the de facto directory in which to
468 keep self-compiled and third-party libraries.
470 When <<building-tracepoint-providers-and-user-application,linking an
471 instrumented user application with `liblttng-ust`>>:
473 * Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
476 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
477 man:gcc(1), man:g++(1), or man:clang(1).
481 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
487 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
488 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
489 cd lttng-tools-2.13.* &&
497 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
498 previous steps automatically for a given version of LTTng and confine
499 the installed files to a specific directory. This can be useful to try
500 LTTng without installing it on your system.
506 This is a short guide to get started quickly with LTTng kernel and user
509 Before you follow this guide, make sure to <<installing-lttng,install>>
512 This tutorial walks you through the steps to:
514 . <<tracing-the-linux-kernel,Record Linux kernel events>>.
516 . <<tracing-your-own-user-application,Record the events of a user
517 application>> written in C.
519 . <<viewing-and-analyzing-your-traces,View and analyze the
523 [[tracing-the-linux-kernel]]
524 === Record Linux kernel events
526 NOTE: The following command lines start with the `#` prompt because you
527 need root privileges to control the Linux kernel LTTng tracer. You can
528 also control the kernel tracer as a regular user if your Unix user is a
529 member of the <<tracing-group,tracing group>>.
531 . Create a <<tracing-session,recording session>> to write LTTng traces
532 to dir:{/tmp/my-kernel-trace}:
537 # lttng create my-kernel-session --output=/tmp/my-kernel-trace
541 . List the available kernel tracepoints and system calls:
546 # lttng list --kernel
547 # lttng list --kernel --syscall
551 . Create <<event,recording event rules>> which match events having
552 the desired names, for example the `sched_switch` and
553 `sched_process_fork` tracepoints, and the man:open(2) and man:close(2)
559 # lttng enable-event --kernel sched_switch,sched_process_fork
560 # lttng enable-event --kernel --syscall open,close
564 Create a recording event rule which matches _all_ the Linux kernel
565 tracepoint events with the opt:lttng-enable-event(1):--all option
566 (recording with such a recording event rule generates a lot of data):
571 # lttng enable-event --kernel --all
575 . <<basic-tracing-session-control,Start recording>>:
584 . Do some operation on your system for a few seconds. For example,
585 load a website, or list the files of a directory.
587 . <<creating-destroying-tracing-sessions,Destroy>> the current
597 The man:lttng-destroy(1) command doesn't destroy the trace data; it
598 only destroys the state of the recording session.
600 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
601 implicitly (see ``<<basic-tracing-session-control,Start and stop a
602 recording session>>''). You need to stop recording to make LTTng flush
603 the remaining trace data and make the trace readable.
605 . For the sake of this example, make the recorded trace accessible to
611 # chown -R $(whoami) /tmp/my-kernel-trace
615 See ``<<viewing-and-analyzing-your-traces,View and analyze the
616 recorded events>>'' to view the recorded events.
619 [[tracing-your-own-user-application]]
620 === Record user application events
622 This section walks you through a simple example to record the events of
623 a _Hello world_ program written in{nbsp}C.
625 To create the traceable user application:
627 . Create the tracepoint provider header file, which defines the
628 tracepoints and the events they can generate:
634 #undef LTTNG_UST_TRACEPOINT_PROVIDER
635 #define LTTNG_UST_TRACEPOINT_PROVIDER hello_world
637 #undef LTTNG_UST_TRACEPOINT_INCLUDE
638 #define LTTNG_UST_TRACEPOINT_INCLUDE "./hello-tp.h"
640 #if !defined(_HELLO_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
643 #include <lttng/tracepoint.h>
645 LTTNG_UST_TRACEPOINT_EVENT(
650 char *, my_string_arg
653 lttng_ust_field_string(my_string_field, my_string_arg)
654 lttng_ust_field_integer(int, my_integer_field, my_integer_arg)
658 #endif /* _HELLO_TP_H */
660 #include <lttng/tracepoint-event.h>
664 . Create the tracepoint provider package source file:
670 #define LTTNG_UST_TRACEPOINT_CREATE_PROBES
671 #define LTTNG_UST_TRACEPOINT_DEFINE
673 #include "hello-tp.h"
677 . Build the tracepoint provider package:
682 $ gcc -c -I. hello-tp.c
686 . Create the _Hello World_ application source file:
693 #include "hello-tp.h"
695 int main(int argc, char *argv[])
699 puts("Hello, World!\nPress Enter to continue...");
702 * The following getchar() call only exists for the purpose of this
703 * demonstration, to pause the application in order for you to have
704 * time to list its tracepoints. You don't need it otherwise.
709 * An lttng_ust_tracepoint() call.
711 * Arguments, as defined in `hello-tp.h`:
713 * 1. Tracepoint provider name (required)
714 * 2. Tracepoint name (required)
715 * 3. `my_integer_arg` (first user-defined argument)
716 * 4. `my_string_arg` (second user-defined argument)
718 * Notice the tracepoint provider and tracepoint names are
719 * C identifiers, NOT strings: they're in fact parts of variables
720 * that the macros in `hello-tp.h` create.
722 lttng_ust_tracepoint(hello_world, my_first_tracepoint, 23,
725 for (i = 0; i < argc; i++) {
726 lttng_ust_tracepoint(hello_world, my_first_tracepoint,
730 puts("Quitting now!");
731 lttng_ust_tracepoint(hello_world, my_first_tracepoint,
738 . Build the application:
747 . Link the application with the tracepoint provider package,
748 `liblttng-ust` and `libdl`:
753 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
757 Here's the whole build process:
760 .Build steps of the user space tracing tutorial.
761 image::ust-flow.png[]
763 To record the events of the user application:
765 . Run the application with a few arguments:
770 $ ./hello world and beyond
779 Press Enter to continue...
783 . Start an LTTng <<lttng-sessiond,session daemon>>:
788 $ lttng-sessiond --daemonize
792 NOTE: A session daemon might already be running, for example as a
793 service that the service manager of your distribution started.
795 . List the available user space tracepoints:
800 $ lttng list --userspace
804 You see the `hello_world:my_first_tracepoint` tracepoint listed
805 under the `./hello` process.
807 . Create a <<tracing-session,recording session>>:
812 $ lttng create my-user-space-session
816 . Create a <<event,recording event rule>> which matches user space
817 tracepoint events named `hello_world:my_first_tracepoint`:
822 $ lttng enable-event --userspace hello_world:my_first_tracepoint
826 . <<basic-tracing-session-control,Start recording>>:
835 . Go back to the running `hello` application and press **Enter**.
837 The program executes all `lttng_ust_tracepoint()` instrumentation
838 points, emitting events as the event rule you created in step{nbsp}5
842 . <<creating-destroying-tracing-sessions,Destroy>> the current
852 The man:lttng-destroy(1) command doesn't destroy the trace data; it
853 only destroys the state of the recording session.
855 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
856 implicitly (see ``<<basic-tracing-session-control,Start and stop a
857 recording session>>''). You need to stop recording to make LTTng flush
858 the remaining trace data and make the trace readable.
860 By default, LTTng saves the traces to the
861 +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+ directory, where
862 +__NAME__+ is the recording session name. The env:LTTNG_HOME environment
863 variable defaults to `$HOME` if not set.
866 [[viewing-and-analyzing-your-traces]]
867 === View and analyze the recorded events
869 Once you have completed the <<tracing-the-linux-kernel,Record Linux
870 kernel events>> and <<tracing-your-own-user-application,Record user
871 application events>> tutorials, you can inspect the recorded events.
873 There are many tools you can use to read LTTng traces:
875 https://babeltrace.org/[Babeltrace{nbsp}2]::
876 A rich, flexible trace manipulation toolkit which includes
877 a versatile command-line interface
878 (man:babeltrace2(1)),
879 a https://babeltrace.org/docs/v2.0/libbabeltrace2/[C{nbsp}library],
880 and https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
881 so that you can easily process or convert an LTTng trace with
884 The Babeltrace{nbsp}2 project ships with a plugin
885 (man:babeltrace2-plugin-ctf(7)) which supports the format of the traces
886 which LTTng produces, https://diamon.org/ctf/[CTF].
888 http://tracecompass.org/[Trace Compass]::
889 A graphical user interface for viewing and analyzing any type of
890 logs or traces, including those of LTTng.
892 https://github.com/lttng/lttng-analyses[LTTng analyses]::
893 An experimental project which includes many high-level analyses of
894 LTTng kernel traces, like scheduling statistics, interrupt
895 frequency distribution, top CPU usage, and more.
897 NOTE: This section assumes that LTTng wrote the traces it recorded
898 during the previous tutorials to their default location, in the
899 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
900 environment variable defaults to `$HOME` if not set.
903 [[viewing-and-analyzing-your-traces-bt]]
904 ==== Use the cmd:babeltrace2 command-line tool
906 The simplest way to list all the recorded events of an LTTng trace is to
907 pass its path to man:babeltrace2(1), without options:
911 $ babeltrace2 ~/lttng-traces/my-user-space-session*
914 The cmd:babeltrace2 command finds all traces recursively within the
915 given path and prints all their events, sorting them chronologically.
917 Pipe the output of cmd:babeltrace2 into a tool like man:grep(1) for
922 $ babeltrace2 /tmp/my-kernel-trace | grep _switch
925 Pipe the output of cmd:babeltrace2 into a tool like man:wc(1) to count
930 $ babeltrace2 /tmp/my-kernel-trace | grep _open | wc --lines
934 [[viewing-and-analyzing-your-traces-bt-python]]
935 ==== Use the Babeltrace{nbsp}2 Python bindings
937 The <<viewing-and-analyzing-your-traces-bt,text output of
938 cmd:babeltrace2>> is useful to isolate event records by simple matching
939 using man:grep(1) and similar utilities. However, more elaborate
940 filters, such as keeping only event records with a field value falling
941 within a specific range, are not trivial to write using a shell.
942 Moreover, reductions and even the most basic computations involving
943 multiple event records are virtually impossible to implement.
945 Fortunately, Babeltrace{nbsp}2 ships with
946 https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
947 which make it easy to read the event records of an LTTng trace
948 sequentially and compute the desired information.
950 The following script accepts an LTTng Linux kernel trace path as its
951 first argument and prints the short names of the top five running
952 processes on CPU{nbsp}0 during the whole trace:
963 # Get the trace path from the first command-line argument
964 it = bt2.TraceCollectionMessageIterator(sys.argv[1])
966 # This counter dictionary will hold execution times:
968 # Task command name -> Total execution time (ns)
969 exec_times = collections.Counter()
971 # This holds the last `sched_switch` timestamp
975 # We only care about event messages
976 if type(msg) is not bt2._EventMessageConst:
979 # Event of the event message
982 # Keep only `sched_switch` events
983 if event.cls.name != 'sched_switch':
986 # Keep only records of events which LTTng emitted from CPU 0
987 if event.packet.context_field['cpu_id'] != 0:
990 # Event timestamp (ns)
991 cur_ts = msg.default_clock_snapshot.ns_from_origin
997 # (Short) name of the previous task command
998 prev_comm = str(event.payload_field['prev_comm'])
1000 # Initialize an entry in our dictionary if not done yet
1001 if prev_comm not in exec_times:
1002 exec_times[prev_comm] = 0
1004 # Compute previous command execution time
1005 diff = cur_ts - last_ts
1007 # Update execution time of this command
1008 exec_times[prev_comm] += diff
1010 # Update last timestamp
1014 for name, ns in exec_times.most_common(5):
1015 print('{:20}{} s'.format(name, ns / 1e9))
1018 if __name__ == '__main__':
1026 $ python3 top5proc.py /tmp/my-kernel-trace/kernel
1032 swapper/0 48.607245889 s
1033 chromium 7.192738188 s
1034 pavucontrol 0.709894415 s
1035 Compositor 0.660867933 s
1036 Xorg.bin 0.616753786 s
1039 Note that `swapper/0` is the ``idle'' process of CPU{nbsp}0 on Linux;
1040 since we weren't using the CPU that much when recording, its first
1041 position in the list makes sense.
1045 == [[understanding-lttng]]Core concepts
1047 From a user's perspective, the LTTng system is built on a few concepts,
1048 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1049 operates by sending commands to the <<lttng-sessiond,session daemon>>
1050 (through <<liblttng-ctl-lttng,`liblttng-ctl`>>).
1052 Understanding how those objects relate to each other is key to master
1055 The core concepts of LTTng are:
1057 * <<"event-rule","Instrumentation point, event rule, and event">>
1058 * <<trigger,Trigger>>
1059 * <<tracing-session,Recording session>>
1060 * <<domain,Tracing domain>>
1061 * <<channel,Channel and ring buffer>>
1062 * <<event,Recording event rule and event record>>
1064 NOTE: The man:lttng-concepts(7) manual page also documents the core
1065 concepts of LTTng, with more links to other LTTng-tools manual pages.
1069 === Instrumentation point, event rule, and event
1071 An _instrumentation point_ is a point, within a piece of software,
1072 which, when executed, creates an LTTng _event_.
1074 LTTng offers various <<instrumentation-point-types,types of
1077 An _event rule_ is a set of conditions to match a set of events.
1079 When LTTng creates an event{nbsp}__E__, an event rule{nbsp}__ER__ is
1080 said to __match__{nbsp}__E__ when{nbsp}__E__ satisfies _all_ the
1081 conditions of{nbsp}__ER__. This concept is similar to a
1082 https://en.wikipedia.org/wiki/Regular_expression[regular expression]
1083 which matches a set of strings.
1085 When an event rule matches an event, LTTng _emits_ the event, therefore
1086 attempting to execute one or more actions.
1090 [[event-creation-emission-opti]]The event creation and emission
1091 processes are documentation concepts to help understand the journey from
1092 an instrumentation point to the execution of actions.
1094 The actual creation of an event can be costly because LTTng needs to
1095 evaluate the arguments of the instrumentation point.
1097 In practice, LTTng implements various optimizations for the Linux kernel
1098 and user space <<domain,tracing domains>> to avoid actually creating an
1099 event when the tracer knows, thanks to properties which are independent
1100 from the event payload and current context, that it would never emit
1101 such an event. Those properties are:
1103 * The <<instrumentation-point-types,instrumentation point type>>.
1105 * The instrumentation point name.
1107 * The instrumentation point log level.
1109 * For a <<event,recording event rule>>:
1110 ** The status of the rule itself.
1111 ** The status of the <<channel,channel>>.
1112 ** The activity of the <<tracing-session,recording session>>.
1113 ** Whether or not the process for which LTTng would create the event is
1114 <<pid-tracking,allowed to record events>>.
1116 In other words: if, for a given instrumentation point{nbsp}__IP__, the
1117 LTTng tracer knows that it would never emit an event,
1118 executing{nbsp}__IP__ represents a simple boolean variable check and,
1119 for a Linux kernel recording event rule, a few process attribute checks.
1122 As of LTTng{nbsp}{revision}, there are two places where you can find an
1125 <<event,Recording event rule>>::
1126 A specific type of event rule of which the action is to record the
1127 matched event as an event record.
1129 See ``<<enabling-disabling-events,Create and enable a recording event
1130 rule>>'' to learn more.
1132 ``Event rule matches'' <<trigger,trigger>> condition (since LTTng{nbsp}2.13)::
1133 When the event rule of the trigger condition matches an event, LTTng
1134 can execute user-defined actions such as sending an LTTng
1136 <<basic-tracing-session-control,starting a recording session>>,
1139 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
1140 trigger to a session daemon>>” to learn more.
1142 For LTTng to emit an event{nbsp}__E__,{nbsp}__E__ must satisfy _all_ the
1143 basic conditions of an event rule{nbsp}__ER__, that is:
1145 * The instrumentation point from which LTTng
1146 creates{nbsp}__E__ has a specific
1147 <<instrumentation-point-types,type>>.
1149 * A pattern matches the name of{nbsp}__E__ while another pattern
1152 * The log level of the instrumentation point from which LTTng
1153 creates{nbsp}__E__ is at least as severe as some value, or is exactly
1156 * The fields of the payload of{nbsp}__E__ and the current context fields
1157 satisfy a filter expression.
1159 A <<event,recording event rule>> has additional, implicit conditions to
1163 [[instrumentation-point-types]]
1164 ==== Instrumentation point types
1166 As of LTTng{nbsp}{revision}, the available instrumentation point
1167 types are, depending on the <<domain,tracing domain>>:
1171 A statically defined point in the source code of the kernel
1172 image or of a kernel module using the
1173 <<lttng-modules,LTTng-modules>> macros.
1175 Linux kernel system call:::
1176 Entry, exit, or both of a Linux kernel system call.
1178 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kprobe]:::
1179 A single probe dynamically placed in the compiled kernel code.
1181 When you create such an instrumentation point, you set its memory
1182 address or symbol name.
1184 Linux user space probe:::
1185 A single probe dynamically placed at the entry of a compiled
1186 user space application/library function through the kernel.
1188 When you create such an instrumentation point, you set:
1191 With the ELF method::
1192 Its application/library path and its symbol name.
1194 With the USDT method::
1195 Its application/library path, its provider name, and its probe name.
1197 ``USDT'' stands for _SystemTap User-level Statically Defined Tracing_,
1198 a http://dtrace.org/blogs/about/[DTrace]-style marker.
1201 As of LTTng{nbsp}{revision}, LTTng only supports USDT probes which
1202 are _not_ reference-counted.
1204 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kretprobe]:::
1205 Entry, exit, or both of a Linux kernel function.
1207 When you create such an instrumentation point, you set the memory
1208 address or symbol name of its function.
1212 A statically defined point in the source code of a C/$$C++$$
1213 application/library using the
1214 <<lttng-ust,LTTng-UST>> macros.
1216 `java.util.logging`, Apache log4j, and Python::
1217 Java or Python logging statement:::
1218 A method call on a Java or Python logger attached to an
1221 See ``<<list-instrumentation-points,List the available instrumentation
1222 points>>'' to learn how to list available Linux kernel, user space, and
1223 logging instrumentation points.
1229 A _trigger_ associates a condition to one or more actions.
1231 When the condition of a trigger is satisfied, LTTng attempts to execute
1234 As of LTTng{nbsp}{revision}, the available trigger conditions and
1239 * The consumed buffer size of a given <<tracing-session,recording
1240 session>> becomes greater than some value.
1242 * The buffer usage of a given <<channel,channel>> becomes greater than
1245 * The buffer usage of a given channel becomes less than some value.
1247 * There's an ongoing <<session-rotation,recording session rotation>>.
1249 * A recording session rotation becomes completed.
1251 * An <<add-event-rule-matches-trigger,event rule matches>> an event.
1255 * <<trigger-event-notif,Send a notification>> to a user application.
1256 * <<basic-tracing-session-control,Start>> a given recording session.
1257 * <<basic-tracing-session-control,Stop>> a given recording session.
1258 * <<session-rotation,Archive the current trace chunk>> of a given
1259 recording session (rotate).
1260 * <<taking-a-snapshot,Take a snapshot>> of a given recording session.
1262 A trigger belongs to a <<lttng-sessiond,session daemon>>, not to a
1263 specific recording session. For a given session daemon, each Unix user has
1264 its own, private triggers. Note, however, that the `root` Unix user may,
1265 for the root session daemon:
1267 * Add a trigger as another Unix user.
1269 * List all the triggers, regardless of their owner.
1271 * Remove a trigger which belongs to another Unix user.
1273 For a given session daemon and Unix user, a trigger has a unique name.
1277 === Recording session
1279 A _recording session_ (named ``tracing session'' prior to
1280 LTTng{nbsp}2.13) is a stateful dialogue between you and a
1281 <<lttng-sessiond,session daemon>> for everything related to
1282 <<event,event recording>>.
1284 Everything that you do when you control LTTng tracers to record events
1285 happens within a recording session. In particular, a recording session:
1287 * Has its own name, unique for a given session daemon.
1289 * Has its own set of trace files, if any.
1291 * Has its own state of activity (started or stopped).
1293 An active recording session is an implicit <<event,recording event rule>>
1296 * Has its own <<tracing-session-mode,mode>> (local, network streaming,
1299 * Has its own <<channel,channels>> to which are attached their own
1300 recording event rules.
1302 * Has its own <<pid-tracking,process attribute inclusion sets>>.
1305 .A _recording session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,recording event rules>>.
1306 image::concepts.png[]
1308 Those attributes and objects are completely isolated between different
1311 A recording session is like an
1312 https://en.wikipedia.org/wiki/Automated_teller_machine[ATM] session: the
1313 operations you do on the banking system through the ATM don't alter the
1314 data of other users of the same system. In the case of the ATM, a
1315 session lasts as long as your bank card is inside. In the case of LTTng,
1316 a recording session lasts from the man:lttng-create(1) command to the
1317 man:lttng-destroy(1) command.
1320 .Each Unix user has its own set of recording sessions.
1321 image::many-sessions.png[]
1323 A recording session belongs to a <<lttng-sessiond,session daemon>>. For a
1324 given session daemon, each Unix user has its own, private recording
1325 sessions. Note, however, that the `root` Unix user may operate on or
1326 destroy another user's recording session.
1329 [[tracing-session-mode]]
1330 ==== Recording session mode
1332 LTTng offers four recording session modes:
1334 [[local-mode]]Local mode::
1335 Write the trace data to the local file system.
1337 [[net-streaming-mode]]Network streaming mode::
1338 Send the trace data over the network to a listening
1339 <<lttng-relayd,relay daemon>>.
1341 [[snapshot-mode]]Snapshot mode::
1342 Only write the trace data to the local file system or send it to a
1343 listening relay daemon when LTTng <<taking-a-snapshot,takes a
1346 LTTng forces all the <<channel,channels>>
1347 to be created to be configured to be snapshot-ready.
1349 LTTng takes a snapshot of such a recording session when:
1352 * You run the man:lttng-snapshot(1) command.
1354 * LTTng executes a `snapshot-session` <<trigger,trigger>> action.
1357 [[live-mode]]Live mode::
1358 Send the trace data over the network to a listening relay daemon
1359 for <<lttng-live,live reading>>.
1361 An LTTng live reader (for example, man:babeltrace2(1)) can connect to
1362 the same relay daemon to receive trace data while the recording session is
1369 A _tracing domain_ identifies a type of LTTng tracer.
1371 A tracing domain has its own properties and features.
1373 There are currently five available tracing domains:
1377 * `java.util.logging` (JUL)
1381 You must specify a tracing domain to target a type of LTTng tracer when
1382 using some <<lttng-cli,cmd:lttng>> commands to avoid ambiguity. For
1383 example, because the Linux kernel and user space tracing domains support
1384 named tracepoints as <<event-rule,instrumentation points>>, you need to
1385 specify a tracing domain when you <<enabling-disabling-events,create
1386 an event rule>> because both tracing domains could have tracepoints
1387 sharing the same name.
1389 You can create <<channel,channels>> in the Linux kernel and user space
1390 tracing domains. The other tracing domains have a single, default
1395 === Channel and ring buffer
1397 A _channel_ is an object which is responsible for a set of
1400 Each ring buffer is divided into multiple _sub-buffers_. When a
1401 <<event,recording event rule>>
1402 matches an event, LTTng can record it to one or more sub-buffers of one
1405 When you <<enabling-disabling-channels,create a channel>>, you set its
1406 final attributes, that is:
1408 * Its <<channel-buffering-schemes,buffering scheme>>.
1410 * What to do <<channel-overwrite-mode-vs-discard-mode,when there's no
1411 space left>> for a new event record because all sub-buffers are full.
1413 * The <<channel-subbuf-size-vs-subbuf-count,size of each ring buffer and
1414 how many sub-buffers>> a ring buffer has.
1416 * The <<tracefile-rotation,size of each trace file LTTng writes for this
1417 channel and the maximum count>> of trace files.
1419 * The periods of its <<channel-read-timer,read>>,
1420 <<channel-switch-timer,switch>>, and <<channel-monitor-timer,monitor>>
1423 * For a Linux kernel channel: its output type.
1425 See the opt:lttng-enable-channel(1):--output option of the
1426 man:lttng-enable-channel(1) command.
1428 * For a user space channel: the value of its
1429 <<blocking-timeout-example,blocking timeout>>.
1431 A channel is always associated to a <<domain,tracing domain>>. The
1432 `java.util.logging` (JUL), log4j, and Python tracing domains each have a
1433 default channel which you can't configure.
1435 A channel owns <<event,recording event rules>>.
1438 [[channel-buffering-schemes]]
1439 ==== Buffering scheme
1441 A channel has at least one ring buffer _per CPU_. LTTng always records
1442 an event to the ring buffer dedicated to the CPU which emits it.
1444 The buffering scheme of a user space channel determines what has its own
1445 set of per-CPU ring buffers:
1447 Per-user buffering::
1448 Allocate one set of ring buffers--one per CPU--shared by all the
1449 instrumented processes of:
1450 If your Unix user is `root`:::
1455 .Per-user buffering scheme (recording session belongs to the `root` Unix user).
1456 image::per-user-buffering-root.png[]
1464 .Per-user buffering scheme (recording session belongs to the `Bob` Unix user).
1465 image::per-user-buffering.png[]
1468 Per-process buffering::
1469 Allocate one set of ring buffers--one per CPU--for each
1470 instrumented process of:
1471 If your Unix user is `root`:::
1476 .Per-process buffering scheme (recording session belongs to the `root` Unix user).
1477 image::per-process-buffering-root.png[]
1485 .Per-process buffering scheme (recording session belongs to the `Bob` Unix user).
1486 image::per-process-buffering.png[]
1489 The per-process buffering scheme tends to consume more memory than the
1490 per-user option because systems generally have more instrumented
1491 processes than Unix users running instrumented processes. However, the
1492 per-process buffering scheme ensures that one process having a high
1493 event throughput won't fill all the shared sub-buffers of the same Unix
1496 The buffering scheme of a Linux kernel channel is always to allocate a
1497 single set of ring buffers for the whole system. This scheme is similar
1498 to the per-user option, but with a single, global user ``running'' the
1502 [[channel-overwrite-mode-vs-discard-mode]]
1503 ==== Event record loss mode
1505 When LTTng emits an event, LTTng can record it to a specific, available
1506 sub-buffer within the ring buffers of specific channels. When there's no
1507 space left in a sub-buffer, the tracer marks it as consumable and
1508 another, available sub-buffer starts receiving the following event
1509 records. An LTTng <<lttng-consumerd,consumer daemon>> eventually
1510 consumes the marked sub-buffer, which returns to the available state.
1513 [role="docsvg-channel-subbuf-anim"]
1518 In an ideal world, sub-buffers are consumed faster than they're filled.
1519 In the real world, however, all sub-buffers can be full at some point,
1520 leaving no space to record the following events.
1522 In an ideal world, sub-buffers are consumed faster than they're filled,
1523 as it's the case in the previous animation. In the real world,
1524 however, all sub-buffers can be full at some point, leaving no space to
1525 record the following events.
1527 By default, <<lttng-modules,LTTng-modules>> and <<lttng-ust,LTTng-UST>>
1528 are _non-blocking_ tracers: when there's no available sub-buffer to
1529 record an event, it's acceptable to lose event records when the
1530 alternative would be to cause substantial delays in the execution of the
1531 instrumented application. LTTng privileges performance over integrity;
1532 it aims at perturbing the instrumented application as little as possible
1533 in order to make the detection of subtle race conditions and rare
1534 interrupt cascades possible.
1536 Since LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST, supports
1537 a _blocking mode_. See the <<blocking-timeout-example,blocking timeout
1538 example>> to learn how to use the blocking mode.
1540 When it comes to losing event records because there's no available
1541 sub-buffer, or because the blocking timeout of
1542 the channel is reached, the _event record loss mode_ of the channel
1543 determines what to do. The available event record loss modes are:
1545 [[discard-mode]]Discard mode::
1546 Drop the newest event records until a sub-buffer becomes available.
1548 This is the only available mode when you specify a blocking timeout.
1550 With this mode, LTTng increments a count of lost event records when an
1551 event record is lost and saves this count to the trace. A trace reader
1552 can use the saved discarded event record count of the trace to decide
1553 whether or not to perform some analysis even if trace data is known to
1556 [[overwrite-mode]]Overwrite mode::
1557 Clear the sub-buffer containing the oldest event records and start
1558 writing the newest event records there.
1560 This mode is sometimes called _flight recorder mode_ because it's
1561 similar to a https://en.wikipedia.org/wiki/Flight_recorder[flight
1562 recorder]: always keep a fixed amount of the latest data. It's also
1563 similar to the roll mode of an oscilloscope.
1565 Since LTTng{nbsp}2.8, with this mode, LTTng writes to a given sub-buffer
1566 its sequence number within its data stream. With a <<local-mode,local>>,
1567 <<net-streaming-mode,network streaming>>, or <<live-mode,live>> recording
1568 session, a trace reader can use such sequence numbers to report lost
1569 packets. A trace reader can use the saved discarded sub-buffer (packet)
1570 count of the trace to decide whether or not to perform some analysis
1571 even if trace data is known to be missing.
1573 With this mode, LTTng doesn't write to the trace the exact number of
1574 lost event records in the lost sub-buffers.
1576 Which mechanism you should choose depends on your context: prioritize
1577 the newest or the oldest event records in the ring buffer?
1579 Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_
1580 as soon as a there's no space left for a new event record, whereas in
1581 discard mode, the tracer only discards the event record that doesn't
1584 There are a few ways to decrease your probability of losing event
1585 records. The ``<<channel-subbuf-size-vs-subbuf-count,Sub-buffer size and
1586 count>>'' section shows how to fine-tune the sub-buffer size and count
1587 of a channel to virtually stop losing event records, though at the cost
1588 of greater memory usage.
1591 [[channel-subbuf-size-vs-subbuf-count]]
1592 ==== Sub-buffer size and count
1594 A channel has one or more ring buffer for each CPU of the target system.
1596 See the ``<<channel-buffering-schemes,Buffering scheme>>'' section to
1597 learn how many ring buffers of a given channel are dedicated to each CPU
1598 depending on its buffering scheme.
1600 Set the size of each sub-buffer the ring buffers of a channel contain
1601 and how many there are
1602 when you <<enabling-disabling-channels,create it>>.
1604 Note that LTTng switching the current sub-buffer of a ring buffer
1605 (marking a full one as consumable and switching to an available one for
1606 LTTng to record the next events) introduces noticeable CPU overhead.
1607 Knowing this, the following list presents a few practical situations
1608 along with how to configure the sub-buffer size and count for them:
1610 High event throughput::
1611 In general, prefer large sub-buffers to lower the risk of losing
1614 Having larger sub-buffers also ensures a lower sub-buffer switching
1617 The sub-buffer count is only meaningful if you create the channel in
1618 <<overwrite-mode,overwrite mode>>: in this case, if LTTng overwrites a
1619 sub-buffer, then the other sub-buffers are left unaltered.
1621 Low event throughput::
1622 In general, prefer smaller sub-buffers since the risk of losing
1623 event records is low.
1625 Because LTTng emits events less frequently, the sub-buffer switching
1626 frequency should remain low and therefore the overhead of the tracer
1627 shouldn't be a problem.
1630 If your target system has a low memory limit, prefer fewer first,
1631 then smaller sub-buffers.
1633 Even if the system is limited in memory, you want to keep the
1634 sub-buffers as large as possible to avoid a high sub-buffer switching
1637 Note that LTTng uses https://diamon.org/ctf/[CTF] as its trace format,
1638 which means event record data is very compact. For example, the average
1639 LTTng kernel event record weights about 32{nbsp}bytes. Therefore, a
1640 sub-buffer size of 1{nbsp}MiB is considered large.
1642 The previous scenarios highlight the major trade-off between a few large
1643 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1644 frequency vs. how many event records are lost in overwrite mode.
1645 Assuming a constant event throughput and using the overwrite mode, the
1646 two following configurations have the same ring buffer total size:
1649 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1654 Two sub-buffers of 4{nbsp}MiB each::
1655 Expect a very low sub-buffer switching frequency, but if LTTng
1656 ever needs to overwrite a sub-buffer, half of the event records so
1657 far (4{nbsp}MiB) are definitely lost.
1659 Eight sub-buffers of 1{nbsp}MiB each::
1660 Expect four times the tracer overhead of the configuration above,
1661 but if LTTng needs to overwrite a sub-buffer, only the eighth of
1662 event records so far (1{nbsp}MiB) are definitely lost.
1664 In <<discard-mode,discard mode>>, the sub-buffer count parameter is
1665 pointless: use two sub-buffers and set their size according to your
1669 [[tracefile-rotation]]
1670 ==== Maximum trace file size and count (trace file rotation)
1672 By default, trace files can grow as large as needed.
1674 Set the maximum size of each trace file that LTTng writes of a given
1675 channel when you <<enabling-disabling-channels,create it>>.
1677 When the size of a trace file reaches the fixed maximum size of the
1678 channel, LTTng creates another file to contain the next event records.
1679 LTTng appends a file count to each trace file name in this case.
1681 If you set the trace file size attribute when you create a channel, the
1682 maximum number of trace files that LTTng creates is _unlimited_ by
1683 default. To limit them, set a maximum number of trace files. When the
1684 number of trace files reaches the fixed maximum count of the channel,
1685 LTTng overwrites the oldest trace file. This mechanism is called _trace
1690 Even if you don't limit the trace file count, always assume that LTTng
1691 manages all the trace files of the recording session.
1693 In other words, there's no safe way to know if LTTng still holds a given
1694 trace file open with the trace file rotation feature.
1696 The only way to obtain an unmanaged, self-contained LTTng trace before
1697 you <<creating-destroying-tracing-sessions,destroy the recording session>>
1698 is with the <<session-rotation,recording session rotation>> feature, which
1699 is available since LTTng{nbsp}2.11.
1706 Each channel can have up to three optional timers:
1708 [[channel-switch-timer]]Switch timer::
1709 When this timer expires, a sub-buffer switch happens: for each ring
1710 buffer of the channel, LTTng marks the current sub-buffer as
1711 consumable and _switches_ to an available one to record the next
1715 [role="docsvg-channel-switch-timer"]
1720 A switch timer is useful to ensure that LTTng consumes and commits trace
1721 data to trace files or to a distant <<lttng-relayd,relay daemon>>
1722 periodically in case of a low event throughput.
1724 Such a timer is also convenient when you use large
1725 <<channel-subbuf-size-vs-subbuf-count,sub-buffers>> to cope with a
1726 sporadic high event throughput, even if the throughput is otherwise low.
1728 Set the period of the switch timer of a channel when you
1729 <<enabling-disabling-channels,create it>> with
1730 the opt:lttng-enable-channel(1):--switch-timer option.
1732 [[channel-read-timer]]Read timer::
1733 When this timer expires, LTTng checks for full, consumable
1736 By default, the LTTng tracers use an asynchronous message mechanism to
1737 signal a full sub-buffer so that a <<lttng-consumerd,consumer daemon>>
1740 When such messages must be avoided, for example in real-time
1741 applications, use this timer instead.
1743 Set the period of the read timer of a channel when you
1744 <<enabling-disabling-channels,create it>> with the
1745 opt:lttng-enable-channel(1):--read-timer option.
1747 [[channel-monitor-timer]]Monitor timer::
1748 When this timer expires, the consumer daemon samples some channel
1749 statistics to evaluate the following <<trigger,trigger>>
1753 . The consumed buffer size of a given <<tracing-session,recording
1754 session>> becomes greater than some value.
1755 . The buffer usage of a given channel becomes greater than some value.
1756 . The buffer usage of a given channel becomes less than some value.
1759 If you disable the monitor timer of a channel{nbsp}__C__:
1762 * The consumed buffer size value of the recording session of{nbsp}__C__
1763 could be wrong for trigger condition type{nbsp}1: the consumed buffer
1764 size of{nbsp}__C__ won't be part of the grand total.
1766 * The buffer usage trigger conditions (types{nbsp}2 and{nbsp}3)
1767 for{nbsp}__C__ will never be satisfied.
1770 Set the period of the monitor timer of a channel when you
1771 <<enabling-disabling-channels,create it>> with the
1772 opt:lttng-enable-channel(1):--monitor-timer option.
1776 === Recording event rule and event record
1778 A _recording event rule_ is a specific type of <<event-rule,event rule>>
1779 of which the action is to serialize and record the matched event as an
1782 Set the explicit conditions of a recording event rule when you
1783 <<enabling-disabling-events,create it>>. A recording event rule also has
1784 the following implicit conditions:
1786 * The recording event rule itself is enabled.
1788 A recording event rule is enabled on creation.
1790 * The <<channel,channel>> to which the recording event rule is attached
1793 A channel is enabled on creation.
1795 * The <<tracing-session,recording session>> of the recording event rule is
1796 <<basic-tracing-session-control,active>> (started).
1798 A recording session is inactive (stopped) on creation.
1800 * The process for which LTTng creates an event to match is
1801 <<pid-tracking,allowed to record events>>.
1803 All processes are allowed to record events on recording session
1806 You always attach a recording event rule to a channel, which belongs to
1807 a recording session, when you create it.
1809 When a recording event rule{nbsp}__ER__ matches an event{nbsp}__E__,
1810 LTTng attempts to serialize and record{nbsp}__E__ to one of the
1811 available sub-buffers of the channel to which{nbsp}__E__ is attached.
1813 When multiple matching recording event rules are attached to the same
1814 channel, LTTng attempts to serialize and record the matched event
1815 _once_. In the following example, the second recording event rule is
1816 redundant when both are enabled:
1820 $ lttng enable-event --userspace hello:world
1821 $ lttng enable-event --userspace hello:world --loglevel=INFO
1825 .Logical path from an instrumentation point to an event record.
1826 image::event-rule.png[]
1828 As of LTTng{nbsp}{revision}, you cannot remove a recording event
1829 rule: it exists as long as its recording session exists.
1833 == Components of noch:{LTTng}
1835 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1836 to call LTTng a simple _tool_ since it's composed of multiple
1837 interacting components.
1839 This section describes those components, explains their respective
1840 roles, and shows how they connect together to form the LTTng ecosystem.
1842 The following diagram shows how the most important components of LTTng
1843 interact with user applications, the Linux kernel, and you:
1846 .Control and trace data paths between LTTng components.
1847 image::plumbing.png[]
1849 The LTTng project integrates:
1852 Libraries and command-line interface to control recording sessions:
1854 * <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1855 * <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1856 * <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1857 * <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1858 * <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1859 * <<persistent-memory-file-systems,`lttng-crash` command-line tool>>
1860 (man:lttng-crash(1)).
1863 Libraries and Java/Python packages to instrument and trace user
1866 * <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1867 headers to instrument and trace any native user application.
1868 * <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1869 ** `liblttng-ust-libc-wrapper`
1870 ** `liblttng-ust-pthread-wrapper`
1871 ** `liblttng-ust-cyg-profile`
1872 ** `liblttng-ust-cyg-profile-fast`
1873 ** `liblttng-ust-dl`
1874 * <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1875 Java applications using `java.util.logging` or
1876 Apache log4j{nbsp}1.2 logging.
1877 * <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1878 Python applications using the standard `logging` package.
1881 <<lttng-modules,Linux kernel modules>> to instrument and trace the
1884 * LTTng kernel tracer module.
1885 * Recording ring buffer kernel modules.
1886 * Probe kernel modules.
1887 * LTTng logger kernel module.
1891 === Tracing control command-line interface
1893 The _man:lttng(1) command-line tool_ is the standard user interface to
1894 control LTTng <<tracing-session,recording sessions>>.
1896 The cmd:lttng tool is part of LTTng-tools.
1898 The cmd:lttng tool is linked with
1899 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1900 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1902 The cmd:lttng tool has a Git-like interface:
1906 $ lttng [GENERAL OPTIONS] <COMMAND> [COMMAND OPTIONS]
1909 The ``<<controlling-tracing,Tracing control>>'' section explores the
1910 available features of LTTng through its cmd:lttng tool.
1913 [[liblttng-ctl-lttng]]
1914 === Tracing control library
1917 .The tracing control library.
1918 image::plumbing-liblttng-ctl.png[]
1920 The _LTTng control library_, `liblttng-ctl`, is used to communicate with
1921 a <<lttng-sessiond,session daemon>> using a C{nbsp}API that hides the
1922 underlying details of the protocol.
1924 `liblttng-ctl` is part of LTTng-tools.
1926 The <<lttng-cli,cmd:lttng command-line tool>> is linked with
1929 Use `liblttng-ctl` in C or $$C++$$ source code by including its
1934 #include <lttng/lttng.h>
1937 As of LTTng{nbsp}{revision}, the best available developer documentation
1938 for `liblttng-ctl` is its installed header files. Functions and
1939 structures are documented with header comments.
1943 === User space tracing library
1946 .The user space tracing library.
1947 image::plumbing-liblttng-ust.png[]
1949 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1950 is the LTTng user space tracer.
1952 `liblttng-ust` receives commands from a <<lttng-sessiond,session
1953 daemon>>, for example to allow specific instrumentation points to emit
1954 LTTng <<event-rule,events>>, and writes event records to <<channel,ring
1955 buffers>> shared with a <<lttng-consumerd,consumer daemon>>.
1957 `liblttng-ust` is part of LTTng-UST.
1959 `liblttng-ust` can also send asynchronous messages to the session daemon
1960 when it emits an event. This supports the ``event rule matches''
1961 <<trigger,trigger>> condition feature (see
1962 “<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
1963 to a session daemon>>”).
1965 Public C{nbsp}header files are installed beside `liblttng-ust` to
1966 instrument any <<c-application,C or $$C++$$ application>>.
1968 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1969 packages, use their own <<tracepoint-provider,tracepoint provider
1970 package>> which is linked with `liblttng-ust`.
1972 An application or library doesn't have to initialize `liblttng-ust`
1973 manually: its constructor does the necessary tasks to register the
1974 application to a session daemon. The initialization phase also
1975 configures instrumentation points depending on the <<event-rule,event
1976 rules>> that you already created.
1979 [[lttng-ust-agents]]
1980 === User space tracing agents
1983 .The user space tracing agents.
1984 image::plumbing-lttng-ust-agents.png[]
1986 The _LTTng-UST Java and Python agents_ are regular Java and Python
1987 packages which add LTTng tracing capabilities to the
1988 native logging frameworks.
1990 The LTTng-UST agents are part of LTTng-UST.
1992 In the case of Java, the
1993 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1994 core logging facilities] and
1995 https://logging.apache.org/log4j/1.2/[Apache log4j{nbsp}1.2] are supported.
1996 Note that Apache Log4j{nbsp}2 isn't supported.
1998 In the case of Python, the standard
1999 https://docs.python.org/3/library/logging.html[`logging`] package
2000 is supported. Both Python{nbsp}2 and Python{nbsp}3 modules can import the
2001 LTTng-UST Python agent package.
2003 The applications using the LTTng-UST agents are in the
2004 `java.util.logging` (JUL), log4j, and Python <<domain,tracing domains>>.
2006 Both agents use the same mechanism to convert log statements to LTTng
2007 events. When an agent initializes, it creates a log handler that
2008 attaches to the root logger. The agent also registers to a
2009 <<lttng-sessiond,session daemon>>. When the user application executes a
2010 log statement, the root logger passes it to the log handler of the
2011 agent. The custom log handler of the agent calls a native function in a
2012 tracepoint provider package shared library linked with
2013 <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
2014 other fields, like its logger name and its log level. This native
2015 function contains a user space instrumentation point, therefore tracing
2018 The log level condition of a <<event,recording event rule>> is
2019 considered when tracing a Java or a Python application, and it's
2020 compatible with the standard `java.util.logging`, log4j, and Python log
2025 === LTTng kernel modules
2028 .The LTTng kernel modules.
2029 image::plumbing-lttng-modules.png[]
2031 The _LTTng kernel modules_ are a set of Linux kernel modules
2032 which implement the kernel tracer of the LTTng project.
2034 The LTTng kernel modules are part of LTTng-modules.
2036 The LTTng kernel modules include:
2038 * A set of _probe_ modules.
2040 Each module attaches to a specific subsystem
2041 of the Linux kernel using its tracepoint instrument points.
2043 There are also modules to attach to the entry and return points of the
2044 Linux system call functions.
2046 * _Ring buffer_ modules.
2048 A ring buffer implementation is provided as kernel modules. The LTTng
2049 kernel tracer writes to ring buffers; a
2050 <<lttng-consumerd,consumer daemon>> reads from ring buffers.
2052 * The _LTTng kernel tracer_ module.
2053 * The <<proc-lttng-logger-abi,_LTTng logger_>> module.
2055 The LTTng logger module implements the special path:{/proc/lttng-logger}
2056 (and path:{/dev/lttng-logger}, since LTTng{nbsp}2.11) files so that any
2057 executable can generate LTTng events by opening those files and
2060 The LTTng kernel tracer can also send asynchronous messages to the
2061 <<lttng-sessiond,session daemon>> when it emits an event.
2062 This supports the ``event rule matches''
2063 <<trigger,trigger>> condition feature (see
2064 “<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
2065 to a session daemon>>”).
2067 Generally, you don't have to load the LTTng kernel modules manually
2068 (using man:modprobe(8), for example): a root session daemon loads the
2069 necessary modules when starting. If you have extra probe modules, you
2070 can specify to load them to the session daemon on the command line
2071 (see the opt:lttng-sessiond(8):--extra-kmod-probes option).
2073 The LTTng kernel modules are installed in
2074 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
2075 the kernel release (output of `uname --kernel-release`).
2082 .The session daemon.
2083 image::plumbing-sessiond.png[]
2085 The _session daemon_, man:lttng-sessiond(8), is a
2086 https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which:
2088 * Manages <<tracing-session,recording sessions>>.
2090 * Controls the various components (like tracers and
2091 <<lttng-consumerd,consumer daemons>>) of LTTng.
2093 * Sends <<notif-trigger-api,asynchronous notifications>> to user
2096 The session daemon is part of LTTng-tools.
2098 The session daemon sends control requests to and receives control
2101 * The <<lttng-ust,user space tracing library>>.
2103 Any instance of the user space tracing library first registers to
2104 a session daemon. Then, the session daemon can send requests to
2105 this instance, such as:
2108 ** Get the list of tracepoints.
2109 ** Share a <<event,recording event rule>> so that the user space tracing
2110 library can decide whether or not a given tracepoint can emit events.
2111 Amongst the possible conditions of a recording event rule is a filter
2112 expression which `liblttng-ust` evaluates before it emits an event.
2113 ** Share <<channel,channel>> attributes and ring buffer locations.
2116 The session daemon and the user space tracing library use a Unix
2117 domain socket to communicate.
2119 * The <<lttng-ust-agents,user space tracing agents>>.
2121 Any instance of a user space tracing agent first registers to
2122 a session daemon. Then, the session daemon can send requests to
2123 this instance, such as:
2126 ** Get the list of loggers.
2127 ** Enable or disable a specific logger.
2130 The session daemon and the user space tracing agent use a TCP connection
2133 * The <<lttng-modules,LTTng kernel tracer>>.
2134 * The <<lttng-consumerd,consumer daemon>>.
2136 The session daemon sends requests to the consumer daemon to instruct
2137 it where to send the trace data streams, amongst other information.
2139 * The <<lttng-relayd,relay daemon>>.
2141 The session daemon receives commands from the
2142 <<liblttng-ctl-lttng,tracing control library>>.
2144 The session daemon can receive asynchronous messages from the
2145 <<lttng-ust,user space>> and <<lttng-modules,kernel>> tracers
2146 when they emit events. This supports the ``event rule matches''
2147 <<trigger,trigger>> condition feature (see
2148 “<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
2149 to a session daemon>>”).
2151 The root session daemon loads the appropriate
2152 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2153 one or more <<lttng-consumerd,consumer daemons>> as soon as you create
2154 a <<event,recording event rule>>.
2156 The session daemon doesn't send and receive trace data: this is the
2157 role of the <<lttng-consumerd,consumer daemon>> and
2158 <<lttng-relayd,relay daemon>>. It does, however, generate the
2159 https://diamon.org/ctf/[CTF] metadata stream.
2161 Each Unix user can have its own session daemon instance. The
2162 recording sessions which different session daemons manage are completely
2165 The root user's session daemon is the only one which is
2166 allowed to control the LTTng kernel tracer, and its spawned consumer
2167 daemon is the only one which is allowed to consume trace data from the
2168 LTTng kernel tracer. Note, however, that any Unix user which is a member
2169 of the <<tracing-group,tracing group>> is allowed
2170 to create <<channel,channels>> in the
2171 Linux kernel <<domain,tracing domain>>, and therefore to use the Linux
2172 kernel LTTng tracer.
2174 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
2175 session daemon when using its `create` command if none is currently
2176 running. You can also start the session daemon manually.
2183 .The consumer daemon.
2184 image::plumbing-consumerd.png[]
2186 The _consumer daemon_, cmd:lttng-consumerd, is a
2187 https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which shares
2188 ring buffers with user applications or with the LTTng kernel modules to
2189 collect trace data and send it to some location (file system or to a
2190 <<lttng-relayd,relay daemon>> over the network).
2192 The consumer daemon is part of LTTng-tools.
2194 You don't start a consumer daemon manually: a consumer daemon is always
2195 spawned by a <<lttng-sessiond,session daemon>> as soon as you create a
2196 <<event,recording event rule>>, that is, before you start recording. When
2197 you kill its owner session daemon, the consumer daemon also exits
2198 because it's the child process of the session daemon. Command-line
2199 options of man:lttng-sessiond(8) target the consumer daemon process.
2201 There are up to two running consumer daemons per Unix user, whereas only
2202 one session daemon can run per user. This is because each process can be
2203 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
2204 and 64-bit processes, it's more efficient to have separate
2205 corresponding 32-bit and 64-bit consumer daemons. The root user is an
2206 exception: it can have up to _three_ running consumer daemons: 32-bit
2207 and 64-bit instances for its user applications, and one more
2208 reserved for collecting kernel trace data.
2216 image::plumbing-relayd.png[]
2218 The _relay daemon_, man:lttng-relayd(8), is a
2219 https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] acting as a bridge
2220 between remote session and consumer daemons, local trace files, and a
2221 remote live trace reader.
2223 The relay daemon is part of LTTng-tools.
2225 The main purpose of the relay daemon is to implement a receiver of
2226 <<sending-trace-data-over-the-network,trace data over the network>>.
2227 This is useful when the target system doesn't have much file system
2228 space to write trace files locally.
2230 The relay daemon is also a server to which a
2231 <<lttng-live,live trace reader>> can
2232 connect. The live trace reader sends requests to the relay daemon to
2233 receive trace data as the target system records events. The
2234 communication protocol is named _LTTng live_; it's used over TCP
2237 Note that you can start the relay daemon on the target system directly.
2238 This is the setup of choice when the use case is to view/analyze events
2239 as the target system records them without the need of a remote system.
2243 == [[using-lttng]]Instrumentation
2245 There are many examples of tracing and monitoring in our everyday life:
2247 * You have access to real-time and historical weather reports and
2248 forecasts thanks to weather stations installed around the country.
2249 * You know your heart is safe thanks to an electrocardiogram.
2250 * You make sure not to drive your car too fast and to have enough fuel
2251 to reach your destination thanks to gauges visible on your dashboard.
2253 All the previous examples have something in common: they rely on
2254 **instruments**. Without the electrodes attached to the surface of your
2255 body skin, cardiac monitoring is futile.
2257 LTTng, as a tracer, is no different from those real life examples. If
2258 you're about to trace a software system or, in other words, record its
2259 history of execution, you better have **instrumentation points** in the
2260 subject you're tracing, that is, the actual software system.
2262 <<instrumentation-point-types,Various ways>> were developed to
2263 instrument a piece of software for LTTng tracing. The most
2264 straightforward one is to manually place static instrumentation points,
2265 called _tracepoints_, in the source code of the application. The Linux
2266 kernel <<domain,tracing domain>> also makes it possible to dynamically
2267 add instrumentation points.
2269 If you're only interested in tracing the Linux kernel, your
2270 instrumentation needs are probably already covered by the built-in
2271 <<lttng-modules,Linux kernel instrumentation points>> of LTTng. You may
2272 also wish to have LTTng trace a user application which is already
2273 instrumented for LTTng tracing. In such cases, skip this whole section
2274 and read the topics of the ``<<controlling-tracing,Tracing control>>''
2277 Many methods are available to instrument a piece of software for LTTng
2280 * <<c-application,Instrument a C/$$C++$$ user application>>.
2281 * <<prebuilt-ust-helpers,Load a prebuilt user space tracing helper>>.
2282 * <<java-application,Instrument a Java application>>.
2283 * <<python-application,Instrument a Python application>>.
2284 * <<proc-lttng-logger-abi,Use the LTTng logger>>.
2285 * <<instrumenting-linux-kernel,Instrument a Linux kernel image or module>>.
2289 === [[cxx-application]]Instrument a C/$$C++$$ user application
2291 The high level procedure to instrument a C or $$C++$$ user application
2292 with the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`,
2295 . <<tracepoint-provider,Create the source files of a tracepoint provider
2298 . <<probing-the-application-source-code,Add tracepoints to
2299 the source code of the application>>.
2301 . <<building-tracepoint-providers-and-user-application,Build and link
2302 a tracepoint provider package and the user application>>.
2304 If you need quick, man:printf(3)-like instrumentation, skip those steps
2305 and use <<tracef,`lttng_ust_tracef()`>> or
2306 <<tracelog,`lttng_ust_tracelog()`>> instead.
2308 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
2309 instrument a user application with `liblttng-ust`.
2312 [[tracepoint-provider]]
2313 ==== Create the source files of a tracepoint provider package
2315 A _tracepoint provider_ is a set of compiled functions which provide
2316 **tracepoints** to an application, the type of instrumentation point
2317 which LTTng-UST provides.
2319 Those functions can make LTTng emit events with user-defined fields and
2320 serialize those events as event records to one or more LTTng-UST
2321 <<channel,channel>> sub-buffers. The `lttng_ust_tracepoint()` macro,
2322 which you <<probing-the-application-source-code,insert in the source
2323 code of a user application>>, calls those functions.
2325 A _tracepoint provider package_ is an object file (`.o`) or a shared
2326 library (`.so`) which contains one or more tracepoint providers. Its
2329 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
2330 * A <<tpp-source,tracepoint provider package source>> (`.c`).
2332 A tracepoint provider package is dynamically linked with `liblttng-ust`,
2333 the LTTng user space tracer, at run time.
2336 .User application linked with `liblttng-ust` and containing a tracepoint provider.
2337 image::ust-app.png[]
2339 NOTE: If you need quick, man:printf(3)-like instrumentation, skip
2340 creating and using a tracepoint provider and use
2341 <<tracef,`lttng_ust_tracef()`>> or <<tracelog,`lttng_ust_tracelog()`>>
2346 ===== Create a tracepoint provider header file template
2348 A _tracepoint provider header file_ contains the tracepoint definitions
2349 of a tracepoint provider.
2351 To create a tracepoint provider header file:
2353 . Start from this template:
2357 .Tracepoint provider header file template (`.h` file extension).
2359 #undef LTTNG_UST_TRACEPOINT_PROVIDER
2360 #define LTTNG_UST_TRACEPOINT_PROVIDER provider_name
2362 #undef LTTNG_UST_TRACEPOINT_INCLUDE
2363 #define LTTNG_UST_TRACEPOINT_INCLUDE "./tp.h"
2365 #if !defined(_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
2368 #include <lttng/tracepoint.h>
2371 * Use LTTNG_UST_TRACEPOINT_EVENT(), LTTNG_UST_TRACEPOINT_EVENT_CLASS(),
2372 * LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(), and
2373 * LTTNG_UST_TRACEPOINT_LOGLEVEL() here.
2378 #include <lttng/tracepoint-event.h>
2384 * +__provider_name__+ with the name of your tracepoint provider.
2385 * `"tp.h"` with the name of your tracepoint provider header file.
2387 . Below the `#include <lttng/tracepoint.h>` line, put your
2388 <<defining-tracepoints,tracepoint definitions>>.
2390 Your tracepoint provider name must be unique amongst all the possible
2391 tracepoint provider names used on the same target system. We suggest to
2392 include the name of your project or company in the name, for example,
2393 `org_lttng_my_project_tpp`.
2396 [[defining-tracepoints]]
2397 ===== Create a tracepoint definition
2399 A _tracepoint definition_ defines, for a given tracepoint:
2401 * Its **input arguments**.
2403 They're the macro parameters that the `lttng_ust_tracepoint()` macro
2404 accepts for this particular tracepoint in the source code of the user
2407 * Its **output event fields**.
2409 They're the sources of event fields that form the payload of any event
2410 that the execution of the `lttng_ust_tracepoint()` macro emits for this
2411 particular tracepoint.
2413 Create a tracepoint definition with the
2414 `LTTNG_UST_TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2416 <<tpp-header,tracepoint provider header file template>>.
2418 The syntax of the `LTTNG_UST_TRACEPOINT_EVENT()` macro is:
2421 .`LTTNG_UST_TRACEPOINT_EVENT()` macro syntax.
2423 LTTNG_UST_TRACEPOINT_EVENT(
2424 /* Tracepoint provider name */
2427 /* Tracepoint name */
2430 /* Input arguments */
2435 /* Output event fields */
2436 LTTNG_UST_TP_FIELDS(
2444 * +__provider_name__+ with your tracepoint provider name.
2445 * +__tracepoint_name__+ with your tracepoint name.
2446 * +__arguments__+ with the <<tpp-def-input-args,input arguments>>.
2447 * +__fields__+ with the <<tpp-def-output-fields,output event field>>
2450 The full name of this tracepoint is `provider_name:tracepoint_name`.
2453 .Event name length limitation
2455 The concatenation of the tracepoint provider name and the tracepoint
2456 name must not exceed **254{nbsp}characters**. If it does, the
2457 instrumented application compiles and runs, but LTTng throws multiple
2458 warnings and you could experience serious issues.
2461 [[tpp-def-input-args]]The syntax of the `LTTNG_UST_TP_ARGS()` macro is:
2464 .`LTTNG_UST_TP_ARGS()` macro syntax.
2473 * +__type__+ with the C{nbsp}type of the argument.
2474 * +__arg_name__+ with the argument name.
2476 You can repeat +__type__+ and +__arg_name__+ up to 10{nbsp}times to have
2477 more than one argument.
2479 .`LTTNG_UST_TP_ARGS()` usage with three arguments.
2491 The `LTTNG_UST_TP_ARGS()` and `LTTNG_UST_TP_ARGS(void)` forms are valid
2492 to create a tracepoint definition with no input arguments.
2494 [[tpp-def-output-fields]]The `LTTNG_UST_TP_FIELDS()` macro contains a
2495 list of `lttng_ust_field_*()` macros. Each `lttng_ust_field_*()` macro
2496 defines one event field. See man:lttng-ust(3) for a complete description
2497 of the available `lttng_ust_field_*()` macros. A `lttng_ust_field_*()`
2498 macro specifies the type, size, and byte order of one event field.
2500 Each `lttng_ust_field_*()` macro takes an _argument expression_
2501 parameter. This is a C{nbsp}expression that the tracer evaluates at the
2502 `lttng_ust_tracepoint()` macro site in the source code of the
2503 application. This expression provides the source of data of a field. The
2504 argument expression can include input argument names listed in the
2505 `LTTNG_UST_TP_ARGS()` macro.
2507 Each `lttng_ust_field_*()` macro also takes a _field name_ parameter.
2508 Field names must be unique within a given tracepoint definition.
2510 Here's a complete tracepoint definition example:
2512 .Tracepoint definition.
2514 The following tracepoint definition defines a tracepoint which takes
2515 three input arguments and has four output event fields.
2519 #include "my-custom-structure.h"
2521 LTTNG_UST_TRACEPOINT_EVENT(
2525 const struct my_custom_structure *, my_custom_structure,
2529 LTTNG_UST_TP_FIELDS(
2530 lttng_ust_field_string(query_field, query)
2531 lttng_ust_field_float(double, ratio_field, ratio)
2532 lttng_ust_field_integer(int, recv_size,
2533 my_custom_structure->recv_size)
2534 lttng_ust_field_integer(int, send_size,
2535 my_custom_structure->send_size)
2540 Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
2541 macro in the source code of your application like this:
2545 lttng_ust_tracepoint(my_provider, my_tracepoint,
2546 my_structure, some_ratio, the_query);
2550 NOTE: The LTTng-UST tracer only evaluates the arguments of a tracepoint
2551 at run time when such a tracepoint _could_ emit an event. See
2552 <<event-creation-emission-opti,this note>> to learn more.
2555 [[using-tracepoint-classes]]
2556 ===== Use a tracepoint class
2558 A _tracepoint class_ is a class of tracepoints which share the same
2559 output event field definitions. A _tracepoint instance_ is one
2560 instance of such a defined tracepoint class, with its own tracepoint
2563 The <<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()` macro>> is
2564 actually a shorthand which defines both a tracepoint class and a
2565 tracepoint instance at the same time.
2567 When you build a tracepoint provider package, the C or $$C++$$ compiler
2568 creates one serialization function for each **tracepoint class**. A
2569 serialization function is responsible for serializing the event fields
2570 of a tracepoint to a sub-buffer when recording.
2572 For various performance reasons, when your situation requires multiple
2573 tracepoint definitions with different names, but with the same event
2574 fields, we recommend that you manually create a tracepoint class and
2575 instantiate as many tracepoint instances as needed. One positive effect
2576 of such a design, amongst other advantages, is that all tracepoint
2577 instances of the same tracepoint class reuse the same serialization
2578 function, thus reducing
2579 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2581 .Use a tracepoint class and tracepoint instances.
2583 Consider the following three tracepoint definitions:
2587 LTTNG_UST_TRACEPOINT_EVENT(
2594 LTTNG_UST_TP_FIELDS(
2595 lttng_ust_field_integer(int, userid, userid)
2596 lttng_ust_field_integer(size_t, len, len)
2600 LTTNG_UST_TRACEPOINT_EVENT(
2607 LTTNG_UST_TP_FIELDS(
2608 lttng_ust_field_integer(int, userid, userid)
2609 lttng_ust_field_integer(size_t, len, len)
2613 LTTNG_UST_TRACEPOINT_EVENT(
2620 LTTNG_UST_TP_FIELDS(
2621 lttng_ust_field_integer(int, userid, userid)
2622 lttng_ust_field_integer(size_t, len, len)
2627 In this case, we create three tracepoint classes, with one implicit
2628 tracepoint instance for each of them: `get_account`, `get_settings`, and
2629 `get_transaction`. However, they all share the same event field names
2630 and types. Hence three identical, yet independent serialization
2631 functions are created when you build the tracepoint provider package.
2633 A better design choice is to define a single tracepoint class and three
2634 tracepoint instances:
2638 /* The tracepoint class */
2639 LTTNG_UST_TRACEPOINT_EVENT_CLASS(
2640 /* Tracepoint class provider name */
2643 /* Tracepoint class name */
2646 /* Input arguments */
2652 /* Output event fields */
2653 LTTNG_UST_TP_FIELDS(
2654 lttng_ust_field_integer(int, userid, userid)
2655 lttng_ust_field_integer(size_t, len, len)
2659 /* The tracepoint instances */
2660 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
2661 /* Tracepoint class provider name */
2664 /* Tracepoint class name */
2667 /* Instance provider name */
2670 /* Tracepoint name */
2673 /* Input arguments */
2679 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
2688 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
2700 The tracepoint class and instance provider names must be the same if the
2701 `LTTNG_UST_TRACEPOINT_EVENT_CLASS()` and
2702 `LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()` expansions are part of the same
2703 translation unit. See man:lttng-ust(3) to learn more.
2706 [[assigning-log-levels]]
2707 ===== Assign a log level to a tracepoint definition
2709 Assign a _log level_ to a <<defining-tracepoints,tracepoint definition>>
2710 with the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro.
2712 Assigning different levels of severity to tracepoint definitions can be
2713 useful: when you <<enabling-disabling-events,create a recording event
2714 rule>>, you can target tracepoints having a log level at least as severe
2715 as a specific value.
2717 The concept of LTTng-UST log levels is similar to the levels found
2718 in typical logging frameworks:
2720 * In a logging framework, the log level is given by the function
2721 or method name you use at the log statement site: `debug()`,
2722 `info()`, `warn()`, `error()`, and so on.
2724 * In LTTng-UST, you statically assign the log level to a tracepoint
2725 definition; any `lttng_ust_tracepoint()` macro invocation which refers
2726 to this definition has this log level.
2728 You must use `LTTNG_UST_TRACEPOINT_LOGLEVEL()` _after_ the
2729 <<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()`>> or
2730 <<using-tracepoint-classes,`LTTNG_UST_TRACEPOINT_INSTANCE()`>> macro for
2733 The syntax of the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro is:
2736 .`LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro syntax.
2738 LTTNG_UST_TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2743 * +__provider_name__+ with the tracepoint provider name.
2744 * +__tracepoint_name__+ with the tracepoint name.
2745 * +__log_level__+ with the log level to assign to the tracepoint
2746 definition named +__tracepoint_name__+ in the +__provider_name__+
2747 tracepoint provider.
2749 See man:lttng-ust(3) for a list of available log level names.
2751 .Assign the `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT` log level to a tracepoint definition.
2755 /* Tracepoint definition */
2756 LTTNG_UST_TRACEPOINT_EVENT(
2763 LTTNG_UST_TP_FIELDS(
2764 lttng_ust_field_integer(int, userid, userid)
2765 lttng_ust_field_integer(size_t, len, len)
2769 /* Log level assignment */
2770 LTTNG_UST_TRACEPOINT_LOGLEVEL(my_app, get_transaction,
2771 LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT)
2777 ===== Create a tracepoint provider package source file
2779 A _tracepoint provider package source file_ is a C source file which
2780 includes a <<tpp-header,tracepoint provider header file>> to expand its
2781 macros into event serialization and other functions.
2783 Use the following tracepoint provider package source file template:
2786 .Tracepoint provider package source file template.
2788 #define LTTNG_UST_TRACEPOINT_CREATE_PROBES
2793 Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
2794 header file>> name. You may also include more than one tracepoint
2795 provider header file here to create a tracepoint provider package
2796 holding more than one tracepoint providers.
2799 [[probing-the-application-source-code]]
2800 ==== Add tracepoints to the source code of an application
2802 Once you <<tpp-header,create a tracepoint provider header file>>, use
2803 the `lttng_ust_tracepoint()` macro in the source code of your
2804 application to insert the tracepoints that this header
2805 <<defining-tracepoints,defines>>.
2807 The `lttng_ust_tracepoint()` macro takes at least two parameters: the
2808 tracepoint provider name and the tracepoint name. The corresponding
2809 tracepoint definition defines the other parameters.
2811 .`lttng_ust_tracepoint()` usage.
2813 The following <<defining-tracepoints,tracepoint definition>> defines a
2814 tracepoint which takes two input arguments and has two output event
2818 .Tracepoint provider header file.
2820 #include "my-custom-structure.h"
2822 LTTNG_UST_TRACEPOINT_EVENT(
2827 const char *, cmd_name
2829 LTTNG_UST_TP_FIELDS(
2830 lttng_ust_field_string(cmd_name, cmd_name)
2831 lttng_ust_field_integer(int, number_of_args, argc)
2836 Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
2837 macro in the source code of your application like this:
2840 .Application source file.
2844 int main(int argc, char* argv[])
2846 lttng_ust_tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2851 Note how the source code of the application includes
2852 the tracepoint provider header file containing the tracepoint
2853 definitions to use, path:{tp.h}.
2856 .`lttng_ust_tracepoint()` usage with a complex tracepoint definition.
2858 Consider this complex tracepoint definition, where multiple event
2859 fields refer to the same input arguments in their argument expression
2863 .Tracepoint provider header file.
2865 /* For `struct stat` */
2866 #include <sys/types.h>
2867 #include <sys/stat.h>
2870 LTTNG_UST_TRACEPOINT_EVENT(
2878 LTTNG_UST_TP_FIELDS(
2879 lttng_ust_field_integer(int, my_constant_field, 23 + 17)
2880 lttng_ust_field_integer(int, my_int_arg_field, my_int_arg)
2881 lttng_ust_field_integer(int, my_int_arg_field2,
2882 my_int_arg * my_int_arg)
2883 lttng_ust_field_integer(int, sum4_field,
2884 my_str_arg[0] + my_str_arg[1] +
2885 my_str_arg[2] + my_str_arg[3])
2886 lttng_ust_field_string(my_str_arg_field, my_str_arg)
2887 lttng_ust_field_integer_hex(off_t, size_field, st->st_size)
2888 lttng_ust_field_float(double, size_dbl_field, (double) st->st_size)
2889 lttng_ust_field_sequence_text(char, half_my_str_arg_field,
2891 strlen(my_str_arg) / 2)
2896 Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
2897 macro in the source code of your application like this:
2900 .Application source file.
2902 #define LTTNG_UST_TRACEPOINT_DEFINE
2909 stat("/etc/fstab", &s);
2910 lttng_ust_tracepoint(my_provider, my_tracepoint, 23,
2911 "Hello, World!", &s);
2917 If you look at the event record that LTTng writes when recording this
2918 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2919 it should look like this:
2921 .Event record fields
2923 |Field name |Field value
2924 |`my_constant_field` |40
2925 |`my_int_arg_field` |23
2926 |`my_int_arg_field2` |529
2928 |`my_str_arg_field` |`Hello, World!`
2929 |`size_field` |0x12d
2930 |`size_dbl_field` |301.0
2931 |`half_my_str_arg_field` |`Hello,`
2935 Sometimes, the arguments you pass to `lttng_ust_tracepoint()` are
2936 expensive to evaluate--they use the call stack, for example. To avoid
2937 this computation when LTTng wouldn't emit any event anyway, use the
2938 `lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros.
2940 The syntax of the `lttng_ust_tracepoint_enabled()` and
2941 `lttng_ust_do_tracepoint()` macros is:
2944 .`lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros syntax.
2946 lttng_ust_tracepoint_enabled(provider_name, tracepoint_name)
2948 lttng_ust_do_tracepoint(provider_name, tracepoint_name, ...)
2953 * +__provider_name__+ with the tracepoint provider name.
2954 * +__tracepoint_name__+ with the tracepoint name.
2956 `lttng_ust_tracepoint_enabled()` returns a non-zero value if executing
2957 the tracepoint named `tracepoint_name` from the provider named
2958 `provider_name` _could_ make LTTng emit an event, depending on the
2959 payload of said event.
2961 `lttng_ust_do_tracepoint()` is like `lttng_ust_tracepoint()`, except
2962 that it doesn't check what `lttng_ust_tracepoint_enabled()` checks.
2963 Using `lttng_ust_tracepoint()` with `lttng_ust_tracepoint_enabled()` is
2964 dangerous because `lttng_ust_tracepoint()` also contains the
2965 `lttng_ust_tracepoint_enabled()` check; therefore, a race condition is
2966 possible in this situation:
2969 .Possible race condition when using `lttng_ust_tracepoint_enabled()` with `lttng_ust_tracepoint()`.
2971 if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
2972 stuff = prepare_stuff();
2975 lttng_ust_tracepoint(my_provider, my_tracepoint, stuff);
2978 If `lttng_ust_tracepoint_enabled()` is false, but would be true after
2979 the conditional block, then `stuff` isn't prepared: the emitted event
2980 will either contain wrong data, or the whole application could crash
2981 (with a segmentation fault, for example).
2983 NOTE: Neither `lttng_ust_tracepoint_enabled()` nor
2984 `lttng_ust_do_tracepoint()` have an `STAP_PROBEV()` call. If you need
2985 it, you must emit this call yourself.
2988 [[building-tracepoint-providers-and-user-application]]
2989 ==== Build and link a tracepoint provider package and an application
2991 Once you have one or more <<tpp-header,tracepoint provider header
2992 files>> and a <<tpp-source,tracepoint provider package source file>>,
2993 create the tracepoint provider package by compiling its source
2994 file. From here, multiple build and run scenarios are possible. The
2995 following table shows common application and library configurations
2996 along with the required command lines to achieve them.
2998 In the following diagrams, we use the following file names:
3001 Executable application.
3004 Application object file.
3007 Tracepoint provider package object file.
3010 Tracepoint provider package archive file.
3013 Tracepoint provider package shared object file.
3016 User library object file.
3019 User library shared object file.
3021 We use the following symbols in the diagrams of table below:
3024 .Symbols used in the build scenario diagrams.
3025 image::ust-sit-symbols.png[]
3027 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
3028 variable in the following instructions.
3030 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
3031 .Common tracepoint provider package scenarios.
3033 |Scenario |Instructions
3036 The instrumented application is statically linked with
3037 the tracepoint provider package object.
3039 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
3042 include::../common/ust-sit-step-tp-o.txt[]
3044 To build the instrumented application:
3046 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3051 #define LTTNG_UST_TRACEPOINT_DEFINE
3055 . Compile the application source file:
3064 . Build the application:
3069 $ gcc -o app app.o tpp.o -llttng-ust -ldl
3073 To run the instrumented application:
3075 * Start the application:
3085 The instrumented application is statically linked with the
3086 tracepoint provider package archive file.
3088 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
3091 To create the tracepoint provider package archive file:
3093 . Compile the <<tpp-source,tracepoint provider package source file>>:
3102 . Create the tracepoint provider package archive file:
3107 $ ar rcs tpp.a tpp.o
3111 To build the instrumented application:
3113 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3118 #define LTTNG_UST_TRACEPOINT_DEFINE
3122 . Compile the application source file:
3131 . Build the application:
3136 $ gcc -o app app.o tpp.a -llttng-ust -ldl
3140 To run the instrumented application:
3142 * Start the application:
3152 The instrumented application is linked with the tracepoint provider
3153 package shared object.
3155 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3158 include::../common/ust-sit-step-tp-so.txt[]
3160 To build the instrumented application:
3162 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3167 #define LTTNG_UST_TRACEPOINT_DEFINE
3171 . Compile the application source file:
3180 . Build the application:
3185 $ gcc -o app app.o -ldl -L. -ltpp
3189 To run the instrumented application:
3191 * Start the application:
3201 The tracepoint provider package shared object is preloaded before the
3202 instrumented application starts.
3204 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3207 include::../common/ust-sit-step-tp-so.txt[]
3209 To build the instrumented application:
3211 . In path:{app.c}, before including path:{tpp.h}, add the
3217 #define LTTNG_UST_TRACEPOINT_DEFINE
3218 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3222 . Compile the application source file:
3231 . Build the application:
3236 $ gcc -o app app.o -ldl
3240 To run the instrumented application with tracing support:
3242 * Preload the tracepoint provider package shared object and
3243 start the application:
3248 $ LD_PRELOAD=./libtpp.so ./app
3252 To run the instrumented application without tracing support:
3254 * Start the application:
3264 The instrumented application dynamically loads the tracepoint provider
3265 package shared object.
3267 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3270 include::../common/ust-sit-step-tp-so.txt[]
3272 To build the instrumented application:
3274 . In path:{app.c}, before including path:{tpp.h}, add the
3280 #define LTTNG_UST_TRACEPOINT_DEFINE
3281 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3285 . Compile the application source file:
3294 . Build the application:
3299 $ gcc -o app app.o -ldl
3303 To run the instrumented application:
3305 * Start the application:
3315 The application is linked with the instrumented user library.
3317 The instrumented user library is statically linked with the tracepoint
3318 provider package object file.
3320 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3323 include::../common/ust-sit-step-tp-o-fpic.txt[]
3325 To build the instrumented user library:
3327 . In path:{emon.c}, before including path:{tpp.h}, add the
3333 #define LTTNG_UST_TRACEPOINT_DEFINE
3337 . Compile the user library source file:
3342 $ gcc -I. -fpic -c emon.c
3346 . Build the user library shared object:
3351 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3355 To build the application:
3357 . Compile the application source file:
3366 . Build the application:
3371 $ gcc -o app app.o -L. -lemon
3375 To run the application:
3377 * Start the application:
3387 The application is linked with the instrumented user library.
3389 The instrumented user library is linked with the tracepoint provider
3390 package shared object.
3392 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3395 include::../common/ust-sit-step-tp-so.txt[]
3397 To build the instrumented user library:
3399 . In path:{emon.c}, before including path:{tpp.h}, add the
3405 #define LTTNG_UST_TRACEPOINT_DEFINE
3409 . Compile the user library source file:
3414 $ gcc -I. -fpic -c emon.c
3418 . Build the user library shared object:
3423 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3427 To build the application:
3429 . Compile the application source file:
3438 . Build the application:
3443 $ gcc -o app app.o -L. -lemon
3447 To run the application:
3449 * Start the application:
3459 The tracepoint provider package shared object is preloaded before the
3462 The application is linked with the instrumented user library.
3464 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3467 include::../common/ust-sit-step-tp-so.txt[]
3469 To build the instrumented user library:
3471 . In path:{emon.c}, before including path:{tpp.h}, add the
3477 #define LTTNG_UST_TRACEPOINT_DEFINE
3478 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3482 . Compile the user library source file:
3487 $ gcc -I. -fpic -c emon.c
3491 . Build the user library shared object:
3496 $ gcc -shared -o libemon.so emon.o -ldl
3500 To build the application:
3502 . Compile the application source file:
3511 . Build the application:
3516 $ gcc -o app app.o -L. -lemon
3520 To run the application with tracing support:
3522 * Preload the tracepoint provider package shared object and
3523 start the application:
3528 $ LD_PRELOAD=./libtpp.so ./app
3532 To run the application without tracing support:
3534 * Start the application:
3544 The application is linked with the instrumented user library.
3546 The instrumented user library dynamically loads the tracepoint provider
3547 package shared object.
3549 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3552 include::../common/ust-sit-step-tp-so.txt[]
3554 To build the instrumented user library:
3556 . In path:{emon.c}, before including path:{tpp.h}, add the
3562 #define LTTNG_UST_TRACEPOINT_DEFINE
3563 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3567 . Compile the user library source file:
3572 $ gcc -I. -fpic -c emon.c
3576 . Build the user library shared object:
3581 $ gcc -shared -o libemon.so emon.o -ldl
3585 To build the application:
3587 . Compile the application source file:
3596 . Build the application:
3601 $ gcc -o app app.o -L. -lemon
3605 To run the application:
3607 * Start the application:
3617 The application dynamically loads the instrumented user library.
3619 The instrumented user library is linked with the tracepoint provider
3620 package shared object.
3622 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3625 include::../common/ust-sit-step-tp-so.txt[]
3627 To build the instrumented user library:
3629 . In path:{emon.c}, before including path:{tpp.h}, add the
3635 #define LTTNG_UST_TRACEPOINT_DEFINE
3639 . Compile the user library source file:
3644 $ gcc -I. -fpic -c emon.c
3648 . Build the user library shared object:
3653 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3657 To build the application:
3659 . Compile the application source file:
3668 . Build the application:
3673 $ gcc -o app app.o -ldl -L. -lemon
3677 To run the application:
3679 * Start the application:
3689 The application dynamically loads the instrumented user library.
3691 The instrumented user library dynamically loads the tracepoint provider
3692 package shared object.
3694 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3697 include::../common/ust-sit-step-tp-so.txt[]
3699 To build the instrumented user library:
3701 . In path:{emon.c}, before including path:{tpp.h}, add the
3707 #define LTTNG_UST_TRACEPOINT_DEFINE
3708 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3712 . Compile the user library source file:
3717 $ gcc -I. -fpic -c emon.c
3721 . Build the user library shared object:
3726 $ gcc -shared -o libemon.so emon.o -ldl
3730 To build the application:
3732 . Compile the application source file:
3741 . Build the application:
3746 $ gcc -o app app.o -ldl -L. -lemon
3750 To run the application:
3752 * Start the application:
3762 The tracepoint provider package shared object is preloaded before the
3765 The application dynamically loads the instrumented user library.
3767 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3770 include::../common/ust-sit-step-tp-so.txt[]
3772 To build the instrumented user library:
3774 . In path:{emon.c}, before including path:{tpp.h}, add the
3780 #define LTTNG_UST_TRACEPOINT_DEFINE
3781 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3785 . Compile the user library source file:
3790 $ gcc -I. -fpic -c emon.c
3794 . Build the user library shared object:
3799 $ gcc -shared -o libemon.so emon.o -ldl
3803 To build the application:
3805 . Compile the application source file:
3814 . Build the application:
3819 $ gcc -o app app.o -L. -lemon
3823 To run the application with tracing support:
3825 * Preload the tracepoint provider package shared object and
3826 start the application:
3831 $ LD_PRELOAD=./libtpp.so ./app
3835 To run the application without tracing support:
3837 * Start the application:
3847 The application is statically linked with the tracepoint provider
3848 package object file.
3850 The application is linked with the instrumented user library.
3852 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3855 include::../common/ust-sit-step-tp-o.txt[]
3857 To build the instrumented user library:
3859 . In path:{emon.c}, before including path:{tpp.h}, add the
3865 #define LTTNG_UST_TRACEPOINT_DEFINE
3869 . Compile the user library source file:
3874 $ gcc -I. -fpic -c emon.c
3878 . Build the user library shared object:
3883 $ gcc -shared -o libemon.so emon.o
3887 To build the application:
3889 . Compile the application source file:
3898 . Build the application:
3903 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3907 To run the instrumented application:
3909 * Start the application:
3919 The application is statically linked with the tracepoint provider
3920 package object file.
3922 The application dynamically loads the instrumented user library.
3924 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3927 include::../common/ust-sit-step-tp-o.txt[]
3929 To build the application:
3931 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3936 #define LTTNG_UST_TRACEPOINT_DEFINE
3940 . Compile the application source file:
3949 . Build the application:
3954 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3959 The `--export-dynamic` option passed to the linker is necessary for the
3960 dynamically loaded library to ``see'' the tracepoint symbols defined in
3963 To build the instrumented user library:
3965 . Compile the user library source file:
3970 $ gcc -I. -fpic -c emon.c
3974 . Build the user library shared object:
3979 $ gcc -shared -o libemon.so emon.o
3983 To run the application:
3985 * Start the application:
3996 [[using-lttng-ust-with-daemons]]
3997 ===== Use noch:{LTTng-UST} with daemons
3999 If your instrumented application calls man:fork(2), man:clone(2),
4000 or BSD's man:rfork(2), without a following man:exec(3)-family
4001 system call, you must preload the path:{liblttng-ust-fork.so} shared
4002 object when you start the application.
4006 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
4009 If your tracepoint provider package is
4010 a shared library which you also preload, you must put both
4011 shared objects in env:LD_PRELOAD:
4015 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
4021 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
4023 If your instrumented application closes one or more file descriptors
4024 which it did not open itself, you must preload the
4025 path:{liblttng-ust-fd.so} shared object when you start the application:
4029 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
4032 Typical use cases include closing all the file descriptors after
4033 man:fork(2) or man:rfork(2) and buggy applications doing
4037 [[lttng-ust-pkg-config]]
4038 ===== Use noch:{pkg-config}
4040 On some distributions, LTTng-UST ships with a
4041 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
4042 metadata file. If this is your case, then use cmd:pkg-config to
4043 build an application on the command line:
4047 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
4051 [[instrumenting-32-bit-app-on-64-bit-system]]
4052 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
4054 In order to trace a 32-bit application running on a 64-bit system,
4055 LTTng must use a dedicated 32-bit
4056 <<lttng-consumerd,consumer daemon>>.
4058 The following steps show how to build and install a 32-bit consumer
4059 daemon, which is _not_ part of the default 64-bit LTTng build, how to
4060 build and install the 32-bit LTTng-UST libraries, and how to build and
4061 link an instrumented 32-bit application in that context.
4063 To build a 32-bit instrumented application for a 64-bit target system,
4064 assuming you have a fresh target system with no installed Userspace RCU
4067 . Download, build, and install a 32-bit version of Userspace RCU:
4072 $ cd $(mktemp -d) &&
4073 wget https://lttng.org/files/urcu/userspace-rcu-latest-0.13.tar.bz2 &&
4074 tar -xf userspace-rcu-latest-0.13.tar.bz2 &&
4075 cd userspace-rcu-0.13.* &&
4076 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
4078 sudo make install &&
4083 . Using the package manager of your distribution, or from source,
4084 install the 32-bit versions of the following dependencies of
4085 LTTng-tools and LTTng-UST:
4088 * https://sourceforge.net/projects/libuuid/[libuuid]
4089 * https://directory.fsf.org/wiki/Popt[popt]
4090 * https://www.xmlsoft.org/[libxml2]
4091 * **Optional**: https://github.com/numactl/numactl[numactl]
4094 . Download, build, and install a 32-bit version of the latest
4095 LTTng-UST{nbsp}{revision}:
4100 $ cd $(mktemp -d) &&
4101 wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
4102 tar -xf lttng-ust-latest-2.13.tar.bz2 &&
4103 cd lttng-ust-2.13.* &&
4104 ./configure --libdir=/usr/local/lib32 \
4105 CFLAGS=-m32 CXXFLAGS=-m32 \
4106 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4108 sudo make install &&
4113 Add `--disable-numa` to `./configure` if you don't have
4114 https://github.com/numactl/numactl[numactl].
4118 Depending on your distribution, 32-bit libraries could be installed at a
4119 different location than `/usr/lib32`. For example, Debian is known to
4120 install some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4122 In this case, make sure to set `LDFLAGS` to all the
4123 relevant 32-bit library paths, for example:
4127 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4131 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4132 the 32-bit consumer daemon:
4137 $ cd $(mktemp -d) &&
4138 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4139 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4140 cd lttng-tools-2.13.* &&
4141 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
4142 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
4143 --disable-bin-lttng --disable-bin-lttng-crash \
4144 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
4146 cd src/bin/lttng-consumerd &&
4147 sudo make install &&
4152 . From your distribution or from source, <<installing-lttng,install>>
4153 the 64-bit versions of LTTng-UST and Userspace RCU.
4155 . Download, build, and install the 64-bit version of the
4156 latest LTTng-tools{nbsp}{revision}:
4161 $ cd $(mktemp -d) &&
4162 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4163 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4164 cd lttng-tools-2.13.* &&
4165 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4166 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4168 sudo make install &&
4173 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4174 when linking your 32-bit application:
4177 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4178 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4181 For example, let's rebuild the quick start example in
4182 ``<<tracing-your-own-user-application,Record user application events>>''
4183 as an instrumented 32-bit application:
4188 $ gcc -m32 -c -I. hello-tp.c
4189 $ gcc -m32 -c hello.c
4190 $ gcc -m32 -o hello hello.o hello-tp.o \
4191 -L/usr/lib32 -L/usr/local/lib32 \
4192 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4197 No special action is required to execute the 32-bit application and
4198 for LTTng to trace it: use the command-line man:lttng(1) tool as usual.
4203 ==== Use `lttng_ust_tracef()`
4205 man:lttng_ust_tracef(3) is a small LTTng-UST API designed for quick,
4206 man:printf(3)-like instrumentation without the burden of
4207 <<tracepoint-provider,creating>> and
4208 <<building-tracepoint-providers-and-user-application,building>>
4209 a tracepoint provider package.
4211 To use `lttng_ust_tracef()` in your application:
4213 . In the C or $$C++$$ source files where you need to use
4214 `lttng_ust_tracef()`, include `<lttng/tracef.h>`:
4219 #include <lttng/tracef.h>
4223 . In the source code of the application, use `lttng_ust_tracef()` like
4224 you would use man:printf(3):
4231 lttng_ust_tracef("my message: %d (%s)", my_integer, my_string);
4237 . Link your application with `liblttng-ust`:
4242 $ gcc -o app app.c -llttng-ust
4246 To record the events that `lttng_ust_tracef()` calls emit:
4248 * <<enabling-disabling-events,Create a recording event rule>> which
4249 matches user space events named `lttng_ust_tracef:*`:
4254 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
4259 .Limitations of `lttng_ust_tracef()`
4261 The `lttng_ust_tracef()` utility function was developed to make user
4262 space tracing super simple, albeit with notable disadvantages compared
4263 to <<defining-tracepoints,user-defined tracepoints>>:
4265 * All the created events have the same tracepoint provider and
4266 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4267 * There's no static type checking.
4268 * The only event record field you actually get, named `msg`, is a string
4269 potentially containing the values you passed to `lttng_ust_tracef()`
4270 using your own format string. This also means that you can't filter
4271 events with a custom expression at run time because there are no
4273 * Since `lttng_ust_tracef()` uses the man:vasprintf(3) function of the
4274 C{nbsp}standard library behind the scenes to format the strings at run
4275 time, its expected performance is lower than with user-defined
4276 tracepoints, which don't require a conversion to a string.
4278 Taking this into consideration, `lttng_ust_tracef()` is useful for some
4279 quick prototyping and debugging, but you shouldn't consider it for any
4280 permanent and serious applicative instrumentation.
4286 ==== Use `lttng_ust_tracelog()`
4288 The man:tracelog(3) API is very similar to
4289 <<tracef,`lttng_ust_tracef()`>>, with the difference that it accepts an
4290 additional log level parameter.
4292 The goal of `lttng_ust_tracelog()` is to ease the migration from logging
4295 To use `lttng_ust_tracelog()` in your application:
4297 . In the C or $$C++$$ source files where you need to use `tracelog()`,
4298 include `<lttng/tracelog.h>`:
4303 #include <lttng/tracelog.h>
4307 . In the source code of the application, use `lttng_ust_tracelog()` like
4308 you would use man:printf(3), except for the first parameter which is
4316 tracelog(LTTNG_UST_TRACEPOINT_LOGLEVEL_WARNING,
4317 "my message: %d (%s)", my_integer, my_string);
4323 See man:lttng-ust(3) for a list of available log level names.
4325 . Link your application with `liblttng-ust`:
4330 $ gcc -o app app.c -llttng-ust
4334 To record the events that `lttng_ust_tracelog()` calls emit with a log
4335 level _at least as severe as_ a specific log level:
4337 * <<enabling-disabling-events,Create a recording event rule>> which
4338 matches user space tracepoint events named `lttng_ust_tracelog:*` and
4339 with some minimum level of severity:
4344 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4349 To record the events that `lttng_ust_tracelog()` calls emit with a
4350 _specific log level_:
4352 * Create a recording event rule which matches tracepoint events named
4353 `lttng_ust_tracelog:*` and with a specific log level:
4358 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4359 --loglevel-only=INFO
4364 [[prebuilt-ust-helpers]]
4365 === Load a prebuilt user space tracing helper
4367 The LTTng-UST package provides a few helpers in the form of preloadable
4368 shared objects which automatically instrument system functions and
4371 The helper shared objects are normally found in dir:{/usr/lib}. If you
4372 built LTTng-UST <<building-from-source,from source>>, they're probably
4373 located in dir:{/usr/local/lib}.
4375 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4378 path:{liblttng-ust-libc-wrapper.so}::
4379 path:{liblttng-ust-pthread-wrapper.so}::
4380 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4381 memory and POSIX threads function tracing>>.
4383 path:{liblttng-ust-cyg-profile.so}::
4384 path:{liblttng-ust-cyg-profile-fast.so}::
4385 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4387 path:{liblttng-ust-dl.so}::
4388 <<liblttng-ust-dl,Dynamic linker tracing>>.
4390 To use a user space tracing helper with any user application:
4392 * Preload the helper shared object when you start the application:
4397 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4401 You can preload more than one helper:
4406 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4412 [[liblttng-ust-libc-pthread-wrapper]]
4413 ==== Instrument C standard library memory and POSIX threads functions
4415 The path:{liblttng-ust-libc-wrapper.so} and
4416 path:{liblttng-ust-pthread-wrapper.so} helpers
4417 add instrumentation to some C standard library and POSIX
4421 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4423 |TP provider name |TP name |Instrumented function
4425 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4426 |`calloc` |man:calloc(3)
4427 |`realloc` |man:realloc(3)
4428 |`free` |man:free(3)
4429 |`memalign` |man:memalign(3)
4430 |`posix_memalign` |man:posix_memalign(3)
4434 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4436 |TP provider name |TP name |Instrumented function
4438 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4439 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4440 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4441 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4444 When you preload the shared object, it replaces the functions listed
4445 in the previous tables by wrappers which contain tracepoints and call
4446 the replaced functions.
4449 [[liblttng-ust-cyg-profile]]
4450 ==== Instrument function entry and exit
4452 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4453 to the entry and exit points of functions.
4455 man:gcc(1) and man:clang(1) have an option named
4456 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4457 which generates instrumentation calls for entry and exit to functions.
4458 The LTTng-UST function tracing helpers,
4459 path:{liblttng-ust-cyg-profile.so} and
4460 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
4461 to add tracepoints to the two generated functions (which contain
4462 `cyg_profile` in their names, hence the name of the helper).
4464 To use the LTTng-UST function tracing helper, the source files to
4465 instrument must be built using the `-finstrument-functions` compiler
4468 There are two versions of the LTTng-UST function tracing helper:
4470 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4471 that you should only use when it can be _guaranteed_ that the
4472 complete event stream is recorded without any lost event record.
4473 Any kind of duplicate information is left out.
4475 Assuming no event record is lost, having only the function addresses on
4476 entry is enough to create a call graph, since an event record always
4477 contains the ID of the CPU that generated it.
4479 Use a tool like man:addr2line(1) to convert function addresses back to
4480 source file names and line numbers.
4482 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4483 which also works in use cases where event records might get discarded or
4484 not recorded from application startup.
4485 In these cases, the trace analyzer needs more information to be
4486 able to reconstruct the program flow.
4488 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4489 points of this helper.
4491 All the tracepoints that this helper provides have the log level
4492 `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4494 TIP: It's sometimes a good idea to limit the number of source files that
4495 you compile with the `-finstrument-functions` option to prevent LTTng
4496 from writing an excessive amount of trace data at run time. When using
4498 `-finstrument-functions-exclude-function-list` option to avoid
4499 instrument entries and exits of specific function names.
4504 ==== Instrument the dynamic linker
4506 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4507 man:dlopen(3) and man:dlclose(3) function calls.
4509 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4514 [[java-application]]
4515 === Instrument a Java application
4517 You can instrument any Java application which uses one of the following
4520 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4521 (JUL) core logging facilities.
4523 * https://logging.apache.org/log4j/1.2/[**Apache log4j{nbsp}1.2**], since
4524 LTTng{nbsp}2.6. Note that Apache Log4j{nbsp}2 isn't supported.
4527 .LTTng-UST Java agent imported by a Java application.
4528 image::java-app.png[]
4530 Note that the methods described below are new in LTTng{nbsp}2.8.
4531 Previous LTTng versions use another technique.
4533 NOTE: We use https://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4534 and https://ci.lttng.org/[continuous integration], thus this version is
4535 directly supported. However, the LTTng-UST Java agent is also tested
4536 with OpenJDK{nbsp}7.
4541 ==== Use the LTTng-UST Java agent for `java.util.logging`
4543 To use the LTTng-UST Java agent in a Java application which uses
4544 `java.util.logging` (JUL):
4546 . In the source code of the Java application, import the LTTng-UST log
4547 handler package for `java.util.logging`:
4552 import org.lttng.ust.agent.jul.LttngLogHandler;
4556 . Create an LTTng-UST `java.util.logging` log handler:
4561 Handler lttngUstLogHandler = new LttngLogHandler();
4565 . Add this handler to the `java.util.logging` loggers which should emit
4571 Logger myLogger = Logger.getLogger("some-logger");
4573 myLogger.addHandler(lttngUstLogHandler);
4577 . Use `java.util.logging` log statements and configuration as usual.
4578 The loggers with an attached LTTng-UST log handler can emit
4581 . Before exiting the application, remove the LTTng-UST log handler from
4582 the loggers attached to it and call its `close()` method:
4587 myLogger.removeHandler(lttngUstLogHandler);
4588 lttngUstLogHandler.close();
4592 This isn't strictly necessary, but it's recommended for a clean
4593 disposal of the resources of the handler.
4595 . Include the common and JUL-specific JAR files of the LTTng-UST Java agent,
4596 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4598 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4599 path] when you build the Java application.
4601 The JAR files are typically located in dir:{/usr/share/java}.
4603 IMPORTANT: The LTTng-UST Java agent must be
4604 <<installing-lttng,installed>> for the logging framework your
4607 .Use the LTTng-UST Java agent for `java.util.logging`.
4612 import java.io.IOException;
4613 import java.util.logging.Handler;
4614 import java.util.logging.Logger;
4615 import org.lttng.ust.agent.jul.LttngLogHandler;
4619 private static final int answer = 42;
4621 public static void main(String[] argv) throws Exception
4624 Logger logger = Logger.getLogger("jello");
4626 // Create an LTTng-UST log handler
4627 Handler lttngUstLogHandler = new LttngLogHandler();
4629 // Add the LTTng-UST log handler to our logger
4630 logger.addHandler(lttngUstLogHandler);
4633 logger.info("some info");
4634 logger.warning("some warning");
4636 logger.finer("finer information; the answer is " + answer);
4638 logger.severe("error!");
4640 // Not mandatory, but cleaner
4641 logger.removeHandler(lttngUstLogHandler);
4642 lttngUstLogHandler.close();
4651 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4654 <<creating-destroying-tracing-sessions,Create a recording session>>,
4655 <<enabling-disabling-events,create a recording event rule>> matching JUL
4656 events named `jello`, and <<basic-tracing-session-control,start
4662 $ lttng enable-event --jul jello
4666 Run the compiled class:
4670 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4673 <<basic-tracing-session-control,Stop recording>> and inspect the
4683 In the resulting trace, an <<event,event record>> which a Java
4684 application using `java.util.logging` generated is named
4685 `lttng_jul:event` and has the following fields:
4694 Name of the class in which the log statement was executed.
4697 Name of the method in which the log statement was executed.
4700 Logging time (timestamp in milliseconds).
4703 Log level integer value.
4706 ID of the thread in which the log statement was executed.
4708 Use the opt:lttng-enable-event(1):--loglevel or
4709 opt:lttng-enable-event(1):--loglevel-only option of the
4710 man:lttng-enable-event(1) command to target a range of
4711 `java.util.logging` log levels or a specific `java.util.logging` log
4717 ==== Use the LTTng-UST Java agent for Apache log4j
4719 To use the LTTng-UST Java agent in a Java application which uses
4720 Apache log4j{nbsp}1.2:
4722 . In the source code of the Java application, import the LTTng-UST log
4723 appender package for Apache log4j:
4728 import org.lttng.ust.agent.log4j.LttngLogAppender;
4732 . Create an LTTng-UST log4j log appender:
4737 Appender lttngUstLogAppender = new LttngLogAppender();
4741 . Add this appender to the log4j loggers which should emit LTTng events:
4746 Logger myLogger = Logger.getLogger("some-logger");
4748 myLogger.addAppender(lttngUstLogAppender);
4752 . Use Apache log4j log statements and configuration as usual. The
4753 loggers with an attached LTTng-UST log appender can emit LTTng events.
4755 . Before exiting the application, remove the LTTng-UST log appender from
4756 the loggers attached to it and call its `close()` method:
4761 myLogger.removeAppender(lttngUstLogAppender);
4762 lttngUstLogAppender.close();
4766 This isn't strictly necessary, but it's recommended for a clean
4767 disposal of the resources of the appender.
4769 . Include the common and log4j-specific JAR
4770 files of the LTTng-UST Java agent, path:{lttng-ust-agent-common.jar} and
4771 path:{lttng-ust-agent-log4j.jar}, in the
4772 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4773 path] when you build the Java application.
4775 The JAR files are typically located in dir:{/usr/share/java}.
4777 IMPORTANT: The LTTng-UST Java agent must be
4778 <<installing-lttng,installed>> for the logging framework your
4781 .Use the LTTng-UST Java agent for Apache log4j.
4786 import org.apache.log4j.Appender;
4787 import org.apache.log4j.Logger;
4788 import org.lttng.ust.agent.log4j.LttngLogAppender;
4792 private static final int answer = 42;
4794 public static void main(String[] argv) throws Exception
4797 Logger logger = Logger.getLogger("jello");
4799 // Create an LTTng-UST log appender
4800 Appender lttngUstLogAppender = new LttngLogAppender();
4802 // Add the LTTng-UST log appender to our logger
4803 logger.addAppender(lttngUstLogAppender);
4806 logger.info("some info");
4807 logger.warn("some warning");
4809 logger.debug("debug information; the answer is " + answer);
4811 logger.fatal("error!");
4813 // Not mandatory, but cleaner
4814 logger.removeAppender(lttngUstLogAppender);
4815 lttngUstLogAppender.close();
4821 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4826 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4829 <<creating-destroying-tracing-sessions,Create a recording session>>,
4830 <<enabling-disabling-events,create a recording event rule>> matching
4831 log4j events named `jello`, and <<basic-tracing-session-control,start
4837 $ lttng enable-event --log4j jello
4841 Run the compiled class:
4845 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4848 <<basic-tracing-session-control,Stop recording>> and inspect the
4858 In the resulting trace, an <<event,event record>> which a Java
4859 application using log4j generated is named `lttng_log4j:event` and
4860 has the following fields:
4869 Name of the class in which the log statement was executed.
4872 Name of the method in which the log statement was executed.
4875 Name of the file in which the executed log statement is located.
4878 Line number at which the log statement was executed.
4884 Log level integer value.
4887 Name of the Java thread in which the log statement was executed.
4889 Use the opt:lttng-enable-event(1):--loglevel or
4890 opt:lttng-enable-event(1):--loglevel-only option of the
4891 man:lttng-enable-event(1) command to target a range of Apache log4j
4892 log levels or a specific log4j log level.
4896 [[java-application-context]]
4897 ==== Provide application-specific context fields in a Java application
4899 A Java application-specific context field is a piece of state which
4900 the Java application provides. You can <<adding-context,add>> such
4901 a context field to be recorded, using the
4902 man:lttng-add-context(1) command, to each <<event,event record>>
4903 which the log statements of this application produce.
4905 For example, a given object might have a current request ID variable.
4906 You can create a context information retriever for this object and
4907 assign a name to this current request ID. You can then, using the
4908 man:lttng-add-context(1) command, add this context field by name so that
4909 LTTng writes it to the event records of a given `java.util.logging` or
4910 log4j <<channel,channel>>.
4912 To provide application-specific context fields in a Java application:
4914 . In the source code of the Java application, import the LTTng-UST
4915 Java agent context classes and interfaces:
4920 import org.lttng.ust.agent.context.ContextInfoManager;
4921 import org.lttng.ust.agent.context.IContextInfoRetriever;
4925 . Create a context information retriever class, that is, a class which
4926 implements the `IContextInfoRetriever` interface:
4931 class MyContextInfoRetriever implements IContextInfoRetriever
4934 public Object retrieveContextInfo(String key)
4936 if (key.equals("intCtx")) {
4938 } else if (key.equals("strContext")) {
4939 return "context value!";
4948 This `retrieveContextInfo()` method is the only member of the
4949 `IContextInfoRetriever` interface. Its role is to return the current
4950 value of a state by name to create a context field. The names of the
4951 context fields and which state variables they return depends on your
4954 All primitive types and objects are supported as context fields.
4955 When `retrieveContextInfo()` returns an object, the context field
4956 serializer calls its `toString()` method to add a string field to
4957 event records. The method can also return `null`, which means that
4958 no context field is available for the required name.
4960 . Register an instance of your context information retriever class to
4961 the context information manager singleton:
4966 IContextInfoRetriever cir = new MyContextInfoRetriever();
4967 ContextInfoManager cim = ContextInfoManager.getInstance();
4968 cim.registerContextInfoRetriever("retrieverName", cir);
4972 . Before exiting the application, remove your context information
4973 retriever from the context information manager singleton:
4978 ContextInfoManager cim = ContextInfoManager.getInstance();
4979 cim.unregisterContextInfoRetriever("retrieverName");
4983 This isn't strictly necessary, but it's recommended for a clean
4984 disposal of some resources of the manager.
4986 . Build your Java application with LTTng-UST Java agent support as
4987 usual, following the procedure for either the
4988 <<jul,`java.util.logging`>> or <<log4j,Apache log4j>> framework.
4990 .Provide application-specific context fields in a Java application.
4995 import java.util.logging.Handler;
4996 import java.util.logging.Logger;
4997 import org.lttng.ust.agent.jul.LttngLogHandler;
4998 import org.lttng.ust.agent.context.ContextInfoManager;
4999 import org.lttng.ust.agent.context.IContextInfoRetriever;
5003 // Our context information retriever class
5004 private static class MyContextInfoRetriever
5005 implements IContextInfoRetriever
5008 public Object retrieveContextInfo(String key) {
5009 if (key.equals("intCtx")) {
5011 } else if (key.equals("strContext")) {
5012 return "context value!";
5019 private static final int answer = 42;
5021 public static void main(String args[]) throws Exception
5023 // Get the context information manager instance
5024 ContextInfoManager cim = ContextInfoManager.getInstance();
5026 // Create and register our context information retriever
5027 IContextInfoRetriever cir = new MyContextInfoRetriever();
5028 cim.registerContextInfoRetriever("myRetriever", cir);
5031 Logger logger = Logger.getLogger("jello");
5033 // Create an LTTng-UST log handler
5034 Handler lttngUstLogHandler = new LttngLogHandler();
5036 // Add the LTTng-UST log handler to our logger
5037 logger.addHandler(lttngUstLogHandler);
5040 logger.info("some info");
5041 logger.warning("some warning");
5043 logger.finer("finer information; the answer is " + answer);
5045 logger.severe("error!");
5047 // Not mandatory, but cleaner
5048 logger.removeHandler(lttngUstLogHandler);
5049 lttngUstLogHandler.close();
5050 cim.unregisterContextInfoRetriever("myRetriever");
5059 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
5062 <<creating-destroying-tracing-sessions,Create a recording session>> and
5063 <<enabling-disabling-events,create a recording event rule>> matching
5064 `java.util.logging` events named `jello`:
5069 $ lttng enable-event --jul jello
5072 <<adding-context,Add the application-specific context fields>> to be
5073 recorded to the event records of the `java.util.logging` channel:
5077 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
5078 $ lttng add-context --jul --type='$app.myRetriever:strContext'
5081 <<basic-tracing-session-control,Start recording>>:
5088 Run the compiled class:
5092 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
5095 <<basic-tracing-session-control,Stop recording>> and inspect the
5107 [[python-application]]
5108 === Instrument a Python application
5110 You can instrument a Python{nbsp}2 or Python{nbsp}3 application which
5112 https://docs.python.org/3/library/logging.html[`logging`] package.
5114 Each log statement creates an LTTng event once the application module
5115 imports the <<lttng-ust-agents,LTTng-UST Python agent>> package.
5118 .A Python application importing the LTTng-UST Python agent.
5119 image::python-app.png[]
5121 To use the LTTng-UST Python agent:
5123 . In the source code of the Python application, import the LTTng-UST
5133 The LTTng-UST Python agent automatically adds its logging handler to the
5134 root logger at import time.
5136 A log statement that the application executes before this import doesn't
5137 create an LTTng event.
5139 IMPORTANT: The LTTng-UST Python agent must be
5140 <<installing-lttng,installed>>.
5142 . Use log statements and logging configuration as usual.
5143 Since the LTTng-UST Python agent adds a handler to the _root_
5144 logger, any log statement from any logger can emit an LTTng event.
5146 .Use the LTTng-UST Python agent.
5157 logging.basicConfig()
5158 logger = logging.getLogger('my-logger')
5161 logger.debug('debug message')
5162 logger.info('info message')
5163 logger.warn('warn message')
5164 logger.error('error message')
5165 logger.critical('critical message')
5169 if __name__ == '__main__':
5173 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
5174 logging handler which prints to the standard error stream, isn't
5175 strictly required for LTTng-UST tracing to work, but in versions of
5176 Python preceding{nbsp}3.2, you could see a warning message which
5177 indicates that no handler exists for the logger `my-logger`.
5179 <<creating-destroying-tracing-sessions,Create a recording session>>,
5180 <<enabling-disabling-events,create a recording event rule>> matching
5181 Python logging events named `my-logger`, and
5182 <<basic-tracing-session-control,start recording>>:
5187 $ lttng enable-event --python my-logger
5191 Run the Python script:
5198 <<basic-tracing-session-control,Stop recording>> and inspect the
5208 In the resulting trace, an <<event,event record>> which a Python
5209 application generated is named `lttng_python:event` and has the
5213 Logging time (string).
5222 Name of the function in which the log statement was executed.
5225 Line number at which the log statement was executed.
5228 Log level integer value.
5231 ID of the Python thread in which the log statement was executed.
5234 Name of the Python thread in which the log statement was executed.
5236 Use the opt:lttng-enable-event(1):--loglevel or
5237 opt:lttng-enable-event(1):--loglevel-only option of the
5238 man:lttng-enable-event(1) command to target a range of Python log levels
5239 or a specific Python log level.
5241 When an application imports the LTTng-UST Python agent, the agent tries
5242 to register to a <<lttng-sessiond,session daemon>>. Note that you must
5243 <<start-sessiond,start the session daemon>> _before_ you run the Python
5244 application. If a session daemon is found, the agent tries to register
5245 to it during five seconds, after which the application continues
5246 without LTTng tracing support. Override this timeout value with
5247 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
5250 If the session daemon stops while a Python application with an imported
5251 LTTng-UST Python agent runs, the agent retries to connect and to
5252 register to a session daemon every three seconds. Override this
5253 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
5258 [[proc-lttng-logger-abi]]
5259 === Use the LTTng logger
5261 The `lttng-tracer` Linux kernel module, part of
5262 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger files
5263 path:{/proc/lttng-logger} and path:{/dev/lttng-logger} (since
5264 LTTng{nbsp}2.11) when it's loaded. Any application can write text data
5265 to any of those files to create one or more LTTng events.
5268 .An application writes to the LTTng logger file to create one or more LTTng events.
5269 image::lttng-logger.png[]
5271 The LTTng logger is the quickest method--not the most efficient,
5272 however--to add instrumentation to an application. It's designed
5273 mostly to instrument shell scripts:
5277 $ echo "Some message, some $variable" > /dev/lttng-logger
5280 Any event that the LTTng logger creates is named `lttng_logger` and
5281 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5282 other instrumentation points in the kernel tracing domain, **any Unix
5283 user** can <<enabling-disabling-events,create a recording event rule>>
5284 which matches events named `lttng_logger`, not only the root user or
5285 users in the <<tracing-group,tracing group>>.
5287 To use the LTTng logger:
5289 * From any application, write text data to the path:{/dev/lttng-logger}
5292 The `msg` field of `lttng_logger` event records contains the
5295 NOTE: The maximum message length of an LTTng logger event is
5296 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5297 than one event to contain the remaining data.
5299 You shouldn't use the LTTng logger to trace a user application which you
5300 can instrument in a more efficient way, namely:
5302 * <<c-application,C and $$C++$$ applications>>.
5303 * <<java-application,Java applications>>.
5304 * <<python-application,Python applications>>.
5306 .Use the LTTng logger.
5311 echo 'Hello, World!' > /dev/lttng-logger
5313 df --human-readable --print-type / > /dev/lttng-logger
5316 <<creating-destroying-tracing-sessions,Create a recording session>>,
5317 <<enabling-disabling-events,create a recording event rule>> matching
5318 Linux kernel tracepoint events named `lttng_logger`, and
5319 <<basic-tracing-session-control,start recording>>:
5324 $ lttng enable-event --kernel lttng_logger
5328 Run the Bash script:
5335 <<basic-tracing-session-control,Stop recording>> and inspect the recorded
5346 [[instrumenting-linux-kernel]]
5347 === Instrument a Linux kernel image or module
5349 NOTE: This section shows how to _add_ instrumentation points to the
5350 Linux kernel. The subsystems of the kernel are already thoroughly
5351 instrumented at strategic points for LTTng when you
5352 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5356 [[linux-add-lttng-layer]]
5357 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5359 This section shows how to add an LTTng layer to existing ftrace
5360 instrumentation using the `TRACE_EVENT()` API.
5362 This section doesn't document the `TRACE_EVENT()` macro. Read the
5363 following articles to learn more about this API:
5365 * https://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part{nbsp}1)]
5366 * https://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part{nbsp}2)]
5367 * https://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part{nbsp}3)]
5369 The following procedure assumes that your ftrace tracepoints are
5370 correctly defined in their own header and that they're created in
5371 one source file using the `CREATE_TRACE_POINTS` definition.
5373 To add an LTTng layer over an existing ftrace tracepoint:
5375 . Make sure the following kernel configuration options are
5381 * `CONFIG_HIGH_RES_TIMERS`
5382 * `CONFIG_TRACEPOINTS`
5385 . Build the Linux source tree with your custom ftrace tracepoints.
5386 . Boot the resulting Linux image on your target system.
5388 Confirm that the tracepoints exist by looking for their names in the
5389 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5390 is your subsystem name.
5392 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5397 $ cd $(mktemp -d) &&
5398 wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
5399 tar -xf lttng-modules-latest-2.13.tar.bz2 &&
5400 cd lttng-modules-2.13.*
5404 . In dir:{instrumentation/events/lttng-module}, relative to the root
5405 of the LTTng-modules source tree, create a header file named
5406 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5407 LTTng-modules tracepoint definitions using the LTTng-modules
5410 Start with this template:
5414 .path:{instrumentation/events/lttng-module/my_subsys.h}
5417 #define TRACE_SYSTEM my_subsys
5419 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5420 #define _LTTNG_MY_SUBSYS_H
5422 #include "../../../probes/lttng-tracepoint-event.h"
5423 #include <linux/tracepoint.h>
5425 LTTNG_TRACEPOINT_EVENT(
5427 * Format is identical to the TRACE_EVENT() version for the three
5428 * following macro parameters:
5431 TP_PROTO(int my_int, const char *my_string),
5432 TP_ARGS(my_int, my_string),
5434 /* LTTng-modules specific macros */
5436 ctf_integer(int, my_int_field, my_int)
5437 ctf_string(my_bar_field, my_bar)
5441 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5443 #include "../../../probes/define_trace.h"
5447 The entries in the `TP_FIELDS()` section are the list of fields for the
5448 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5449 the `TRACE_EVENT()` ftrace macro.
5451 See ``<<lttng-modules-tp-fields,Tracepoint fields macros>>'' for a
5452 complete description of the available `ctf_*()` macros.
5454 . Create the kernel module C{nbsp}source file of the LTTng-modules
5455 probe, +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5460 .path:{probes/lttng-probe-my-subsys.c}
5462 #include <linux/module.h>
5463 #include "../lttng-tracer.h"
5466 * Build-time verification of mismatch between mainline
5467 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5468 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5470 #include <trace/events/my_subsys.h>
5472 /* Create LTTng tracepoint probes */
5473 #define LTTNG_PACKAGE_BUILD
5474 #define CREATE_TRACE_POINTS
5475 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5477 #include "../instrumentation/events/lttng-module/my_subsys.h"
5479 MODULE_LICENSE("GPL and additional rights");
5480 MODULE_AUTHOR("Your name <your-email>");
5481 MODULE_DESCRIPTION("LTTng my_subsys probes");
5482 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5483 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5484 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5485 LTTNG_MODULES_EXTRAVERSION);
5489 . Edit path:{probes/KBuild} and add your new kernel module object
5490 next to the existing ones:
5494 .path:{probes/KBuild}
5498 obj-m += lttng-probe-module.o
5499 obj-m += lttng-probe-power.o
5501 obj-m += lttng-probe-my-subsys.o
5507 . Build and install the LTTng kernel modules:
5512 $ make KERNELDIR=/path/to/linux
5513 # make modules_install && depmod -a
5517 Replace `/path/to/linux` with the path to the Linux source tree where
5518 you defined and used tracepoints with the `TRACE_EVENT()` ftrace macro.
5520 Note that you can also use the
5521 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5522 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5523 C{nbsp}code that need to be executed before LTTng records the event
5526 The best way to learn how to use the previous LTTng-modules macros is to
5527 inspect the existing LTTng-modules tracepoint definitions in the
5528 dir:{instrumentation/events/lttng-module} header files. Compare them
5529 with the Linux kernel mainline versions in the
5530 dir:{include/trace/events} directory of the Linux source tree.
5534 [[lttng-tracepoint-event-code]]
5535 ===== Use custom C code to access the data for tracepoint fields
5537 Although we recommended to always use the
5538 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5539 the arguments and fields of an LTTng-modules tracepoint when possible,
5540 sometimes you need a more complex process to access the data that the
5541 tracer records as event record fields. In other words, you need local
5542 variables and multiple C{nbsp}statements instead of simple
5543 argument-based expressions that you pass to the
5544 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5546 Use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5547 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5548 a block of C{nbsp}code to be executed before LTTng records the fields.
5549 The structure of this macro is:
5552 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5554 LTTNG_TRACEPOINT_EVENT_CODE(
5556 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5557 * version for the following three macro parameters:
5560 TP_PROTO(int my_int, const char *my_string),
5561 TP_ARGS(my_int, my_string),
5563 /* Declarations of custom local variables */
5566 unsigned long b = 0;
5567 const char *name = "(undefined)";
5568 struct my_struct *my_struct;
5572 * Custom code which uses both tracepoint arguments
5573 * (in TP_ARGS()) and local variables (in TP_locvar()).
5575 * Local variables are actually members of a structure pointed
5576 * to by the special variable tp_locvar.
5580 tp_locvar->a = my_int + 17;
5581 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5582 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5583 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5584 put_my_struct(tp_locvar->my_struct);
5593 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5594 * version for this, except that tp_locvar members can be
5595 * used in the argument expression parameters of
5596 * the ctf_*() macros.
5599 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5600 ctf_integer(int, my_struct_a, tp_locvar->a)
5601 ctf_string(my_string_field, my_string)
5602 ctf_string(my_struct_name, tp_locvar->name)
5607 IMPORTANT: The C code defined in `TP_code()` must not have any side
5608 effects when executed. In particular, the code must not allocate
5609 memory or get resources without deallocating this memory or putting
5610 those resources afterwards.
5613 [[instrumenting-linux-kernel-tracing]]
5614 ==== Load and unload a custom probe kernel module
5616 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5617 kernel module>> in the kernel before it can emit LTTng events.
5619 To load the default probe kernel modules and a custom probe kernel
5622 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5623 probe modules to load when starting a root <<lttng-sessiond,session
5627 .Load the `my_subsys`, `usb`, and the default probe modules.
5631 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
5636 You only need to pass the subsystem name, not the whole kernel module
5639 To load _only_ a given custom probe kernel module:
5641 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5642 modules to load when starting a root session daemon:
5645 .Load only the `my_subsys` and `usb` probe modules.
5649 # lttng-sessiond --kmod-probes=my_subsys,usb
5654 To confirm that a probe module is loaded:
5661 $ lsmod | grep lttng_probe_usb
5665 To unload the loaded probe modules:
5667 * Kill the session daemon with `SIGTERM`:
5672 # pkill lttng-sessiond
5676 You can also use the `--remove` option of man:modprobe(8) if the session
5677 daemon terminates abnormally.
5680 [[controlling-tracing]]
5683 Once an application or a Linux kernel is <<instrumenting,instrumented>>
5684 for LTTng tracing, you can _trace_ it.
5686 In the LTTng context, _tracing_ means making sure that LTTng attempts to
5687 execute some action(s) when a CPU executes an instrumentation point.
5689 This section is divided in topics on how to use the various
5690 <<plumbing,components of LTTng>>, in particular the
5691 <<lttng-cli,cmd:lttng command-line tool>>, to _control_ the LTTng
5692 daemons and tracers.
5694 NOTE: In the following subsections, we refer to an man:lttng(1) command
5695 using its man page name. For example, instead of ``Run the `create`
5696 command to'', we write ``Run the man:lttng-create(1) command to''.
5700 === Start a session daemon
5702 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5703 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5706 You will see the following error when you run a command while no session
5710 Error: No session daemon is available
5713 The only command that automatically runs a session daemon is
5714 man:lttng-create(1), which you use to
5715 <<creating-destroying-tracing-sessions,create a recording session>>. While
5716 this could be your most used first operation, sometimes it's not. Some
5719 * <<list-instrumentation-points,List the available instrumentation points>>.
5720 * <<saving-loading-tracing-session,Load a recording session configuration>>.
5721 * <<add-event-rule-matches-trigger,Add a trigger>>.
5723 All the examples above don't require a recording session to operate on.
5725 [[tracing-group]] Each Unix user can have its own running session daemon
5726 to use the user space LTTng tracer. The session daemon that the `root`
5727 user starts is the only one allowed to control the LTTng kernel tracer.
5728 Members of the Unix _tracing group_ may connect to and control the root
5729 session daemon, even for user space tracing. See the ``Session daemon
5730 connection'' section of man:lttng(1) to learn more about the Unix
5733 To start a user session daemon:
5735 * Run man:lttng-sessiond(8):
5740 $ lttng-sessiond --daemonize
5744 To start the root session daemon:
5746 * Run man:lttng-sessiond(8) as the `root` user:
5751 # lttng-sessiond --daemonize
5755 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5756 start the session daemon in foreground.
5758 To stop a session daemon, kill its process (see man:kill(1)) with the
5759 standard `TERM` signal.
5761 Note that some Linux distributions could manage the LTTng session daemon
5762 as a service. In this case, we suggest that you use the service manager
5763 to start, restart, and stop session daemons.
5766 [[creating-destroying-tracing-sessions]]
5767 === Create and destroy a recording session
5769 Many LTTng control operations happen in the scope of a
5770 <<tracing-session,recording session>>, which is the dialogue between the
5771 <<lttng-sessiond,session daemon>> and you for everything related to
5772 <<event,event recording>>.
5774 To create a recording session with a generated name:
5776 * Use the man:lttng-create(1) command:
5785 The name of the created recording session is `auto` followed by the
5788 To create a recording session with a specific name:
5790 * Use the optional argument of the man:lttng-create(1) command:
5795 $ lttng create SESSION
5799 Replace +__SESSION__+ with your specific recording session name.
5801 In <<local-mode,local mode>>, LTTng writes the traces of a recording
5802 session to the +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+
5803 directory by default, where +__NAME__+ is the name of the recording
5804 session. Note that the env:LTTNG_HOME environment variable defaults to
5807 To output LTTng traces to a non-default location:
5809 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1)
5815 $ lttng create my-session --output=/tmp/some-directory
5819 You may create as many recording sessions as you wish.
5821 To list all the existing recording sessions for your Unix user, or for
5822 all users if your Unix user is `root`:
5824 * Use the man:lttng-list(1) command:
5833 [[cur-tracing-session]]When you create a recording session, the
5834 man:lttng-create(1) command sets it as the _current recording session_.
5835 The following man:lttng(1) commands operate on the current recording
5836 session when you don't specify one:
5838 [role="list-3-cols"]
5839 * man:lttng-add-context(1)
5840 * man:lttng-clear(1)
5841 * man:lttng-destroy(1)
5842 * man:lttng-disable-channel(1)
5843 * man:lttng-disable-event(1)
5844 * man:lttng-disable-rotation(1)
5845 * man:lttng-enable-channel(1)
5846 * man:lttng-enable-event(1)
5847 * man:lttng-enable-rotation(1)
5849 * man:lttng-regenerate(1)
5850 * man:lttng-rotate(1)
5852 * man:lttng-snapshot(1)
5853 * man:lttng-start(1)
5854 * man:lttng-status(1)
5856 * man:lttng-track(1)
5857 * man:lttng-untrack(1)
5860 To change the current recording session:
5862 * Use the man:lttng-set-session(1) command:
5867 $ lttng set-session SESSION
5871 Replace +__SESSION__+ with the name of the new current recording session.
5873 When you're done recording in a given recording session, destroy it.
5874 This operation frees the resources taken by the recording session to
5875 destroy; it doesn't destroy the trace data that LTTng wrote for this
5876 recording session (see ``<<clear,Clear a recording session>>'' for one
5879 To destroy the current recording session:
5881 * Use the man:lttng-destroy(1) command:
5890 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
5891 implicitly (see ``<<basic-tracing-session-control,Start and stop a
5892 recording session>>''). You need to stop recording to make LTTng flush the
5893 remaining trace data and make the trace readable.
5896 [[list-instrumentation-points]]
5897 === List the available instrumentation points
5899 The <<lttng-sessiond,session daemon>> can query the running instrumented
5900 user applications and the Linux kernel to get a list of available
5901 instrumentation points:
5903 * LTTng tracepoints and system calls for the Linux kernel
5904 <<domain,tracing domain>>.
5906 * LTTng tracepoints for the user space tracing domain.
5908 To list the available instrumentation points:
5910 . <<start-sessiond,Make sure>> there's a running
5911 <<lttng-sessiond,session daemon>> to which your Unix user can
5914 . Use the man:lttng-list(1) command with the option of the requested
5915 tracing domain amongst:
5918 opt:lttng-list(1):--kernel::
5919 Linux kernel tracepoints.
5921 Your Unix user must be `root`, or it must be a member of the Unix
5922 <<tracing-group,tracing group>>.
5924 opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall::
5925 Linux kernel system calls.
5927 Your Unix user must be `root`, or it must be a member of the Unix
5928 <<tracing-group,tracing group>>.
5930 opt:lttng-list(1):--userspace::
5931 User space tracepoints.
5933 opt:lttng-list(1):--jul::
5934 `java.util.logging` loggers.
5936 opt:lttng-list(1):--log4j::
5937 Apache log4j loggers.
5939 opt:lttng-list(1):--python::
5943 .List the available user space tracepoints.
5947 $ lttng list --userspace
5951 .List the available Linux kernel system calls.
5955 $ lttng list --kernel --syscall
5960 [[enabling-disabling-events]]
5961 === Create and enable a recording event rule
5963 Once you <<creating-destroying-tracing-sessions,create a recording
5964 session>>, you can create <<event,recording event rules>> with the
5965 man:lttng-enable-event(1) command.
5967 The man:lttng-enable-event(1) command always attaches an event rule to a
5968 <<channel,channel>> on creation. The command can create a _default
5969 channel_, named `channel0`, for you. The man:lttng-enable-event(1)
5970 command reuses the default channel each time you run it for the same
5971 tracing domain and session.
5973 A recording event rule is always enabled at creation time.
5975 The following examples show how to combine the command-line arguments of
5976 the man:lttng-enable-event(1) command to create simple to more complex
5977 recording event rules within the <<cur-tracing-session,current recording
5980 .Create a recording event rule matching specific Linux kernel tracepoint events (default channel).
5984 # lttng enable-event --kernel sched_switch
5988 .Create a recording event rule matching Linux kernel system call events with four specific names (default channel).
5992 # lttng enable-event --kernel --syscall open,write,read,close
5996 .Create recording event rules matching tracepoint events which satisfy a filter expressions (default channel).
6000 # lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
6005 # lttng enable-event --kernel --all \
6006 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
6011 $ lttng enable-event --jul my_logger \
6012 --filter='$app.retriever:cur_msg_id > 3'
6015 IMPORTANT: Make sure to always single-quote the filter string when you
6016 run man:lttng(1) from a shell.
6018 See also ``<<pid-tracking,Allow specific processes to record events>>''
6019 which offers another, more efficient filtering mechanism for process ID,
6020 user ID, and group ID attributes.
6023 .Create a recording event rule matching any user space event from the `my_app` tracepoint provider and with a log level range (default channel).
6027 $ lttng enable-event --userspace my_app:'*' --loglevel=INFO
6030 IMPORTANT: Make sure to always single-quote the wildcard character when
6031 you run man:lttng(1) from a shell.
6034 .Create a recording event rule matching user space events named specifically, but with name exclusions (default channel).
6038 $ lttng enable-event --userspace my_app:'*' \
6039 --exclude=my_app:set_user,my_app:handle_sig
6043 .Create a recording event rule matching any Apache log4j event with a specific log level (default channel).
6047 $ lttng enable-event --log4j --all --loglevel-only=WARN
6051 .Create a recording event rule, attached to a specific channel, and matching user space tracepoint events named `my_app:my_tracepoint`.
6055 $ lttng enable-event --userspace my_app:my_tracepoint \
6056 --channel=my-channel
6060 .Create a recording event rule matching user space probe events for the `malloc` function entry in path:{/usr/lib/libc.so.6}:
6064 # lttng enable-event --kernel \
6065 --userspace-probe=/usr/lib/libc.so.6:malloc \
6070 .Create a recording event rule matching user space probe events for the `server`/`accept_request` https://www.sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps[USDT probe] in path:{/usr/bin/serv}:
6074 # lttng enable-event --kernel \
6075 --userspace-probe=sdt:serv:server:accept_request \
6076 server_accept_request
6080 The recording event rules of a given channel form a whitelist: as soon
6081 as an event rule matches an event, LTTng emits it _once_ and therefore
6082 <<channel-overwrite-mode-vs-discard-mode,can>> record it. For example,
6083 the following rules both match user space tracepoint events named
6084 `my_app:my_tracepoint` with an `INFO` log level:
6088 $ lttng enable-event --userspace my_app:my_tracepoint
6089 $ lttng enable-event --userspace my_app:my_tracepoint \
6093 The second recording event rule is redundant: the first one includes the
6097 [[disable-event-rule]]
6098 === Disable a recording event rule
6100 To disable a <<event,recording event rule>> that you
6101 <<enabling-disabling-events,created>> previously, use the
6102 man:lttng-disable-event(1) command.
6104 man:lttng-disable-event(1) can only find recording event rules to
6105 disable by their <<instrumentation-point-types,instrumentation point
6106 type>> and event name conditions. Therefore, you cannot disable
6107 recording event rules having a specific instrumentation point log level
6108 condition, for example.
6110 LTTng doesn't emit (and, therefore, won't record) an event which only
6111 _disabled_ recording event rules match.
6113 .Disable event rules matching Python logging events from the `my-logger` logger (default <<channel,channel>>, <<cur-tracing-session,current recording session>>).
6117 $ lttng disable-event --python my-logger
6121 .Disable event rules matching all `java.util.logging` events (default channel, recording session `my-session`).
6125 $ lttng disable-event --jul --session=my-session '*'
6129 .Disable _all_ the Linux kernel recording event rules (channel `my-chan`, current recording session).
6131 The opt:lttng-disable-event(1):--all-events option isn't, like the
6132 opt:lttng-enable-event(1):--all option of the man:lttng-enable-event(1)
6133 command, an alias for the event name globbing pattern `*`: it disables
6134 _all_ the recording event rules of a given channel.
6138 # lttng disable-event --kernel --channel=my-chan --all-events
6142 NOTE: You can't _remove_ a recording event rule once you create it.
6146 === Get the status of a recording session
6148 To get the status of the <<cur-tracing-session,current recording
6149 session>>, that is, its parameters, its channels, recording event rules,
6150 and their attributes:
6152 * Use the man:lttng-status(1) command:
6161 To get the status of any recording session:
6163 * Use the man:lttng-list(1) command with the name of the recording
6169 $ lttng list SESSION
6173 Replace +__SESSION__+ with the recording session name.
6176 [[basic-tracing-session-control]]
6177 === Start and stop a recording session
6179 Once you <<creating-destroying-tracing-sessions,create a recording
6180 session>> and <<enabling-disabling-events,create one or more recording
6181 event rules>>, you can start and stop the tracers for this recording
6184 To start the <<cur-tracing-session,current recording session>>:
6186 * Use the man:lttng-start(1) command:
6195 LTTng is flexible: you can launch user applications before or after you
6196 start the tracers. An LTTng tracer only <<event,records an event>> if a
6197 recording event rule matches it, which means the tracer is active.
6199 The `start-session` <<trigger,trigger>> action can also start a recording
6202 To stop the current recording session:
6204 * Use the man:lttng-stop(1) command:
6213 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6214 records>> or lost sub-buffers since the last time you ran
6215 man:lttng-start(1), the man:lttng-stop(1) command prints corresponding
6218 IMPORTANT: You need to stop recording to make LTTng flush the remaining
6219 trace data and make the trace readable. Note that the
6220 man:lttng-destroy(1) command (see
6221 ``<<creating-destroying-tracing-sessions,Create and destroy a recording
6222 session>>'') also runs the man:lttng-stop(1) command implicitly.
6224 The `stop-session` <<trigger,trigger>> action can also stop a recording
6229 === Clear a recording session
6231 You might need to remove all the current tracing data of one or more
6232 <<tracing-session,recording sessions>> between multiple attempts to
6233 reproduce a problem without interrupting the LTTng recording activity.
6235 To clear the tracing data of the
6236 <<cur-tracing-session,current recording session>>:
6238 * Use the man:lttng-clear(1) command:
6247 To clear the tracing data of all the recording sessions:
6249 * Use the `lttng clear` command with its opt:lttng-clear(1):--all
6260 [[enabling-disabling-channels]]
6261 === Create a channel
6263 Once you <<creating-destroying-tracing-sessions,create a recording
6264 session>>, you can create a <<channel,channel>> with the
6265 man:lttng-enable-channel(1) command.
6267 Note that LTTng can automatically create a default channel when you
6268 <<enabling-disabling-events,create a recording event rule>>.
6269 Therefore, you only need to create a channel when you need non-default
6272 Specify each non-default channel attribute with a command-line
6273 option when you run the man:lttng-enable-channel(1) command.
6275 You can only create a custom channel in the Linux kernel and user space
6276 <<domain,tracing domains>>: the Java/Python logging tracing domains have
6277 their own default channel which LTTng automatically creates when you
6278 <<enabling-disabling-events,create a recording event rule>>.
6282 As of LTTng{nbsp}{revision}, you may _not_ perform the
6283 following operations with the man:lttng-enable-channel(1) command:
6285 * Change an attribute of an existing channel.
6287 * Enable a disabled channel once its recording session has been
6288 <<basic-tracing-session-control,active>> at least once.
6290 * Create a channel once its recording session has been active at
6293 * Create a user space channel with a given
6294 <<channel-buffering-schemes,buffering scheme>> and create a second
6295 user space channel with a different buffering scheme in the same
6299 The following examples show how to combine the command-line options of
6300 the man:lttng-enable-channel(1) command to create simple to more complex
6301 channels within the <<cur-tracing-session,current recording session>>.
6303 .Create a Linux kernel channel with default attributes.
6307 # lttng enable-channel --kernel my-channel
6311 .Create a user space channel with four sub-buffers or 1{nbsp}MiB each, per CPU, per instrumented process.
6315 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6316 --buffers-pid my-channel
6320 .[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
6322 <<creating-destroying-tracing-sessions,Create a recording session>>,
6323 create the channel, <<enabling-disabling-events,create a recording event
6324 rule>>, and <<basic-tracing-session-control,start recording>>:
6329 $ lttng enable-channel --userspace --blocking-timeout=inf blocking-chan
6330 $ lttng enable-event --userspace --channel=blocking-chan --all
6334 Run an application instrumented with LTTng-UST tracepoints and allow it
6339 $ LTTNG_UST_ALLOW_BLOCKING=1 my-app
6343 .Create a Linux kernel channel which rotates eight trace files of 4{nbsp}MiB each for each stream.
6347 # lttng enable-channel --kernel --tracefile-count=8 \
6348 --tracefile-size=4194304 my-channel
6352 .Create a user space channel in <<overwrite-mode,overwrite>> (or ``flight recorder'') mode.
6356 $ lttng enable-channel --userspace --overwrite my-channel
6360 .<<enabling-disabling-events,Create>> the same <<event,recording event rule>> attached to two different channels.
6364 $ lttng enable-event --userspace --channel=my-channel app:tp
6365 $ lttng enable-event --userspace --channel=other-channel app:tp
6368 When a CPU executes the `app:tp` <<c-application,user space
6369 tracepoint>>, the two recording event rules above match the created
6370 event, making LTTng emit the event. Because the recording event rules
6371 are not attached to the same channel, LTTng records the event twice.
6376 === Disable a channel
6378 To disable a specific channel that you
6379 <<enabling-disabling-channels,created>> previously, use the
6380 man:lttng-disable-channel(1) command.
6382 .Disable a specific Linux kernel channel (<<cur-tracing-session,current recording session>>).
6386 # lttng disable-channel --kernel my-channel
6390 An enabled channel is an implicit <<event,recording event rule>>
6393 NOTE: As of LTTng{nbsp}{revision}, you may _not_ enable a disabled
6394 channel once its recording session has been
6395 <<basic-tracing-session-control,started>> at least once.
6399 === Add context fields to be recorded to the event records of a channel
6401 <<event,Event record>> fields in trace files provide important
6402 information about previously emitted events, but sometimes some external
6403 context may help you solve a problem faster.
6405 Examples of context fields are:
6407 * The **process ID**, **thread ID**, **process name**, and
6408 **process priority** of the thread from which LTTng emits the event.
6410 * The **hostname** of the system on which LTTng emits the event.
6412 * The Linux kernel and user call stacks (since LTTng{nbsp}2.11).
6414 * The current values of many possible **performance counters** using
6417 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6419 ** Branch instructions, misses, and loads.
6422 * Any state defined at the application level (supported for the
6423 `java.util.logging` and Apache log4j <<domain,tracing domains>>).
6425 To get the full list of available context fields:
6427 * Use the opt:lttng-add-context(1):--list option of the
6428 man:lttng-add-context(1) command:
6432 $ lttng add-context --list
6435 .Add context fields to be recorded to the event records of all the <<channel,channels>> of the <<cur-tracing-session,current recording session>>.
6437 The following command line adds the virtual process identifier and the
6438 per-thread CPU cycles count fields to all the user space channels of the
6439 current recording session.
6443 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6447 .Add performance counter context fields by raw ID
6449 See man:lttng-add-context(1) for the exact format of the context field
6450 type, which is partly compatible with the format used in
6455 # lttng add-context --userspace --type=perf:thread:raw:r0110:test
6456 # lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6460 .Add context fields to be recorded to the event records of a specific channel.
6462 The following command line adds the thread identifier and user call
6463 stack context fields to the Linux kernel channel named `my-channel` of
6464 the <<cur-tracing-session,current recording session>>.
6468 # lttng add-context --kernel --channel=my-channel \
6469 --type=tid --type=callstack-user
6473 .Add an <<java-application-context,application-specific context field>> to be recorded to the event records of a specific channel.
6475 The following command line makes sure LTTng writes the `cur_msg_id`
6476 context field of the `retriever` context retriever to all the Java
6477 logging <<event,event records>> of the channel named `my-channel`:
6481 # lttng add-context --kernel --channel=my-channel \
6482 --type='$app:retriever:cur_msg_id'
6485 IMPORTANT: Make sure to always single-quote the `$` character when you
6486 run man:lttng-add-context(1) from a shell.
6489 NOTE: You can't undo what the man:lttng-add-context(1) command does.
6494 === Allow specific processes to record events
6496 It's often useful to only allow processes with specific attributes to
6497 record events. For example, you may wish to record all the system calls
6498 which a given process makes (à la man:strace(1)).
6500 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6501 purpose. Both commands operate on _inclusion sets_ of process
6502 attributes. The available process attribute types are:
6504 Linux kernel <<domain,tracing domain>>::
6508 * Virtual process ID (VPID).
6510 This is the PID as seen by the application.
6512 * Unix user ID (UID).
6514 * Virtual Unix user ID (VUID).
6516 This is the UID as seen by the application.
6518 * Unix group ID (GID).
6520 * Virtual Unix group ID (VGID).
6522 This is the GID as seen by the application.
6524 User space tracing domain::
6530 A <<tracing-session,recording session>> has nine process
6531 attribute inclusion sets: six for the Linux kernel <<domain,tracing domain>>
6532 and three for the user space tracing domain.
6534 For a given recording session, a process{nbsp}__P__ is allowed to record
6535 LTTng events for a given <<domain,tracing domain>>{nbsp}__D__ if _all_
6536 the attributes of{nbsp}__P__ are part of the inclusion sets
6539 Whether a process is allowed or not to record LTTng events is an
6540 implicit condition of all <<event,recording event rules>>. Therefore, if
6541 LTTng creates an event{nbsp}__E__ for a given process, but this process
6542 may not record events, then no recording event rule matches{nbsp}__E__,
6543 which means LTTng won't emit and record{nbsp}__E__.
6545 When you <<creating-destroying-tracing-sessions,create a recording
6546 session>>, all its process attribute inclusion sets contain all the
6547 possible values. In other words, all processes are allowed to record
6550 Add values to an inclusion set with the man:lttng-track(1) command and
6551 remove values with the man:lttng-untrack(1) command.
6555 The process attribute values are _numeric_.
6557 Should a process with a given ID (part of an inclusion set), for
6558 example, exit, and then a new process be given this same ID, then the
6559 latter would also be allowed to record events.
6561 With the man:lttng-track(1) command, you can add Unix user and group
6562 _names_ to the user and group inclusion sets: the
6563 <<lttng-sessiond,session daemon>> finds the corresponding UID, VUID,
6564 GID, or VGID once on _addition_ to the inclusion set. This means that if
6565 you rename the user or group after you run the man:lttng-track(1)
6566 command, its user/group ID remains part of the inclusion sets.
6569 .Allow processes to record events based on their virtual process ID (VPID).
6571 For the sake of the following example, assume the target system has
6572 16{nbsp}possible VPIDs.
6575 <<creating-destroying-tracing-sessions,create a recording session>>,
6576 the user space VPID inclusion set contains _all_ the possible VPIDs:
6579 .The VPID inclusion set is full.
6580 image::track-all.png[]
6582 When the inclusion set is full and you run the man:lttng-track(1)
6583 command to specify some VPIDs, LTTng:
6585 . Clears the inclusion set.
6586 . Adds the specific VPIDs to the inclusion set.
6592 $ lttng track --userspace --vpid=3,4,7,10,13
6595 the VPID inclusion set is:
6598 .The VPID inclusion set contains the VPIDs 3, 4, 7, 10, and 13.
6599 image::track-3-4-7-10-13.png[]
6601 Add more VPIDs to the inclusion set afterwards:
6605 $ lttng track --userspace --vpid=1,15,16
6611 .VPIDs 1, 15, and 16 are added to the inclusion set.
6612 image::track-1-3-4-7-10-13-15-16.png[]
6614 The man:lttng-untrack(1) command removes entries from process attribute
6615 inclusion sets. Given the previous example, the following command:
6619 $ lttng untrack --userspace --vpid=3,7,10,13
6622 leads to this VPID inclusion set:
6625 .VPIDs 3, 7, 10, and 13 are removed from the inclusion set.
6626 image::track-1-4-15-16.png[]
6628 You can make the VPID inclusion set full again with the
6629 opt:lttng-track(1):--all option:
6633 $ lttng track --userspace --vpid --all
6636 The result is, again:
6639 .The VPID inclusion set is full.
6640 image::track-all.png[]
6643 .Allow specific processes to record events based on their user ID (UID).
6645 A typical use case with process attribute inclusion sets is to start
6646 with an empty inclusion set, then <<basic-tracing-session-control,start
6647 the tracers>>, and finally add values manually while the tracers are
6650 Use the opt:lttng-untrack(1):--all option of the
6651 man:lttng-untrack(1) command to clear the inclusion set after you
6652 <<creating-destroying-tracing-sessions,create a recording session>>, for
6653 example (with UIDs):
6657 # lttng untrack --kernel --uid --all
6663 .The UID inclusion set is empty.
6664 image::untrack-all.png[]
6666 If the LTTng tracer runs with this inclusion set configuration, it
6667 records no events within the <<cur-tracing-session,current recording
6668 session>> because no processes is allowed to do so. Use the
6669 man:lttng-track(1) command as usual to add specific values to the UID
6670 inclusion set when you need to, for example:
6674 # lttng track --kernel --uid=http,11
6680 .UIDs 6 (`http`) and 11 are part of the UID inclusion set.
6681 image::track-6-11.png[]
6686 [[saving-loading-tracing-session]]
6687 === Save and load recording session configurations
6689 Configuring a <<tracing-session,recording session>> can be long. Some of
6690 the tasks involved are:
6692 * <<enabling-disabling-channels,Create channels>> with
6693 specific attributes.
6695 * <<adding-context,Add context fields>> to be recorded to the
6696 <<event,event records>> of specific channels.
6698 * <<enabling-disabling-events,Create recording event rules>> with
6699 specific log level, filter, and other conditions.
6701 If you use LTTng to solve real world problems, chances are you have to
6702 record events using the same recording session setup over and over,
6703 modifying a few variables each time in your instrumented program or
6706 To avoid constant recording session reconfiguration, the man:lttng(1)
6707 command-line tool can save and load recording session configurations
6710 To save a given recording session configuration:
6712 * Use the man:lttng-save(1) command:
6717 $ lttng save SESSION
6721 Replace +__SESSION__+ with the name of the recording session to save.
6723 LTTng saves recording session configurations to
6724 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6725 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. See
6726 man:lttng-save(1) to learn more about the recording session configuration
6729 LTTng saves all configuration parameters, for example:
6731 * The recording session name.
6732 * The trace data output path.
6733 * The <<channel,channels>>, with their state and all their attributes.
6734 * The context fields you added to channels.
6735 * The <<event,recording event rules>> with their state and conditions.
6737 To load a recording session:
6739 * Use the man:lttng-load(1) command:
6744 $ lttng load SESSION
6748 Replace +__SESSION__+ with the name of the recording session to load.
6750 When LTTng loads a configuration, it restores your saved recording session
6751 as if you just configured it manually.
6753 You can also save and load many sessions at a time; see
6754 man:lttng-save(1) and man:lttng-load(1) to learn more.
6757 [[sending-trace-data-over-the-network]]
6758 === Send trace data over the network
6760 LTTng can send the recorded trace data of a <<tracing-session,recording
6761 session>> to a remote system over the network instead of writing it to
6762 the local file system.
6764 To send the trace data over the network:
6766 . On the _remote_ system (which can also be the target system),
6767 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6776 . On the _target_ system, create a recording session
6777 <<net-streaming-mode,configured>> to send trace data over the network:
6782 $ lttng create my-session --set-url=net://remote-system
6786 Replace +__remote-system__+ with the host name or IP address of the
6787 remote system. See man:lttng-create(1) for the exact URL format.
6789 . On the target system, use the man:lttng(1) command-line tool as usual.
6791 When recording is <<basic-tracing-session-control,active>>, the
6792 <<lttng-consumerd,consumer daemon>> of the target sends the contents of
6793 <<channel,sub-buffers>> to the remote relay daemon instead of flushing
6794 them to the local file system. The relay daemon writes the received
6795 packets to its local file system.
6797 See the ``Output directory'' section of man:lttng-relayd(8) to learn
6798 where a relay daemon writes its received trace data.
6803 === View events as LTTng records them (noch:{LTTng} live)
6805 _LTTng live_ is a network protocol implemented by the
6806 <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)) to allow compatible
6807 trace readers to display or analyze <<event,event records>> as LTTng
6808 records events on the target system while recording is
6809 <<basic-tracing-session-control,active>>.
6811 The relay daemon creates a _tee_: it forwards the trace data to both the
6812 local file system and to connected live readers:
6815 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live reader.
6820 . On the _target system_, create a <<tracing-session,recording session>>
6826 $ lttng create my-session --live
6830 This operation spawns a local relay daemon.
6832 . Start the live reader and configure it to connect to the relay daemon.
6834 For example, with man:babeltrace2(1):
6839 $ babeltrace2 net://localhost/host/HOSTNAME/my-session
6843 Replace +__HOSTNAME__+ with the host name of the target system.
6845 . Configure the recording session as usual with the man:lttng(1)
6846 command-line tool, and <<basic-tracing-session-control,start recording>>.
6848 List the available live recording sessions with man:babeltrace2(1):
6852 $ babeltrace2 net://localhost
6855 You can start the relay daemon on another system. In this case, you need
6856 to specify the URL of the relay daemon when you
6857 <<creating-destroying-tracing-sessions,create the recording session>> with
6858 the opt:lttng-create(1):--set-url option of the man:lttng-create(1)
6859 command. You also need to replace +__localhost__+ in the procedure above
6860 with the host name of the system on which the relay daemon runs.
6864 [[taking-a-snapshot]]
6865 === Take a snapshot of the current sub-buffers of a recording session
6867 The normal behavior of LTTng is to append full sub-buffers to growing
6868 trace data files. This is ideal to keep a full history of the events
6869 which the target system emitted, but it can represent too much data in
6872 For example, you may wish to have LTTng record your application
6873 continuously until some critical situation happens, in which case you
6874 only need the latest few recorded events to perform the desired
6875 analysis, not multi-gigabyte trace files.
6877 With the man:lttng-snapshot(1) command, you can take a _snapshot_ of the
6878 current <<channel,sub-buffers>> of a given <<tracing-session,recording
6879 session>>. LTTng can write the snapshot to the local file system or send
6880 it over the network.
6883 .A snapshot is a copy of the current sub-buffers, which LTTng does _not_ clear after the operation.
6884 image::snapshot.png[]
6886 The snapshot feature of LTTng is similar to how a
6887 https://en.wikipedia.org/wiki/Flight_recorder[flight recorder] or the
6888 ``roll'' mode of an oscilloscope work.
6890 TIP: If you wish to create unmanaged, self-contained, non-overlapping
6891 trace chunk archives instead of a simple copy of the current
6892 sub-buffers, see the <<session-rotation,recording session rotation>>
6893 feature (available since LTTng{nbsp}2.11).
6895 To take a snapshot of the <<cur-tracing-session,current recording
6898 . Create a recording session in <<snapshot-mode,snapshot mode>>:
6903 $ lttng create my-session --snapshot
6907 The <<channel-overwrite-mode-vs-discard-mode,event record loss mode>> of
6908 <<channel,channels>> created in this mode is automatically set to
6909 <<overwrite-mode,_overwrite_>>.
6911 . Configure the recording session as usual with the man:lttng(1)
6912 command-line tool, and <<basic-tracing-session-control,start
6915 . **Optional**: When you need to take a snapshot,
6916 <<basic-tracing-session-control,stop recording>>.
6918 You can take a snapshot when the tracers are active, but if you stop
6919 them first, you're guaranteed that the trace data in the sub-buffers
6920 doesn't change before you actually take the snapshot.
6927 $ lttng snapshot record --name=my-first-snapshot
6931 LTTng writes the current sub-buffers of all the channels of the
6932 <<cur-tracing-session,current recording session>> to
6933 trace files on the local file system. Those trace files have
6934 `my-first-snapshot` in their name.
6936 There's no difference between the format of a normal trace file and the
6937 format of a snapshot: LTTng trace readers also support LTTng snapshots.
6939 By default, LTTng writes snapshot files to the path shown by
6943 $ lttng snapshot list-output
6946 You can change this path or decide to send snapshots over the network
6949 . An output path or URL that you specify when you
6950 <<creating-destroying-tracing-sessions,create the recording session>>.
6952 . A snapshot output path or URL that you add using the
6953 `add-output` action of the man:lttng-snapshot(1) command.
6955 . An output path or URL that you provide directly to the
6956 `record` action of the man:lttng-snapshot(1) command.
6958 Method{nbsp}3 overrides method{nbsp}2, which overrides method 1. When
6959 you specify a URL, a <<lttng-relayd,relay daemon>> must listen on a
6960 remote system (see ``<<sending-trace-data-over-the-network,Send trace
6961 data over the network>>'').
6963 The `snapshot-session` <<trigger,trigger>> action can also take
6964 a recording session snapshot.
6968 [[session-rotation]]
6969 === Archive the current trace chunk (rotate a recording session)
6971 The <<taking-a-snapshot,snapshot user guide>> shows how to dump the
6972 current sub-buffers of a recording session to the file system or send them
6973 over the network. When you take a snapshot, LTTng doesn't clear the ring
6974 buffers of the recording session: if you take another snapshot immediately
6975 after, both snapshots could contain overlapping trace data.
6977 Inspired by https://en.wikipedia.org/wiki/Log_rotation[log rotation],
6978 _recording session rotation_ is a feature which appends the content of the
6979 ring buffers to what's already on the file system or sent over the
6980 network since the creation of the recording session or since the last
6981 rotation, and then clears those ring buffers to avoid trace data
6984 What LTTng is about to write when performing a recording session rotation
6985 is called the _current trace chunk_. When LTTng writes or sends over the
6986 network this current trace chunk, it becomes a _trace chunk archive_.
6987 Therefore, a recording session rotation operation _archives_ the current
6991 .A recording session rotation operation _archives_ the current trace chunk.
6992 image::rotation.png[]
6994 A trace chunk archive is a self-contained LTTng trace which LTTng
6995 doesn't manage anymore: you can read it, modify it, move it, or remove
6998 As of LTTng{nbsp}{revision}, there are three methods to perform a
6999 recording session rotation:
7001 * <<immediate-rotation,Immediately>>.
7003 * With a <<rotation-schedule,rotation schedule>>.
7005 * Through the execution of a `rotate-session` <<trigger,trigger>>
7008 [[immediate-rotation]]To perform an immediate rotation of the
7009 <<cur-tracing-session,current recording session>>:
7011 . <<creating-destroying-tracing-sessions,Create a recording session>> in
7012 <<local-mode,local mode>> or <<net-streaming-mode,network streaming
7013 mode>> (only those two recording session modes support recording session
7019 # lttng create my-session
7023 . <<enabling-disabling-events,Create one or more recording event rules>>
7024 and <<basic-tracing-session-control,start recording>>:
7029 # lttng enable-event --kernel sched_'*'
7034 . When needed, immediately rotate the current recording session:
7043 The man:lttng-rotate(1) command prints the path to the created trace
7044 chunk archive. See its manual page to learn about the format of trace
7045 chunk archive directory names.
7047 Perform other immediate rotations while the recording session is active.
7048 It's guaranteed that all the trace chunk archives don't contain
7049 overlapping trace data. You can also perform an immediate rotation once
7050 you have <<basic-tracing-session-control,stopped>> the recording session.
7052 . When you're done recording,
7053 <<creating-destroying-tracing-sessions,destroy the current recording
7063 The recording session destruction operation creates one last trace chunk
7064 archive from the current trace chunk.
7066 [[rotation-schedule]]A recording session rotation schedule is a planned
7067 rotation which LTTng performs automatically based on one of the
7068 following conditions:
7070 * A timer with a configured period expires.
7072 * The total size of the _flushed_ part of the current trace chunk
7073 becomes greater than or equal to a configured value.
7075 To schedule a rotation of the <<cur-tracing-session,current recording
7076 session>>, set a _rotation schedule_:
7078 . <<creating-destroying-tracing-sessions,Create a recording session>> in
7079 <<local-mode,local mode>> or <<net-streaming-mode,network streaming
7080 mode>> (only those two creation modes support recording session
7086 # lttng create my-session
7090 . <<enabling-disabling-events,Create one or more recording event rules>>:
7095 # lttng enable-event --kernel sched_'*'
7099 . Set a recording session rotation schedule:
7104 # lttng enable-rotation --timer=10s
7108 In this example, we set a rotation schedule so that LTTng performs a
7109 recording session rotation every ten seconds.
7111 See man:lttng-enable-rotation(1) to learn more about other ways to set a
7114 . <<basic-tracing-session-control,Start recording>>:
7123 LTTng performs recording session rotations automatically while the
7124 recording session is active thanks to the rotation schedule.
7126 . When you're done recording,
7127 <<creating-destroying-tracing-sessions,destroy the current recording
7137 The recording session destruction operation creates one last trace chunk
7138 archive from the current trace chunk.
7140 Unset a recording session rotation schedule with the
7141 man:lttng-disable-rotation(1) command.
7145 [[add-event-rule-matches-trigger]]
7146 === Add an ``event rule matches'' trigger to a session daemon
7148 With the man:lttng-add-trigger(1) command, you can add a
7149 <<trigger,trigger>> to a <<lttng-sessiond,session daemon>>.
7151 A trigger associates an LTTng tracing condition to one or more actions:
7152 when the condition is satisfied, LTTng attempts to execute the actions.
7154 A trigger doesn't need any <<tracing-session,recording session>> to exist:
7155 it belongs to a session daemon.
7157 As of LTTng{nbsp}{revision}, many condition types are available through
7158 the <<liblttng-ctl-lttng,`liblttng-ctl`>> C{nbsp}API, but the
7159 man:lttng-add-trigger(1) command only accepts the ``event rule matches''
7162 An ``event rule matches'' condition is satisfied when its event rule
7165 Unlike a <<event,recording event rule>>, the event rule of an
7166 ``event rule matches'' trigger condition has no implicit conditions,
7169 * It has no enabled/disabled state.
7170 * It has no attached <<channel,channel>>.
7171 * It doesn't belong to a <<tracing-session,recording session>>.
7173 Both the man:lttng-add-trigger(1) and man:lttng-enable-event(1) commands
7174 accept command-line arguments to specify an <<event-rule,event rule>>.
7175 That being said, the former is a more recent command and therefore
7176 follows the common event rule specification format (see
7177 man:lttng-event-rule(7)).
7179 .Start a <<tracing-session,recording session>> when an event rule matches.
7181 This example shows how to add the following trigger to the root
7182 <<lttng-sessiond,session daemon>>:
7185 An event rule matches a Linux kernel system call event of which the
7186 name starts with `exec` and `*/ls` matches the `filename` payload
7189 With such an event rule, LTTng emits an event when the cmd:ls program
7193 <<basic-tracing-session-control,Start the recording session>>
7196 To add such a trigger to the root session daemon:
7198 . **If there's no currently running LTTng root session daemon**, start
7203 # lttng-sessiond --daemonize
7206 . <<creating-destroying-tracing-sessions,Create a recording session>>
7208 <<enabling-disabling-events,create a recording event rule>> matching
7209 all the system call events:
7213 # lttng create pitou
7214 # lttng enable-event --kernel --syscall --all
7217 . Add the trigger to the root session daemon:
7221 # lttng add-trigger --condition=event-rule-matches \
7222 --type=syscall --name='exec*' \
7223 --filter='filename == "*/ls"' \
7224 --action=start-session pitou
7227 Confirm that the trigger exists with the man:lttng-list-triggers(1)
7232 # lttng list-triggers
7235 . Make sure the `pitou` recording session is still inactive (stopped):
7242 The first line should be something like:
7245 Recording session pitou: [inactive]
7248 Run the cmd:ls program to fire the LTTng trigger above:
7255 At this point, the `pitou` recording session should be active
7256 (started). Confirm this with the man:lttng-list(1) command again:
7263 The first line should now look like:
7266 Recording session pitou: [active]
7269 This line confirms that the LTTng trigger you added fired, therefore
7270 starting the `pitou` recording session.
7273 .[[trigger-event-notif]]Send a notification to a user application when an event rule matches.
7275 This example shows how to add the following trigger to the root
7276 <<lttng-sessiond,session daemon>>:
7279 An event rule matches a Linux kernel tracepoint event named
7280 `sched_switch` and of which the value of the `next_comm` payload
7283 With such an event rule, LTTng emits an event when Linux gives access to
7284 the processor to a process named `bash`.
7287 Send an LTTng notification to a user application.
7289 Moreover, we'll specify a _capture descriptor_ with the
7290 `event-rule-matches` trigger condition so that the user application can
7291 get the value of a specific `sched_switch` event payload field.
7293 First, write and build the user application:
7295 . Create the C{nbsp}source file of the application:
7303 #include <stdbool.h>
7306 #include <lttng/lttng.h>
7309 * Subscribes to notifications, through the notification channel
7310 * `notification_channel`, which match the condition of the trigger
7311 * named `trigger_name`.
7313 * Returns `true` on success.
7315 static bool subscribe(struct lttng_notification_channel *notification_channel,
7316 const char *trigger_name)
7318 const struct lttng_condition *condition = NULL;
7319 struct lttng_triggers *triggers = NULL;
7320 unsigned int trigger_count;
7322 enum lttng_error_code error_code;
7323 enum lttng_trigger_status trigger_status;
7326 /* Get all LTTng triggers */
7327 error_code = lttng_list_triggers(&triggers);
7328 assert(error_code == LTTNG_OK);
7330 /* Get the number of triggers */
7331 trigger_status = lttng_triggers_get_count(triggers, &trigger_count);
7332 assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);
7334 /* Find the trigger named `trigger_name` */
7335 for (i = 0; i < trigger_count; i++) {
7336 const struct lttng_trigger *trigger;
7337 const char *this_trigger_name;
7339 trigger = lttng_triggers_get_at_index(triggers, i);
7340 trigger_status = lttng_trigger_get_name(trigger, &this_trigger_name);
7341 assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);
7343 if (strcmp(this_trigger_name, trigger_name) == 0) {
7344 /* Trigger found: subscribe with its condition */
7345 enum lttng_notification_channel_status notification_channel_status;
7347 notification_channel_status = lttng_notification_channel_subscribe(
7348 notification_channel,
7349 lttng_trigger_get_const_condition(trigger));
7350 assert(notification_channel_status ==
7351 LTTNG_NOTIFICATION_CHANNEL_STATUS_OK);
7357 lttng_triggers_destroy(triggers);
7362 * Handles the evaluation `evaluation` of a single notification.
7364 static void handle_evaluation(const struct lttng_evaluation *evaluation)
7366 enum lttng_evaluation_status evaluation_status;
7367 const struct lttng_event_field_value *array_field_value;
7368 const struct lttng_event_field_value *string_field_value;
7369 enum lttng_event_field_value_status event_field_value_status;
7370 const char *string_field_string_value;
7372 /* Get the value of the first captured (string) field */
7373 evaluation_status = lttng_evaluation_event_rule_matches_get_captured_values(
7374 evaluation, &array_field_value);
7375 assert(evaluation_status == LTTNG_EVALUATION_STATUS_OK);
7376 event_field_value_status =
7377 lttng_event_field_value_array_get_element_at_index(
7378 array_field_value, 0, &string_field_value);
7379 assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
7380 assert(lttng_event_field_value_get_type(string_field_value) ==
7381 LTTNG_EVENT_FIELD_VALUE_TYPE_STRING);
7382 event_field_value_status = lttng_event_field_value_string_get_value(
7383 string_field_value, &string_field_string_value);
7384 assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
7386 /* Print the string value of the field */
7387 puts(string_field_string_value);
7390 int main(int argc, char *argv[])
7392 int exit_status = EXIT_SUCCESS;
7393 struct lttng_notification_channel *notification_channel;
7394 enum lttng_notification_channel_status notification_channel_status;
7395 const struct lttng_condition *condition;
7396 const char *trigger_name;
7400 trigger_name = argv[1];
7403 * Create a notification channel.
7405 * A notification channel connects the user application to the LTTng
7408 * You can use this notification channel to listen to various types
7411 notification_channel = lttng_notification_channel_create(
7412 lttng_session_daemon_notification_endpoint);
7413 assert(notification_channel);
7416 * Subscribe to notifications which match the condition of the
7417 * trigger named `trigger_name`.
7419 if (!subscribe(notification_channel, trigger_name)) {
7421 "Error: Failed to subscribe to notifications (trigger `%s`).\n",
7423 exit_status = EXIT_FAILURE;
7428 * Notification loop.
7430 * Put this in a dedicated thread to avoid blocking the main thread.
7433 struct lttng_notification *notification;
7434 enum lttng_notification_channel_status status;
7435 const struct lttng_evaluation *notification_evaluation;
7437 /* Receive the next notification */
7438 status = lttng_notification_channel_get_next_notification(
7439 notification_channel, ¬ification);
7442 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
7444 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
7446 * The session daemon can drop notifications if a receiving
7447 * application doesn't consume the notifications fast
7451 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
7453 * The session daemon closed the notification channel.
7455 * This is typically caused by a session daemon shutting
7460 /* Unhandled conditions or errors */
7461 exit_status = EXIT_FAILURE;
7466 * Handle the condition evaluation.
7468 * A notification provides, amongst other things:
7470 * * The condition that caused LTTng to send this notification.
7472 * * The condition evaluation, which provides more specific
7473 * information on the evaluation of the condition.
7475 handle_evaluation(lttng_notification_get_evaluation(notification));
7477 /* Destroy the notification object */
7478 lttng_notification_destroy(notification);
7482 lttng_notification_channel_destroy(notification_channel);
7488 This application prints the first captured string field value of the
7489 condition evaluation of each LTTng notification it receives.
7491 . Build the `notif-app` application,
7492 using https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
7493 to provide the right compiler and linker flags:
7498 $ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
7502 Now, to add the trigger to the root session daemon:
7505 . **If there's no currently running LTTng root session daemon**, start
7510 # lttng-sessiond --daemonize
7513 . Add the trigger, naming it `sched-switch-notif`, to the root
7518 # lttng add-trigger --name=sched-switch-notif \
7519 --condition=event-rule-matches \
7520 --type=kernel --name=sched_switch \
7521 --filter='next_comm == "bash"' --capture=prev_comm \
7525 Confirm that the `sched-switch-notif` trigger exists with the
7526 man:lttng-list-triggers(1) command:
7530 # lttng list-triggers
7533 Run the cmd:notif-app application, passing the name of the trigger
7534 of which to watch the notifications:
7538 # ./notif-app sched-switch-notif
7541 Now, in an interactive Bash, type a few keys to fire the
7542 `sched-switch-notif` trigger. Watch the `notif-app` application print
7543 the previous process names.
7548 === Use the machine interface
7550 With any command of the man:lttng(1) command-line tool, set the
7551 opt:lttng(1):--mi option to `xml` (before the command name) to get an
7552 XML machine interface output, for example:
7556 $ lttng --mi=xml list my-session
7559 A schema definition (XSD) is
7560 https://github.com/lttng/lttng-tools/blob/stable-{revision}/src/common/mi-lttng-4.0.xsd[available]
7561 to ease the integration with external tools as much as possible.
7565 [[metadata-regenerate]]
7566 === Regenerate the metadata of an LTTng trace
7568 An LTTng trace, which is a https://diamon.org/ctf[CTF] trace, has both
7569 data stream files and a metadata stream file. This metadata file
7570 contains, amongst other things, information about the offset of the
7571 clock sources which LTTng uses to assign timestamps to <<event,event
7572 records>> when recording.
7574 If, once a <<tracing-session,recording session>> is
7575 <<basic-tracing-session-control,started>>, a major
7576 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
7577 happens, the clock offset of the trace also needs to be updated. Use
7578 the `metadata` item of the man:lttng-regenerate(1) command to do so.
7580 The main use case of this command is to allow a system to boot with
7581 an incorrect wall time and have LTTng trace it before its wall time
7582 is corrected. Once the system is known to be in a state where its
7583 wall time is correct, you can run `lttng regenerate metadata`.
7585 To regenerate the metadata stream files of the
7586 <<cur-tracing-session,current recording session>>:
7588 * Use the `metadata` item of the man:lttng-regenerate(1) command:
7593 $ lttng regenerate metadata
7599 [[regenerate-statedump]]
7600 === Regenerate the state dump event records of a recording session
7602 The LTTng kernel and user space tracers generate state dump
7603 <<event,event records>> when the application starts or when you
7604 <<basic-tracing-session-control,start a recording session>>.
7606 An analysis can use the state dump event records to set an initial state
7607 before it builds the rest of the state from the subsequent event
7608 records. http://tracecompass.org/[Trace Compass] and
7609 https://github.com/lttng/lttng-analyses[LTTng analyses] are notable
7610 examples of applications which use the state dump of an LTTng trace.
7612 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
7613 state dump event records aren't included in the snapshot trace files
7614 because they were recorded to a <<channel,sub-buffer>> that has been
7615 consumed or <<overwrite-mode,overwritten>> already.
7617 Use the `statedump` item of the man:lttng-regenerate(1) command to emit
7618 and record the state dump events again.
7620 To regenerate the state dump of the <<cur-tracing-session,current
7621 recording session>>, provided you created it in <<snapshot-mode,snapshot
7622 mode>>, before you take a snapshot:
7624 . Use the `statedump` item of the man:lttng-regenerate(1) command:
7629 $ lttng regenerate statedump
7633 . <<basic-tracing-session-control,Stop the recording session>>:
7642 . <<taking-a-snapshot,Take a snapshot>>:
7647 $ lttng snapshot record --name=my-snapshot
7651 Depending on the event throughput, you should run steps{nbsp}1
7652 and{nbsp}2 as closely as possible.
7656 To record the state dump events, you need to
7657 <<enabling-disabling-events,create recording event rules>> which enable
7660 * The names of LTTng-UST state dump tracepoints start with
7661 `lttng_ust_statedump:`.
7663 * The names of LTTng-modules state dump tracepoints start with
7669 [[persistent-memory-file-systems]]
7670 === Record trace data on persistent memory file systems
7672 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile
7673 random-access memory] (NVRAM) is random-access memory that retains its
7674 information when power is turned off (non-volatile). Systems with such
7675 memory can store data structures in RAM and retrieve them after a
7676 reboot, without flushing to typical _storage_.
7678 Linux supports NVRAM file systems thanks to either
7679 http://pramfs.sourceforge.net/[PRAMFS] or
7680 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
7681 (requires Linux{nbsp}4.1+).
7683 This section doesn't describe how to operate such file systems; we
7684 assume that you have a working persistent memory file system.
7686 When you <<creating-destroying-tracing-sessions,create a recording
7687 session>>, you can specify the path of the shared memory holding the
7688 sub-buffers. If you specify a location on an NVRAM file system, then you
7689 can retrieve the latest recorded trace data when the system reboots
7692 To record trace data on a persistent memory file system and retrieve the
7693 trace data after a system crash:
7695 . Create a recording session with a <<channel,sub-buffer>> shared memory
7696 path located on an NVRAM file system:
7701 $ lttng create my-session --shm-path=/path/to/shm/on/nvram
7705 . Configure the recording session as usual with the man:lttng(1)
7706 command-line tool, and <<basic-tracing-session-control,start
7709 . After a system crash, use the man:lttng-crash(1) command-line tool to
7710 read the trace data recorded on the NVRAM file system:
7715 $ lttng-crash /path/to/shm/on/nvram
7719 The binary layout of the ring buffer files isn't exactly the same as the
7720 trace files layout. This is why you need to use man:lttng-crash(1)
7721 instead of some standard LTTng trace reader.
7723 To convert the ring buffer files to LTTng trace files:
7725 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
7730 $ lttng-crash --extract=/path/to/trace /path/to/shm/on/nvram
7736 [[notif-trigger-api]]
7737 === Get notified when the buffer usage of a channel is too high or too low
7739 With the notification and <<trigger,trigger>> C{nbsp}API of
7740 <<liblttng-ctl-lttng,`liblttng-ctl`>>, LTTng can notify your user
7741 application when the buffer usage of one or more <<channel,channels>>
7742 becomes too low or too high.
7744 Use this API and enable or disable <<event,recording event rules>> while
7745 a recording session <<basic-tracing-session-control,is active>> to avoid
7746 <<channel-overwrite-mode-vs-discard-mode,discarded event records>>, for
7749 .Send a notification to a user application when the buffer usage of an LTTng channel is too high.
7751 In this example, we create and build an application which gets notified
7752 when the buffer usage of a specific LTTng channel is higher than
7755 We only print that it's the case in this example, but we could as well
7756 use the `liblttng-ctl` C{nbsp}API to <<enabling-disabling-events,disable
7757 recording event rules>> when this happens, for example.
7759 . Create the C{nbsp}source file of the application:
7768 #include <lttng/lttng.h>
7770 int main(int argc, char *argv[])
7772 int exit_status = EXIT_SUCCESS;
7773 struct lttng_notification_channel *notification_channel;
7774 struct lttng_condition *condition;
7775 struct lttng_action *action;
7776 struct lttng_trigger *trigger;
7777 const char *recording_session_name;
7778 const char *channel_name;
7781 recording_session_name = argv[1];
7782 channel_name = argv[2];
7785 * Create a notification channel.
7787 * A notification channel connects the user application to the LTTng
7790 * You can use this notification channel to listen to various types
7793 notification_channel = lttng_notification_channel_create(
7794 lttng_session_daemon_notification_endpoint);
7797 * Create a "buffer usage becomes greater than" condition.
7799 * In this case, the condition is satisfied when the buffer usage
7800 * becomes greater than or equal to 75 %.
7802 * We create the condition for a specific recording session name,
7803 * channel name, and for the user space tracing domain.
7805 * The following condition types also exist:
7807 * * The buffer usage of a channel becomes less than a given value.
7809 * * The consumed data size of a recording session becomes greater
7810 * than a given value.
7812 * * A recording session rotation becomes ongoing.
7814 * * A recording session rotation becomes completed.
7816 * * A given event rule matches an event.
7818 condition = lttng_condition_buffer_usage_high_create();
7819 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
7820 lttng_condition_buffer_usage_set_session_name(condition,
7821 recording_session_name);
7822 lttng_condition_buffer_usage_set_channel_name(condition,
7824 lttng_condition_buffer_usage_set_domain_type(condition,
7828 * Create an action (receive a notification) to execute when the
7829 * condition created above is satisfied.
7831 action = lttng_action_notify_create();
7836 * A trigger associates a condition to an action: LTTng executes
7837 * the action when the condition is satisfied.
7839 trigger = lttng_trigger_create(condition, action);
7841 /* Register the trigger to the LTTng session daemon. */
7842 lttng_register_trigger(trigger);
7845 * Now that we have registered a trigger, LTTng will send a
7846 * notification every time its condition is met through a
7847 * notification channel.
7849 * To receive this notification, we must subscribe to notifications
7850 * which match the same condition.
7852 lttng_notification_channel_subscribe(notification_channel,
7856 * Notification loop.
7858 * Put this in a dedicated thread to avoid blocking the main thread.
7861 struct lttng_notification *notification;
7862 enum lttng_notification_channel_status status;
7863 const struct lttng_evaluation *notification_evaluation;
7864 const struct lttng_condition *notification_condition;
7865 double buffer_usage;
7867 /* Receive the next notification. */
7868 status = lttng_notification_channel_get_next_notification(
7869 notification_channel, ¬ification);
7872 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
7874 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
7876 * The session daemon can drop notifications if a monitoring
7877 * application isn't consuming the notifications fast
7881 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
7883 * The session daemon closed the notification channel.
7885 * This is typically caused by a session daemon shutting
7890 /* Unhandled conditions or errors. */
7891 exit_status = EXIT_FAILURE;
7896 * A notification provides, amongst other things:
7898 * * The condition that caused LTTng to send this notification.
7900 * * The condition evaluation, which provides more specific
7901 * information on the evaluation of the condition.
7903 * The condition evaluation provides the buffer usage
7904 * value at the moment the condition was satisfied.
7906 notification_condition = lttng_notification_get_condition(
7908 notification_evaluation = lttng_notification_get_evaluation(
7911 /* We're subscribed to only one condition. */
7912 assert(lttng_condition_get_type(notification_condition) ==
7913 LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);
7916 * Get the exact sampled buffer usage from the condition
7919 lttng_evaluation_buffer_usage_get_usage_ratio(
7920 notification_evaluation, &buffer_usage);
7923 * At this point, instead of printing a message, we could do
7924 * something to reduce the buffer usage of the channel, like
7925 * disable specific events, for example.
7927 printf("Buffer usage is %f %% in recording session \"%s\", "
7928 "user space channel \"%s\".\n", buffer_usage * 100,
7929 recording_session_name, channel_name);
7931 /* Destroy the notification object. */
7932 lttng_notification_destroy(notification);
7936 lttng_action_destroy(action);
7937 lttng_condition_destroy(condition);
7938 lttng_trigger_destroy(trigger);
7939 lttng_notification_channel_destroy(notification_channel);
7945 . Build the `notif-app` application, linking it with `liblttng-ctl`:
7950 $ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
7954 . <<creating-destroying-tracing-sessions,Create a recording session>>,
7955 <<enabling-disabling-events,create a recording event rule>> matching
7956 all the user space tracepoint events, and
7957 <<basic-tracing-session-control,start recording>>:
7962 $ lttng create my-session
7963 $ lttng enable-event --userspace --all
7968 If you create the channel manually with the man:lttng-enable-channel(1)
7969 command, you can set its <<channel-monitor-timer,monitor timer>> to
7970 control how frequently LTTng samples the current values of the channel
7971 properties to evaluate user conditions.
7973 . Run the `notif-app` application.
7975 This program accepts the <<tracing-session,recording session>> and
7976 user space channel names as its two first arguments. The channel
7977 which LTTng automatically creates with the man:lttng-enable-event(1)
7978 command above is named `channel0`:
7983 $ ./notif-app my-session channel0
7987 . In another terminal, run an application with a very high event
7988 throughput so that the 75{nbsp}% buffer usage condition is reached.
7990 In the first terminal, the application should print lines like this:
7993 Buffer usage is 81.45197 % in recording session "my-session", user space
7997 If you don't see anything, try to make the threshold of the condition in
7998 path:{notif-app.c} lower (0.1{nbsp}%, for example), and then rebuild the
7999 `notif-app` application (step{nbsp}2) and run it again (step{nbsp}4).
8006 [[lttng-modules-ref]]
8007 === noch:{LTTng-modules}
8011 [[lttng-tracepoint-enum]]
8012 ==== `LTTNG_TRACEPOINT_ENUM()` usage
8014 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
8018 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
8023 * `name` with the name of the enumeration (C identifier, unique
8024 amongst all the defined enumerations).
8025 * `entries` with a list of enumeration entries.
8027 The available enumeration entry macros are:
8029 +ctf_enum_value(__name__, __value__)+::
8030 Entry named +__name__+ mapped to the integral value +__value__+.
8032 +ctf_enum_range(__name__, __begin__, __end__)+::
8033 Entry named +__name__+ mapped to the range of integral values between
8034 +__begin__+ (included) and +__end__+ (included).
8036 +ctf_enum_auto(__name__)+::
8037 Entry named +__name__+ mapped to the integral value following the
8040 The last value of a `ctf_enum_value()` entry is its +__value__+
8043 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
8045 If `ctf_enum_auto()` is the first entry in the list, its integral
8048 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
8049 to use a defined enumeration as a tracepoint field.
8051 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
8055 LTTNG_TRACEPOINT_ENUM(
8058 ctf_enum_auto("AUTO: EXPECT 0")
8059 ctf_enum_value("VALUE: 23", 23)
8060 ctf_enum_value("VALUE: 27", 27)
8061 ctf_enum_auto("AUTO: EXPECT 28")
8062 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
8063 ctf_enum_auto("AUTO: EXPECT 304")
8071 [[lttng-modules-tp-fields]]
8072 ==== Tracepoint fields macros (for `TP_FIELDS()`)
8074 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
8075 tracepoint fields, which must be listed within `TP_FIELDS()` in
8076 `LTTNG_TRACEPOINT_EVENT()`, are:
8078 [role="func-desc growable",cols="asciidoc,asciidoc"]
8079 .Available macros to define LTTng-modules tracepoint fields
8081 |Macro |Description and parameters
8084 +ctf_integer(__t__, __n__, __e__)+
8086 +ctf_integer_nowrite(__t__, __n__, __e__)+
8088 +ctf_user_integer(__t__, __n__, __e__)+
8090 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
8092 Standard integer, displayed in base{nbsp}10.
8095 Integer C type (`int`, `long`, `size_t`, ...).
8101 Argument expression.
8104 +ctf_integer_hex(__t__, __n__, __e__)+
8106 +ctf_user_integer_hex(__t__, __n__, __e__)+
8108 Standard integer, displayed in base{nbsp}16.
8117 Argument expression.
8119 |+ctf_integer_oct(__t__, __n__, __e__)+
8121 Standard integer, displayed in base{nbsp}8.
8130 Argument expression.
8133 +ctf_integer_network(__t__, __n__, __e__)+
8135 +ctf_user_integer_network(__t__, __n__, __e__)+
8137 Integer in network byte order (big-endian), displayed in base{nbsp}10.
8146 Argument expression.
8149 +ctf_integer_network_hex(__t__, __n__, __e__)+
8151 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
8153 Integer in network byte order, displayed in base{nbsp}16.
8162 Argument expression.
8165 +ctf_enum(__N__, __t__, __n__, __e__)+
8167 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
8169 +ctf_user_enum(__N__, __t__, __n__, __e__)+
8171 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
8176 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
8179 Integer C type (`int`, `long`, `size_t`, ...).
8185 Argument expression.
8188 +ctf_string(__n__, __e__)+
8190 +ctf_string_nowrite(__n__, __e__)+
8192 +ctf_user_string(__n__, __e__)+
8194 +ctf_user_string_nowrite(__n__, __e__)+
8196 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
8202 Argument expression.
8205 +ctf_array(__t__, __n__, __e__, __s__)+
8207 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
8209 +ctf_user_array(__t__, __n__, __e__, __s__)+
8211 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
8213 Statically-sized array of integers.
8216 Array element C type.
8222 Argument expression.
8228 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
8230 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
8232 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
8234 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
8236 Statically-sized array of bits.
8238 The type of +__e__+ must be an integer type. +__s__+ is the number
8239 of elements of such type in +__e__+, not the number of bits.
8242 Array element C type.
8248 Argument expression.
8254 +ctf_array_text(__t__, __n__, __e__, __s__)+
8256 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
8258 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
8260 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
8262 Statically-sized array, printed as text.
8264 The string doesn't need to be null-terminated.
8267 Array element C type (always `char`).
8273 Argument expression.
8279 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
8281 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
8283 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
8285 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
8287 Dynamically-sized array of integers.
8289 The type of +__E__+ must be unsigned.
8292 Array element C type.
8298 Argument expression.
8301 Length expression C type.
8307 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
8309 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
8311 Dynamically-sized array of integers, displayed in base{nbsp}16.
8313 The type of +__E__+ must be unsigned.
8316 Array element C type.
8322 Argument expression.
8325 Length expression C type.
8330 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
8332 Dynamically-sized array of integers in network byte order (big-endian),
8333 displayed in base{nbsp}10.
8335 The type of +__E__+ must be unsigned.
8338 Array element C type.
8344 Argument expression.
8347 Length expression C type.
8353 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
8355 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
8357 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
8359 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
8361 Dynamically-sized array of bits.
8363 The type of +__e__+ must be an integer type. +__s__+ is the number
8364 of elements of such type in +__e__+, not the number of bits.
8366 The type of +__E__+ must be unsigned.
8369 Array element C type.
8375 Argument expression.
8378 Length expression C type.
8384 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
8386 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
8388 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
8390 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
8392 Dynamically-sized array, displayed as text.
8394 The string doesn't need to be null-terminated.
8396 The type of +__E__+ must be unsigned.
8398 The behaviour is undefined if +__e__+ is `NULL`.
8401 Sequence element C type (always `char`).
8407 Argument expression.
8410 Length expression C type.
8416 Use the `_user` versions when the argument expression, `e`, is
8417 a user space address. In the cases of `ctf_user_integer*()` and
8418 `ctf_user_float*()`, `&e` must be a user space address, thus `e` must
8421 The `_nowrite` versions omit themselves from the trace data, but are
8422 otherwise identical. This means LTTng won't write the `_nowrite` fields
8423 to the recorded trace. Their primary purpose is to make some of the
8424 event context available to the <<enabling-disabling-events,recording
8425 event rule filters>> without having to commit the data to
8426 <<channel,sub-buffers>>.
8432 Terms related to LTTng and to tracing in general:
8434 [[def-action]]action::
8435 The part of a <<def-trigger,trigger>> which LTTng executes when the
8436 trigger <<def-condition,condition>> is satisfied.
8439 The https://diamon.org/babeltrace[Babeltrace] project, which includes:
8442 https://babeltrace.org/docs/v2.0/man1/babeltrace2.1/[cmd:babeltrace2]
8443 command-line interface.
8444 * The libbabeltrace2 library which offers a
8445 https://babeltrace.org/docs/v2.0/libbabeltrace2/[C API].
8446 * https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings].
8449 [[def-buffering-scheme]]<<channel-buffering-schemes,buffering scheme>>::
8450 A layout of <<def-sub-buffer,sub-buffers>> applied to a given channel.
8452 [[def-channel]]<<channel,channel>>::
8453 An entity which is responsible for a set of
8454 <<def-ring-buffer,ring buffers>>.
8456 <<def-recording-event-rule,Recording event rules>> are always attached
8457 to a specific channel.
8460 A source of time for a <<def-tracer,tracer>>.
8462 [[def-condition]]condition::
8463 The part of a <<def-trigger,trigger>> which must be satisfied for
8464 LTTng to attempt to execute the trigger <<def-action,actions>>.
8466 [[def-consumer-daemon]]<<lttng-consumerd,consumer daemon>>::
8467 A program which is responsible for consuming the full
8468 <<def-sub-buffer,sub-buffers>> and write them to a file system or
8469 send them over the network.
8471 [[def-current-trace-chunk]]current trace chunk::
8472 A <<def-trace-chunk,trace chunk>> which includes the current content
8473 of all the <<def-sub-buffer,sub-buffers>> of the
8474 <<def-tracing-session,recording session>> and the stream files
8475 produced since the latest event amongst:
8477 * The creation of the recording session.
8478 * The last <<def-tracing-session-rotation,recording session rotation>>, if
8481 <<channel-overwrite-mode-vs-discard-mode,discard mode>>::
8482 The <<def-event-record-loss-mode,event record loss mode>> in which
8483 the <<def-tracer,tracer>> _discards_ new <<def-event-record,event
8484 records>> when there's no <<def-sub-buffer,sub-buffer>> space left to
8487 [[def-event]]event::
8488 The execution of an <<def-instrumentation-point,instrumentation
8489 point>>, like a <<def-tracepoint,tracepoint>> that you manually place
8490 in some source code, or a Linux kprobe.
8492 When an instrumentation point is executed, LTTng creates an event.
8494 When an <<def-event-rule,event rule>> matches the event,
8495 <<def-lttng,LTTng>> executes some action, for example:
8497 * Record its payload to a <<def-sub-buffer,sub-buffer>> as an
8498 <<def-event-record,event record>>.
8499 * Attempt to execute the user-defined actions of a
8500 <<def-trigger,trigger>> with an
8501 <<add-event-rule-matches-trigger,``event rule matches''>> condition.
8503 [[def-event-name]]event name::
8504 The name of an <<def-event,event>>, which is also the name of the
8505 <<def-event-record,event record>>.
8507 This is also called the _instrumentation point name_.
8509 [[def-event-record]]event record::
8510 A record (binary serialization), in a <<def-trace,trace>>, of the
8511 payload of an <<def-event,event>>.
8513 The payload of an event record has zero or more _fields_.
8515 [[def-event-record-loss-mode]]<<channel-overwrite-mode-vs-discard-mode,event record loss mode>>::
8516 The mechanism by which event records of a given
8517 <<def-channel,channel>> are lost (not recorded) when there's no
8518 <<def-sub-buffer,sub-buffer>> space left to store them.
8520 [[def-event-rule]]<<event-rule,event rule>>::
8521 Set of conditions which an <<def-event,event>> must satisfy
8522 for LTTng to execute some action.
8524 An event rule is said to _match_ events, like a
8525 https://en.wikipedia.org/wiki/Regular_expression[regular expression]
8528 A <<def-recording-event-rule,recording event rule>> is a specific type
8529 of event rule of which the action is to <<def-record,record>> the event
8530 to a <<def-sub-buffer,sub-buffer>>.
8532 [[def-incl-set]]inclusion set::
8533 In the <<pid-tracking,process attribute inclusion set>> context: a
8534 set of <<def-proc-attr,process attributes>> of a given type.
8536 <<instrumenting,instrumentation>>::
8537 The use of <<def-lttng,LTTng>> probes to make a kernel or
8538 <<def-user-application,user application>> traceable.
8540 [[def-instrumentation-point]]instrumentation point::
8541 A point in the execution path of a kernel or
8542 <<def-user-application,user application>> which, when executed,
8543 create an <<def-event,event>>.
8545 instrumentation point name::
8546 See _<<def-event-name,event name>>_.
8548 `java.util.logging`::
8550 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities]
8551 of the Java platform.
8554 A https://logging.apache.org/log4j/1.2/[logging library] for Java
8555 developed by the Apache Software Foundation.
8558 Level of severity of a log statement or user space
8559 <<def-instrumentation-point,instrumentation point>>.
8561 [[def-lttng]]LTTng::
8562 The _Linux Trace Toolkit: next generation_ project.
8564 <<lttng-cli,cmd:lttng>>::
8565 A command-line tool provided by the <<def-lttng-tools,LTTng-tools>>
8566 project which you can use to send and receive control messages to and
8567 from a <<def-session-daemon,session daemon>>.
8570 The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
8571 which is a set of analyzing programs that you can use to obtain a
8572 higher level view of an <<def-lttng,LTTng>> <<def-trace,trace>>.
8574 cmd:lttng-consumerd::
8575 The name of the <<def-consumer-daemon,consumer daemon>> program.
8578 A utility provided by the <<def-lttng-tools,LTTng-tools>> project
8579 which can convert <<def-ring-buffer,ring buffer>> files (usually
8580 <<persistent-memory-file-systems,saved on a persistent memory file
8581 system>>) to <<def-trace,trace>> files.
8583 See man:lttng-crash(1).
8585 LTTng Documentation::
8588 <<lttng-live,LTTng live>>::
8589 A communication protocol between the <<lttng-relayd,relay daemon>> and
8590 live readers which makes it possible to show or analyze
8591 <<def-event-record,event records>> ``live'', as they're received by
8592 the <<def-relay-daemon,relay daemon>>.
8594 <<lttng-modules,LTTng-modules>>::
8595 The https://github.com/lttng/lttng-modules[LTTng-modules] project,
8596 which contains the Linux kernel modules to make the Linux kernel
8597 <<def-instrumentation-point,instrumentation points>> available for
8598 <<def-lttng,LTTng>> tracing.
8601 The name of the <<def-relay-daemon,relay daemon>> program.
8603 cmd:lttng-sessiond::
8604 The name of the <<def-session-daemon,session daemon>> program.
8606 [[def-lttng-tools]]LTTng-tools::
8607 The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
8608 contains the various programs and libraries used to
8609 <<controlling-tracing,control tracing>>.
8611 [[def-lttng-ust]]<<lttng-ust,LTTng-UST>>::
8612 The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
8613 contains libraries to instrument
8614 <<def-user-application,user applications>>.
8616 <<lttng-ust-agents,LTTng-UST Java agent>>::
8617 A Java package provided by the <<def-lttng-ust,LTTng-UST>> project to
8618 allow the LTTng instrumentation of `java.util.logging` and Apache
8619 log4j{nbsp}1.2 logging statements.
8621 <<lttng-ust-agents,LTTng-UST Python agent>>::
8622 A Python package provided by the <<def-lttng-ust,LTTng-UST>> project
8623 to allow the <<def-lttng,LTTng>> instrumentation of Python logging
8626 <<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
8627 The <<def-event-record-loss-mode,event record loss mode>> in which new
8628 <<def-event-record,event records>> _overwrite_ older event records
8629 when there's no <<def-sub-buffer,sub-buffer>> space left to store
8632 <<channel-buffering-schemes,per-process buffering>>::
8633 A <<def-buffering-scheme,buffering scheme>> in which each instrumented
8634 process has its own <<def-sub-buffer,sub-buffers>> for a given user
8635 space <<def-channel,channel>>.
8637 <<channel-buffering-schemes,per-user buffering>>::
8638 A <<def-buffering-scheme,buffering scheme>> in which all the processes
8639 of a Unix user share the same <<def-sub-buffer,sub-buffers>> for a
8640 given user space <<def-channel,channel>>.
8642 [[def-proc-attr]]process attribute::
8643 In the <<pid-tracking,process attribute inclusion set>> context:
8646 * A virtual process ID.
8648 * A virtual Unix user ID.
8650 * A virtual Unix group ID.
8653 See <<def-event-record,_event record_>>.
8655 [[def-record]]record (_verb_)::
8656 Serialize the binary payload of an <<def-event,event>> to a
8657 <<def-sub-buffer,sub-buffer>>.
8659 [[def-recording-event-rule]]<<event,recording event rule>>::
8660 Specific type of <<def-event-rule,event rule>> of which the action is
8661 to <<def-record,record>> the matched event to a
8662 <<def-sub-buffer,sub-buffer>>.
8664 [[def-tracing-session]][[def-recording-session]]<<tracing-session,recording session>>::
8665 A stateful dialogue between you and a <<lttng-sessiond,session daemon>>.
8667 [[def-tracing-session-rotation]]<<session-rotation,recording session rotation>>::
8668 The action of archiving the
8669 <<def-current-trace-chunk,current trace chunk>> of a
8670 <<def-tracing-session,recording session>>.
8672 [[def-relay-daemon]]<<lttng-relayd,relay daemon>>::
8673 A process which is responsible for receiving the <<def-trace,trace>>
8674 data which a distant <<def-consumer-daemon,consumer daemon>> sends.
8676 [[def-ring-buffer]]ring buffer::
8677 A set of <<def-sub-buffer,sub-buffers>>.
8680 See _<<def-tracing-session-rotation,recording session rotation>>_.
8682 [[def-session-daemon]]<<lttng-sessiond,session daemon>>::
8683 A process which receives control commands from you and orchestrates
8684 the <<def-tracer,tracers>> and various <<def-lttng,LTTng>> daemons.
8686 <<taking-a-snapshot,snapshot>>::
8687 A copy of the current data of all the <<def-sub-buffer,sub-buffers>>
8688 of a given <<def-tracing-session,recording session>>, saved as
8689 <<def-trace,trace>> files.
8691 [[def-sub-buffer]]sub-buffer::
8692 One part of an <<def-lttng,LTTng>> <<def-ring-buffer,ring buffer>>
8693 which contains <<def-event-record,event records>>.
8696 The time information attached to an <<def-event,event>> when LTTng
8699 [[def-trace]]trace (_noun_)::
8702 * One https://diamon.org/ctf/[CTF] metadata stream file.
8703 * One or more CTF data stream files which are the concatenations of one
8704 or more flushed <<def-sub-buffer,sub-buffers>>.
8706 [[def-trace-verb]]trace (_verb_)::
8707 From the perspective of a <<def-tracer,tracer>>: attempt to execute
8708 one or more actions when emitting an <<def-event,event>> in an
8709 application or in a system.
8711 [[def-trace-chunk]]trace chunk::
8712 A self-contained <<def-trace,trace>> which is part of a
8713 <<def-tracing-session,recording session>>. Each
8714 <<def-tracing-session-rotation, recording session rotation>> produces a
8715 <<def-trace-chunk-archive,trace chunk archive>>.
8717 [[def-trace-chunk-archive]]trace chunk archive::
8718 The result of a <<def-tracing-session-rotation, recording session
8721 <<def-lttng,LTTng>> doesn't manage any trace chunk archive, even if its
8722 containing <<def-tracing-session,recording session>> is still active: you
8723 are free to read it, modify it, move it, or remove it.
8726 The http://tracecompass.org[Trace Compass] project and application.
8728 [[def-tracepoint]]tracepoint::
8729 An instrumentation point using the tracepoint mechanism of the Linux
8730 kernel or of <<def-lttng-ust,LTTng-UST>>.
8732 tracepoint definition::
8733 The definition of a single <<def-tracepoint,tracepoint>>.
8736 The name of a <<def-tracepoint,tracepoint>>.
8738 [[def-tracepoint-provider]]tracepoint provider::
8739 A set of functions providing <<def-tracepoint,tracepoints>> to an
8740 instrumented <<def-user-application,user application>>.
8742 Not to be confused with a <<def-tracepoint-provider-package,tracepoint
8743 provider package>>: many tracepoint providers can exist within a
8744 tracepoint provider package.
8746 [[def-tracepoint-provider-package]]tracepoint provider package::
8747 One or more <<def-tracepoint-provider,tracepoint providers>> compiled
8748 as an https://en.wikipedia.org/wiki/Object_file[object file] or as a
8749 link:https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries[shared
8752 [[def-tracer]]tracer::
8753 A piece of software which executes some action when it emits
8754 an <<def-event,event>>, like <<def-record,record>> it to some
8757 <<domain,tracing domain>>::
8758 A type of LTTng <<def-tracer,tracer>>.
8760 <<tracing-group,tracing group>>::
8761 The Unix group which a Unix user can be part of to be allowed to
8762 control the Linux kernel LTTng <<def-tracer,tracer>>.
8764 [[def-trigger]]<<trigger,trigger>>::
8765 A <<def-condition,condition>>-<<def-action,actions>> pair; when the
8766 condition of a trigger is satisfied, LTTng attempts to execute its
8769 [[def-user-application]]user application::
8770 An application (program or library) running in user space, as opposed
8771 to a Linux kernel module, for example.