1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.13, 28 November 2023
7 include::../common/copyright.txt[]
10 include::../common/welcome.txt[]
13 include::../common/audience.txt[]
17 === What's in this documentation?
19 The LTTng Documentation is divided into the following sections:
21 * ``**<<nuts-and-bolts,Nuts and bolts>>**'' explains the
rudiments of software tracing and the rationale behind the
LTTng project.
Skip this section if you're familiar with software tracing and with the
LTTng project.
28 * ``**<<installing-lttng,Installation>>**'' describes the steps to
install the LTTng packages on common Linux distributions and from
their sources.
Skip this section if you already properly installed LTTng on your target
system.
35 * ``**<<getting-started,Quick start>>**'' is a concise guide to
36 get started quickly with LTTng kernel and user space tracing.
We recommend this section if you're new to LTTng or to software tracing
in general.
41 Skip this section if you're not new to LTTng.
* ``**<<core-concepts,Core concepts>>**'' explains the concepts at
the heart of LTTng.
46 It's a good idea to become familiar with the core concepts
47 before attempting to use the toolkit.
49 * ``**<<plumbing,Components of LTTng>>**'' describes the various
50 components of the LTTng machinery, like the daemons, the libraries,
51 and the command-line interface.
53 * ``**<<instrumenting,Instrumentation>>**'' shows different ways to
54 instrument user applications and the Linux kernel for LTTng tracing.
Instrumenting source code is essential to provide a meaningful
source of events.
59 Skip this section if you don't have a programming background.
61 * ``**<<controlling-tracing,Tracing control>>**'' is divided into topics
62 which demonstrate how to use the vast array of features that
63 LTTng{nbsp}{revision} offers.
65 * ``**<<reference,Reference>>**'' contains API reference tables.
67 * ``**<<glossary,Glossary>>**'' is a specialized dictionary of terms
68 related to LTTng or to the field of software tracing.
71 include::../common/convention.txt[]
74 include::../common/acknowledgements.txt[]
78 == What's new in LTTng{nbsp}{revision}?
80 LTTng{nbsp}{revision} bears the name _Nordicité_, the product of a
81 collaboration between https://champlibre.co/[Champ Libre] and
82 https://www.boreale.com/[Boréale]. This farmhouse IPA is brewed with
83 https://en.wikipedia.org/wiki/Kveik[Kveik] yeast and Québec-grown
84 barley, oats, and juniper branches. The result is a remarkable, fruity,
hazy golden IPA that offers a balanced touch of resinous and woodsy
notes.
88 New features and changes in LTTng{nbsp}{revision}:
92 * The LTTng trigger API of <<liblttng-ctl-lttng,`liblttng-ctl`>> now
93 offers the ``__event rule matches__'' condition (an <<event-rule,event
94 rule>> matches an event) as well as the following new actions:
97 * <<basic-tracing-session-control,Start or stop>> a recording session.
98 * <<session-rotation,Archive the current trace chunk>> of a
99 recording session (rotate).
100 * <<taking-a-snapshot,Take a snapshot>> of a recording session.
103 As a reminder, a <<trigger,trigger>> is a condition-actions pair. When
the condition of a trigger is satisfied, LTTng attempts to execute its
actions.
107 This feature is also available with the new man:lttng-add-trigger(1),
108 man:lttng-remove-trigger(1), and man:lttng-list-triggers(1)
109 <<lttng-cli,cmd:lttng>> commands.
Starting from LTTng{nbsp}{revision}, a trigger may have more than one
action.
114 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
115 trigger to a session daemon>>” to learn more.
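+
As a sketch, registering such a trigger from the command line could look
like the following commands (the trigger and recording session names are
hypothetical; see man:lttng-add-trigger(1) for the exact option syntax):
+
[role="term"]
----
# lttng add-trigger --name=my-trigger \
      --condition=event-rule-matches --type=kernel --name=sched_switch \
      --action=snapshot-session my-session \
      --action=notify
# lttng list-triggers
# lttng remove-trigger my-trigger
----
+
Note that `--name` appears twice: once for the trigger itself and once as
an argument of the ``event rule matches'' condition (the event name
pattern).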
117 * The LTTng <<lttng-ust,user space>> and <<lttng-modules,kernel>>
118 tracers offer the new namespace context field `time_ns`, which is the
inode number, in the proc file system, of the time namespace of the
process.
121 See man:lttng-add-context(1), man:lttng-ust(3), and
122 man:time_namespaces(7).
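+
For example, to make the kernel tracer record the new `time_ns` context
field with each event of the current recording session (a sketch; see
man:lttng-add-context(1) for the complete option list):
+
[role="term"]
----
# lttng add-context --kernel --type=time_ns
----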
124 * The link:/man[manual pages] of LTTng-tools now have a terminology and
125 style which match the LTTng Documentation, many fixes, more internal
126 and manual page links, clearer lists and procedures, superior
127 consistency, and usage examples.
129 The new man:lttng-event-rule(7) manual page explains the new, common
130 way to specify an event rule on the command line.
The new man:lttng-concepts(7) manual page explains the core concepts of
LTTng. Its content is essentially the ``<<core-concepts,Core
concepts>>'' section of this documentation, but more adapted to the
manual page format.
141 The major version part of the `liblttng-ust`
142 https://en.wikipedia.org/wiki/Soname[soname] is bumped, which means you
143 **must recompile** your instrumented applications/libraries and
144 <<tracepoint-provider,tracepoint provider packages>> to use
145 LTTng-UST{nbsp}{revision}.
147 This change became a necessity to clean up the library and for
148 `liblttng-ust` to stop exporting private symbols.
150 Also, LTTng{nbsp}{revision} prepends the `lttng_ust_` and `LTTNG_UST_`
151 prefix to all public macro/definition/function names to offer a
152 consistent API namespace. The LTTng{nbsp}2.12 API is still available;
see the ``Compatibility with previous APIs'' section of
man:lttng-ust(3).
157 Other notable changes:
159 * The `liblttng-ust` C{nbsp}API offers the new man:lttng_ust_vtracef(3)
160 and man:lttng_ust_vtracelog(3) macros which are to
161 man:lttng_ust_tracef(3) and man:lttng_ust_tracelog(3) what
162 man:vprintf(3) is to man:printf(3).
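+
For example, a hypothetical variadic logging wrapper could forward its
argument list to LTTng-UST with `lttng_ust_vtracef()` (a sketch,
assuming the LTTng-UST{nbsp}{revision} `<lttng/tracef.h>` header; the
wrapper name is illustrative):
+
[source,c]
----
#include <stdarg.h>
#include <lttng/tracef.h>

/* Hypothetical wrapper: records `fmt` and its arguments as an event. */
static void my_log(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	lttng_ust_vtracef(fmt, ap);
	va_end(ap);
}
----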
164 * LTTng-UST now only depends on https://liburcu.org/[`liburcu`] at build
165 time, not at run time.
169 * The preferred display base of event record integer fields which
170 contain memory addresses is now hexadecimal instead of decimal.
172 * The `pid` field is removed from `lttng_statedump_file_descriptor`
173 event records and the `file_table_address` field is added.
175 This new field is the address of the `files_struct` structure which
176 contains the file descriptor.
See the commit
``https://github.com/lttng/lttng-modules/commit/e7a0ca7205fd4be7c829d171baa8823fe4784c90[statedump: introduce `file_table_address`]''
for more details.
182 * The `flags` field of `syscall_entry_clone` event records is now a
183 structure containing two enumerations (exit signal and options).
185 This change makes the flag values more readable and meaningful.
See the commit
``https://github.com/lttng/lttng-modules/commit/d775625e2ba4825b73b5897e7701ad6e2bdba115[syscalls: Make `clone()`'s `flags` field a 2 enum struct]''
for more details.
191 * The memory footprint of the kernel tracer is improved: the latter only
192 generates metadata for the specific system call recording event rules
193 that you <<enabling-disabling-events,create>>.
199 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
200 generation_ is a modern toolkit for tracing Linux systems and
201 applications. So your first question might be:
208 As the history of software engineering progressed and led to what
209 we now take for granted--complex, numerous and
210 interdependent software applications running in parallel on
211 sophisticated operating systems like Linux--the authors of such
212 components, software developers, began feeling a natural
213 urge to have tools that would ensure the robustness and good performance
214 of their masterpieces.
216 One major achievement in this field is, inarguably, the
217 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
218 an essential tool for developers to find and fix bugs. But even the best
219 debugger won't help make your software run faster, and nowadays, faster
220 software means either more work done by the same hardware, or cheaper
221 hardware for the same work.
223 A _profiler_ is often the tool of choice to identify performance
224 bottlenecks. Profiling is suitable to identify _where_ performance is
225 lost in a given piece of software. The profiler outputs a profile, a
226 statistical summary of observed events, which you may use to discover
227 which functions took the most time to execute. However, a profiler won't
228 report _why_ some identified functions are the bottleneck. Bottlenecks
229 might only occur when specific conditions are met, conditions that are
230 sometimes impossible to capture by a statistical profiler, or impossible
231 to reproduce with an application altered by the overhead of an
232 event-based profiler. For a thorough investigation of software
233 performance issues, a history of execution is essential, with the
234 recorded values of variables and context fields you choose, and with as
235 little influence as possible on the instrumented application. This is
236 where tracing comes in handy.
238 _Tracing_ is a technique used to understand what goes on in a running
239 software system. The piece of software used for tracing is called a
240 _tracer_, which is conceptually similar to a tape recorder. When
241 recording, specific instrumentation points placed in the software source
242 code generate events that are saved on a giant tape: a _trace_ file. You
243 can record user application and operating system events at the same
244 time, opening the possibility of resolving a wide range of problems that
245 would otherwise be extremely challenging.
247 Tracing is often compared to _logging_. However, tracers and loggers are
248 two different tools, serving two different purposes. Tracers are
249 designed to record much lower-level events that occur much more
250 frequently than log messages, often in the range of thousands per
251 second, with very little execution overhead. Logging is more appropriate
252 for a very high-level analysis of less frequent events: user accesses,
253 exceptional conditions (errors and warnings, for example), database
254 transactions, instant messaging communications, and such. Simply put,
255 logging is one of the many use cases that can be satisfied with tracing.
257 The list of recorded events inside a trace file can be read manually
258 like a log file for the maximum level of detail, but it's generally
259 much more interesting to perform application-specific analyses to
260 produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do this.
264 In the end, this is what LTTng is: a powerful, open source set of
265 tools to trace the Linux kernel and user applications at the same time.
266 LTTng is composed of several components actively maintained and
267 developed by its link:/community/#where[community].
270 [[lttng-alternatives]]
271 === Alternatives to noch:{LTTng}
Excluding proprietary solutions, a few competing software tracers
exist for Linux:
276 https://github.com/dtrace4linux/linux[dtrace4linux]::
277 A port of Sun Microsystems' DTrace to Linux.
279 The cmd:dtrace tool interprets user scripts and is responsible for
loading code into the Linux kernel for further execution and collecting
the outputted data.
283 https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF]::
284 A subsystem in the Linux kernel in which a virtual machine can
285 execute programs passed from the user space to the kernel.
287 You can attach such programs to tracepoints and kprobes thanks to a
288 system call, and they can output data to the user space when executed
thanks to different mechanisms (pipe, VM register values, and eBPF maps,
to name a few).
292 https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]::
293 The de facto function tracer of the Linux kernel.
295 Its user interface is a set of special files in sysfs.
297 https://perf.wiki.kernel.org/[perf]::
298 A performance analysis tool for Linux which supports hardware
performance counters, tracepoints, as well as other counters and
types of probes.
The controlling utility of perf is the cmd:perf command line/text UI
tool.
305 https://linux.die.net/man/1/strace[strace]::
306 A command-line utility which records system calls made by a
user process, as well as signal deliveries and changes of process
state.
310 strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace] to
311 fulfill its function.
313 https://www.sysdig.org/[sysdig]::
Like SystemTap, sysdig uses scripts to analyze Linux kernel events.
316 You write scripts, or _chisels_ in the jargon of sysdig, in Lua and
317 sysdig executes them while it traces the system or afterwards. The
318 interface of sysdig is the cmd:sysdig command-line tool as well as the
319 text UI-based cmd:csysdig tool.
321 https://sourceware.org/systemtap/[SystemTap]::
322 A Linux kernel and user space tracer which uses custom user scripts
323 to produce plain text traces.
325 SystemTap converts the scripts to the C language, and then compiles them
326 as Linux kernel modules which are loaded to produce trace data. The
327 primary user interface of SystemTap is the cmd:stap command-line tool.
The main distinctive features of LTTng are that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead among the solutions above. It produces trace files in the
https://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.
335 LTTng is the result of more than 10{nbsp}years of active open source
336 development by a community of passionate developers. LTTng is currently
337 available on major desktop and server Linux distributions.
339 The main interface for tracing control is a single command-line tool
340 named cmd:lttng. The latter can create several recording sessions, enable
341 and disable recording event rules on the fly, filter events efficiently
342 with custom user expressions, start and stop tracing, and much more.
343 LTTng can write the traces on the file system or send them over the
344 network, and keep them totally or partially. You can make LTTng execute
345 user-defined actions when LTTng emits an event. You can view the traces
346 once tracing becomes inactive or as LTTng records events.
348 <<installing-lttng,Install LTTng now>> and
349 <<getting-started,start tracing>>!
355 **LTTng** is a set of software <<plumbing,components>> which interact to
356 <<instrumenting,instrument>> the Linux kernel and user applications, and
357 to <<controlling-tracing,control tracing>> (start and stop
358 recording, create recording event rules, and the rest). Those
359 components are bundled into the following packages:
LTTng-tools::
    Libraries and command-line interface to control tracing.
LTTng-modules::
    Linux kernel modules to instrument and trace the kernel.
LTTng-UST::
    Libraries and Java/Python packages to instrument and trace user
    applications.
371 Most distributions mark the LTTng-modules and LTTng-UST packages as
372 optional when installing LTTng-tools (which is always required). In the
following sections, we always provide the steps to install all three,
but note that:
376 * You only need to install LTTng-modules if you intend to use
377 the Linux kernel LTTng tracer.
* You only need to install LTTng-UST if you intend to use the user
space LTTng tracer.
383 .Availability of LTTng{nbsp}{revision} for major Linux distributions as of 17{nbsp}October{nbsp}2023.
386 |Distribution |Available in releases
388 |https://www.ubuntu.com/[Ubuntu]
389 |xref:ubuntu[Ubuntu 22.04 LTS _Jammy Jellyfish_, Ubuntu 23.04 _Lunar Lobster_, and Ubuntu 23.10 _Mantic Minotaur_].
391 Ubuntu{nbsp}18.04 LTS _Bionic Beaver_ and Ubuntu{nbsp}20.04 LTS _Focal Fossa_:
392 <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
394 |https://www.debian.org/[Debian]
395 |<<debian,Debian{nbsp}12 _bookworm_>>.
397 |https://getfedora.org/[Fedora]
398 |xref:fedora[Fedora{nbsp}37, Fedora{nbsp}38, and Fedora{nbsp}39].
400 |https://www.archlinux.org/[Arch Linux]
401 |<<arch-linux,_extra_ repository and AUR>>.
403 |https://alpinelinux.org/[Alpine Linux]
404 |xref:alpine-linux[Alpine Linux{nbsp}3.16, Alpine Linux{nbsp}3.17, and Alpine Linux{nbsp}3.18].
406 |https://buildroot.org/[Buildroot]
407 |xref:buildroot[Buildroot{nbsp}2022.02, Buildroot{nbsp}2022.05,
408 Buildroot{nbsp}2022.08, Buildroot{nbsp}2022.11, Buildroot{nbsp}2023.02,
409 Buildroot{nbsp}2023.05, and Buildroot{nbsp}2023.08].
411 |https://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
412 https://www.yoctoproject.org/[Yocto]
413 |xref:oe-yocto[Yocto Project{nbsp}3.3 _Honister_, Yocto Project{nbsp}4.0 _Kirkstone_,
414 Yocto Project{nbsp}4.1 _Langdale_, Yocto Project{nbsp}4.2 _Mickledore_, and
415 Yocto Project{nbsp}4.3 _Nanbield_].
421 For https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
packages, see https://packages.efficios.com/[EfficiOS Enterprise
Packages].
For other distributions, <<building-from-source,build LTTng from
source>>.
430 === [[ubuntu-official-repository]]Ubuntu
432 LTTng{nbsp}{revision} is available on Ubuntu 22.04 LTS _Jammy Jellyfish_, Ubuntu 23.04 _Lunar Lobster_, and Ubuntu 23.10 _Mantic Minotaur_. For previous supported releases of Ubuntu, <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
434 To install LTTng{nbsp}{revision} on Ubuntu{nbsp}22.04 LTS _Jammy Jellyfish_:
436 . Install the main LTTng{nbsp}{revision} packages:
441 # apt-get install lttng-tools
442 # apt-get install lttng-modules-dkms
443 # apt-get install liblttng-ust-dev
447 . **If you need to instrument and trace <<java-application,Java applications>>**,
448 install the LTTng-UST Java agent:
453 # apt-get install liblttng-ust-agent-java
457 . **If you need to instrument and trace <<python-application,Python{nbsp}3
458 applications>>**, install the LTTng-UST Python agent:
463 # apt-get install python3-lttngust
468 === Ubuntu: noch:{LTTng} Stable {revision} PPA
470 The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
471 Stable{nbsp}{revision} PPA] offers the latest stable LTTng{nbsp}{revision}
472 packages for Ubuntu{nbsp}18.04 LTS _Bionic Beaver_, Ubuntu{nbsp}20.04 LTS _Focal Fossa_,
473 and Ubuntu{nbsp}22.04 LTS _Jammy Jellyfish_.
To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision}
PPA:
. Add the LTTng Stable{nbsp}{revision} PPA repository and update the
list of packages:
482 [role="term",subs="attributes"]
# apt-add-repository ppa:lttng/stable-{revision}
# apt-get update
489 . Install the main LTTng{nbsp}{revision} packages:
494 # apt-get install lttng-tools
495 # apt-get install lttng-modules-dkms
496 # apt-get install liblttng-ust-dev
500 . **If you need to instrument and trace
501 <<java-application,Java applications>>**, install the LTTng-UST
507 # apt-get install liblttng-ust-agent-java
511 . **If you need to instrument and trace
512 <<python-application,Python{nbsp}3 applications>>**, install the
513 LTTng-UST Python agent:
518 # apt-get install python3-lttngust
525 To install LTTng{nbsp}{revision} on Debian{nbsp}12 _bookworm_:
527 . Install the main LTTng{nbsp}{revision} packages:
532 # apt install lttng-modules-dkms
533 # apt install liblttng-ust-dev
534 # apt install lttng-tools
538 . **If you need to instrument and trace <<java-application,Java
539 applications>>**, install the LTTng-UST Java agent:
544 # apt install liblttng-ust-agent-java
548 . **If you need to instrument and trace <<python-application,Python
549 applications>>**, install the LTTng-UST Python agent:
554 # apt install python3-lttngust
To install LTTng{nbsp}{revision} on Fedora{nbsp}37, Fedora{nbsp}38, or
Fedora{nbsp}39:
. Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
packages:
570 # yum install lttng-tools
571 # yum install lttng-ust
575 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
578 [role="term",subs="attributes,specialcharacters"]
581 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-{revision}.tar.bz2 &&
582 tar -xf lttng-modules-latest-{revision}.tar.bz2 &&
583 cd lttng-modules-{revision}.* &&
make &&
sudo make modules_install &&
sudo depmod -a
591 .Java and Python application instrumentation and tracing
593 If you need to instrument and trace <<java-application,Java
594 applications>> on Fedora, you need to build and install
595 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
596 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
597 `--enable-java-agent-all` options to the `configure` script, depending
598 on which Java logging framework you use.
600 If you need to instrument and trace <<python-application,Python
601 applications>> on Fedora, you need to build and install
602 LTTng-UST{nbsp}{revision} from source and pass the
603 `--enable-python-agent` option to the `configure` script.
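For example, to configure an LTTng-UST source tree with the log4j Java
agent (a sketch; adapt the option to the logging framework you use):

[role="term"]
----
$ ./configure --enable-java-agent-log4j
----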
610 LTTng-UST{nbsp}{revision} is available in the _extra_
611 repository of Arch Linux, while LTTng-tools{nbsp}{revision} and
612 LTTng-modules{nbsp}{revision} are available in the
613 https://aur.archlinux.org/[AUR].
615 To install LTTng{nbsp}{revision} on Arch Linux, using
616 https://github.com/Jguer/yay[yay] for the AUR packages:
618 . Install the main LTTng{nbsp}{revision} packages:
623 # pacman -Sy lttng-ust
624 $ yay -Sy lttng-tools
625 $ yay -Sy lttng-modules
629 . **If you need to instrument and trace <<python-application,Python
630 applications>>**, install the LTTng-UST Python agent:
635 # pacman -Sy python-lttngust
643 To install LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} on
644 Alpine Linux{nbsp}3.16, Alpine Linux{nbsp}3.17, or Alpine Linux{nbsp}3.18:
646 . Add the LTTng packages:
651 # apk add lttng-tools
652 # apk add lttng-ust-dev
656 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
659 [role="term",subs="attributes,specialcharacters"]
662 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-{revision}.tar.bz2 &&
663 tar -xf lttng-modules-latest-{revision}.tar.bz2 &&
664 cd lttng-modules-{revision}.* &&
make &&
sudo make modules_install &&
sudo depmod -a
675 To install LTTng{nbsp}{revision} on Buildroot{nbsp}2022.02, Buildroot{nbsp}2022.05,
676 Buildroot{nbsp}2022.08, Buildroot{nbsp}2022.11, Buildroot{nbsp}2023.02,
677 Buildroot{nbsp}2023.05, or Buildroot{nbsp}2023.08:
679 . Launch the Buildroot configuration tool:
688 . In **Kernel**, check **Linux kernel**.
689 . In **Toolchain**, check **Enable WCHAR support**.
690 . In **Target packages**{nbsp}→ **Debugging, profiling and benchmark**,
691 check **lttng-modules** and **lttng-tools**.
692 . In **Target packages**{nbsp}→ **Libraries**{nbsp}→
693 **Other**, check **lttng-libust**.
697 === OpenEmbedded and Yocto
699 LTTng{nbsp}{revision} recipes are available in the
700 https://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
701 layer for Yocto Project{nbsp}3.3 _Honister_, Yocto Project{nbsp}4.0 _Kirkstone_,
702 Yocto Project{nbsp}4.1 _Langdale_, Yocto Project{nbsp}4.2 _Mickledore_, and
Yocto Project{nbsp}4.3 _Nanbield_ under the following names:

* `lttng-tools`
* `lttng-modules`
* `lttng-ust`
709 With BitBake, the simplest way to include LTTng recipes in your target
710 image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:
713 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
718 . Select a machine and an image recipe.
719 . Click **Edit image recipe**.
720 . Under the **All recipes** tab, search for **lttng**.
721 . Check the desired LTTng recipes.
724 [[building-from-source]]
725 === Build from source
727 To build and install LTTng{nbsp}{revision} from source:
729 . Using the package manager of your distribution, or from source,
730 install the following dependencies of LTTng-tools and LTTng-UST:
733 * https://sourceforge.net/projects/libuuid/[libuuid]
734 * https://directory.fsf.org/wiki/Popt[popt]
735 * https://liburcu.org/[Userspace RCU]
736 * http://www.xmlsoft.org/[libxml2]
737 * **Optional**: https://github.com/numactl/numactl[numactl]
740 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
746 wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
747 tar -xf lttng-modules-latest-2.13.tar.bz2 &&
748 cd lttng-modules-2.13.* &&
make &&
sudo make modules_install &&
sudo depmod -a
755 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
761 wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
762 tar -xf lttng-ust-latest-2.13.tar.bz2 &&
cd lttng-ust-2.13.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
771 Add `--disable-numa` to `./configure` if you don't have
772 https://github.com/numactl/numactl[numactl].
776 .Java and Python application tracing
778 If you need to instrument and have LTTng trace <<java-application,Java
779 applications>>, pass the `--enable-java-agent-jul`,
780 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
781 `configure` script, depending on which Java logging framework you use.
783 If you need to instrument and have LTTng trace
784 <<python-application,Python applications>>, pass the
785 `--enable-python-agent` option to the `configure` script. You can set
786 the env:PYTHON environment variable to the path to the Python interpreter
787 for which to install the LTTng-UST Python agent package.
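For example (a sketch; the interpreter path is hypothetical):

[role="term"]
----
$ PYTHON=/usr/bin/python3 ./configure --enable-python-agent
----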
794 By default, LTTng-UST libraries are installed to
795 dir:{/usr/local/lib}, which is the de facto directory in which to
796 keep self-compiled and third-party libraries.
798 When <<building-tracepoint-providers-and-user-application,linking an
799 instrumented user application with `liblttng-ust`>>:
* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
  variable.
804 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
805 man:gcc(1), man:g++(1), or man:clang(1).
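For example, linking a hypothetical instrumented application could look
like this (the file names are placeholders):

[role="term"]
----
$ gcc -o app app.o tp.o -L/usr/local/lib -Wl,-rpath,/usr/local/lib \
      -llttng-ust -ldl
----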
809 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
815 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
816 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
cd lttng-tools-2.13.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
825 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
826 previous steps automatically for a given version of LTTng and confine
827 the installed files to a specific directory. This can be useful to try
828 LTTng without installing it on your system.
831 === Linux kernel module signature
833 Linux kernel modules require trusted signatures in order to be loaded
834 when any of the following is true:
836 * The system boots with
https://uefi.org/specs/UEFI/2.10/32_Secure_Boot_and_Driver_Signing.html#secure-boot-and-driver-signing[Secure Boot]
enabled.
840 * The Linux kernel which boots is configured with
841 `CONFIG_MODULE_SIG_FORCE`.
843 * The Linux kernel boots with a command line containing
844 `module.sig_enforce=1`.
846 .`root` user running <<lttng-sessiond,`lttng-sessiond`>> which fails to load a required <<lttng-modules,kernel module>> due to the signature enforcement policies.
851 Warning: No tracing group detected
852 modprobe: ERROR: could not insert 'lttng_ring_buffer_client_discard': Key was rejected by service
853 Error: Unable to load required module lttng-ring-buffer-client-discard
854 Warning: No kernel tracer available
858 There are several methods to enroll trusted keys for signing modules
859 that are built from source. The precise details vary from one Linux
860 version to another, and distributions may have their own mechanisms. For
861 example, https://github.com/dell/dkms[DKMS] may autogenerate a key and
862 sign modules, but the key isn't automatically enrolled.
See the
https://www.kernel.org/doc/html/latest/admin-guide/module-signing.html[Kernel
866 module signing facility] and the documentation of your distribution
867 to learn more about signing Linux kernel modules.
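On a Secure Boot system, one common approach is to enroll the signing
key as a Machine Owner Key (MOK) with cmd:mokutil, then reboot and
confirm the enrollment (a sketch; the key path is hypothetical and
depends on your distribution and DKMS version):

[role="term"]
----
# mokutil --import /var/lib/dkms/mok.pub
----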
This is a short guide to get started quickly with LTTng kernel and user
space tracing.
Before you follow this guide, make sure to <<installing-lttng,install>>
LTTng.
878 This tutorial walks you through the steps to:
880 . <<tracing-the-linux-kernel,Record Linux kernel events>>.
882 . <<tracing-your-own-user-application,Record the events of a user
883 application>> written in C.
. <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>>.
889 [[tracing-the-linux-kernel]]
890 === Record Linux kernel events
892 NOTE: The following command lines start with the `#` prompt because you
893 need root privileges to control the Linux kernel LTTng tracer. You can
894 also control the kernel tracer as a regular user if your Unix user is a
895 member of the <<tracing-group,tracing group>>.
897 . Create a <<tracing-session,recording session>> to write LTTng traces
898 to dir:{/tmp/my-kernel-trace}:
903 # lttng create my-kernel-session --output=/tmp/my-kernel-trace
907 . List the available kernel tracepoints and system calls:
912 # lttng list --kernel
913 # lttng list --kernel --syscall
917 . Create <<event,recording event rules>> which match events having
918 the desired names, for example the `sched_switch` and
`sched_process_fork` tracepoints, and the man:open(2) and man:close(2)
system calls:
925 # lttng enable-event --kernel sched_switch,sched_process_fork
926 # lttng enable-event --kernel --syscall open,close
930 Create a recording event rule which matches _all_ the Linux kernel
931 tracepoint events with the opt:lttng-enable-event(1):--all option
932 (recording with such a recording event rule generates a lot of data):
937 # lttng enable-event --kernel --all
. <<basic-tracing-session-control,Start recording>>:

# lttng start
950 . Do some operation on your system for a few seconds. For example,
951 load a website, or list the files of a directory.
. <<creating-destroying-tracing-sessions,Destroy>> the current
recording session:

# lttng destroy
963 The man:lttng-destroy(1) command doesn't destroy the trace data; it
964 only destroys the state of the recording session.
966 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
967 implicitly (see ``<<basic-tracing-session-control,Start and stop a
968 recording session>>''). You need to stop recording to make LTTng flush
969 the remaining trace data and make the trace readable.
. For the sake of this example, make the recorded trace accessible to
your Unix user:
977 # chown -R $(whoami) /tmp/my-kernel-trace
981 See ``<<viewing-and-analyzing-your-traces,View and analyze the
982 recorded events>>'' to view the recorded events.
985 [[tracing-your-own-user-application]]
986 === Record user application events
988 This section walks you through a simple example to record the events of
989 a _Hello world_ program written in{nbsp}C.
991 To create the traceable user application:
993 . Create the tracepoint provider header file, which defines the
994 tracepoints and the events they can generate:
[source,c]
.path:{hello-tp.h}
----
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER hello_world

#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

LTTNG_UST_TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    LTTNG_UST_TP_ARGS(
        int, my_integer_arg,
        char *, my_string_arg
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(my_string_field, my_string_arg)
        lttng_ust_field_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
1030 . Create the tracepoint provider package source file:
[source,c]
.path:{hello-tp.c}
----
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES
#define LTTNG_UST_TRACEPOINT_DEFINE

#include "hello-tp.h"
----
1043 . Build the tracepoint provider package:
1048 $ gcc -c -I. hello-tp.c
1052 . Create the _Hello World_ application source file:
[source,c]
.path:{hello.c}
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    unsigned int i;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call only exists for the purpose of this
     * demonstration, to pause the application in order for you to have
     * time to list its tracepoints. You don't need it otherwise.
     */
    getchar();

    /*
     * An lttng_ust_tracepoint() call.
     *
     * Arguments, as defined in `hello-tp.h`:
     *
     * 1. Tracepoint provider name (required)
     * 2. Tracepoint name (required)
     * 3. `my_integer_arg` (first user-defined argument)
     * 4. `my_string_arg` (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * C identifiers, NOT strings: they're in fact parts of variables
     * that the macros in `hello-tp.h` create.
     */
    lttng_ust_tracepoint(hello_world, my_first_tracepoint, 23,
                         "hi there!");

    for (i = 0; i < argc; i++) {
        lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                             i, argv[i]);
    }

    puts("Quitting now!");
    lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                         i * i, "i^2");
    return 0;
}
----
. Build the application:

$ gcc -c hello.c
1113 . Link the application with the tracepoint provider package,
1114 `liblttng-ust` and `libdl`:
1119 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1123 Here's the whole build process:
1126 .Build steps of the user space tracing tutorial.
1127 image::ust-flow.png[]
1129 To record the events of the user application:
1131 . Run the application with a few arguments:
1136 $ ./hello world and beyond
1145 Press Enter to continue...
1149 . Start an LTTng <<lttng-sessiond,session daemon>>:
1154 $ lttng-sessiond --daemonize
1158 NOTE: A session daemon might already be running, for example as a
1159 service that the service manager of your distribution started.
1161 . List the available user space tracepoints:
1166 $ lttng list --userspace
1170 You see the `hello_world:my_first_tracepoint` tracepoint listed
1171 under the `./hello` process.
1173 . Create a <<tracing-session,recording session>>:
1178 $ lttng create my-user-space-session
1182 . Create a <<event,recording event rule>> which matches user space
1183 tracepoint events named `hello_world:my_first_tracepoint`:
1188 $ lttng enable-event --userspace hello_world:my_first_tracepoint
1192 . <<basic-tracing-session-control,Start recording>>:
1201 . Go back to the running `hello` application and press **Enter**.
The program executes all `lttng_ust_tracepoint()` instrumentation
points, emitting events as the event rule you created in step{nbsp}5
matches them.
. <<creating-destroying-tracing-sessions,Destroy>> the current
recording session:

$ lttng destroy
1218 The man:lttng-destroy(1) command doesn't destroy the trace data; it
1219 only destroys the state of the recording session.
1221 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
1222 implicitly (see ``<<basic-tracing-session-control,Start and stop a
1223 recording session>>''). You need to stop recording to make LTTng flush
1224 the remaining trace data and make the trace readable.
1226 By default, LTTng saves the traces to the
1227 +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+ directory, where
1228 +__NAME__+ is the recording session name. The env:LTTNG_HOME environment
1229 variable defaults to `$HOME` if not set.
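The `$LTTNG_HOME`-then-`$HOME` lookup rule can be sketched in a few lines of Python (`default_traces_dir()` is a hypothetical helper for illustration, not part of LTTng):

```python
import os.path


def default_traces_dir(env):
    # `LTTNG_HOME` takes precedence; fall back to `HOME` when unset.
    home = env.get('LTTNG_HOME') or env.get('HOME')
    return os.path.join(home, 'lttng-traces')


# With `LTTNG_HOME` unset, LTTng falls back to `$HOME`:
print(default_traces_dir({'HOME': '/home/alice'}))

# With `LTTNG_HOME` set, it wins:
print(default_traces_dir({'HOME': '/home/alice',
                          'LTTNG_HOME': '/tmp/lttng'}))
```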
1232 [[viewing-and-analyzing-your-traces]]
1233 === View and analyze the recorded events
1235 Once you have completed the <<tracing-the-linux-kernel,Record Linux
1236 kernel events>> and <<tracing-your-own-user-application,Record user
1237 application events>> tutorials, you can inspect the recorded events.
1239 There are tools you can use to read LTTng traces:
1241 https://babeltrace.org/[Babeltrace{nbsp}2]::
1242 A rich, flexible trace manipulation toolkit which includes
1243 a versatile command-line interface
1244 (man:babeltrace2(1)),
1245 a https://babeltrace.org/docs/v2.0/libbabeltrace2/[C{nbsp}library],
1246 and https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
so that you can easily process or convert an LTTng trace with
your own script.
1250 The Babeltrace{nbsp}2 project ships with a plugin
1251 (man:babeltrace2-plugin-ctf(7)) which supports the format of the traces
1252 which LTTng produces, https://diamon.org/ctf/[CTF].
1254 http://tracecompass.org/[Trace Compass]::
1255 A graphical user interface for viewing and analyzing any type of
1256 logs or traces, including those of LTTng.
1258 NOTE: This section assumes that LTTng wrote the traces it recorded
1259 during the previous tutorials to their default location, in the
1260 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
1261 environment variable defaults to `$HOME` if not set.
1264 [[viewing-and-analyzing-your-traces-bt]]
1265 ==== Use the cmd:babeltrace2 command-line tool
1267 The simplest way to list all the recorded events of an LTTng trace is to
1268 pass its path to man:babeltrace2(1), without options:
1272 $ babeltrace2 ~/lttng-traces/my-user-space-session*
1275 The cmd:babeltrace2 command finds all traces recursively within the
1276 given path and prints all their events, sorting them chronologically.
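Conceptually, that chronological ordering is a _k_-way merge of per-stream event sequences which are each already sorted by timestamp. A minimal Python sketch of the idea (toy timestamps; this is not how Babeltrace{nbsp}2 is implemented internally):

```python
import heapq

# Two data streams; each is already sorted by timestamp (ns).
stream_a = [(100, 'sched_switch'), (300, 'syscall_entry_openat')]
stream_b = [(50, 'irq_handler_entry'), (200, 'sched_switch')]

# heapq.merge() lazily merges already-sorted inputs by timestamp,
# without materializing whole streams in memory.
merged = list(heapq.merge(stream_a, stream_b))
print(merged)
# [(50, 'irq_handler_entry'), (100, 'sched_switch'),
#  (200, 'sched_switch'), (300, 'syscall_entry_openat')]
```

The lazy merge mirrors how a trace reader can stream events from many trace files without loading them all first.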
Pipe the output of cmd:babeltrace2 into a tool like man:grep(1) for
further filtering:
1283 $ babeltrace2 /tmp/my-kernel-trace | grep _switch
1286 Pipe the output of cmd:babeltrace2 into a tool like man:wc(1) to count
1287 the recorded events:
1291 $ babeltrace2 /tmp/my-kernel-trace | grep _open | wc --lines
1295 [[viewing-and-analyzing-your-traces-bt-python]]
1296 ==== Use the Babeltrace{nbsp}2 Python bindings
1298 The <<viewing-and-analyzing-your-traces-bt,text output of
1299 cmd:babeltrace2>> is useful to isolate event records by simple matching
1300 using man:grep(1) and similar utilities. However, more elaborate
1301 filters, such as keeping only event records with a field value falling
1302 within a specific range, are not trivial to write using a shell.
1303 Moreover, reductions and even the most basic computations involving
1304 multiple event records are virtually impossible to implement.
1306 Fortunately, Babeltrace{nbsp}2 ships with
1307 https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings]
1308 which make it easy to read the event records of an LTTng trace
1309 sequentially and compute the desired information.
1311 The following script accepts an LTTng Linux kernel trace path as its
1312 first argument and prints the short names of the top five running
1313 processes on CPU{nbsp}0 during the whole trace:
import bt2
import sys
import collections


def top5proc():
    # Get the trace path from the first command-line argument
    it = bt2.TraceCollectionMessageIterator(sys.argv[1])

    # This counter dictionary will hold execution times:
    #
    #   Task command name -> Total execution time (ns)
    exec_times = collections.Counter()

    # This holds the last `sched_switch` timestamp
    last_ts = None

    for msg in it:
        # We only care about event messages
        if type(msg) is not bt2._EventMessageConst:
            continue

        # Event of the event message
        event = msg.event

        # Keep only `sched_switch` events
        if event.cls.name != 'sched_switch':
            continue

        # Keep only records of events which LTTng emitted from CPU 0
        if event.packet.context_field['cpu_id'] != 0:
            continue

        # Event timestamp (ns)
        cur_ts = msg.default_clock_snapshot.ns_from_origin

        if last_ts is None:
            # Start here
            last_ts = cur_ts

        # (Short) name of the previous task command
        prev_comm = str(event.payload_field['prev_comm'])

        # Initialize an entry in our dictionary if not done yet
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Print the top five processes by total execution time
    for name, ns in exec_times.most_common(5):
        print('{:20}{} s'.format(name, ns / 1e9))


if __name__ == '__main__':
    top5proc()
1387 $ python3 top5proc.py /tmp/my-kernel-trace/kernel
1393 swapper/0 48.607245889 s
1394 chromium 7.192738188 s
1395 pavucontrol 0.709894415 s
1396 Compositor 0.660867933 s
1397 Xorg.bin 0.616753786 s
1400 Note that `swapper/0` is the ``idle'' process of CPU{nbsp}0 on Linux;
1401 since we weren't using the CPU that much when recording, its first
1402 position in the list makes sense.
1406 == [[understanding-lttng]]Core concepts
1408 From a user's perspective, the LTTng system is built on a few concepts,
1409 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1410 operates by sending commands to the <<lttng-sessiond,session daemon>>
1411 (through <<liblttng-ctl-lttng,`liblttng-ctl`>>).
Understanding how those objects relate to each other is key to
mastering LTTng.
1416 The core concepts of LTTng are:
1418 * <<"event-rule","Instrumentation point, event rule, and event">>
1419 * <<trigger,Trigger>>
1420 * <<tracing-session,Recording session>>
1421 * <<domain,Tracing domain>>
1422 * <<channel,Channel and ring buffer>>
1423 * <<event,Recording event rule and event record>>
1425 NOTE: The man:lttng-concepts(7) manual page also documents the core
1426 concepts of LTTng, with more links to other LTTng-tools manual pages.
[[event-rule]]
=== Instrumentation point, event rule, and event
1432 An _instrumentation point_ is a point, within a piece of software,
1433 which, when executed, creates an LTTng _event_.
LTTng offers various <<instrumentation-point-types,types of
instrumentation points>>.
1438 An _event rule_ is a set of conditions to match a set of events.
1440 When LTTng creates an event{nbsp}__E__, an event rule{nbsp}__ER__ is
1441 said to __match__{nbsp}__E__ when{nbsp}__E__ satisfies _all_ the
1442 conditions of{nbsp}__ER__. This concept is similar to a
1443 https://en.wikipedia.org/wiki/Regular_expression[regular expression]
1444 which matches a set of strings.
When an event rule matches an event, LTTng _emits_ the event and
therefore attempts to execute one or more actions.
1451 [[event-creation-emission-opti]]The event creation and emission
1452 processes are documentation concepts to help understand the journey from
1453 an instrumentation point to the execution of actions.
1455 The actual creation of an event can be costly because LTTng needs to
1456 evaluate the arguments of the instrumentation point.
1458 In practice, LTTng implements various optimizations for the Linux kernel
1459 and user space <<domain,tracing domains>> to avoid actually creating an
1460 event when the tracer knows, thanks to properties which are independent
1461 from the event payload and current context, that it would never emit
1462 such an event. Those properties are:
1464 * The <<instrumentation-point-types,instrumentation point type>>.
1466 * The instrumentation point name.
1468 * The instrumentation point log level.
1470 * For a <<event,recording event rule>>:
1471 ** The status of the rule itself.
1472 ** The status of the <<channel,channel>>.
1473 ** The activity of the <<tracing-session,recording session>>.
1474 ** Whether or not the process for which LTTng would create the event is
1475 <<pid-tracking,allowed to record events>>.
1477 In other words: if, for a given instrumentation point{nbsp}__IP__, the
1478 LTTng tracer knows that it would never emit an event,
1479 executing{nbsp}__IP__ represents a simple boolean variable check and,
1480 for a Linux kernel recording event rule, a few process attribute checks.
As of LTTng{nbsp}{revision}, there are two places where you can find an
event rule:
1486 <<event,Recording event rule>>::
1487 A specific type of event rule of which the action is to record the
1488 matched event as an event record.
1490 See ``<<enabling-disabling-events,Create and enable a recording event
1491 rule>>'' to learn more.
1493 ``Event rule matches'' <<trigger,trigger>> condition (since LTTng{nbsp}2.13)::
1494 When the event rule of the trigger condition matches an event, LTTng
can execute user-defined actions such as sending an LTTng
notification, <<basic-tracing-session-control,starting a recording
session>>, and more.
1500 See “<<add-event-rule-matches-trigger,Add an ``event rule matches''
1501 trigger to a session daemon>>” to learn more.
1503 For LTTng to emit an event{nbsp}__E__,{nbsp}__E__ must satisfy _all_ the
1504 basic conditions of an event rule{nbsp}__ER__, that is:
1506 * The instrumentation point from which LTTng
1507 creates{nbsp}__E__ has a specific
1508 <<instrumentation-point-types,type>>.
* A pattern matches the name of{nbsp}__E__ while another pattern
doesn't.
* The log level of the instrumentation point from which LTTng
creates{nbsp}__E__ is at least as severe as some value, or is exactly
some value.
1517 * The fields of the payload of{nbsp}__E__ and the current context fields
1518 satisfy a filter expression.
A <<event,recording event rule>> has additional, implicit conditions to
satisfy.
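To make these matching semantics concrete, here's a toy Python model (illustrative only, not the LTTng API): a rule is a conjunction of conditions, and it matches an event only when every condition holds, much like a regular expression matches a string.

```python
from fnmatch import fnmatchcase


def rule_matches(rule, event):
    """Return True when `event` satisfies ALL the conditions of `rule`."""
    # Instrumentation point type condition
    if event['type'] != rule['type']:
        return False

    # Name pattern condition (globbing pattern, like `sched_*`)
    if not fnmatchcase(event['name'], rule['name_pattern']):
        return False

    # Filter expression on payload and context fields
    return rule['filter'](event['fields'])


rule = {'type': 'tracepoint', 'name_pattern': 'sched_*',
        'filter': lambda fields: fields['cpu_id'] == 0}
event = {'type': 'tracepoint', 'name': 'sched_switch',
         'fields': {'cpu_id': 0}}

print(rule_matches(rule, event))  # True
```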
1524 [[instrumentation-point-types]]
1525 ==== Instrumentation point types
1527 As of LTTng{nbsp}{revision}, the available instrumentation point
types are, depending on the <<domain,tracing domain>>:

Linux kernel::
LTTng tracepoint:::
1532 A statically defined point in the source code of the kernel
1533 image or of a kernel module using the
1534 <<lttng-modules,LTTng-modules>> macros.
1536 Linux kernel system call:::
1537 Entry, exit, or both of a Linux kernel system call.
1539 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kprobe]:::
1540 A single probe dynamically placed in the compiled kernel code.
1542 When you create such an instrumentation point, you set its memory
1543 address or symbol name.
1545 Linux user space probe:::
1546 A single probe dynamically placed at the entry of a compiled
1547 user space application/library function through the kernel.
1549 When you create such an instrumentation point, you set:
1552 With the ELF method::
1553 Its application/library path and its symbol name.
1555 With the USDT method::
1556 Its application/library path, its provider name, and its probe name.
1558 ``USDT'' stands for _SystemTap User-level Statically Defined Tracing_,
1559 a http://dtrace.org/blogs/about/[DTrace]-style marker.
1562 As of LTTng{nbsp}{revision}, LTTng only supports USDT probes which
1563 are _not_ reference-counted.
1565 Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kretprobe]:::
1566 Entry, exit, or both of a Linux kernel function.
1568 When you create such an instrumentation point, you set the memory
1569 address or symbol name of its function.
User space::
LTTng user space tracepoint:::
A statically defined point in the source code of a C/$$C++$$
application/library using the
<<lttng-ust,LTTng-UST>> macros.
1577 `java.util.logging`, Apache log4j, and Python::
1578 Java or Python logging statement:::
A method call on a Java or Python logger attached to an
LTTng-UST handler.
1582 See ``<<list-instrumentation-points,List the available instrumentation
1583 points>>'' to learn how to list available Linux kernel, user space, and
1584 logging instrumentation points.
[[trigger]]
=== Trigger

A _trigger_ associates a condition with one or more actions.

When the condition of a trigger is satisfied, LTTng attempts to execute
its actions.

As of LTTng{nbsp}{revision}, the available trigger conditions and
actions are:

Conditions::
1600 * The consumed buffer size of a given <<tracing-session,recording
1601 session>> becomes greater than some value.
1603 * The buffer usage of a given <<channel,channel>> becomes greater than
1606 * The buffer usage of a given channel becomes less than some value.
1608 * There's an ongoing <<session-rotation,recording session rotation>>.
1610 * A recording session rotation becomes completed.
1612 * An <<add-event-rule-matches-trigger,event rule matches>> an event.
Actions::

* <<trigger-event-notif,Send a notification>> to a user application.
1617 * <<basic-tracing-session-control,Start>> a given recording session.
1618 * <<basic-tracing-session-control,Stop>> a given recording session.
1619 * <<session-rotation,Archive the current trace chunk>> of a given
1620 recording session (rotate).
1621 * <<taking-a-snapshot,Take a snapshot>> of a given recording session.
1623 A trigger belongs to a <<lttng-sessiond,session daemon>>, not to a
1624 specific recording session. For a given session daemon, each Unix user has
1625 its own, private triggers. Note, however, that the `root` Unix user may,
1626 for the root session daemon:
1628 * Add a trigger as another Unix user.
1630 * List all the triggers, regardless of their owner.
1632 * Remove a trigger which belongs to another Unix user.
1634 For a given session daemon and Unix user, a trigger has a unique name.
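As a toy model of the condition→actions association described above (plain Python for illustration; real triggers are managed through `liblttng-ctl` or the cmd:lttng command-line tool):

```python
class Trigger:
    """A condition associated with one or more actions."""

    def __init__(self, name, condition, actions):
        self.name = name            # Unique per session daemon and Unix user
        self.condition = condition  # Callable: state -> bool
        self.actions = actions      # Callables to execute when satisfied

    def evaluate(self, state):
        if not self.condition(state):
            return False

        for action in self.actions:
            action(state)

        return True


executed = []
trigger = Trigger(
    'notify-on-high-usage',
    condition=lambda state: state['buffer_usage'] > 0.75,
    actions=[lambda state: executed.append('send notification'),
             lambda state: executed.append('take snapshot')],
)

trigger.evaluate({'buffer_usage': 0.5})  # Condition not satisfied: no action
trigger.evaluate({'buffer_usage': 0.9})  # Executes both actions, in order
print(executed)  # ['send notification', 'take snapshot']
```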
[[tracing-session]]
=== Recording session
1640 A _recording session_ (named ``tracing session'' prior to
1641 LTTng{nbsp}2.13) is a stateful dialogue between you and a
1642 <<lttng-sessiond,session daemon>> for everything related to
1643 <<event,event recording>>.
1645 Everything that you do when you control LTTng tracers to record events
1646 happens within a recording session. In particular, a recording session:
1648 * Has its own name, unique for a given session daemon.
1650 * Has its own set of trace files, if any.
1652 * Has its own state of activity (started or stopped).
An active recording session is an implicit <<event,recording event rule>>
condition.
* Has its own <<tracing-session-mode,mode>> (local, network streaming,
snapshot, or live).
1660 * Has its own <<channel,channels>> to which are attached their own
1661 recording event rules.
1663 * Has its own <<pid-tracking,process attribute inclusion sets>>.
1666 .A _recording session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,recording event rules>>.
1667 image::concepts.png[]
Those attributes and objects are completely isolated between different
recording sessions.
1672 A recording session is like an
1673 https://en.wikipedia.org/wiki/Automated_teller_machine[ATM] session: the
1674 operations you do on the banking system through the ATM don't alter the
1675 data of other users of the same system. In the case of the ATM, a
1676 session lasts as long as your bank card is inside. In the case of LTTng,
1677 a recording session lasts from the man:lttng-create(1) command to the
1678 man:lttng-destroy(1) command.
1681 .Each Unix user has its own set of recording sessions.
1682 image::many-sessions.png[]
1684 A recording session belongs to a <<lttng-sessiond,session daemon>>. For a
1685 given session daemon, each Unix user has its own, private recording
1686 sessions. Note, however, that the `root` Unix user may operate on or
1687 destroy another user's recording session.
1690 [[tracing-session-mode]]
1691 ==== Recording session mode
1693 LTTng offers four recording session modes:
1695 [[local-mode]]Local mode::
1696 Write the trace data to the local file system.
1698 [[net-streaming-mode]]Network streaming mode::
1699 Send the trace data over the network to a listening
1700 <<lttng-relayd,relay daemon>>.
[[snapshot-mode]]Snapshot mode::
Only write the trace data to the local file system or send it to a
listening relay daemon when LTTng <<taking-a-snapshot,takes a
snapshot>>.
+
LTTng forces all the <<channel,channels>> of such a recording session
to be configured as snapshot-ready on creation.
+
LTTng takes a snapshot of such a recording session when:
1713 * You run the man:lttng-snapshot(1) command.
1715 * LTTng executes a `snapshot-session` <<trigger,trigger>> action.
1718 [[live-mode]]Live mode::
1719 Send the trace data over the network to a listening relay daemon
1720 for <<lttng-live,live reading>>.
An LTTng live reader (for example, man:babeltrace2(1)) can connect to
the same relay daemon to receive trace data while the recording session
is active.
[[domain]]
=== Tracing domain

A _tracing domain_ identifies a type of LTTng tracer.
1732 A tracing domain has its own properties and features.
There are currently five available tracing domains:

* Linux kernel
* User space
* `java.util.logging` (JUL)
* Apache log4j
* Python
1742 You must specify a tracing domain to target a type of LTTng tracer when
1743 using some <<lttng-cli,cmd:lttng>> commands to avoid ambiguity. For
1744 example, because the Linux kernel and user space tracing domains support
1745 named tracepoints as <<event-rule,instrumentation points>>, you need to
1746 specify a tracing domain when you <<enabling-disabling-events,create
1747 an event rule>> because both tracing domains could have tracepoints
1748 sharing the same name.
You can create <<channel,channels>> in the Linux kernel and user space
tracing domains. The other tracing domains have a single, default
channel.
[[channel]]
=== Channel and ring buffer
A _channel_ is an object which is responsible for a set of
_ring buffers_.

Each ring buffer is divided into multiple _sub-buffers_. When a
<<event,recording event rule>>
matches an event, LTTng can record it to one or more sub-buffers of one
or more ring buffers.
1766 When you <<enabling-disabling-channels,create a channel>>, you set its
1767 final attributes, that is:
1769 * Its <<channel-buffering-schemes,buffering scheme>>.
1771 * What to do <<channel-overwrite-mode-vs-discard-mode,when there's no
1772 space left>> for a new event record because all sub-buffers are full.
1774 * The <<channel-subbuf-size-vs-subbuf-count,size of each ring buffer and
1775 how many sub-buffers>> a ring buffer has.
1777 * The <<tracefile-rotation,size of each trace file LTTng writes for this
1778 channel and the maximum count>> of trace files.
* The periods of its <<channel-read-timer,read>>,
<<channel-switch-timer,switch>>, and <<channel-monitor-timer,monitor>>
timers.
1784 * For a Linux kernel channel: its output type.
1786 See the opt:lttng-enable-channel(1):--output option of the
1787 man:lttng-enable-channel(1) command.
1789 * For a user space channel: the value of its
1790 <<blocking-timeout-example,blocking timeout>>.
1792 A channel is always associated to a <<domain,tracing domain>>. The
1793 `java.util.logging` (JUL), log4j, and Python tracing domains each have a
1794 default channel which you can't configure.
1796 A channel owns <<event,recording event rules>>.
1799 [[channel-buffering-schemes]]
1800 ==== Buffering scheme
1802 A channel has at least one ring buffer _per CPU_. LTTng always records
1803 an event to the ring buffer dedicated to the CPU which emits it.
1805 The buffering scheme of a user space channel determines what has its own
1806 set of per-CPU ring buffers:
1808 Per-user buffering::
1809 Allocate one set of ring buffers--one per CPU--shared by all the
1810 instrumented processes of:
If your Unix user is `root`:::
Each Unix user.
Otherwise:::
Your Unix user.
1816 .Per-user buffering scheme (recording session belongs to the `root` Unix user).
1817 image::per-user-buffering-root.png[]
1825 .Per-user buffering scheme (recording session belongs to the `Bob` Unix user).
1826 image::per-user-buffering.png[]
1829 Per-process buffering::
1830 Allocate one set of ring buffers--one per CPU--for each
1831 instrumented process of:
If your Unix user is `root`:::
Each instrumented process.
Otherwise:::
Each instrumented process of your Unix user.
1837 .Per-process buffering scheme (recording session belongs to the `root` Unix user).
1838 image::per-process-buffering-root.png[]
1846 .Per-process buffering scheme (recording session belongs to the `Bob` Unix user).
1847 image::per-process-buffering.png[]
1850 The per-process buffering scheme tends to consume more memory than the
1851 per-user option because systems generally have more instrumented
1852 processes than Unix users running instrumented processes. However, the
1853 per-process buffering scheme ensures that one process having a high
event throughput won't fill all the shared sub-buffers of the same Unix
user, only its own.
1857 The buffering scheme of a Linux kernel channel is always to allocate a
1858 single set of ring buffers for the whole system. This scheme is similar
to the per-user option, but with a single, global user ``running'' the
kernel.
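The memory trade-off between the schemes comes down to how many sets of per-CPU ring buffers exist. A back-of-the-envelope estimate (all counts and sizes below are made-up illustration values, not LTTng defaults):

```python
def set_size_bytes(n_cpus, subbuf_count, subbuf_size):
    # One ring buffer per CPU, each holding `subbuf_count`
    # sub-buffers of `subbuf_size` bytes.
    return n_cpus * subbuf_count * subbuf_size


MIB = 1 << 20

# One set of ring buffers: 8 CPUs, 4 sub-buffers of 1 MiB each per CPU.
one_set = set_size_bytes(n_cpus=8, subbuf_count=4, subbuf_size=1 * MIB)

# Per-user: one set per Unix user running instrumented processes.
print(2 * one_set // MIB, 'MiB')   # 2 users -> 64 MiB

# Per-process: one set per instrumented process.
print(30 * one_set // MIB, 'MiB')  # 30 processes -> 960 MiB
```

This is why the per-process scheme generally consumes more memory: systems tend to have far more instrumented processes than Unix users running instrumented processes.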
1863 [[channel-overwrite-mode-vs-discard-mode]]
1864 ==== Event record loss mode
1866 When LTTng emits an event, LTTng can record it to a specific, available
1867 sub-buffer within the ring buffers of specific channels. When there's no
1868 space left in a sub-buffer, the tracer marks it as consumable and
1869 another, available sub-buffer starts receiving the following event
1870 records. An LTTng <<lttng-consumerd,consumer daemon>> eventually
1871 consumes the marked sub-buffer, which returns to the available state.
1874 [role="docsvg-channel-subbuf-anim"]
1883 In an ideal world, sub-buffers are consumed faster than they're filled,
1884 as it's the case in the previous animation. In the real world,
1885 however, all sub-buffers can be full at some point, leaving no space to
1886 record the following events.
1888 By default, <<lttng-modules,LTTng-modules>> and <<lttng-ust,LTTng-UST>>
1889 are _non-blocking_ tracers: when there's no available sub-buffer to
1890 record an event, it's acceptable to lose event records when the
1891 alternative would be to cause substantial delays in the execution of the
1892 instrumented application. LTTng privileges performance over integrity;
1893 it aims at perturbing the instrumented application as little as possible
1894 in order to make the detection of subtle race conditions and rare
1895 interrupt cascades possible.
1897 Since LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST, supports
1898 a _blocking mode_. See the <<blocking-timeout-example,blocking timeout
1899 example>> to learn how to use the blocking mode.
1901 When it comes to losing event records because there's no available
1902 sub-buffer, or because the blocking timeout of
1903 the channel is reached, the _event record loss mode_ of the channel
1904 determines what to do. The available event record loss modes are:
1906 [[discard-mode]]Discard mode::
1907 Drop the newest event records until a sub-buffer becomes available.
1909 This is the only available mode when you specify a blocking timeout.
1911 With this mode, LTTng increments a count of lost event records when an
1912 event record is lost and saves this count to the trace. A trace reader
1913 can use the saved discarded event record count of the trace to decide
whether or not to perform some analysis even if trace data is known to
be missing.
1917 [[overwrite-mode]]Overwrite mode::
1918 Clear the sub-buffer containing the oldest event records and start
1919 writing the newest event records there.
1921 This mode is sometimes called _flight recorder mode_ because it's
1922 similar to a https://en.wikipedia.org/wiki/Flight_recorder[flight
1923 recorder]: always keep a fixed amount of the latest data. It's also
1924 similar to the roll mode of an oscilloscope.
1926 Since LTTng{nbsp}2.8, with this mode, LTTng writes to a given sub-buffer
1927 its sequence number within its data stream. With a <<local-mode,local>>,
1928 <<net-streaming-mode,network streaming>>, or <<live-mode,live>> recording
1929 session, a trace reader can use such sequence numbers to report lost
1930 packets. A trace reader can use the saved discarded sub-buffer (packet)
1931 count of the trace to decide whether or not to perform some analysis
1932 even if trace data is known to be missing.
1934 With this mode, LTTng doesn't write to the trace the exact number of
1935 lost event records in the lost sub-buffers.
1937 Which mechanism you should choose depends on your context: prioritize
1938 the newest or the oldest event records in the ring buffer?
Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.
1945 There are a few ways to decrease your probability of losing event
1946 records. The ``<<channel-subbuf-size-vs-subbuf-count,Sub-buffer size and
1947 count>>'' section shows how to fine-tune the sub-buffer size and count
1948 of a channel to virtually stop losing event records, though at the cost
1949 of greater memory usage.
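A toy simulation contrasts the two loss modes (one event record per sub-buffer and no consumer at all, which is a gross simplification — in reality a whole sub-buffer holds many records — but it isolates the newest-vs-oldest difference):

```python
def record_all(events, subbuf_count, mode):
    """Fill a never-consumed ring of `subbuf_count` one-record sub-buffers."""
    ring = []
    lost = 0

    for event in events:
        if len(ring) < subbuf_count:
            ring.append(event)
        elif mode == 'discard':
            lost += 1          # Drop the NEWEST record
        else:                  # 'overwrite'
            ring.pop(0)        # Drop the OLDEST record
            ring.append(event)
            lost += 1

    return ring, lost


events = list(range(10))
print(record_all(events, 4, 'discard'))    # ([0, 1, 2, 3], 6)
print(record_all(events, 4, 'overwrite'))  # ([6, 7, 8, 9], 6)
```

Both modes lose the same number of records here; they differ in _which_ records survive: discard keeps the oldest, overwrite keeps the newest.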
1952 [[channel-subbuf-size-vs-subbuf-count]]
1953 ==== Sub-buffer size and count
A channel has one or more ring buffers for each CPU of the target system.
1957 See the ``<<channel-buffering-schemes,Buffering scheme>>'' section to
1958 learn how many ring buffers of a given channel are dedicated to each CPU
1959 depending on its buffering scheme.
1961 Set the size of each sub-buffer the ring buffers of a channel contain
1962 and how many there are
1963 when you <<enabling-disabling-channels,create it>>.
1965 Note that LTTng switching the current sub-buffer of a ring buffer
1966 (marking a full one as consumable and switching to an available one for
1967 LTTng to record the next events) introduces noticeable CPU overhead.
1968 Knowing this, the following list presents a few practical situations
1969 along with how to configure the sub-buffer size and count for them:
1971 High event throughput::
In general, prefer large sub-buffers to lower the risk of losing
event records.
+
Having larger sub-buffers also ensures a lower sub-buffer switching
frequency.
1978 The sub-buffer count is only meaningful if you create the channel in
1979 <<overwrite-mode,overwrite mode>>: in this case, if LTTng overwrites a
1980 sub-buffer, then the other sub-buffers are left unaltered.
1982 Low event throughput::
1983 In general, prefer smaller sub-buffers since the risk of losing
1984 event records is low.
1986 Because LTTng emits events less frequently, the sub-buffer switching
1987 frequency should remain low and therefore the overhead of the tracer
1988 shouldn't be a problem.
Low memory system::
If your target system has a low memory limit, prefer fewer first,
then smaller sub-buffers.
Even if the system is limited in memory, you want to keep the
sub-buffers as large as possible to avoid a high sub-buffer switching
frequency.
1998 Note that LTTng uses https://diamon.org/ctf/[CTF] as its trace format,
1999 which means event record data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Therefore, a
2001 sub-buffer size of 1{nbsp}MiB is considered large.
2003 The previous scenarios highlight the major trade-off between a few large
2004 sub-buffers and more, smaller sub-buffers: sub-buffer switching
2005 frequency vs. how many event records are lost in overwrite mode.
2006 Assuming a constant event throughput and using the overwrite mode, the
2007 two following configurations have the same ring buffer total size:
2010 [role="docsvg-channel-subbuf-size-vs-count-anim"]
2015 Two sub-buffers of 4{nbsp}MiB each::
2016 Expect a very low sub-buffer switching frequency, but if LTTng
2017 ever needs to overwrite a sub-buffer, half of the event records so
2018 far (4{nbsp}MiB) are definitely lost.
2020 Eight sub-buffers of 1{nbsp}MiB each::
2021 Expect four times the tracer overhead of the configuration above,
but if LTTng needs to overwrite a sub-buffer, only one eighth of the
event records so far (1{nbsp}MiB) is definitely lost.
2025 In <<discard-mode,discard mode>>, the sub-buffer count parameter is
pointless: use two sub-buffers and set their size according to your
requirements.
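The trade-off reduces to simple arithmetic. Using the ~32-byte average record size quoted above and a constant 8{nbsp}MiB total ring buffer size (illustration values):

```python
TOTAL = 8 * (1 << 20)   # Constant total ring buffer size: 8 MiB
AVG_RECORD = 32         # Average LTTng kernel event record size (bytes)


def overwrite_stats(subbuf_count):
    subbuf_size = TOTAL // subbuf_count
    return {
        # Sub-buffer switches needed to fill the whole ring buffer:
        # more, smaller sub-buffers -> proportionally more overhead.
        'switches_per_fill': subbuf_count,
        # Data definitely lost when LTTng overwrites one sub-buffer.
        'max_loss_mib': subbuf_size / (1 << 20),
        'max_loss_records': subbuf_size // AVG_RECORD,
    }


print(overwrite_stats(2))  # two 4 MiB sub-buffers
print(overwrite_stats(8))  # eight 1 MiB sub-buffers
```

With eight sub-buffers, switching overhead is four times higher, but a single overwrite discards four times less data.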
2030 [[tracefile-rotation]]
2031 ==== Maximum trace file size and count (trace file rotation)
2033 By default, trace files can grow as large as needed.
Set the maximum size of each trace file that LTTng writes for a given
channel when you <<enabling-disabling-channels,create it>>.
2038 When the size of a trace file reaches the fixed maximum size of the
2039 channel, LTTng creates another file to contain the next event records.
2040 LTTng appends a file count to each trace file name in this case.
2042 If you set the trace file size attribute when you create a channel, the
2043 maximum number of trace files that LTTng creates is _unlimited_ by
2044 default. To limit them, set a maximum number of trace files. When the
2045 number of trace files reaches the fixed maximum count of the channel,
LTTng overwrites the oldest trace file. This mechanism is called _trace
file rotation_.
2051 Even if you don't limit the trace file count, always assume that LTTng
2052 manages all the trace files of the recording session.
2054 In other words, there's no safe way to know if LTTng still holds a given
2055 trace file open with the trace file rotation feature.
2057 The only way to obtain an unmanaged, self-contained LTTng trace before
2058 you <<creating-destroying-tracing-sessions,destroy the recording session>>
2059 is with the <<session-rotation,recording session rotation>> feature, which
2060 is available since LTTng{nbsp}2.11.
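With both limits set, trace file rotation gives each data stream a hard disk-usage bound: once the file count reaches the maximum, the on-disk total stays at the maximum file size times the maximum file count (hypothetical numbers below):

```python
def max_stream_disk_bytes(max_file_size, max_file_count):
    # LTTng overwrites the oldest trace file once `max_file_count`
    # files exist, so a stream's files never exceed this product.
    return max_file_size * max_file_count


MIB = 1 << 20

# E.g. 16 MiB trace files, at most 5 of them, per data stream:
print(max_stream_disk_bytes(16 * MIB, 5) // MIB, 'MiB')  # 80 MiB
```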
==== Timers

Each channel can have up to three optional timers:
2069 [[channel-switch-timer]]Switch timer::
2070 When this timer expires, a sub-buffer switch happens: for each ring
2071 buffer of the channel, LTTng marks the current sub-buffer as
consumable and _switches_ to an available one to record the next
events.
2076 [role="docsvg-channel-switch-timer"]
2081 A switch timer is useful to ensure that LTTng consumes and commits trace
2082 data to trace files or to a distant <<lttng-relayd,relay daemon>>
2083 periodically in case of a low event throughput.
2085 Such a timer is also convenient when you use large
2086 <<channel-subbuf-size-vs-subbuf-count,sub-buffers>> to cope with a
2087 sporadic high event throughput, even if the throughput is otherwise low.
2089 Set the period of the switch timer of a channel when you
2090 <<enabling-disabling-channels,create it>> with
2091 the opt:lttng-enable-channel(1):--switch-timer option.
[[channel-read-timer]]Read timer::
When this timer expires, LTTng checks for full, consumable
sub-buffers.

By default, the LTTng tracers use an asynchronous message mechanism to
signal a full sub-buffer so that a <<lttng-consumerd,consumer daemon>>
can consume it.

When such messages must be avoided, for example in real-time
applications, use this timer instead.

Set the period of the read timer of a channel when you
<<enabling-disabling-channels,create it>> with the
opt:lttng-enable-channel(1):--read-timer option.
[[channel-monitor-timer]]Monitor timer::
When this timer expires, the consumer daemon samples some channel
statistics to evaluate the following <<trigger,trigger>>
conditions:

. The consumed buffer size of a given <<tracing-session,recording
session>> becomes greater than some value.
. The buffer usage of a given channel becomes greater than some value.
. The buffer usage of a given channel becomes less than some value.

If you disable the monitor timer of a channel{nbsp}__C__:

* The consumed buffer size value of the recording session of{nbsp}__C__
could be wrong for trigger condition type{nbsp}1: the consumed buffer
size of{nbsp}__C__ won't be part of the grand total.
* The buffer usage trigger conditions (types{nbsp}2 and{nbsp}3)
for{nbsp}__C__ will never be satisfied.

Set the period of the monitor timer of a channel when you
<<enabling-disabling-channels,create it>> with the
opt:lttng-enable-channel(1):--monitor-timer option.
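For example, the following sketch (the channel name is illustrative) sets all three timer periods, in microseconds, at channel creation time:

[role="term"]
----
$ lttng enable-channel --userspace --switch-timer=1000000 \
        --read-timer=500000 --monitor-timer=250000 my-channel
----
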
[[event]]
=== Recording event rule and event record

A _recording event rule_ is a specific type of <<event-rule,event rule>>
of which the action is to serialize and record the matched event as an
event record.

Set the explicit conditions of a recording event rule when you
<<enabling-disabling-events,create it>>. A recording event rule also has
the following implicit conditions:

* The recording event rule itself is enabled.
+
A recording event rule is enabled on creation.

* The <<channel,channel>> to which the recording event rule is attached
is enabled.
+
A channel is enabled on creation.

* The <<tracing-session,recording session>> of the recording event rule is
<<basic-tracing-session-control,active>> (started).
+
A recording session is inactive (stopped) on creation.

* The process for which LTTng creates an event to match is
<<pid-tracking,allowed to record events>>.
+
All processes are allowed to record events on recording session
creation.

You always attach a recording event rule to a channel, which belongs to
a recording session, when you create it.
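Putting the implicit conditions together, a minimal sketch (the session, channel, and event rule names are hypothetical) in which a recording event rule can actually contribute event records looks like this:

[role="term"]
----
$ lttng create my-session
$ lttng enable-channel --userspace my-channel
$ lttng enable-event --userspace --channel=my-channel 'hello:*'
$ lttng start
----
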
When a recording event rule{nbsp}__ER__ matches an event{nbsp}__E__,
LTTng attempts to serialize and record{nbsp}__E__ to one of the
available sub-buffers of the channel to which{nbsp}__ER__ is attached.

When multiple matching recording event rules are attached to the same
channel, LTTng attempts to serialize and record the matched event
_once_. In the following example, the second recording event rule is
redundant when both are enabled:

[role="term"]
----
$ lttng enable-event --userspace hello:world
$ lttng enable-event --userspace hello:world --loglevel=INFO
----

.Logical path from an instrumentation point to an event record.
image::event-rule.png[]

As of LTTng{nbsp}{revision}, you cannot remove a recording event
rule: it exists as long as its recording session exists.
[[plumbing]]
== Components of noch:{LTTng}

The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
to call LTTng a simple _tool_ since it's composed of multiple
interacting components.

This section describes those components, explains their respective
roles, and shows how they connect together to form the LTTng ecosystem.

The following diagram shows how the most important components of LTTng
interact with user applications, the Linux kernel, and you:

.Control and trace data paths between LTTng components.
image::plumbing.png[]
The LTTng project integrates:

LTTng-tools::
Libraries and command-line interface to control recording sessions:
+
* <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
* <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
* <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
* <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
* <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
* <<persistent-memory-file-systems,`lttng-crash` command-line tool>>
(man:lttng-crash(1)).

LTTng-UST::
Libraries and Java/Python packages to instrument and trace user
applications:
+
* <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
headers to instrument and trace any native user application.
* <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
** `liblttng-ust-libc-wrapper`
** `liblttng-ust-pthread-wrapper`
** `liblttng-ust-cyg-profile`
** `liblttng-ust-cyg-profile-fast`
** `liblttng-ust-dl`
* <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
Java applications using `java.util.logging` or
Apache log4j{nbsp}1.2 logging.
* <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
Python applications using the standard `logging` package.

LTTng-modules::
<<lttng-modules,Linux kernel modules>> to instrument and trace the
Linux kernel:
+
* LTTng kernel tracer module.
* Recording ring buffer kernel modules.
* Probe kernel modules.
* LTTng logger kernel module.
[[lttng-cli]]
=== Tracing control command-line interface

The _man:lttng(1) command-line tool_ is the standard user interface to
control LTTng <<tracing-session,recording sessions>>.

The cmd:lttng tool is part of LTTng-tools.

The cmd:lttng tool is linked with
<<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
one or more <<lttng-sessiond,session daemons>> behind the scenes.

The cmd:lttng tool has a Git-like interface:

[role="term"]
----
$ lttng [GENERAL OPTIONS] <COMMAND> [COMMAND OPTIONS]
----

The ``<<controlling-tracing,Tracing control>>'' section explores the
available features of LTTng through its cmd:lttng tool.
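As a quick illustration of this interface, here's a sketch of a short kernel tracing session (the session name is arbitrary; root privileges or <<tracing-group,tracing group>> membership are assumed, and `sched_switch` is a standard Linux kernel tracepoint):

[role="term"]
----
$ lttng create demo-session
$ lttng enable-event --kernel sched_switch
$ lttng start
$ sleep 5
$ lttng stop
$ lttng destroy demo-session
----
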
[[liblttng-ctl-lttng]]
=== Tracing control library

.The tracing control library.
image::plumbing-liblttng-ctl.png[]

The _LTTng control library_, `liblttng-ctl`, is used to communicate with
a <<lttng-sessiond,session daemon>> using a C{nbsp}API that hides the
underlying details of the protocol.

`liblttng-ctl` is part of LTTng-tools.

The <<lttng-cli,cmd:lttng command-line tool>> is linked with
`liblttng-ctl`.

Use `liblttng-ctl` in C or $$C++$$ source code by including its
``master'' header:

[source,c]
----
#include <lttng/lttng.h>
----

As of LTTng{nbsp}{revision}, the best available developer documentation
for `liblttng-ctl` is its installed header files. Functions and
structures are documented with header comments.
[[lttng-ust]]
=== User space tracing library

.The user space tracing library.
image::plumbing-liblttng-ust.png[]

The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
is the LTTng user space tracer.

`liblttng-ust` receives commands from a <<lttng-sessiond,session
daemon>>, for example to allow specific instrumentation points to emit
LTTng <<event-rule,events>>, and writes event records to <<channel,ring
buffers>> shared with a <<lttng-consumerd,consumer daemon>>.

`liblttng-ust` is part of LTTng-UST.

`liblttng-ust` can also send asynchronous messages to the session daemon
when it emits an event. This supports the ``event rule matches''
<<trigger,trigger>> condition feature (see
“<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
to a session daemon>>”).

Public C{nbsp}header files are installed beside `liblttng-ust` to
instrument any <<c-application,C or $$C++$$ application>>.

<<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
packages, use their own <<tracepoint-provider,tracepoint provider
package>> which is linked with `liblttng-ust`.

An application or library doesn't have to initialize `liblttng-ust`
manually: its constructor does the necessary tasks to register the
application to a session daemon. The initialization phase also
configures instrumentation points depending on the <<event-rule,event
rules>> that you already created.
[[lttng-ust-agents]]
=== User space tracing agents

.The user space tracing agents.
image::plumbing-lttng-ust-agents.png[]

The _LTTng-UST Java and Python agents_ are regular Java and Python
packages which add LTTng tracing capabilities to the
native logging frameworks.

The LTTng-UST agents are part of LTTng-UST.

In the case of Java, the
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
core logging facilities] and
https://logging.apache.org/log4j/1.2/[Apache log4j{nbsp}1.2] are supported.
Note that Apache Log4j{nbsp}2 isn't supported.

In the case of Python, the standard
https://docs.python.org/3/library/logging.html[`logging`] package
is supported. Both Python{nbsp}2 and Python{nbsp}3 modules can import the
LTTng-UST Python agent package.

The applications using the LTTng-UST agents are in the
`java.util.logging` (JUL), log4j, and Python <<domain,tracing domains>>.

Both agents use the same mechanism to convert log statements to LTTng
events. When an agent initializes, it creates a log handler that
attaches to the root logger. The agent also registers to a
<<lttng-sessiond,session daemon>>. When the user application executes a
log statement, the root logger passes it to the log handler of the
agent. The custom log handler of the agent calls a native function in a
tracepoint provider package shared library linked with
<<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
other fields, like its logger name and its log level. This native
function contains a user space instrumentation point, therefore tracing
is possible.

The log level condition of a <<event,recording event rule>> is
considered when tracing a Java or a Python application, and it's
compatible with the standard `java.util.logging`, log4j, and Python log
levels.
[[lttng-modules]]
=== LTTng kernel modules

.The LTTng kernel modules.
image::plumbing-lttng-modules.png[]

The _LTTng kernel modules_ are a set of Linux kernel modules
which implement the kernel tracer of the LTTng project.

The LTTng kernel modules are part of LTTng-modules.

The LTTng kernel modules include:

* A set of _probe_ modules.
+
Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points.
+
There are also modules to attach to the entry and return points of the
Linux system call functions.

* _Ring buffer_ modules.
+
A ring buffer implementation is provided as kernel modules. The LTTng
kernel tracer writes to ring buffers; a
<<lttng-consumerd,consumer daemon>> reads from ring buffers.

* The _LTTng kernel tracer_ module.
* The <<proc-lttng-logger-abi,_LTTng logger_>> module.
+
The LTTng logger module implements the special path:{/proc/lttng-logger}
(and path:{/dev/lttng-logger}, since LTTng{nbsp}2.11) files so that any
executable can generate LTTng events by opening those files and
writing to them.
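For example, assuming the LTTng logger module is loaded and a matching recording event rule exists, any shell can emit an LTTng event by writing to the logger file (the message below is arbitrary):

[role="term"]
----
$ echo 'Application state change' > /dev/lttng-logger
----
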
The LTTng kernel tracer can also send asynchronous messages to the
<<lttng-sessiond,session daemon>> when it emits an event.
This supports the ``event rule matches''
<<trigger,trigger>> condition feature (see
“<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
to a session daemon>>”).

Generally, you don't have to load the LTTng kernel modules manually
(using man:modprobe(8), for example): a root session daemon loads the
necessary modules when it starts. If you have extra probe modules, you
can ask the session daemon to load them on the command line
(see the opt:lttng-sessiond(8):--extra-kmod-probes option). See also
<<linux-kernel-sig,Linux kernel module signature>>.
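As a sketch, a root session daemon could be started with an extra probe module like this (the probe name `my_subsys` is hypothetical):

[role="term"]
----
# lttng-sessiond --daemonize --extra-kmod-probes=my_subsys
----
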
The LTTng kernel modules are installed in
+/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
the kernel release (the output of `uname --kernel-release`).
[[lttng-sessiond]]
=== Session daemon

.The session daemon.
image::plumbing-sessiond.png[]

The _session daemon_, man:lttng-sessiond(8), is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which:

* Manages <<tracing-session,recording sessions>>.
* Controls the various components (like tracers and
<<lttng-consumerd,consumer daemons>>) of LTTng.
* Sends <<notif-trigger-api,asynchronous notifications>> to user
applications.

The session daemon is part of LTTng-tools.

The session daemon sends control requests to and receives control
data from:

* The <<lttng-ust,user space tracing library>>.
+
Any instance of the user space tracing library first registers to
a session daemon. Then, the session daemon can send requests to
this instance, such as:
+
** Get the list of tracepoints.
** Share a <<event,recording event rule>> so that the user space tracing
library can decide whether or not a given tracepoint can emit events.
Amongst the possible conditions of a recording event rule is a filter
expression which `liblttng-ust` evaluates before it emits an event.
** Share <<channel,channel>> attributes and ring buffer locations.
+
The session daemon and the user space tracing library use a Unix
domain socket to communicate.

* The <<lttng-ust-agents,user space tracing agents>>.
+
Any instance of a user space tracing agent first registers to
a session daemon. Then, the session daemon can send requests to
this instance, such as:
+
** Get the list of loggers.
** Enable or disable a specific logger.
+
The session daemon and the user space tracing agent use a TCP connection
to communicate.
* The <<lttng-modules,LTTng kernel tracer>>.
* The <<lttng-consumerd,consumer daemon>>.
+
The session daemon sends requests to the consumer daemon to instruct
it where to send the trace data streams, amongst other information.

* The <<lttng-relayd,relay daemon>>.

The session daemon receives commands from the
<<liblttng-ctl-lttng,tracing control library>>.

The session daemon can receive asynchronous messages from the
<<lttng-ust,user space>> and <<lttng-modules,kernel>> tracers
when they emit events. This supports the ``event rule matches''
<<trigger,trigger>> condition feature (see
“<<add-event-rule-matches-trigger,Add an ``event rule matches'' trigger
to a session daemon>>”).
The root session daemon loads the appropriate
<<lttng-modules,LTTng kernel modules>> on startup. It also spawns
one or more <<lttng-consumerd,consumer daemons>> as soon as you create
a <<event,recording event rule>>.

The session daemon doesn't send and receive trace data: this is the
role of the <<lttng-consumerd,consumer daemon>> and
<<lttng-relayd,relay daemon>>. It does, however, generate the
https://diamon.org/ctf/[CTF] metadata stream.

Each Unix user can have its own session daemon instance. The
recording sessions which different session daemons manage are completely
independent.

The session daemon of the root user is the only one which is
allowed to control the LTTng kernel tracer, and its spawned consumer
daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user which is a member
of the <<tracing-group,tracing group>> is allowed
to create <<channel,channels>> in the
Linux kernel <<domain,tracing domain>>, and therefore to use the Linux
kernel LTTng tracer.

The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
session daemon when using its `create` command if none is currently
running. You can also start the session daemon manually.
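For example, to start a session daemon manually for your own Unix user and let it fork to the background:

[role="term"]
----
$ lttng-sessiond --daemonize
----
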
[[lttng-consumerd]]
=== Consumer daemon

.The consumer daemon.
image::plumbing-consumerd.png[]

The _consumer daemon_, cmd:lttng-consumerd, is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which shares
ring buffers with user applications or with the LTTng kernel modules to
collect trace data and send it to some location (the local file system
or a <<lttng-relayd,relay daemon>> over the network).

The consumer daemon is part of LTTng-tools.

You don't start a consumer daemon manually: a consumer daemon is always
spawned by a <<lttng-sessiond,session daemon>> as soon as you create a
<<event,recording event rule>>, that is, before you start recording. When
you kill its owner session daemon, the consumer daemon also exits
because it's a child process of the session daemon. Command-line
options of man:lttng-sessiond(8) target the consumer daemon process.

There are up to two running consumer daemons per Unix user, whereas only
one session daemon can run per user. This is because each process can be
either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
and 64-bit processes, it's more efficient to have separate
corresponding 32-bit and 64-bit consumer daemons. The root user is an
exception: it can have up to _three_ running consumer daemons: 32-bit
and 64-bit instances for its user applications, and one more
reserved for collecting kernel trace data.
[[lttng-relayd]]
=== Relay daemon

.The relay daemon.
image::plumbing-relayd.png[]

The _relay daemon_, man:lttng-relayd(8), is a
https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] acting as a bridge
between remote session and consumer daemons, local trace files, and a
remote live trace reader.

The relay daemon is part of LTTng-tools.

The main purpose of the relay daemon is to implement a receiver of
<<sending-trace-data-over-the-network,trace data over the network>>.
This is useful when the target system doesn't have much file system
space to write trace files locally.

The relay daemon is also a server to which a
<<lttng-live,live trace reader>> can
connect. The live trace reader sends requests to the relay daemon to
receive trace data as the target system records events. The
communication protocol is named _LTTng live_; it's used over TCP
connections.

Note that you can start the relay daemon on the target system directly.
This is the setup of choice when the use case is to view/analyze events
as the target system records them without the need of a remote system.
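As an illustrative sketch (the host name is a placeholder), you could start a relay daemon on a monitoring host, then make a target's recording session send its trace data there:

[role="term"]
----
$ lttng-relayd --daemonize
$ lttng create my-session --set-url=net://remote-host
----
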
[[instrumenting]]
== [[using-lttng]]Instrumentation

There are many examples of tracing and monitoring in our everyday life:

* You have access to real-time and historical weather reports and
forecasts thanks to weather stations installed around the country.
* You know your heart is safe thanks to an electrocardiogram.
* You make sure not to drive your car too fast and to have enough fuel
to reach your destination thanks to gauges visible on your dashboard.

All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to the surface of your
skin, cardiac monitoring is futile.

LTTng, as a tracer, is no different from those real life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you'd better have **instrumentation points** in the
subject you're tracing, that is, the actual software system.

<<instrumentation-point-types,Various ways>> were developed to
instrument a piece of software for LTTng tracing. The most
straightforward one is to manually place static instrumentation points,
called _tracepoints_, in the source code of the application. The Linux
kernel <<domain,tracing domain>> also makes it possible to dynamically
add instrumentation points.

If you're only interested in tracing the Linux kernel, your
instrumentation needs are probably already covered by the built-in
<<lttng-modules,Linux kernel instrumentation points>> of LTTng. You may
also wish to have LTTng trace a user application which is already
instrumented for LTTng tracing. In such cases, skip this whole section
and read the topics of the ``<<controlling-tracing,Tracing control>>''
section.
Many methods are available to instrument a piece of software for LTTng
tracing:

* <<c-application,Instrument a C/$$C++$$ user application>>.
* <<prebuilt-ust-helpers,Load a prebuilt user space tracing helper>>.
* <<java-application,Instrument a Java application>>.
* <<python-application,Instrument a Python application>>.
* <<proc-lttng-logger-abi,Use the LTTng logger>>.
* <<instrumenting-linux-kernel,Instrument a Linux kernel image or module>>.
[[c-application]]
=== [[cxx-application]]Instrument a C/$$C++$$ user application

The high level procedure to instrument a C or $$C++$$ user application
with the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`,
is:

. <<tracepoint-provider,Create the source files of a tracepoint provider
package>>.
. <<probing-the-application-source-code,Add tracepoints to
the source code of the application>>.
. <<building-tracepoint-providers-and-user-application,Build and link
a tracepoint provider package and the user application>>.

If you need quick, man:printf(3)-like instrumentation, skip those steps
and use <<tracef,`lttng_ust_tracef()`>> or
<<tracelog,`lttng_ust_tracelog()`>> instead.

IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
instrument a user application with `liblttng-ust`.
[[tracepoint-provider]]
==== Create the source files of a tracepoint provider package

A _tracepoint provider_ is a set of compiled functions which provide
**tracepoints** to an application, the type of instrumentation point
which LTTng-UST provides.

Those functions can make LTTng emit events with user-defined fields and
serialize those events as event records to one or more LTTng-UST
<<channel,channel>> sub-buffers. The `lttng_ust_tracepoint()` macro,
which you <<probing-the-application-source-code,insert in the source
code of a user application>>, calls those functions.

A _tracepoint provider package_ is an object file (`.o`) or a shared
library (`.so`) which contains one or more tracepoint providers. Its
source files are:

* One or more <<tpp-header,tracepoint provider header>> files (`.h`).
* A <<tpp-source,tracepoint provider package source>> file (`.c`).

A tracepoint provider package is dynamically linked with `liblttng-ust`,
the LTTng user space tracer, at run time.

.User application linked with `liblttng-ust` and containing a tracepoint provider.
image::ust-app.png[]

NOTE: If you need quick, man:printf(3)-like instrumentation, skip
creating and using a tracepoint provider and use
<<tracef,`lttng_ust_tracef()`>> or <<tracelog,`lttng_ust_tracelog()`>>
instead.
[[tpp-header]]
===== Create a tracepoint provider header file template

A _tracepoint provider header file_ contains the tracepoint definitions
of a tracepoint provider.

To create a tracepoint provider header file:

. Start from this template:
+
.Tracepoint provider header file template (`.h` file extension).
[source,c]
----
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER provider_name

#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

/*
 * Use LTTNG_UST_TRACEPOINT_EVENT(), LTTNG_UST_TRACEPOINT_EVENT_CLASS(),
 * LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(), and
 * LTTNG_UST_TRACEPOINT_LOGLEVEL() here.
 */

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>
----

. Replace:
+
* +__provider_name__+ with the name of your tracepoint provider.
* `"tp.h"` with the name of your tracepoint provider header file.

. Below the `#include <lttng/tracepoint.h>` line, put your
<<defining-tracepoints,tracepoint definitions>>.

Your tracepoint provider name must be unique amongst all the possible
tracepoint provider names used on the same target system. We suggest
including the name of your project or company in the name, for example,
`org_lttng_my_project_tpp`.
[[defining-tracepoints]]
===== Create a tracepoint definition

A _tracepoint definition_ defines, for a given tracepoint:

* Its **input arguments**.
+
They're the macro parameters that the `lttng_ust_tracepoint()` macro
accepts for this particular tracepoint in the source code of the user
application.

* Its **output event fields**.
+
They're the sources of event fields that form the payload of any event
that the execution of the `lttng_ust_tracepoint()` macro emits for this
particular tracepoint.

Create a tracepoint definition with the
`LTTNG_UST_TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
line in the
<<tpp-header,tracepoint provider header file template>>.
The syntax of the `LTTNG_UST_TRACEPOINT_EVENT()` macro is:

.`LTTNG_UST_TRACEPOINT_EVENT()` macro syntax.
[source,c]
----
LTTNG_UST_TRACEPOINT_EVENT(
    /* Tracepoint provider name */
    provider_name,

    /* Tracepoint name */
    tracepoint_name,

    /* Input arguments */
    LTTNG_UST_TP_ARGS(
        arguments
    ),

    /* Output event fields */
    LTTNG_UST_TP_FIELDS(
        fields
    )
)
----

Replace:

* +__provider_name__+ with your tracepoint provider name.
* +__tracepoint_name__+ with your tracepoint name.
* +__arguments__+ with the <<tpp-def-input-args,input arguments>>.
* +__fields__+ with the <<tpp-def-output-fields,output event field>>
definitions.

The full name of this tracepoint is `provider_name:tracepoint_name`.
[IMPORTANT]
.Event name length limitation
====
The concatenation of the tracepoint provider name and the tracepoint
name must not exceed **254{nbsp}characters**. If it does, the
instrumented application compiles and runs, but LTTng throws multiple
warnings and you could experience serious issues.
====
[[tpp-def-input-args]]The syntax of the `LTTNG_UST_TP_ARGS()` macro is:

.`LTTNG_UST_TP_ARGS()` macro syntax.
[source,c]
----
LTTNG_UST_TP_ARGS(
    type, arg_name
)
----

Replace:

* +__type__+ with the C{nbsp}type of the argument.
* +__arg_name__+ with the argument name.

You can repeat +__type__+ and +__arg_name__+ up to 10{nbsp}times to have
more than one argument.

.`LTTNG_UST_TP_ARGS()` usage with three arguments.
====
[source,c]
----
LTTNG_UST_TP_ARGS(
    int, userid,
    size_t, len,
    const char *, path
)
----
====

The `LTTNG_UST_TP_ARGS()` and `LTTNG_UST_TP_ARGS(void)` forms are valid
to create a tracepoint definition with no input arguments.
[[tpp-def-output-fields]]The `LTTNG_UST_TP_FIELDS()` macro contains a
list of `lttng_ust_field_*()` macros. Each `lttng_ust_field_*()` macro
defines one event field. See man:lttng-ust(3) for a complete description
of the available `lttng_ust_field_*()` macros. A `lttng_ust_field_*()`
macro specifies the type, size, and byte order of one event field.

Each `lttng_ust_field_*()` macro takes an _argument expression_
parameter. This is a C{nbsp}expression that the tracer evaluates at the
`lttng_ust_tracepoint()` macro site in the source code of the
application. This expression provides the source of data of a field. The
argument expression can include input argument names listed in the
`LTTNG_UST_TP_ARGS()` macro.

Each `lttng_ust_field_*()` macro also takes a _field name_ parameter.
Field names must be unique within a given tracepoint definition.

Here's a complete tracepoint definition example:
.Tracepoint definition.
====
The following tracepoint definition defines a tracepoint which takes
three input arguments and has four output event fields.

[source,c]
----
#include "my-custom-structure.h"

LTTNG_UST_TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    LTTNG_UST_TP_ARGS(
        const struct my_custom_structure *, my_custom_structure,
        double, ratio,
        const char *, query
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(query_field, query)
        lttng_ust_field_float(double, ratio_field, ratio)
        lttng_ust_field_integer(int, recv_size,
                                my_custom_structure->recv_size)
        lttng_ust_field_integer(int, send_size,
                                my_custom_structure->send_size)
    )
)
----

Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
macro in the source code of your application like this:

[source,c]
----
lttng_ust_tracepoint(my_provider, my_tracepoint,
                     my_structure, some_ratio, the_query);
----
====

NOTE: The LTTng-UST tracer only evaluates the arguments of a tracepoint
at run time when such a tracepoint _could_ emit an event. See
<<event-creation-emission-opti,this note>> to learn more.
[[using-tracepoint-classes]]
===== Use a tracepoint class

A _tracepoint class_ is a class of tracepoints which share the same
output event field definitions. A _tracepoint instance_ is one
instance of such a defined tracepoint class, with its own tracepoint
name.

The <<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()` macro>> is
actually a shorthand which defines both a tracepoint class and a
tracepoint instance at the same time.

When you build a tracepoint provider package, the C or $$C++$$ compiler
creates one serialization function for each **tracepoint class**. A
serialization function is responsible for serializing the event fields
of a tracepoint to a sub-buffer when recording.

For various performance reasons, when your situation requires multiple
tracepoint definitions with different names, but with the same event
fields, we recommend that you manually create a tracepoint class and
instantiate as many tracepoint instances as needed. One positive effect
of such a design, amongst other advantages, is that all tracepoint
instances of the same tracepoint class reuse the same serialization
function, thus reducing
https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2943 .Use a tracepoint class and tracepoint instances.
2945 Consider the following three tracepoint definitions:
2949 LTTNG_UST_TRACEPOINT_EVENT(
2956 LTTNG_UST_TP_FIELDS(
2957 lttng_ust_field_integer(int, userid, userid)
2958 lttng_ust_field_integer(size_t, len, len)
2962 LTTNG_UST_TRACEPOINT_EVENT(
2969 LTTNG_UST_TP_FIELDS(
2970 lttng_ust_field_integer(int, userid, userid)
2971 lttng_ust_field_integer(size_t, len, len)
2975 LTTNG_UST_TRACEPOINT_EVENT(
2982 LTTNG_UST_TP_FIELDS(
2983 lttng_ust_field_integer(int, userid, userid)
2984 lttng_ust_field_integer(size_t, len, len)
2989 In this case, we create three tracepoint classes, with one implicit
2990 tracepoint instance for each of them: `get_account`, `get_settings`, and
2991 `get_transaction`. However, they all share the same event field names
2992 and types. Hence three identical, yet independent serialization
2993 functions are created when you build the tracepoint provider package.
2995 A better design choice is to define a single tracepoint class and three
2996 tracepoint instances:
3000 /* The tracepoint class */
3001 LTTNG_UST_TRACEPOINT_EVENT_CLASS(
3002 /* Tracepoint class provider name */
3005 /* Tracepoint class name */
3008 /* Input arguments */
3014 /* Output event fields */
3015 LTTNG_UST_TP_FIELDS(
3016 lttng_ust_field_integer(int, userid, userid)
3017 lttng_ust_field_integer(size_t, len, len)
3021 /* The tracepoint instances */
3022 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
3023 /* Tracepoint class provider name */
3026 /* Tracepoint class name */
3029 /* Instance provider name */
3032 /* Tracepoint name */
3035 /* Input arguments */
3041 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
3050 LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
3062 The tracepoint class and instance provider names must be the same if the
3063 `LTTNG_UST_TRACEPOINT_EVENT_CLASS()` and
3064 `LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()` expansions are part of the same
3065 translation unit. See man:lttng-ust(3) to learn more.
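As a compact reference, here's a sketch of the class/instance pattern in a single provider header fragment. The `my_app` provider and `my_class` class names are illustrative, and the usual provider header boilerplate (include guards and the `lttng/tracepoint.h` inclusion) is omitted:

```c
/* Tracepoint class: the field layout is declared once, so a single
 * serialization function serves every instance of the class. */
LTTNG_UST_TRACEPOINT_EVENT_CLASS(
    my_app,     /* Tracepoint class provider name */
    my_class,   /* Tracepoint class name */
    LTTNG_UST_TP_ARGS(int, userid, size_t, len),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_integer(int, userid, userid)
        lttng_ust_field_integer(size_t, len, len)
    )
)

/* Tracepoint instance: reuses the fields of the class and creates
 * the `my_app:get_account` event name. */
LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(
    my_app,         /* Tracepoint class provider name */
    my_class,       /* Tracepoint class name */
    my_app,         /* Instance provider name */
    get_account,    /* Tracepoint name */
    LTTNG_UST_TP_ARGS(int, userid, size_t, len)
)
```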
3068 [[assigning-log-levels]]
3069 ===== Assign a log level to a tracepoint definition
3071 Assign a _log level_ to a <<defining-tracepoints,tracepoint definition>>
3072 with the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro.
3074 Assigning different levels of severity to tracepoint definitions can be
3075 useful: when you <<enabling-disabling-events,create a recording event
3076 rule>>, you can target tracepoints having a log level at least as severe
3077 as a specific value.
3079 The concept of LTTng-UST log levels is similar to the levels found
3080 in typical logging frameworks:
3082 * In a logging framework, the log level is given by the function
3083 or method name you use at the log statement site: `debug()`,
3084 `info()`, `warn()`, `error()`, and so on.
3086 * In LTTng-UST, you statically assign the log level to a tracepoint
3087 definition; any `lttng_ust_tracepoint()` macro invocation which refers
3088 to this definition has this log level.
3090 You must use `LTTNG_UST_TRACEPOINT_LOGLEVEL()` _after_ the
3091 <<defining-tracepoints,`LTTNG_UST_TRACEPOINT_EVENT()`>> or
3092 <<using-tracepoint-classes,`LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()`>> macro for
3095 The syntax of the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro is:
3098 .`LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro syntax.
3100 LTTNG_UST_TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
3105 * +__provider_name__+ with the tracepoint provider name.
3106 * +__tracepoint_name__+ with the tracepoint name.
3107 * +__log_level__+ with the log level to assign to the tracepoint
3108 definition named +__tracepoint_name__+ in the +__provider_name__+
3109 tracepoint provider.
3111 See man:lttng-ust(3) for a list of available log level names.
3113 .Assign the `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT` log level to a tracepoint definition.
3117 /* Tracepoint definition */
3118 LTTNG_UST_TRACEPOINT_EVENT(
3125 LTTNG_UST_TP_FIELDS(
3126 lttng_ust_field_integer(int, userid, userid)
3127 lttng_ust_field_integer(size_t, len, len)
3131 /* Log level assignment */
3132 LTTNG_UST_TRACEPOINT_LOGLEVEL(my_app, get_transaction,
3133 LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT)
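With this assignment in place, a recording event rule can target `my_app` tracepoints by severity. The log level name accepted by man:lttng-enable-event(1) is assumed here to be `TRACE_DEBUG_UNIT`; check the manual page of your LTTng version for the exact spelling:

```shell
$ lttng enable-event --userspace 'my_app:*' \
      --loglevel=TRACE_DEBUG_UNIT
```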
3139 ===== Create a tracepoint provider package source file
3141 A _tracepoint provider package source file_ is a C source file which
3142 includes a <<tpp-header,tracepoint provider header file>> to expand its
3143 macros into event serialization and other functions.
3145 Use the following tracepoint provider package source file template:
3148 .Tracepoint provider package source file template.
3150 #define LTTNG_UST_TRACEPOINT_CREATE_PROBES
3155 Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
3156 header file>>. You may also include more than one tracepoint provider
3157 header file here to create a tracepoint provider package holding more
3158 than one tracepoint provider.
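Assuming a single provider header file named path:{tp.h}, a complete tracepoint provider package source file can be a minimal sketch like this:

```c
/* tpp.c: expand the macros of tp.h into event serialization
 * (probe) functions. */
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES

#include "tp.h"
```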
3161 [[probing-the-application-source-code]]
3162 ==== Add tracepoints to the source code of an application
3164 Once you <<tpp-header,create a tracepoint provider header file>>, use
3165 the `lttng_ust_tracepoint()` macro in the source code of your
3166 application to insert the tracepoints that this header
3167 <<defining-tracepoints,defines>>.
3169 The `lttng_ust_tracepoint()` macro takes at least two parameters: the
3170 tracepoint provider name and the tracepoint name. The corresponding
3171 tracepoint definition defines the other parameters.
3173 .`lttng_ust_tracepoint()` usage.
3175 The following <<defining-tracepoints,tracepoint definition>> defines a
3176 tracepoint which takes two input arguments and has two output event
3180 .Tracepoint provider header file.
3182 #include "my-custom-structure.h"
3184 LTTNG_UST_TRACEPOINT_EVENT(
3189 const char *, cmd_name
3191 LTTNG_UST_TP_FIELDS(
3192 lttng_ust_field_string(cmd_name, cmd_name)
3193 lttng_ust_field_integer(int, number_of_args, argc)
3198 Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
3199 macro in the source code of your application like this:
3202 .Application source file.
3206 int main(int argc, char* argv[])
3208 lttng_ust_tracepoint(my_provider, my_tracepoint, argc, argv[0]);
3213 Note how the source code of the application includes
3214 the tracepoint provider header file containing the tracepoint
3215 definitions to use, path:{tp.h}.
3218 .`lttng_ust_tracepoint()` usage with a complex tracepoint definition.
3220 Consider this complex tracepoint definition, where multiple event
3221 fields refer to the same input arguments in their argument expression
3225 .Tracepoint provider header file.
3227 /* For `struct stat` */
3228 #include <sys/types.h>
3229 #include <sys/stat.h>
3232 LTTNG_UST_TRACEPOINT_EVENT(
3240 LTTNG_UST_TP_FIELDS(
3241 lttng_ust_field_integer(int, my_constant_field, 23 + 17)
3242 lttng_ust_field_integer(int, my_int_arg_field, my_int_arg)
3243 lttng_ust_field_integer(int, my_int_arg_field2,
3244 my_int_arg * my_int_arg)
3245 lttng_ust_field_integer(int, sum4_field,
3246 my_str_arg[0] + my_str_arg[1] +
3247 my_str_arg[2] + my_str_arg[3])
3248 lttng_ust_field_string(my_str_arg_field, my_str_arg)
3249 lttng_ust_field_integer_hex(off_t, size_field, st->st_size)
3250 lttng_ust_field_float(double, size_dbl_field, (double) st->st_size)
3251 lttng_ust_field_sequence_text(char, half_my_str_arg_field,
3253 strlen(my_str_arg) / 2)
3258 Refer to this tracepoint definition with the `lttng_ust_tracepoint()`
3259 macro in the source code of your application like this:
3262 .Application source file.
3264 #define LTTNG_UST_TRACEPOINT_DEFINE
3271 stat("/etc/fstab", &s);
3272 lttng_ust_tracepoint(my_provider, my_tracepoint, 23,
3273 "Hello, World!", &s);
3279 If you look at the event record that LTTng writes when recording this
3280 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
3281 it should look like this:
3283 .Event record fields
3285 |Field name |Field value
3286 |`my_constant_field` |40
3287 |`my_int_arg_field` |23
3288 |`my_int_arg_field2` |529
3290 |`my_str_arg_field` |`Hello, World!`
3291 |`size_field` |0x12d
3292 |`size_dbl_field` |301.0
3293 |`half_my_str_arg_field` |`Hello,`
3297 Sometimes, the arguments you pass to `lttng_ust_tracepoint()` are
3298 expensive to evaluate (they use the call stack, for example). To avoid
3299 this computation when LTTng wouldn't emit any event anyway, use the
3300 `lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros.
3302 The syntax of the `lttng_ust_tracepoint_enabled()` and
3303 `lttng_ust_do_tracepoint()` macros is:
3306 .`lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros syntax.
3308 lttng_ust_tracepoint_enabled(provider_name, tracepoint_name)
3310 lttng_ust_do_tracepoint(provider_name, tracepoint_name, ...)
3315 * +__provider_name__+ with the tracepoint provider name.
3316 * +__tracepoint_name__+ with the tracepoint name.
3318 `lttng_ust_tracepoint_enabled()` returns a non-zero value if executing
3319 the tracepoint named `tracepoint_name` from the provider named
3320 `provider_name` _could_ make LTTng emit an event, depending on the
3321 payload of said event.
3323 `lttng_ust_do_tracepoint()` is like `lttng_ust_tracepoint()`, except
3324 that it doesn't check what `lttng_ust_tracepoint_enabled()` checks.
3325 Using `lttng_ust_tracepoint()` with `lttng_ust_tracepoint_enabled()` is
3326 dangerous because `lttng_ust_tracepoint()` also contains the
3327 `lttng_ust_tracepoint_enabled()` check; therefore, a race condition is
3328 possible in this situation:
3331 .Possible race condition when using `lttng_ust_tracepoint_enabled()` with `lttng_ust_tracepoint()`.
3333 if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
3334 stuff = prepare_stuff();
3337 lttng_ust_tracepoint(my_provider, my_tracepoint, stuff);
3340 If `lttng_ust_tracepoint_enabled()` is false, but would be true after
3341 the conditional block, then `stuff` isn't prepared: the emitted event
3342 will either contain wrong data, or the whole application could crash
3343 (with a segmentation fault, for example).
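The safe pattern prepares the data and records the event within the same conditional block, using `lttng_ust_do_tracepoint()` so that the enabled state is checked exactly once:

```c
if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Only pay the preparation cost when the event may be emitted. */
    stuff = prepare_stuff();

    /* No redundant enabled check here, hence no race. */
    lttng_ust_do_tracepoint(my_provider, my_tracepoint, stuff);
}
```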
3345 NOTE: Neither `lttng_ust_tracepoint_enabled()` nor
3346 `lttng_ust_do_tracepoint()` has an `STAP_PROBEV()` call. If you need
3347 it, you must emit this call yourself.
3350 [[building-tracepoint-providers-and-user-application]]
3351 ==== Build and link a tracepoint provider package and an application
3353 Once you have one or more <<tpp-header,tracepoint provider header
3354 files>> and a <<tpp-source,tracepoint provider package source file>>,
3355 create the tracepoint provider package by compiling its source
3356 file. From here, multiple build and run scenarios are possible. The
3357 following table shows common application and library configurations
3358 along with the required command lines to achieve them.
3360 The following diagrams use these file names:
3363 Executable application.
3366 Application object file.
3369 Tracepoint provider package object file.
3372 Tracepoint provider package archive file.
3375 Tracepoint provider package shared object file.
3378 User library object file.
3381 User library shared object file.
3383 We use the following symbols in the diagrams of the table below:
3386 .Symbols used in the build scenario diagrams.
3387 image::ust-sit-symbols.png[]
3389 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
3390 variable in the following instructions.
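If path:{.} isn't already part of env:LD_LIBRARY_PATH, add it like this (Bourne-style shell assumed):

```shell
$ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
```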
3392 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
3393 .Common tracepoint provider package scenarios.
3395 |Scenario |Instructions
3398 The instrumented application is statically linked with
3399 the tracepoint provider package object.
3401 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
3404 include::../common/ust-sit-step-tp-o.txt[]
3406 To build the instrumented application:
3408 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3413 #define LTTNG_UST_TRACEPOINT_DEFINE
3417 . Compile the application source file:
3426 . Build the application:
3431 $ gcc -o app app.o tpp.o -llttng-ust -ldl
3435 To run the instrumented application:
3437 * Start the application:
3447 The instrumented application is statically linked with the
3448 tracepoint provider package archive file.
3450 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
3453 To create the tracepoint provider package archive file:
3455 . Compile the <<tpp-source,tracepoint provider package source file>>:
3464 . Create the tracepoint provider package archive file:
3469 $ ar rcs tpp.a tpp.o
3473 To build the instrumented application:
3475 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3480 #define LTTNG_UST_TRACEPOINT_DEFINE
3484 . Compile the application source file:
3493 . Build the application:
3498 $ gcc -o app app.o tpp.a -llttng-ust -ldl
3502 To run the instrumented application:
3504 * Start the application:
3514 The instrumented application is linked with the tracepoint provider
3515 package shared object.
3517 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3520 include::../common/ust-sit-step-tp-so.txt[]
3522 To build the instrumented application:
3524 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3529 #define LTTNG_UST_TRACEPOINT_DEFINE
3533 . Compile the application source file:
3542 . Build the application:
3547 $ gcc -o app app.o -ldl -L. -ltpp
3551 To run the instrumented application:
3553 * Start the application:
3563 The tracepoint provider package shared object is preloaded before the
3564 instrumented application starts.
3566 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3569 include::../common/ust-sit-step-tp-so.txt[]
3571 To build the instrumented application:
3573 . In path:{app.c}, before including path:{tpp.h}, add the
3579 #define LTTNG_UST_TRACEPOINT_DEFINE
3580 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3584 . Compile the application source file:
3593 . Build the application:
3598 $ gcc -o app app.o -ldl
3602 To run the instrumented application with tracing support:
3604 * Preload the tracepoint provider package shared object and
3605 start the application:
3610 $ LD_PRELOAD=./libtpp.so ./app
3614 To run the instrumented application without tracing support:
3616 * Start the application:
3626 The instrumented application dynamically loads the tracepoint provider
3627 package shared object.
3629 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3632 include::../common/ust-sit-step-tp-so.txt[]
3634 To build the instrumented application:
3636 . In path:{app.c}, before including path:{tpp.h}, add the
3642 #define LTTNG_UST_TRACEPOINT_DEFINE
3643 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3647 . Compile the application source file:
3656 . Build the application:
3661 $ gcc -o app app.o -ldl
3665 To run the instrumented application:
3667 * Start the application:
3677 The application is linked with the instrumented user library.
3679 The instrumented user library is statically linked with the tracepoint
3680 provider package object file.
3682 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3685 include::../common/ust-sit-step-tp-o-fpic.txt[]
3687 To build the instrumented user library:
3689 . In path:{emon.c}, before including path:{tpp.h}, add the
3695 #define LTTNG_UST_TRACEPOINT_DEFINE
3699 . Compile the user library source file:
3704 $ gcc -I. -fpic -c emon.c
3708 . Build the user library shared object:
3713 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3717 To build the application:
3719 . Compile the application source file:
3728 . Build the application:
3733 $ gcc -o app app.o -L. -lemon
3737 To run the application:
3739 * Start the application:
3749 The application is linked with the instrumented user library.
3751 The instrumented user library is linked with the tracepoint provider
3752 package shared object.
3754 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3757 include::../common/ust-sit-step-tp-so.txt[]
3759 To build the instrumented user library:
3761 . In path:{emon.c}, before including path:{tpp.h}, add the
3767 #define LTTNG_UST_TRACEPOINT_DEFINE
3771 . Compile the user library source file:
3776 $ gcc -I. -fpic -c emon.c
3780 . Build the user library shared object:
3785 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3789 To build the application:
3791 . Compile the application source file:
3800 . Build the application:
3805 $ gcc -o app app.o -L. -lemon
3809 To run the application:
3811 * Start the application:
3821 The tracepoint provider package shared object is preloaded before the
3824 The application is linked with the instrumented user library.
3826 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3829 include::../common/ust-sit-step-tp-so.txt[]
3831 To build the instrumented user library:
3833 . In path:{emon.c}, before including path:{tpp.h}, add the
3839 #define LTTNG_UST_TRACEPOINT_DEFINE
3840 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3844 . Compile the user library source file:
3849 $ gcc -I. -fpic -c emon.c
3853 . Build the user library shared object:
3858 $ gcc -shared -o libemon.so emon.o -ldl
3862 To build the application:
3864 . Compile the application source file:
3873 . Build the application:
3878 $ gcc -o app app.o -L. -lemon
3882 To run the application with tracing support:
3884 * Preload the tracepoint provider package shared object and
3885 start the application:
3890 $ LD_PRELOAD=./libtpp.so ./app
3894 To run the application without tracing support:
3896 * Start the application:
3906 The application is linked with the instrumented user library.
3908 The instrumented user library dynamically loads the tracepoint provider
3909 package shared object.
3911 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3914 include::../common/ust-sit-step-tp-so.txt[]
3916 To build the instrumented user library:
3918 . In path:{emon.c}, before including path:{tpp.h}, add the
3924 #define LTTNG_UST_TRACEPOINT_DEFINE
3925 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3929 . Compile the user library source file:
3934 $ gcc -I. -fpic -c emon.c
3938 . Build the user library shared object:
3943 $ gcc -shared -o libemon.so emon.o -ldl
3947 To build the application:
3949 . Compile the application source file:
3958 . Build the application:
3963 $ gcc -o app app.o -L. -lemon
3967 To run the application:
3969 * Start the application:
3979 The application dynamically loads the instrumented user library.
3981 The instrumented user library is linked with the tracepoint provider
3982 package shared object.
3984 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3987 include::../common/ust-sit-step-tp-so.txt[]
3989 To build the instrumented user library:
3991 . In path:{emon.c}, before including path:{tpp.h}, add the
3997 #define LTTNG_UST_TRACEPOINT_DEFINE
4001 . Compile the user library source file:
4006 $ gcc -I. -fpic -c emon.c
4010 . Build the user library shared object:
4015 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
4019 To build the application:
4021 . Compile the application source file:
4030 . Build the application:
4035 $ gcc -o app app.o -ldl -L. -lemon
4039 To run the application:
4041 * Start the application:
4051 The application dynamically loads the instrumented user library.
4053 The instrumented user library dynamically loads the tracepoint provider
4054 package shared object.
4056 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
4059 include::../common/ust-sit-step-tp-so.txt[]
4061 To build the instrumented user library:
4063 . In path:{emon.c}, before including path:{tpp.h}, add the
4069 #define LTTNG_UST_TRACEPOINT_DEFINE
4070 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
4074 . Compile the user library source file:
4079 $ gcc -I. -fpic -c emon.c
4083 . Build the user library shared object:
4088 $ gcc -shared -o libemon.so emon.o -ldl
4092 To build the application:
4094 . Compile the application source file:
4103 . Build the application:
4108 $ gcc -o app app.o -ldl -L. -lemon
4112 To run the application:
4114 * Start the application:
4124 The tracepoint provider package shared object is preloaded before the
4127 The application dynamically loads the instrumented user library.
4129 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
4132 include::../common/ust-sit-step-tp-so.txt[]
4134 To build the instrumented user library:
4136 . In path:{emon.c}, before including path:{tpp.h}, add the
4142 #define LTTNG_UST_TRACEPOINT_DEFINE
4143 #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE
4147 . Compile the user library source file:
4152 $ gcc -I. -fpic -c emon.c
4156 . Build the user library shared object:
4161 $ gcc -shared -o libemon.so emon.o -ldl
4165 To build the application:
4167 . Compile the application source file:
4176 . Build the application:
4181 $ gcc -o app app.o -L. -lemon
4185 To run the application with tracing support:
4187 * Preload the tracepoint provider package shared object and
4188 start the application:
4193 $ LD_PRELOAD=./libtpp.so ./app
4197 To run the application without tracing support:
4199 * Start the application:
4209 The application is statically linked with the tracepoint provider
4210 package object file.
4212 The application is linked with the instrumented user library.
4214 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
4217 include::../common/ust-sit-step-tp-o.txt[]
4219 To build the instrumented user library:
4221 . In path:{emon.c}, before including path:{tpp.h}, add the
4227 #define LTTNG_UST_TRACEPOINT_DEFINE
4231 . Compile the user library source file:
4236 $ gcc -I. -fpic -c emon.c
4240 . Build the user library shared object:
4245 $ gcc -shared -o libemon.so emon.o
4249 To build the application:
4251 . Compile the application source file:
4260 . Build the application:
4265 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
4269 To run the instrumented application:
4271 * Start the application:
4281 The application is statically linked with the tracepoint provider
4282 package object file.
4284 The application dynamically loads the instrumented user library.
4286 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
4289 include::../common/ust-sit-step-tp-o.txt[]
4291 To build the application:
4293 . In path:{app.c}, before including path:{tpp.h}, add the following line:
4298 #define LTTNG_UST_TRACEPOINT_DEFINE
4302 . Compile the application source file:
4311 . Build the application:
4316 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
4321 The `--export-dynamic` option passed to the linker is necessary for the
4322 dynamically loaded library to ``see'' the tracepoint symbols defined in
4325 To build the instrumented user library:
4327 . Compile the user library source file:
4332 $ gcc -I. -fpic -c emon.c
4336 . Build the user library shared object:
4341 $ gcc -shared -o libemon.so emon.o
4345 To run the application:
4347 * Start the application:
4358 [[using-lttng-ust-with-daemons]]
4359 ===== Use noch:{LTTng-UST} with daemons
4361 If your instrumented application calls man:fork(2), man:clone(2),
4362 or BSD's man:rfork(2), without a following man:exec(3)-family
4363 system call, you must preload the path:{liblttng-ust-fork.so} shared
4364 object when you start the application.
4368 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
4371 If your tracepoint provider package is
4372 a shared library which you also preload, you must put both
4373 shared objects in env:LD_PRELOAD:
4377 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
4383 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
4385 If your instrumented application closes one or more file descriptors
4386 which it did not open itself, you must preload the
4387 path:{liblttng-ust-fd.so} shared object when you start the application:
4391 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
4394 Typical use cases include closing all the file descriptors after
4395 man:fork(2) or man:rfork(2) and buggy applications doing
4399 [[lttng-ust-pkg-config]]
4400 ===== Use noch:{pkg-config}
4402 On some distributions, LTTng-UST ships with a
4403 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
4404 metadata file. If this is the case, use cmd:pkg-config to
4405 build an application on the command line:
4409 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
4413 [[instrumenting-32-bit-app-on-64-bit-system]]
4414 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
4416 To trace a 32-bit application running on a 64-bit system,
4417 LTTng must use a dedicated 32-bit
4418 <<lttng-consumerd,consumer daemon>>.
4420 The following steps show how to build and install a 32-bit consumer
4421 daemon, which is _not_ part of the default 64-bit LTTng build, how to
4422 build and install the 32-bit LTTng-UST libraries, and how to build and
4423 link an instrumented 32-bit application in that context.
4425 To build a 32-bit instrumented application for a 64-bit target system,
4426 assuming you have a fresh target system with no installed Userspace RCU
4429 . Download, build, and install a 32-bit version of Userspace RCU:
4434 $ cd $(mktemp -d) &&
4435 wget https://lttng.org/files/urcu/userspace-rcu-latest-0.13.tar.bz2 &&
4436 tar -xf userspace-rcu-latest-0.13.tar.bz2 &&
4437 cd userspace-rcu-0.13.* &&
4438 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
4440 sudo make install &&
4445 . Using the package manager of your distribution, or from source,
4446 install the 32-bit versions of the following dependencies of
4447 LTTng-tools and LTTng-UST:
4450 * https://sourceforge.net/projects/libuuid/[libuuid]
4451 * https://directory.fsf.org/wiki/Popt[popt]
4452 * https://www.xmlsoft.org/[libxml2]
4453 * **Optional**: https://github.com/numactl/numactl[numactl]
4456 . Download, build, and install a 32-bit version of the latest
4457 LTTng-UST{nbsp}{revision}:
4462 $ cd $(mktemp -d) &&
4463 wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 &&
4464 tar -xf lttng-ust-latest-2.13.tar.bz2 &&
4465 cd lttng-ust-2.13.* &&
4466 ./configure --libdir=/usr/local/lib32 \
4467 CFLAGS=-m32 CXXFLAGS=-m32 \
4468 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4470 sudo make install &&
4475 Add `--disable-numa` to `./configure` if you don't have
4476 https://github.com/numactl/numactl[numactl].
4480 Depending on your distribution, 32-bit libraries may be installed in a
4481 location other than `/usr/lib32`. For example, Debian is known to
4482 install some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4484 In this case, make sure to set `LDFLAGS` to all the
4485 relevant 32-bit library paths, for example:
4489 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4493 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4494 the 32-bit consumer daemon:
4499 $ cd $(mktemp -d) &&
4500 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4501 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4502 cd lttng-tools-2.13.* &&
4503 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
4504 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
4505 --disable-bin-lttng --disable-bin-lttng-crash \
4506 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
4508 cd src/bin/lttng-consumerd &&
4509 sudo make install &&
4514 . From your distribution or from source, <<installing-lttng,install>>
4515 the 64-bit versions of LTTng-UST and Userspace RCU.
4517 . Download, build, and install the 64-bit version of the
4518 latest LTTng-tools{nbsp}{revision}:
4523 $ cd $(mktemp -d) &&
4524 wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 &&
4525 tar -xf lttng-tools-latest-2.13.tar.bz2 &&
4526 cd lttng-tools-2.13.* &&
4527 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4528 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4530 sudo make install &&
4535 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4536 when linking your 32-bit application:
4539 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4540 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4543 For example, let's rebuild the quick start example in
4544 ``<<tracing-your-own-user-application,Record user application events>>''
4545 as an instrumented 32-bit application:
4550 $ gcc -m32 -c -I. hello-tp.c
4551 $ gcc -m32 -c hello.c
4552 $ gcc -m32 -o hello hello.o hello-tp.o \
4553 -L/usr/lib32 -L/usr/local/lib32 \
4554 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4559 No special action is required to execute the 32-bit application and
4560 for LTTng to trace it: use the command-line man:lttng(1) tool as usual.
4565 ==== Use `lttng_ust_tracef()`
4567 man:lttng_ust_tracef(3) is a small LTTng-UST API designed for quick,
4568 man:printf(3)-like instrumentation without the burden of
4569 <<tracepoint-provider,creating>> and
4570 <<building-tracepoint-providers-and-user-application,building>>
4571 a tracepoint provider package.
4573 To use `lttng_ust_tracef()` in your application:
4575 . In the C or $$C++$$ source files where you need to use
4576 `lttng_ust_tracef()`, include `<lttng/tracef.h>`:
4581 #include <lttng/tracef.h>
4585 . In the source code of the application, use `lttng_ust_tracef()` like
4586 you would use man:printf(3):
4593 lttng_ust_tracef("my message: %d (%s)", my_integer, my_string);
4599 . Link your application with `liblttng-ust`:
4604 $ gcc -o app app.c -llttng-ust
4608 To record the events that `lttng_ust_tracef()` calls emit:
4610 * <<enabling-disabling-events,Create a recording event rule>> which
4611 matches user space events named `lttng_ust_tracef:*`:
4616 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
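Putting the steps together, a minimal sketch of a complete application using `lttng_ust_tracef()` could be (file name path:{app.c} assumed):

```c
#include <lttng/tracef.h>

int main(void)
{
    int iteration = 7;
    const char *phase = "warmup";

    /* Emits a `lttng_ust_tracef:event` event record whose single
     * `msg` string field holds the formatted message. */
    lttng_ust_tracef("iteration %d (%s)", iteration, phase);
    return 0;
}
```

Build it with `gcc -o app app.c -llttng-ust` and run it while the recording session is active.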
4621 .Limitations of `lttng_ust_tracef()`
4623 The `lttng_ust_tracef()` utility function was developed to make user
4624 space tracing super simple, albeit with notable disadvantages compared
4625 to <<defining-tracepoints,user-defined tracepoints>>:
4627 * All the created events have the same tracepoint provider and
4628 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4629 * There's no static type checking.
4630 * The only event record field you actually get, named `msg`, is a string
4631 potentially containing the values you passed to `lttng_ust_tracef()`
4632 using your own format string. This also means that you can't filter
4633 events with a custom expression at run time because there are no
4635 * Since `lttng_ust_tracef()` uses the man:vasprintf(3) function of the
4636 C{nbsp}standard library behind the scenes to format the strings at run
4637 time, its expected performance is lower than with user-defined
4638 tracepoints, which don't require a conversion to a string.
4640 Taking this into consideration, `lttng_ust_tracef()` is useful for some
4641 quick prototyping and debugging, but you shouldn't consider it for
4642 permanent, serious application instrumentation.
4648 ==== Use `lttng_ust_tracelog()`
4650 The man:lttng_ust_tracelog(3) API is very similar to
4651 <<tracef,`lttng_ust_tracef()`>>, with the difference that it accepts an
4652 additional log level parameter.
4654 The goal of `lttng_ust_tracelog()` is to ease the migration from logging
4657 To use `lttng_ust_tracelog()` in your application:
4659 . In the C or $$C++$$ source files where you need to use
4660 `lttng_ust_tracelog()`, include `<lttng/tracelog.h>`:
4665 #include <lttng/tracelog.h>
4669 . In the source code of the application, use `lttng_ust_tracelog()` like
4670 you would use man:printf(3), except for the first parameter which is
4678 lttng_ust_tracelog(LTTNG_UST_TRACEPOINT_LOGLEVEL_WARNING,
4679     "my message: %d (%s)", my_integer, my_string);
4685 See man:lttng-ust(3) for a list of available log level names.
4687 . Link your application with `liblttng-ust`:
4692 $ gcc -o app app.c -llttng-ust
4696 To record the events that `lttng_ust_tracelog()` calls emit with a log
4697 level _at least as severe as_ a specific log level:
4699 * <<enabling-disabling-events,Create a recording event rule>> which
4700 matches user space tracepoint events named `lttng_ust_tracelog:*` and
4701 with some minimum level of severity:
4706 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4711 To record the events that `lttng_ust_tracelog()` calls emit with a
4712 _specific log level_:
4714 * Create a recording event rule which matches tracepoint events named
4715 `lttng_ust_tracelog:*` and with a specific log level:
4720 $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
4721 --loglevel-only=INFO
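As with `lttng_ust_tracef()`, here's a minimal sketch of a complete application using `lttng_ust_tracelog()`:

```c
#include <lttng/tracelog.h>

int main(void)
{
    int ret = -1;

    /* Like lttng_ust_tracef(), but the first parameter sets the
     * log level of the emitted event. */
    lttng_ust_tracelog(LTTNG_UST_TRACEPOINT_LOGLEVEL_WARNING,
                       "operation failed: %d", ret);
    return 0;
}
```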
4726 [[prebuilt-ust-helpers]]
4727 === Load a prebuilt user space tracing helper
4729 The LTTng-UST package provides a few helpers in the form of preloadable
4730 shared objects which automatically instrument system functions and
4733 The helper shared objects are normally found in dir:{/usr/lib}. If you
4734 built LTTng-UST <<building-from-source,from source>>, they're probably
4735 located in dir:{/usr/local/lib}.
4737 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4740 path:{liblttng-ust-libc-wrapper.so}::
4741 path:{liblttng-ust-pthread-wrapper.so}::
4742 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4743 memory and POSIX threads function tracing>>.
4745 path:{liblttng-ust-cyg-profile.so}::
4746 path:{liblttng-ust-cyg-profile-fast.so}::
4747 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4749 path:{liblttng-ust-dl.so}::
4750 <<liblttng-ust-dl,Dynamic linker tracing>>.
4752 To use a user space tracing helper with any user application:
4754 * Preload the helper shared object when you start the application:
4759 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4763 You can preload more than one helper:
4768 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4774 [[liblttng-ust-libc-pthread-wrapper]]
4775 ==== Instrument C standard library memory and POSIX threads functions
4777 The path:{liblttng-ust-libc-wrapper.so} and
4778 path:{liblttng-ust-pthread-wrapper.so} helpers
4779 add instrumentation to some C standard library and POSIX
4783 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4785 |TP provider name |TP name |Instrumented function
4787 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4788 |`calloc` |man:calloc(3)
4789 |`realloc` |man:realloc(3)
4790 |`free` |man:free(3)
4791 |`memalign` |man:memalign(3)
4792 |`posix_memalign` |man:posix_memalign(3)
4796 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4798 |TP provider name |TP name |Instrumented function
4800 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4801 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4802 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4803 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
When you preload the shared object, it replaces the functions listed
in the previous tables with wrappers which contain tracepoints and call
the replaced functions.
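
When preloaded, each wrapper behaves like the following Python sketch of the interposition pattern (an illustration only: the real helpers are C shared objects loaded through `LD_PRELOAD`, and the names below are hypothetical):

```python
# Illustration of the preload-wrapper pattern: the wrapper emits a
# "tracepoint" (here, appended to a list), then calls the function
# it replaces, exactly as the libc/pthread wrappers do in C.
events = []

def traced(provider, name, original):
    def wrapper(*args):
        events.append((provider, name, args))  # the "tracepoint"
        return original(*args)                 # call the real function
    return wrapper

# Pretend malloc() is the C library function being wrapped:
malloc = traced("lttng_ust_libc", "malloc", lambda size: bytearray(size))

buf = malloc(16)
```

After the call, `events` holds one `("lttng_ust_libc", "malloc", (16,))` entry while `buf` is the result of the real allocation, mirroring how the wrappers record an event and still perform the original work.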
4811 [[liblttng-ust-cyg-profile]]
4812 ==== Instrument function entry and exit
4814 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4815 to the entry and exit points of functions.
4817 man:gcc(1) and man:clang(1) have an option named
4818 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4819 which generates instrumentation calls for entry and exit to functions.
4820 The LTTng-UST function tracing helpers,
4821 path:{liblttng-ust-cyg-profile.so} and
4822 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
to add tracepoints to the two generated instrumentation calls,
`__cyg_profile_func_enter()` and `__cyg_profile_func_exit()` (which
contain `cyg_profile` in their names, hence the name of the helpers).
4826 To use the LTTng-UST function tracing helper, the source files to
4827 instrument must be built using the `-finstrument-functions` compiler
4830 There are two versions of the LTTng-UST function tracing helper:
4832 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4833 that you should only use when it can be _guaranteed_ that the
4834 complete event stream is recorded without any lost event record.
4835 Any kind of duplicate information is left out.
4837 Assuming no event record is lost, having only the function addresses on
4838 entry is enough to create a call graph, since an event record always
4839 contains the ID of the CPU that generated it.
4841 Use a tool like man:addr2line(1) to convert function addresses back to
4842 source file names and line numbers.
4844 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4845 which also works in use cases where event records might get discarded or
4846 not recorded from application startup.
4847 In these cases, the trace analyzer needs more information to be
4848 able to reconstruct the program flow.
4850 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4851 points of this helper.
4853 All the tracepoints that this helper provides have the log level
4854 `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_FUNCTION` (see man:lttng-ust(3)).
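
Assuming a complete event stream, reconstructing a per-CPU call graph from function entry and exit records can be sketched as follows (the record tuples and addresses below are hypothetical; real records come from the recorded trace):

```python
# Rebuild caller/callee edges from func_entry/func_exit records.
# Each record is (cpu_id, "entry" | "exit", function_address); a
# per-CPU stack tracks the currently executing function.
from collections import defaultdict

def call_graph(records):
    stacks = defaultdict(list)   # cpu_id -> stack of addresses
    edges = []                   # (caller, callee) pairs
    for cpu, kind, addr in records:
        if kind == "entry":
            if stacks[cpu]:
                edges.append((stacks[cpu][-1], addr))
            stacks[cpu].append(addr)
        else:  # "exit"
            stacks[cpu].pop()
    return edges

records = [
    (0, "entry", 0x1000),  # main
    (0, "entry", 0x2000),  # main -> f
    (0, "exit",  0x2000),
    (0, "entry", 0x3000),  # main -> g
    (0, "exit",  0x3000),
    (0, "exit",  0x1000),
]
edges = call_graph(records)
```

The resulting edges, `(0x1000, 0x2000)` and `(0x1000, 0x3000)`, are the addresses you would then feed to man:addr2line(1) to recover source locations.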
4856 TIP: It's sometimes a good idea to limit the number of source files that
4857 you compile with the `-finstrument-functions` option to prevent LTTng
4858 from writing an excessive amount of trace data at run time. When using
4860 `-finstrument-functions-exclude-function-list` option to avoid
instrumenting the entries and exits of specific functions.
4866 ==== Instrument the dynamic linker
4868 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4869 man:dlopen(3) and man:dlclose(3) function calls.
4871 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4876 [[java-application]]
4877 === Instrument a Java application
4879 You can instrument any Java application which uses one of the following
4882 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4883 (JUL) core logging facilities.
4885 * https://logging.apache.org/log4j/1.2/[**Apache log4j{nbsp}1.2**], since
4886 LTTng{nbsp}2.6. Note that Apache Log4j{nbsp}2 isn't supported.
4889 .LTTng-UST Java agent imported by a Java application.
4890 image::java-app.png[]
4892 Note that the methods described below are new in LTTng{nbsp}2.8.
4893 Previous LTTng versions use another technique.
4895 NOTE: We use https://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4896 and https://ci.lttng.org/[continuous integration], thus this version is
4897 directly supported. However, the LTTng-UST Java agent is also tested
4898 with OpenJDK{nbsp}7.
4903 ==== Use the LTTng-UST Java agent for `java.util.logging`
4905 To use the LTTng-UST Java agent in a Java application which uses
4906 `java.util.logging` (JUL):
4908 . In the source code of the Java application, import the LTTng-UST log
4909 handler package for `java.util.logging`:
4914 import org.lttng.ust.agent.jul.LttngLogHandler;
4918 . Create an LTTng-UST `java.util.logging` log handler:
4923 Handler lttngUstLogHandler = new LttngLogHandler();
4927 . Add this handler to the `java.util.logging` loggers which should emit
4933 Logger myLogger = Logger.getLogger("some-logger");
4935 myLogger.addHandler(lttngUstLogHandler);
4939 . Use `java.util.logging` log statements and configuration as usual.
4940 The loggers with an attached LTTng-UST log handler can emit
4943 . Before exiting the application, remove the LTTng-UST log handler from
4944 the loggers attached to it and call its `close()` method:
4949 myLogger.removeHandler(lttngUstLogHandler);
4950 lttngUstLogHandler.close();
4954 This isn't strictly necessary, but it's recommended for a clean
4955 disposal of the resources of the handler.
4957 . Include the common and JUL-specific JAR files of the LTTng-UST Java agent,
4958 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4960 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4961 path] when you build the Java application.
4963 The JAR files are typically located in dir:{/usr/share/java}.
4965 IMPORTANT: The LTTng-UST Java agent must be
4966 <<installing-lttng,installed>> for the logging framework your
4969 .Use the LTTng-UST Java agent for `java.util.logging`.
4974 import java.io.IOException;
4975 import java.util.logging.Handler;
4976 import java.util.logging.Logger;
4977 import org.lttng.ust.agent.jul.LttngLogHandler;
4981 private static final int answer = 42;
4983 public static void main(String[] argv) throws Exception
4986 Logger logger = Logger.getLogger("jello");
4988 // Create an LTTng-UST log handler
4989 Handler lttngUstLogHandler = new LttngLogHandler();
4991 // Add the LTTng-UST log handler to our logger
4992 logger.addHandler(lttngUstLogHandler);
4995 logger.info("some info");
4996 logger.warning("some warning");
4998 logger.finer("finer information; the answer is " + answer);
5000 logger.severe("error!");
5002 // Not mandatory, but cleaner
5003 logger.removeHandler(lttngUstLogHandler);
5004 lttngUstLogHandler.close();
5013 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
5016 <<creating-destroying-tracing-sessions,Create a recording session>>,
5017 <<enabling-disabling-events,create a recording event rule>> matching JUL
5018 events named `jello`, and <<basic-tracing-session-control,start
5024 $ lttng enable-event --jul jello
5028 Run the compiled class:
5032 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
5035 <<basic-tracing-session-control,Stop recording>> and inspect the
5045 In the resulting trace, an <<event,event record>> which a Java
5046 application using `java.util.logging` generated is named
5047 `lttng_jul:event` and has the following fields:
5056 Name of the class in which the log statement was executed.
5059 Name of the method in which the log statement was executed.
5062 Logging time (timestamp in milliseconds).
5065 Log level integer value.
5068 ID of the thread in which the log statement was executed.
5070 Use the opt:lttng-enable-event(1):--loglevel or
5071 opt:lttng-enable-event(1):--loglevel-only option of the
5072 man:lttng-enable-event(1) command to target a range of
5073 `java.util.logging` log levels or a specific `java.util.logging` log
5079 ==== Use the LTTng-UST Java agent for Apache log4j
5081 To use the LTTng-UST Java agent in a Java application which uses
5082 Apache log4j{nbsp}1.2:
5084 . In the source code of the Java application, import the LTTng-UST log
5085 appender package for Apache log4j:
5090 import org.lttng.ust.agent.log4j.LttngLogAppender;
5094 . Create an LTTng-UST log4j log appender:
5099 Appender lttngUstLogAppender = new LttngLogAppender();
5103 . Add this appender to the log4j loggers which should emit LTTng events:
5108 Logger myLogger = Logger.getLogger("some-logger");
5110 myLogger.addAppender(lttngUstLogAppender);
5114 . Use Apache log4j log statements and configuration as usual. The
5115 loggers with an attached LTTng-UST log appender can emit LTTng events.
5117 . Before exiting the application, remove the LTTng-UST log appender from
5118 the loggers attached to it and call its `close()` method:
5123 myLogger.removeAppender(lttngUstLogAppender);
5124 lttngUstLogAppender.close();
5128 This isn't strictly necessary, but it's recommended for a clean
5129 disposal of the resources of the appender.
5131 . Include the common and log4j-specific JAR
5132 files of the LTTng-UST Java agent, path:{lttng-ust-agent-common.jar} and
5133 path:{lttng-ust-agent-log4j.jar}, in the
5134 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
5135 path] when you build the Java application.
5137 The JAR files are typically located in dir:{/usr/share/java}.
5139 IMPORTANT: The LTTng-UST Java agent must be
5140 <<installing-lttng,installed>> for the logging framework your
5143 .Use the LTTng-UST Java agent for Apache log4j.
5148 import org.apache.log4j.Appender;
5149 import org.apache.log4j.Logger;
5150 import org.lttng.ust.agent.log4j.LttngLogAppender;
5154 private static final int answer = 42;
5156 public static void main(String[] argv) throws Exception
5159 Logger logger = Logger.getLogger("jello");
5161 // Create an LTTng-UST log appender
5162 Appender lttngUstLogAppender = new LttngLogAppender();
5164 // Add the LTTng-UST log appender to our logger
5165 logger.addAppender(lttngUstLogAppender);
5168 logger.info("some info");
5169 logger.warn("some warning");
5171 logger.debug("debug information; the answer is " + answer);
5173 logger.fatal("error!");
5175 // Not mandatory, but cleaner
5176 logger.removeAppender(lttngUstLogAppender);
5177 lttngUstLogAppender.close();
5183 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
5188 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
5191 <<creating-destroying-tracing-sessions,Create a recording session>>,
5192 <<enabling-disabling-events,create a recording event rule>> matching
5193 log4j events named `jello`, and <<basic-tracing-session-control,start
5199 $ lttng enable-event --log4j jello
5203 Run the compiled class:
5207 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
5210 <<basic-tracing-session-control,Stop recording>> and inspect the
5220 In the resulting trace, an <<event,event record>> which a Java
5221 application using log4j generated is named `lttng_log4j:event` and
5222 has the following fields:
5231 Name of the class in which the log statement was executed.
5234 Name of the method in which the log statement was executed.
5237 Name of the file in which the executed log statement is located.
5240 Line number at which the log statement was executed.
5246 Log level integer value.
5249 Name of the Java thread in which the log statement was executed.
5251 Use the opt:lttng-enable-event(1):--loglevel or
5252 opt:lttng-enable-event(1):--loglevel-only option of the
5253 man:lttng-enable-event(1) command to target a range of Apache log4j
5254 log levels or a specific log4j log level.
5258 [[java-application-context]]
5259 ==== Provide application-specific context fields in a Java application
5261 A Java application-specific context field is a piece of state which
5262 the Java application provides. You can <<adding-context,add>> such
5263 a context field to be recorded, using the
5264 man:lttng-add-context(1) command, to each <<event,event record>>
5265 which the log statements of this application produce.
5267 For example, a given object might have a current request ID variable.
5268 You can create a context information retriever for this object and
5269 assign a name to this current request ID. You can then, using the
5270 man:lttng-add-context(1) command, add this context field by name so that
5271 LTTng writes it to the event records of a given `java.util.logging` or
5272 log4j <<channel,channel>>.
5274 To provide application-specific context fields in a Java application:
5276 . In the source code of the Java application, import the LTTng-UST
5277 Java agent context classes and interfaces:
5282 import org.lttng.ust.agent.context.ContextInfoManager;
5283 import org.lttng.ust.agent.context.IContextInfoRetriever;
5287 . Create a context information retriever class, that is, a class which
5288 implements the `IContextInfoRetriever` interface:
5293 class MyContextInfoRetriever implements IContextInfoRetriever
5296 public Object retrieveContextInfo(String key)
5298 if (key.equals("intCtx")) {
5300 } else if (key.equals("strContext")) {
5301 return "context value!";
5310 This `retrieveContextInfo()` method is the only member of the
5311 `IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields and the state variables they return depend on your
5316 All primitive types and objects are supported as context fields.
5317 When `retrieveContextInfo()` returns an object, the context field
5318 serializer calls its `toString()` method to add a string field to
5319 event records. The method can also return `null`, which means that
5320 no context field is available for the required name.
5322 . Register an instance of your context information retriever class to
5323 the context information manager singleton:
5328 IContextInfoRetriever cir = new MyContextInfoRetriever();
5329 ContextInfoManager cim = ContextInfoManager.getInstance();
5330 cim.registerContextInfoRetriever("retrieverName", cir);
5334 . Before exiting the application, remove your context information
5335 retriever from the context information manager singleton:
5340 ContextInfoManager cim = ContextInfoManager.getInstance();
5341 cim.unregisterContextInfoRetriever("retrieverName");
5345 This isn't strictly necessary, but it's recommended for a clean
5346 disposal of some resources of the manager.
5348 . Build your Java application with LTTng-UST Java agent support as
5349 usual, following the procedure for either the
5350 <<jul,`java.util.logging`>> or <<log4j,Apache log4j>> framework.
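
The retrieval and serialization semantics of `retrieveContextInfo()` (step 2 above) can be modeled with this Python sketch, where `str()` stands in for the Java `toString()` call and all names are hypothetical:

```python
# Model of IContextInfoRetriever.retrieveContextInfo() semantics:
# primitives pass through, other objects are stringified, and an
# unknown key yields no context field at all.
class RequestState:
    """Stand-in for an application object with a toString() method."""
    def __str__(self):
        return "req-1234"

def retrieve_context_info(key):
    contexts = {"intCtx": 42, "strContext": "context value!",
                "objCtx": RequestState()}
    return contexts.get(key)  # None -> no field available

def serialize(key):
    value = retrieve_context_info(key)
    if value is None:
        return None                      # field omitted from the record
    if isinstance(value, (int, float, bool, str)):
        return value                     # recorded as-is
    return str(value)                    # toString()-style fallback
```

Under this model, `serialize("intCtx")` yields the integer unchanged, `serialize("objCtx")` yields the stringified object, and an unknown key yields no field.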
5352 .Provide application-specific context fields in a Java application.
5357 import java.util.logging.Handler;
5358 import java.util.logging.Logger;
5359 import org.lttng.ust.agent.jul.LttngLogHandler;
5360 import org.lttng.ust.agent.context.ContextInfoManager;
5361 import org.lttng.ust.agent.context.IContextInfoRetriever;
5365 // Our context information retriever class
5366 private static class MyContextInfoRetriever
5367 implements IContextInfoRetriever
5370 public Object retrieveContextInfo(String key) {
5371 if (key.equals("intCtx")) {
5373 } else if (key.equals("strContext")) {
5374 return "context value!";
5381 private static final int answer = 42;
5383 public static void main(String args[]) throws Exception
5385 // Get the context information manager instance
5386 ContextInfoManager cim = ContextInfoManager.getInstance();
5388 // Create and register our context information retriever
5389 IContextInfoRetriever cir = new MyContextInfoRetriever();
5390 cim.registerContextInfoRetriever("myRetriever", cir);
5393 Logger logger = Logger.getLogger("jello");
5395 // Create an LTTng-UST log handler
5396 Handler lttngUstLogHandler = new LttngLogHandler();
5398 // Add the LTTng-UST log handler to our logger
5399 logger.addHandler(lttngUstLogHandler);
5402 logger.info("some info");
5403 logger.warning("some warning");
5405 logger.finer("finer information; the answer is " + answer);
5407 logger.severe("error!");
5409 // Not mandatory, but cleaner
5410 logger.removeHandler(lttngUstLogHandler);
5411 lttngUstLogHandler.close();
5412 cim.unregisterContextInfoRetriever("myRetriever");
5421 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
5424 <<creating-destroying-tracing-sessions,Create a recording session>> and
5425 <<enabling-disabling-events,create a recording event rule>> matching
5426 `java.util.logging` events named `jello`:
5431 $ lttng enable-event --jul jello
5434 <<adding-context,Add the application-specific context fields>> to be
5435 recorded to the event records of the `java.util.logging` channel:
5439 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
5440 $ lttng add-context --jul --type='$app.myRetriever:strContext'
5443 <<basic-tracing-session-control,Start recording>>:
5450 Run the compiled class:
5454 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
5457 <<basic-tracing-session-control,Stop recording>> and inspect the
5469 [[python-application]]
5470 === Instrument a Python application
5472 You can instrument a Python{nbsp}2 or Python{nbsp}3 application which
5474 https://docs.python.org/3/library/logging.html[`logging`] package.
5476 Each log statement creates an LTTng event once the application module
5477 imports the <<lttng-ust-agents,LTTng-UST Python agent>> package.
5480 .A Python application importing the LTTng-UST Python agent.
5481 image::python-app.png[]
5483 To use the LTTng-UST Python agent:
5485 . In the source code of the Python application, import the LTTng-UST
5495 The LTTng-UST Python agent automatically adds its logging handler to the
5496 root logger at import time.
5498 A log statement that the application executes before this import doesn't
5499 create an LTTng event.
5501 IMPORTANT: The LTTng-UST Python agent must be
5502 <<installing-lttng,installed>>.
5504 . Use log statements and logging configuration as usual.
5505 Since the LTTng-UST Python agent adds a handler to the _root_
5506 logger, any log statement from any logger can emit an LTTng event.
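
You can observe the same attach-time rule with any root logger handler, using only the standard `logging` package (no LTTng required; `CollectingHandler` below is a stand-in for the agent's handler):

```python
import logging

class CollectingHandler(logging.Handler):
    """Stores every record it receives, like the agent's handler."""
    def __init__(self):
        super().__init__(level=logging.DEBUG)
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("my-logger")
logger.setLevel(logging.DEBUG)

logger.info("before attach")             # no handler yet: not collected

handler = CollectingHandler()
logging.getLogger().addHandler(handler)  # attach to the _root_ logger

logger.info("after attach")              # propagates to root: collected
```

Only `"after attach"` reaches the handler, which is why log statements executed before the agent import never become LTTng events.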
5508 .Use the LTTng-UST Python agent.
5519 logging.basicConfig()
5520 logger = logging.getLogger('my-logger')
5523 logger.debug('debug message')
5524 logger.info('info message')
5525 logger.warn('warn message')
5526 logger.error('error message')
5527 logger.critical('critical message')
5531 if __name__ == '__main__':
5535 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
5536 logging handler which prints to the standard error stream, isn't
5537 strictly required for LTTng-UST tracing to work, but in versions of
5538 Python preceding{nbsp}3.2, you could see a warning message which
5539 indicates that no handler exists for the logger `my-logger`.
5541 <<creating-destroying-tracing-sessions,Create a recording session>>,
5542 <<enabling-disabling-events,create a recording event rule>> matching
5543 Python logging events named `my-logger`, and
5544 <<basic-tracing-session-control,start recording>>:
5549 $ lttng enable-event --python my-logger
5553 Run the Python script:
5560 <<basic-tracing-session-control,Stop recording>> and inspect the
5570 In the resulting trace, an <<event,event record>> which a Python
5571 application generated is named `lttng_python:event` and has the
5575 Logging time (string).
5584 Name of the function in which the log statement was executed.
5587 Line number at which the log statement was executed.
5590 Log level integer value.
5593 ID of the Python thread in which the log statement was executed.
5596 Name of the Python thread in which the log statement was executed.
5598 Use the opt:lttng-enable-event(1):--loglevel or
5599 opt:lttng-enable-event(1):--loglevel-only option of the
5600 man:lttng-enable-event(1) command to target a range of Python log levels
5601 or a specific Python log level.
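
The distinction between the two options maps naturally onto the numeric values of the standard Python logging levels, where a larger value is more severe (a sketch of the matching semantics, not the LTTng implementation):

```python
import logging

def matches_loglevel(record_level, threshold):
    """--loglevel: match events at least as severe as the threshold."""
    return record_level >= threshold

def matches_loglevel_only(record_level, threshold):
    """--loglevel-only: match events at exactly this level."""
    return record_level == threshold

# ERROR (40) is at least as severe as WARNING (30):
assert matches_loglevel(logging.ERROR, logging.WARNING)
assert not matches_loglevel(logging.INFO, logging.WARNING)

# --loglevel-only=WARNING matches WARNING records only:
assert matches_loglevel_only(logging.WARNING, logging.WARNING)
assert not matches_loglevel_only(logging.ERROR, logging.WARNING)
```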
5603 When an application imports the LTTng-UST Python agent, the agent tries
5604 to register to a <<lttng-sessiond,session daemon>>. Note that you must
5605 <<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
to it for five seconds, after which the application continues
without LTTng tracing support. Override this timeout value with
5609 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
5612 If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent tries to reconnect and
register to a session daemon every three seconds. Override this
5615 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
5620 [[proc-lttng-logger-abi]]
5621 === Use the LTTng logger
5623 The `lttng-tracer` Linux kernel module, part of
5624 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger files
5625 path:{/proc/lttng-logger} and path:{/dev/lttng-logger} (since
5626 LTTng{nbsp}2.11) when it's loaded. Any application can write text data
5627 to any of those files to create one or more LTTng events.
5630 .An application writes to the LTTng logger file to create one or more LTTng events.
5631 image::lttng-logger.png[]
5633 The LTTng logger is the quickest method--not the most efficient,
5634 however--to add instrumentation to an application. It's designed
5635 mostly to instrument shell scripts:
5639 $ echo "Some message, some $variable" > /dev/lttng-logger
5642 Any event that the LTTng logger creates is named `lttng_logger` and
5643 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5644 other instrumentation points in the kernel tracing domain, **any Unix
5645 user** can <<enabling-disabling-events,create a recording event rule>>
5646 which matches events named `lttng_logger`, not only the root user or
5647 users in the <<tracing-group,tracing group>>.
5649 To use the LTTng logger:
5651 * From any application, write text data to the path:{/dev/lttng-logger}
5654 The `msg` field of `lttng_logger` event records contains the
5657 NOTE: The maximum message length of an LTTng logger event is
5658 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5659 than one event to contain the remaining data.
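
The splitting behavior amounts to fixed-size chunking of the written data (a sketch only; the actual work happens inside the `lttng-tracer` kernel module):

```python
LTTNG_LOGGER_MAX_MSG = 1024  # bytes per lttng_logger event record

def split_message(data, limit=LTTNG_LOGGER_MAX_MSG):
    """Split one write() into the per-event payloads LTTng emits."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

payloads = split_message(b"x" * 2500)
# Three events: two full 1024-byte payloads and a 452-byte remainder.
```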
5661 You shouldn't use the LTTng logger to trace a user application which you
5662 can instrument in a more efficient way, namely:
5664 * <<c-application,C and $$C++$$ applications>>.
5665 * <<java-application,Java applications>>.
5666 * <<python-application,Python applications>>.
5668 .Use the LTTng logger.
5673 echo 'Hello, World!' > /dev/lttng-logger
5675 df --human-readable --print-type / > /dev/lttng-logger
5678 <<creating-destroying-tracing-sessions,Create a recording session>>,
5679 <<enabling-disabling-events,create a recording event rule>> matching
5680 Linux kernel tracepoint events named `lttng_logger`, and
5681 <<basic-tracing-session-control,start recording>>:
5686 $ lttng enable-event --kernel lttng_logger
5690 Run the Bash script:
5697 <<basic-tracing-session-control,Stop recording>> and inspect the recorded
5708 [[instrumenting-linux-kernel]]
5709 === Instrument a Linux kernel image or module
5711 NOTE: This section shows how to _add_ instrumentation points to the
5712 Linux kernel. The subsystems of the kernel are already thoroughly
5713 instrumented at strategic points for LTTng when you
5714 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5718 [[linux-add-lttng-layer]]
5719 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5721 This section shows how to add an LTTng layer to existing ftrace
5722 instrumentation using the `TRACE_EVENT()` API.
5724 This section doesn't document the `TRACE_EVENT()` macro. Read the
5725 following articles to learn more about this API:
5727 * https://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part{nbsp}1)]
5728 * https://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part{nbsp}2)]
5729 * https://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part{nbsp}3)]
5731 The following procedure assumes that your ftrace tracepoints are
5732 correctly defined in their own header and that they're created in
5733 one source file using the `CREATE_TRACE_POINTS` definition.
5735 To add an LTTng layer over an existing ftrace tracepoint:
5737 . Make sure the following kernel configuration options are
5743 * `CONFIG_HIGH_RES_TIMERS`
5744 * `CONFIG_TRACEPOINTS`
5747 . Build the Linux source tree with your custom ftrace tracepoints.
5748 . Boot the resulting Linux image on your target system.
5750 Confirm that the tracepoints exist by looking for their names in the
5751 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5752 is your subsystem name.
5754 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5759 $ cd $(mktemp -d) &&
5760 wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
5761 tar -xf lttng-modules-latest-2.13.tar.bz2 &&
5762 cd lttng-modules-2.13.*
5766 . In dir:{instrumentation/events/lttng-module}, relative to the root
5767 of the LTTng-modules source tree, create a header file named
5768 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5769 LTTng-modules tracepoint definitions using the LTTng-modules
5772 Start with this template:
5776 .path:{instrumentation/events/lttng-module/my_subsys.h}
5779 #define TRACE_SYSTEM my_subsys
5781 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5782 #define _LTTNG_MY_SUBSYS_H
5784 #include "../../../probes/lttng-tracepoint-event.h"
5785 #include <linux/tracepoint.h>
5787 LTTNG_TRACEPOINT_EVENT(
5789 * Format is identical to the TRACE_EVENT() version for the three
5790 * following macro parameters:
5793 TP_PROTO(int my_int, const char *my_string),
5794 TP_ARGS(my_int, my_string),
5796 /* LTTng-modules specific macros */
5798 ctf_integer(int, my_int_field, my_int)
ctf_string(my_string_field, my_string)
5803 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5805 #include "../../../probes/define_trace.h"
5809 The entries in the `TP_FIELDS()` section are the list of fields for the
5810 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5811 the `TRACE_EVENT()` ftrace macro.
5813 See ``<<lttng-modules-tp-fields,Tracepoint fields macros>>'' for a
5814 complete description of the available `ctf_*()` macros.
5816 . Create the kernel module C{nbsp}source file of the LTTng-modules
5817 probe, +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5822 .path:{probes/lttng-probe-my-subsys.c}
5824 #include <linux/module.h>
5825 #include "../lttng-tracer.h"
5828 * Build-time verification of mismatch between mainline
5829 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5830 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5832 #include <trace/events/my_subsys.h>
5834 /* Create LTTng tracepoint probes */
5835 #define LTTNG_PACKAGE_BUILD
5836 #define CREATE_TRACE_POINTS
5837 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5839 #include "../instrumentation/events/lttng-module/my_subsys.h"
5841 MODULE_LICENSE("GPL and additional rights");
5842 MODULE_AUTHOR("Your name <your-email>");
5843 MODULE_DESCRIPTION("LTTng my_subsys probes");
5844 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5845 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5846 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5847 LTTNG_MODULES_EXTRAVERSION);
5851 . Edit path:{probes/KBuild} and add your new kernel module object
5852 next to the existing ones:
5856 .path:{probes/KBuild}
5860 obj-m += lttng-probe-module.o
5861 obj-m += lttng-probe-power.o
5863 obj-m += lttng-probe-my-subsys.o
5869 . Build and install the LTTng kernel modules:
5874 $ make KERNELDIR=/path/to/linux
5875 # make modules_install && depmod -a
5879 Replace `/path/to/linux` with the path to the Linux source tree where
5880 you defined and used tracepoints with the `TRACE_EVENT()` ftrace macro.
5882 Note that you can also use the
5883 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5884 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5885 C{nbsp}code that need to be executed before LTTng records the event
5888 The best way to learn how to use the previous LTTng-modules macros is to
5889 inspect the existing LTTng-modules tracepoint definitions in the
5890 dir:{instrumentation/events/lttng-module} header files. Compare them
5891 with the Linux kernel mainline versions in the
5892 dir:{include/trace/events} directory of the Linux source tree.
5896 [[lttng-tracepoint-event-code]]
5897 ===== Use custom C code to access the data for tracepoint fields
Although we recommend that you always use the
5900 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5901 the arguments and fields of an LTTng-modules tracepoint when possible,
5902 sometimes you need a more complex process to access the data that the
5903 tracer records as event record fields. In other words, you need local
5904 variables and multiple C{nbsp}statements instead of simple
5905 argument-based expressions that you pass to the
5906 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5908 Use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5909 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5910 a block of C{nbsp}code to be executed before LTTng records the fields.
5911 The structure of this macro is:
5914 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5916 LTTNG_TRACEPOINT_EVENT_CODE(
5918 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5919 * version for the following three macro parameters:
5922 TP_PROTO(int my_int, const char *my_string),
5923 TP_ARGS(my_int, my_string),
5925 /* Declarations of custom local variables */
5928 unsigned long b = 0;
5929 const char *name = "(undefined)";
5930 struct my_struct *my_struct;
5934 * Custom code which uses both tracepoint arguments
5935 * (in TP_ARGS()) and local variables (in TP_locvar()).
5937 * Local variables are actually members of a structure pointed
5938 * to by the special variable tp_locvar.
5942 tp_locvar->a = my_int + 17;
5943 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5944 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5945 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5946 put_my_struct(tp_locvar->my_struct);
5955 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5956 * version for this, except that tp_locvar members can be
5957 * used in the argument expression parameters of
5958 * the ctf_*() macros.
5961 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5962 ctf_integer(int, my_struct_a, tp_locvar->a)
5963 ctf_string(my_string_field, my_string)
5964 ctf_string(my_struct_name, tp_locvar->name)
5969 IMPORTANT: The C code defined in `TP_code()` must not have any side
5970 effects when executed. In particular, the code must not allocate
5971 memory or get resources without deallocating this memory or putting
5972 those resources afterwards.
5975 [[instrumenting-linux-kernel-tracing]]
5976 ==== Load and unload a custom probe kernel module
5978 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5979 kernel module>> in the kernel before it can emit LTTng events.
To load the default probe kernel modules and a custom probe kernel
module:
5984 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
probe modules to load when starting a root <<lttng-sessiond,session
daemon>>:
5989 .Load the `my_subsys`, `usb`, and the default probe modules.
5993 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
You only need to pass the subsystem name, not the whole kernel module
name.
6001 To load _only_ a given custom probe kernel module:
6003 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
6004 modules to load when starting a root session daemon:
6007 .Load only the `my_subsys` and `usb` probe modules.
6011 # lttng-sessiond --kmod-probes=my_subsys,usb
6016 To confirm that a probe module is loaded:
6023 $ lsmod | grep lttng_probe_usb
6027 To unload the loaded probe modules:
6029 * Kill the session daemon with `SIGTERM`:
6034 # pkill lttng-sessiond
6038 You can also use the `--remove` option of man:modprobe(8) if the session
6039 daemon terminates abnormally.
6042 [[controlling-tracing]]
6045 Once an application or a Linux kernel is <<instrumenting,instrumented>>
6046 for LTTng tracing, you can _trace_ it.
6048 In the LTTng context, _tracing_ means making sure that LTTng attempts to
6049 execute some action(s) when a CPU executes an instrumentation point.
This section is divided into topics on how to use the various
6052 <<plumbing,components of LTTng>>, in particular the
6053 <<lttng-cli,cmd:lttng command-line tool>>, to _control_ the LTTng
6054 daemons and tracers.
6056 NOTE: In the following subsections, we refer to an man:lttng(1) command
6057 using its man page name. For example, instead of ``Run the `create`
6058 command to'', we write ``Run the man:lttng-create(1) command to''.
6062 === Start a session daemon
6064 In some situations, you need to run a <<lttng-sessiond,session daemon>>
6065 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
You will see the following error when you run a command while no session
daemon is running:
6072 Error: No session daemon is available
6075 The only command that automatically runs a session daemon is
6076 man:lttng-create(1), which you use to
6077 <<creating-destroying-tracing-sessions,create a recording session>>. While
this could be your most used first operation, sometimes it's not. Some
examples are:
6081 * <<list-instrumentation-points,List the available instrumentation points>>.
6082 * <<saving-loading-tracing-session,Load a recording session configuration>>.
6083 * <<add-event-rule-matches-trigger,Add a trigger>>.
None of the examples above requires a recording session to operate on.
6087 [[tracing-group]] Each Unix user can have its own running session daemon
6088 to use the user space LTTng tracer. The session daemon that the `root`
6089 user starts is the only one allowed to control the LTTng kernel tracer.
6090 Members of the Unix _tracing group_ may connect to and control the root
6091 session daemon, even for user space tracing. See the ``Session daemon
connection'' section of man:lttng(1) to learn more about the Unix
tracing group.
6095 To start a user session daemon:
6097 * Run man:lttng-sessiond(8):
6102 $ lttng-sessiond --daemonize
6106 To start the root session daemon:
6108 * Run man:lttng-sessiond(8) as the `root` user:
6113 # lttng-sessiond --daemonize
6117 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
6118 start the session daemon in foreground.
6120 To stop a session daemon, kill its process (see man:kill(1)) with the
6121 standard `TERM` signal.
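Sending the standard `TERM` signal can be sketched with a throwaway
process standing in for the session daemon (the `sleep` command below is
only a stand-in, not part of LTTng):

```shell
# Start a long-running process to stand in for lttng-sessiond
# (sleep is only a stand-in here, not part of LTTng).
sleep 1000 &
pid=$!

# Send the standard TERM signal, as you would to a session daemon.
kill -TERM "$pid"

# Reap the process. A process killed by TERM exits with
# status 143 (128 + signal number 15).
status=0
wait "$pid" || status=$?
echo "exit status: $status"
```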
6123 Note that some Linux distributions could manage the LTTng session daemon
6124 as a service. In this case, we suggest that you use the service manager
6125 to start, restart, and stop session daemons.
6128 [[creating-destroying-tracing-sessions]]
6129 === Create and destroy a recording session
6131 Many LTTng control operations happen in the scope of a
6132 <<tracing-session,recording session>>, which is the dialogue between the
6133 <<lttng-sessiond,session daemon>> and you for everything related to
6134 <<event,event recording>>.
6136 To create a recording session with a generated name:
6138 * Use the man:lttng-create(1) command:
6147 The name of the created recording session is `auto` followed by the
6150 To create a recording session with a specific name:
6152 * Use the optional argument of the man:lttng-create(1) command:
6157 $ lttng create SESSION
6161 Replace +__SESSION__+ with your specific recording session name.
6163 In <<local-mode,local mode>>, LTTng writes the traces of a recording
6164 session to the +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+
6165 directory by default, where +__NAME__+ is the name of the recording
session. Note that the env:LTTNG_HOME environment variable defaults to
`$HOME` if not set.
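The default output location can be sketched in shell; the session name
and date/time below are hypothetical placeholders:

```shell
# LTTng uses $LTTNG_HOME when it's set, falling back to $HOME.
lttng_home="${LTTNG_HOME:-$HOME}"

# Hypothetical recording session name and creation date/time.
name="my-session"
datetime="20231128-144500"

# Default trace output directory for this recording session.
echo "$lttng_home/lttng-traces/$name-$datetime"
```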
6169 To output LTTng traces to a non-default location:
* Use the opt:lttng-create(1):--output option of the man:lttng-create(1)
command:
6177 $ lttng create my-session --output=/tmp/some-directory
6181 You may create as many recording sessions as you wish.
6183 To list all the existing recording sessions for your Unix user, or for
6184 all users if your Unix user is `root`:
6186 * Use the man:lttng-list(1) command:
6195 [[cur-tracing-session]]When you create a recording session, the
6196 man:lttng-create(1) command sets it as the _current recording session_.
6197 The following man:lttng(1) commands operate on the current recording
6198 session when you don't specify one:
6200 [role="list-3-cols"]
6201 * man:lttng-add-context(1)
6202 * man:lttng-clear(1)
6203 * man:lttng-destroy(1)
6204 * man:lttng-disable-channel(1)
6205 * man:lttng-disable-event(1)
6206 * man:lttng-disable-rotation(1)
6207 * man:lttng-enable-channel(1)
6208 * man:lttng-enable-event(1)
6209 * man:lttng-enable-rotation(1)
6211 * man:lttng-regenerate(1)
6212 * man:lttng-rotate(1)
6214 * man:lttng-snapshot(1)
6215 * man:lttng-start(1)
6216 * man:lttng-status(1)
6218 * man:lttng-track(1)
6219 * man:lttng-untrack(1)
6222 To change the current recording session:
6224 * Use the man:lttng-set-session(1) command:
6229 $ lttng set-session SESSION
6233 Replace +__SESSION__+ with the name of the new current recording session.
6235 When you're done recording in a given recording session, destroy it.
6236 This operation frees the resources taken by the recording session to
6237 destroy; it doesn't destroy the trace data that LTTng wrote for this
recording session (see ``<<clear,Clear a recording session>>'' for one
way to do this).
6241 To destroy the current recording session:
6243 * Use the man:lttng-destroy(1) command:
6252 The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
6253 implicitly (see ``<<basic-tracing-session-control,Start and stop a
6254 recording session>>''). You need to stop recording to make LTTng flush the
6255 remaining trace data and make the trace readable.
6258 [[list-instrumentation-points]]
6259 === List the available instrumentation points
6261 The <<lttng-sessiond,session daemon>> can query the running instrumented
6262 user applications and the Linux kernel to get a list of available
6263 instrumentation points:
6265 * LTTng tracepoints and system calls for the Linux kernel
6266 <<domain,tracing domain>>.
6268 * LTTng tracepoints for the user space tracing domain.
6270 To list the available instrumentation points:
6272 . <<start-sessiond,Make sure>> there's a running
<<lttng-sessiond,session daemon>> to which your Unix user can
connect.
6276 . Use the man:lttng-list(1) command with the option of the requested
6277 tracing domain amongst:
6280 opt:lttng-list(1):--kernel::
6281 Linux kernel tracepoints.
6283 Your Unix user must be `root`, or it must be a member of the Unix
6284 <<tracing-group,tracing group>>.
6286 opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall::
6287 Linux kernel system calls.
6289 Your Unix user must be `root`, or it must be a member of the Unix
6290 <<tracing-group,tracing group>>.
6292 opt:lttng-list(1):--userspace::
6293 User space tracepoints.
6295 opt:lttng-list(1):--jul::
6296 `java.util.logging` loggers.
6298 opt:lttng-list(1):--log4j::
6299 Apache log4j loggers.
opt:lttng-list(1):--python::
Python loggers.
6305 .List the available user space tracepoints.
6309 $ lttng list --userspace
6313 .List the available Linux kernel system calls.
6317 $ lttng list --kernel --syscall
6322 [[enabling-disabling-events]]
6323 === Create and enable a recording event rule
6325 Once you <<creating-destroying-tracing-sessions,create a recording
6326 session>>, you can create <<event,recording event rules>> with the
6327 man:lttng-enable-event(1) command.
6329 The man:lttng-enable-event(1) command always attaches an event rule to a
6330 <<channel,channel>> on creation. The command can create a _default
6331 channel_, named `channel0`, for you. The man:lttng-enable-event(1)
6332 command reuses the default channel each time you run it for the same
6333 tracing domain and session.
6335 A recording event rule is always enabled at creation time.
6337 The following examples show how to combine the command-line arguments of
6338 the man:lttng-enable-event(1) command to create simple to more complex
6339 recording event rules within the <<cur-tracing-session,current recording
6342 .Create a recording event rule matching specific Linux kernel tracepoint events (default channel).
6346 # lttng enable-event --kernel sched_switch
6350 .Create a recording event rule matching Linux kernel system call events with four specific names (default channel).
6354 # lttng enable-event --kernel --syscall open,write,read,close
.Create recording event rules matching tracepoint events which satisfy filter expressions (default channel).
6362 # lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
6367 # lttng enable-event --kernel --all \
6368 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
6373 $ lttng enable-event --jul my_logger \
6374 --filter='$app.retriever:cur_msg_id > 3'
6377 IMPORTANT: Make sure to always single-quote the filter string when you
6378 run man:lttng(1) from a shell.
6380 See also ``<<pid-tracking,Allow specific processes to record events>>''
6381 which offers another, more efficient filtering mechanism for process ID,
6382 user ID, and group ID attributes.
6385 .Create a recording event rule matching any user space event from the `my_app` tracepoint provider and with a log level range (default channel).
6389 $ lttng enable-event --userspace my_app:'*' --loglevel=INFO
6392 IMPORTANT: Make sure to always single-quote the wildcard character when
6393 you run man:lttng(1) from a shell.
6396 .Create a recording event rule matching user space events named specifically, but with name exclusions (default channel).
6400 $ lttng enable-event --userspace my_app:'*' \
6401 --exclude=my_app:set_user,my_app:handle_sig
6405 .Create a recording event rule matching any Apache log4j event with a specific log level (default channel).
6409 $ lttng enable-event --log4j --all --loglevel-only=WARN
6413 .Create a recording event rule, attached to a specific channel, and matching user space tracepoint events named `my_app:my_tracepoint`.
6417 $ lttng enable-event --userspace my_app:my_tracepoint \
6418 --channel=my-channel
6422 .Create a recording event rule matching user space probe events for the `malloc` function entry in path:{/usr/lib/libc.so.6}:
6426 # lttng enable-event --kernel \
--userspace-probe=/usr/lib/libc.so.6:malloc \
libc_malloc
6432 .Create a recording event rule matching user space probe events for the `server`/`accept_request` https://www.sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps[USDT probe] in path:{/usr/bin/serv}:
6436 # lttng enable-event --kernel \
6437 --userspace-probe=sdt:serv:server:accept_request \
6438 server_accept_request
6442 The recording event rules of a given channel form a whitelist: as soon
6443 as an event rule matches an event, LTTng emits it _once_ and therefore
6444 <<channel-overwrite-mode-vs-discard-mode,can>> record it. For example,
6445 the following rules both match user space tracepoint events named
6446 `my_app:my_tracepoint` with an `INFO` log level:
6450 $ lttng enable-event --userspace my_app:my_tracepoint
6451 $ lttng enable-event --userspace my_app:my_tracepoint \
6455 The second recording event rule is redundant: the first one includes the
6459 [[disable-event-rule]]
6460 === Disable a recording event rule
6462 To disable a <<event,recording event rule>> that you
6463 <<enabling-disabling-events,created>> previously, use the
6464 man:lttng-disable-event(1) command.
6466 man:lttng-disable-event(1) can only find recording event rules to
6467 disable by their <<instrumentation-point-types,instrumentation point
6468 type>> and event name conditions. Therefore, you cannot disable
6469 recording event rules having a specific instrumentation point log level
6470 condition, for example.
6472 LTTng doesn't emit (and, therefore, won't record) an event which only
6473 _disabled_ recording event rules match.
6475 .Disable event rules matching Python logging events from the `my-logger` logger (default <<channel,channel>>, <<cur-tracing-session,current recording session>>).
6479 $ lttng disable-event --python my-logger
6483 .Disable event rules matching all `java.util.logging` events (default channel, recording session `my-session`).
6487 $ lttng disable-event --jul --session=my-session '*'
6491 .Disable _all_ the Linux kernel recording event rules (channel `my-chan`, current recording session).
6493 The opt:lttng-disable-event(1):--all-events option isn't, like the
6494 opt:lttng-enable-event(1):--all option of the man:lttng-enable-event(1)
6495 command, an alias for the event name globbing pattern `*`: it disables
6496 _all_ the recording event rules of a given channel.
6500 # lttng disable-event --kernel --channel=my-chan --all-events
6504 NOTE: You can't _remove_ a recording event rule once you create it.
6508 === Get the status of a recording session
6510 To get the status of the <<cur-tracing-session,current recording
6511 session>>, that is, its parameters, its channels, recording event rules,
6512 and their attributes:
6514 * Use the man:lttng-status(1) command:
6523 To get the status of any recording session:
* Use the man:lttng-list(1) command with the name of the recording
session:
6531 $ lttng list SESSION
6535 Replace +__SESSION__+ with the recording session name.
6538 [[basic-tracing-session-control]]
6539 === Start and stop a recording session
6541 Once you <<creating-destroying-tracing-sessions,create a recording
6542 session>> and <<enabling-disabling-events,create one or more recording
6543 event rules>>, you can start and stop the tracers for this recording
6546 To start the <<cur-tracing-session,current recording session>>:
6548 * Use the man:lttng-start(1) command:
6557 LTTng is flexible: you can launch user applications before or after you
6558 start the tracers. An LTTng tracer only <<event,records an event>> if a
6559 recording event rule matches it, which means the tracer is active.
The `start-session` <<trigger,trigger>> action can also start a recording
session.
6564 To stop the current recording session:
6566 * Use the man:lttng-stop(1) command:
6575 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6576 records>> or lost sub-buffers since the last time you ran
man:lttng-start(1), the man:lttng-stop(1) command prints corresponding
warning messages.
6580 IMPORTANT: You need to stop recording to make LTTng flush the remaining
6581 trace data and make the trace readable. Note that the
6582 man:lttng-destroy(1) command (see
6583 ``<<creating-destroying-tracing-sessions,Create and destroy a recording
6584 session>>'') also runs the man:lttng-stop(1) command implicitly.
The `stop-session` <<trigger,trigger>> action can also stop a recording
session.
6591 === Clear a recording session
6593 You might need to remove all the current tracing data of one or more
6594 <<tracing-session,recording sessions>> between multiple attempts to
6595 reproduce a problem without interrupting the LTTng recording activity.
6597 To clear the tracing data of the
6598 <<cur-tracing-session,current recording session>>:
6600 * Use the man:lttng-clear(1) command:
6609 To clear the tracing data of all the recording sessions:
* Use the `lttng clear` command with its opt:lttng-clear(1):--all
option:
6622 [[enabling-disabling-channels]]
6623 === Create a channel
6625 Once you <<creating-destroying-tracing-sessions,create a recording
6626 session>>, you can create a <<channel,channel>> with the
6627 man:lttng-enable-channel(1) command.
6629 Note that LTTng can automatically create a default channel when you
6630 <<enabling-disabling-events,create a recording event rule>>.
Therefore, you only need to create a channel when you need non-default
attributes.
6634 Specify each non-default channel attribute with a command-line
6635 option when you run the man:lttng-enable-channel(1) command.
6637 You can only create a custom channel in the Linux kernel and user space
6638 <<domain,tracing domains>>: the Java/Python logging tracing domains have
6639 their own default channel which LTTng automatically creates when you
6640 <<enabling-disabling-events,create a recording event rule>>.
6644 As of LTTng{nbsp}{revision}, you may _not_ perform the
6645 following operations with the man:lttng-enable-channel(1) command:
6647 * Change an attribute of an existing channel.
6649 * Enable a disabled channel once its recording session has been
6650 <<basic-tracing-session-control,active>> at least once.
* Create a channel once its recording session has been active at
least once.
6655 * Create a user space channel with a given
6656 <<channel-buffering-schemes,buffering scheme>> and create a second
user space channel with a different buffering scheme in the same
recording session.
6661 The following examples show how to combine the command-line options of
6662 the man:lttng-enable-channel(1) command to create simple to more complex
6663 channels within the <<cur-tracing-session,current recording session>>.
6665 .Create a Linux kernel channel with default attributes.
6669 # lttng enable-channel --kernel my-channel
.Create a user space channel with four sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
6677 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6678 --buffers-pid my-channel
6682 .[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
6684 <<creating-destroying-tracing-sessions,Create a recording session>>,
6685 create the channel, <<enabling-disabling-events,create a recording event
6686 rule>>, and <<basic-tracing-session-control,start recording>>:
6691 $ lttng enable-channel --userspace --blocking-timeout=inf blocking-chan
6692 $ lttng enable-event --userspace --channel=blocking-chan --all
Run an application instrumented with LTTng-UST tracepoints and allow it
to block:
6701 $ LTTNG_UST_ALLOW_BLOCKING=1 my-app
6705 .Create a Linux kernel channel which rotates eight trace files of 4{nbsp}MiB each for each stream.
6709 # lttng enable-channel --kernel --tracefile-count=8 \
6710 --tracefile-size=4194304 my-channel
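The `--tracefile-size` value above is given in bytes; a quick check
that 4194304 is indeed 4{nbsp}MiB:

```shell
# 4 MiB expressed in bytes: 4 × 1024 × 1024.
echo $((4 * 1024 * 1024))
# Prints 4194304.
```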
6714 .Create a user space channel in <<overwrite-mode,overwrite>> (or ``flight recorder'') mode.
6718 $ lttng enable-channel --userspace --overwrite my-channel
6722 .<<enabling-disabling-events,Create>> the same <<event,recording event rule>> attached to two different channels.
6726 $ lttng enable-event --userspace --channel=my-channel app:tp
6727 $ lttng enable-event --userspace --channel=other-channel app:tp
6730 When a CPU executes the `app:tp` <<c-application,user space
6731 tracepoint>>, the two recording event rules above match the created
6732 event, making LTTng emit the event. Because the recording event rules
6733 are not attached to the same channel, LTTng records the event twice.
6738 === Disable a channel
6740 To disable a specific channel that you
6741 <<enabling-disabling-channels,created>> previously, use the
6742 man:lttng-disable-channel(1) command.
6744 .Disable a specific Linux kernel channel (<<cur-tracing-session,current recording session>>).
6748 # lttng disable-channel --kernel my-channel
6752 An enabled channel is an implicit <<event,recording event rule>>
6755 NOTE: As of LTTng{nbsp}{revision}, you may _not_ enable a disabled
6756 channel once its recording session has been
6757 <<basic-tracing-session-control,started>> at least once.
6761 === Add context fields to be recorded to the event records of a channel
6763 <<event,Event record>> fields in trace files provide important
6764 information about previously emitted events, but sometimes some external
6765 context may help you solve a problem faster.
6767 Examples of context fields are:
6769 * The **process ID**, **thread ID**, **process name**, and
6770 **process priority** of the thread from which LTTng emits the event.
6772 * The **hostname** of the system on which LTTng emits the event.
6774 * The Linux kernel and user call stacks (since LTTng{nbsp}2.11).
6776 * The current values of many possible **performance counters** using
6779 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6781 ** Branch instructions, misses, and loads.
6784 * Any state defined at the application level (supported for the
6785 `java.util.logging` and Apache log4j <<domain,tracing domains>>).
6787 To get the full list of available context fields:
6789 * Use the opt:lttng-add-context(1):--list option of the
6790 man:lttng-add-context(1) command:
6794 $ lttng add-context --list
6797 .Add context fields to be recorded to the event records of all the <<channel,channels>> of the <<cur-tracing-session,current recording session>>.
6799 The following command line adds the virtual process identifier and the
6800 per-thread CPU cycles count fields to all the user space channels of the
6801 current recording session.
6805 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6809 .Add performance counter context fields by raw ID
See man:lttng-add-context(1) for the exact format of the context field
type, which is partly compatible with the format used in
man:perf-record(1).
$ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6818 # lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6822 .Add context fields to be recorded to the event records of a specific channel.
6824 The following command line adds the thread identifier and user call
6825 stack context fields to the Linux kernel channel named `my-channel` of
6826 the <<cur-tracing-session,current recording session>>.
6830 # lttng add-context --kernel --channel=my-channel \
6831 --type=tid --type=callstack-user
6835 .Add an <<java-application-context,application-specific context field>> to be recorded to the event records of a specific channel.
6837 The following command line makes sure LTTng writes the `cur_msg_id`
6838 context field of the `retriever` context retriever to all the Java
6839 logging <<event,event records>> of the channel named `my-channel`:
$ lttng add-context --jul --channel=my-channel \
    --type='$app.retriever:cur_msg_id'
6847 IMPORTANT: Make sure to always single-quote the `$` character when you
6848 run man:lttng-add-context(1) from a shell.
6851 NOTE: You can't undo what the man:lttng-add-context(1) command does.
6856 === Allow specific processes to record events
6858 It's often useful to only allow processes with specific attributes to
6859 record events. For example, you may wish to record all the system calls
6860 which a given process makes (à la man:strace(1)).
6862 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6863 purpose. Both commands operate on _inclusion sets_ of process
6864 attributes. The available process attribute types are:
Linux kernel <<domain,tracing domain>>::
+
* Process ID (PID).

* Virtual process ID (VPID).
6872 This is the PID as seen by the application.
6874 * Unix user ID (UID).
6876 * Virtual Unix user ID (VUID).
6878 This is the UID as seen by the application.
6880 * Unix group ID (GID).
6882 * Virtual Unix group ID (VGID).
6884 This is the GID as seen by the application.
User space tracing domain::
+
* Virtual process ID (VPID).

* Virtual Unix user ID (VUID).

* Virtual Unix group ID (VGID).
6892 A <<tracing-session,recording session>> has nine process
6893 attribute inclusion sets: six for the Linux kernel <<domain,tracing domain>>
6894 and three for the user space tracing domain.
6896 For a given recording session, a process{nbsp}__P__ is allowed to record
6897 LTTng events for a given <<domain,tracing domain>>{nbsp}__D__ if _all_
the attributes of{nbsp}__P__ are part of the inclusion sets
of{nbsp}__D__.
6901 Whether a process is allowed or not to record LTTng events is an
6902 implicit condition of all <<event,recording event rules>>. Therefore, if
6903 LTTng creates an event{nbsp}__E__ for a given process, but this process
6904 may not record events, then no recording event rule matches{nbsp}__E__,
6905 which means LTTng won't emit and record{nbsp}__E__.
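The ``all attributes must be included'' condition above can be sketched
in shell; the inclusion sets, the `in_set` helper, and the attributes of
the process{nbsp}__P__ below are all hypothetical illustrations, not an
LTTng interface:

```shell
# Hypothetical inclusion sets for one tracing domain.
vpid_set="3 4 7 10 13"
vuid_set="1000 1001"

# Return success if $1 is a member of the space-separated set $2.
in_set() {
    for member in $2; do
        [ "$member" = "$1" ] && return 0
    done
    return 1
}

# Hypothetical attributes of a process P.
p_vpid=7
p_vuid=1000

# P may record events only if ALL its attributes are part of the
# corresponding inclusion sets.
if in_set "$p_vpid" "$vpid_set" && in_set "$p_vuid" "$vuid_set"; then
    echo "P may record events"
else
    echo "P may not record events"
fi
```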
6907 When you <<creating-destroying-tracing-sessions,create a recording
6908 session>>, all its process attribute inclusion sets contain all the
possible values. In other words, all processes are allowed to record
events.
6912 Add values to an inclusion set with the man:lttng-track(1) command and
6913 remove values with the man:lttng-untrack(1) command.
6917 The process attribute values are _numeric_.
6919 Should a process with a given ID (part of an inclusion set), for
6920 example, exit, and then a new process be given this same ID, then the
6921 latter would also be allowed to record events.
6923 With the man:lttng-track(1) command, you can add Unix user and group
6924 _names_ to the user and group inclusion sets: the
6925 <<lttng-sessiond,session daemon>> finds the corresponding UID, VUID,
6926 GID, or VGID once on _addition_ to the inclusion set. This means that if
6927 you rename the user or group after you run the man:lttng-track(1)
6928 command, its user/group ID remains part of the inclusion sets.
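The name-to-ID resolution is a one-shot lookup, similar to what
man:id(1) does; here's a sketch using the `root` user (any existing
user name works):

```shell
# Resolve a user name to its numeric UID once, like the session
# daemon does when you add a name to an inclusion set.
user_name=root
uid="$(id -u "$user_name")"

# From this point on, only the numeric value matters: renaming the
# user wouldn't change the stored ID.
echo "tracking UID $uid"
```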
6931 .Allow processes to record events based on their virtual process ID (VPID).
6933 For the sake of the following example, assume the target system has
6934 16{nbsp}possible VPIDs.
6937 <<creating-destroying-tracing-sessions,create a recording session>>,
6938 the user space VPID inclusion set contains _all_ the possible VPIDs:
6941 .The VPID inclusion set is full.
6942 image::track-all.png[]
6944 When the inclusion set is full and you run the man:lttng-track(1)
6945 command to specify some VPIDs, LTTng:
6947 . Clears the inclusion set.
6948 . Adds the specific VPIDs to the inclusion set.
6954 $ lttng track --userspace --vpid=3,4,7,10,13
6957 the VPID inclusion set is:
6960 .The VPID inclusion set contains the VPIDs 3, 4, 7, 10, and 13.
6961 image::track-3-4-7-10-13.png[]
6963 Add more VPIDs to the inclusion set afterwards:
6967 $ lttng track --userspace --vpid=1,15,16
6973 .VPIDs 1, 15, and 16 are added to the inclusion set.
6974 image::track-1-3-4-7-10-13-15-16.png[]
6976 The man:lttng-untrack(1) command removes entries from process attribute
6977 inclusion sets. Given the previous example, the following command:
6981 $ lttng untrack --userspace --vpid=3,7,10,13
6984 leads to this VPID inclusion set:
6987 .VPIDs 3, 7, 10, and 13 are removed from the inclusion set.
6988 image::track-1-4-15-16.png[]
6990 You can make the VPID inclusion set full again with the
6991 opt:lttng-track(1):--all option:
6995 $ lttng track --userspace --vpid --all
6998 The result is, again:
7001 .The VPID inclusion set is full.
7002 image::track-all.png[]
7005 .Allow specific processes to record events based on their user ID (UID).
7007 A typical use case with process attribute inclusion sets is to start
7008 with an empty inclusion set, then <<basic-tracing-session-control,start
7009 the tracers>>, and finally add values manually while the tracers are
7012 Use the opt:lttng-untrack(1):--all option of the
7013 man:lttng-untrack(1) command to clear the inclusion set after you
7014 <<creating-destroying-tracing-sessions,create a recording session>>, for
7015 example (with UIDs):
7019 # lttng untrack --kernel --uid --all
7025 .The UID inclusion set is empty.
7026 image::untrack-all.png[]
7028 If the LTTng tracer runs with this inclusion set configuration, it
7029 records no events within the <<cur-tracing-session,current recording
session>> because no process is allowed to do so. Use the
7031 man:lttng-track(1) command as usual to add specific values to the UID
7032 inclusion set when you need to, for example:
7036 # lttng track --kernel --uid=http,11
7042 .UIDs 6 (`http`) and 11 are part of the UID inclusion set.
7043 image::track-6-11.png[]
7048 [[saving-loading-tracing-session]]
7049 === Save and load recording session configurations
7051 Configuring a <<tracing-session,recording session>> can be long. Some of
7052 the tasks involved are:
7054 * <<enabling-disabling-channels,Create channels>> with
7055 specific attributes.
7057 * <<adding-context,Add context fields>> to be recorded to the
7058 <<event,event records>> of specific channels.
7060 * <<enabling-disabling-events,Create recording event rules>> with
7061 specific log level, filter, and other conditions.
7063 If you use LTTng to solve real world problems, chances are you have to
7064 record events using the same recording session setup over and over,
modifying a few variables each time in your instrumented program or
environment.
7068 To avoid constant recording session reconfiguration, the man:lttng(1)
7069 command-line tool can save and load recording session configurations
7072 To save a given recording session configuration:
7074 * Use the man:lttng-save(1) command:
7079 $ lttng save SESSION
7083 Replace +__SESSION__+ with the name of the recording session to save.
7085 LTTng saves recording session configurations to
7086 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
7087 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. See
man:lttng-save(1) to learn more about the recording session configuration
output path.
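Assuming the default output path, the configuration file of a saved
recording session named `my-session` would sit here; this is only a
sketch, and the `my-session.lttng` file name is an assumption (see
man:lttng-save(1) for the authoritative naming rules):

```shell
# Default recording session configuration directory:
# $LTTNG_HOME (falling back to $HOME) + /.lttng/sessions.
lttng_home="${LTTNG_HOME:-$HOME}"
sessions_dir="$lttng_home/.lttng/sessions"

# Assumed file name for a session saved as "my-session".
echo "$sessions_dir/my-session.lttng"
```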
7091 LTTng saves all configuration parameters, for example:
7093 * The recording session name.
7094 * The trace data output path.
7095 * The <<channel,channels>>, with their state and all their attributes.
7096 * The context fields you added to channels.
7097 * The <<event,recording event rules>> with their state and conditions.
7099 To load a recording session:
7101 * Use the man:lttng-load(1) command:
7106 $ lttng load SESSION
7110 Replace +__SESSION__+ with the name of the recording session to load.
7112 When LTTng loads a configuration, it restores your saved recording session
7113 as if you just configured it manually.
7115 You can also save and load many sessions at a time; see
7116 man:lttng-save(1) and man:lttng-load(1) to learn more.
[[sending-trace-data-over-the-network]]
=== Send trace data over the network

LTTng can send the recorded trace data of a <<tracing-session,recording
session>> to a remote system over the network instead of writing it to
the local file system.

To send the trace data over the network:

. On the _remote_ system (which can also be the target system),
start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
+
[role="term"]
----
$ lttng-relayd
----

. On the _target_ system, create a recording session
<<net-streaming-mode,configured>> to send trace data over the network:
+
[role="term"]
----
$ lttng create my-session --set-url=net://remote-system
----
+
Replace +__remote-system__+ with the host name or IP address of the
remote system. See man:lttng-create(1) for the exact URL format.

. On the target system, use the man:lttng(1) command-line tool as usual.

When recording is <<basic-tracing-session-control,active>>, the
<<lttng-consumerd,consumer daemon>> of the target sends the contents of
<<channel,sub-buffers>> to the remote relay daemon instead of flushing
them to the local file system. The relay daemon writes the received
packets to its local file system.

See the ``Output directory'' section of man:lttng-relayd(8) to learn
where a relay daemon writes its received trace data.
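To make the URL side of this setup concrete, here's a small standalone C{nbsp}sketch (illustrative only, not LTTng code): it extracts the host part of a `net://` URL and pairs it with the default relay daemon listening ports. The `parse_net_url()` helper and the port constants are assumptions based on the defaults documented in man:lttng-relayd(8); the real URL grammar (explicit ports, `tcp://`, and so on) is in man:lttng-create(1).

[source,c]
----
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Assumed default lttng-relayd listening ports (see man:lttng-relayd(8)) */
#define DEFAULT_CONTROL_PORT 5342
#define DEFAULT_DATA_PORT    5343

/*
 * Copies the host part of a `net://HOST` URL into `host` (of size
 * `host_size`). Returns 0 on success, -1 if `url` doesn't start with
 * the `net://` scheme.
 */
static int parse_net_url(const char *url, char *host, size_t host_size)
{
    static const char prefix[] = "net://";

    if (strncmp(url, prefix, sizeof(prefix) - 1) != 0) {
        return -1;
    }

    snprintf(host, host_size, "%s", url + sizeof(prefix) - 1);
    return 0;
}

int main(void)
{
    char host[64];

    assert(parse_net_url("net://remote-system", host, sizeof(host)) == 0);
    assert(strcmp(host, "remote-system") == 0);
    assert(parse_net_url("file:///tmp/trace", host, sizeof(host)) == -1);
    printf("relay daemon at %s (control port %d, data port %d)\n",
        host, DEFAULT_CONTROL_PORT, DEFAULT_DATA_PORT);
    return 0;
}
----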
[[lttng-live]]
=== View events as LTTng records them (noch:{LTTng} live)

_LTTng live_ is a network protocol implemented by the
<<lttng-relayd,relay daemon>> (man:lttng-relayd(8)) to allow compatible
trace readers to display or analyze <<event,event records>> as LTTng
records events on the target system while recording is
<<basic-tracing-session-control,active>>.

The relay daemon creates a _tee_: it forwards the trace data to both the
local file system and to connected live readers:

.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live reader.
image::live.png[]

To use LTTng live:

. On the _target system_, create a <<tracing-session,recording session>>
in _live mode_:
+
[role="term"]
----
$ lttng create my-session --live
----
+
This operation spawns a local relay daemon.

. Start the live reader and configure it to connect to the relay daemon.
+
For example, with man:babeltrace2(1):
+
[role="term"]
----
$ babeltrace2 net://localhost/host/HOSTNAME/my-session
----
+
Replace +__HOSTNAME__+ with the host name of the target system.

. Configure the recording session as usual with the man:lttng(1)
command-line tool, and <<basic-tracing-session-control,start recording>>.

List the available live recording sessions with man:babeltrace2(1):

[role="term"]
----
$ babeltrace2 net://localhost
----

You can start the relay daemon on another system. In this case, you need
to specify the URL of the relay daemon when you
<<creating-destroying-tracing-sessions,create the recording session>>
with the opt:lttng-create(1):--set-url option of the man:lttng-create(1)
command. You also need to replace +__localhost__+ in the procedure above
with the host name of the system on which the relay daemon runs.
[[taking-a-snapshot]]
=== Take a snapshot of the current sub-buffers of a recording session

The normal behavior of LTTng is to append full sub-buffers to growing
trace data files. This is ideal to keep a full history of the events
which the target system emitted, but it can represent too much data in
some situations.

For example, you may wish to have LTTng record your application
continuously until some critical situation happens, in which case you
only need the latest few recorded events to perform the desired
analysis, not multi-gigabyte trace files.

With the man:lttng-snapshot(1) command, you can take a _snapshot_ of the
current <<channel,sub-buffers>> of a given <<tracing-session,recording
session>>. LTTng can write the snapshot to the local file system or send
it over the network.

.A snapshot is a copy of the current sub-buffers, which LTTng does _not_ clear after the operation.
image::snapshot.png[]

The snapshot feature of LTTng is similar to how a
https://en.wikipedia.org/wiki/Flight_recorder[flight recorder] or the
``roll'' mode of an oscilloscope works.
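The flight recorder analogy can be sketched with a toy ring buffer in C (a conceptual model only; real LTTng sub-buffers work at the packet level): the writer keeps overwriting the oldest records, and a snapshot is a non-destructive copy of what's currently there.

[source,c]
----
#include <assert.h>
#include <stdio.h>

/*
 * Toy model of a sub-buffer ring in overwrite mode: the writer never
 * stops, the oldest records get overwritten, and a "snapshot" is just
 * a copy of the current contents, oldest record first.
 */
#define RING_SIZE 4

struct ring {
    int records[RING_SIZE];
    unsigned long count; /* total records ever produced */
};

static void ring_write(struct ring *ring, int record)
{
    ring->records[ring->count % RING_SIZE] = record;
    ring->count++;
}

/* Copies the current contents into `out`; returns the record count. */
static unsigned long ring_snapshot(const struct ring *ring, int *out)
{
    unsigned long n = ring->count < RING_SIZE ? ring->count : RING_SIZE;
    unsigned long first = ring->count - n;
    unsigned long i;

    for (i = 0; i < n; i++) {
        out[i] = ring->records[(first + i) % RING_SIZE];
    }

    return n;
}

int main(void)
{
    struct ring ring = { .count = 0 };
    int snap[RING_SIZE];
    int i;

    /* Produce 10 records; only the last 4 survive in the ring */
    for (i = 1; i <= 10; i++) {
        ring_write(&ring, i);
    }

    assert(ring_snapshot(&ring, snap) == RING_SIZE);
    assert(snap[0] == 7 && snap[3] == 10);

    /* Taking a snapshot doesn't clear the ring: same result again */
    assert(ring_snapshot(&ring, snap) == RING_SIZE);
    assert(snap[0] == 7);
    puts("snapshot = latest records, ring left untouched");
    return 0;
}
----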
TIP: If you wish to create unmanaged, self-contained, non-overlapping
trace chunk archives instead of a simple copy of the current
sub-buffers, see the <<session-rotation,recording session rotation>>
feature (available since LTTng{nbsp}2.11).

To take a snapshot of the <<cur-tracing-session,current recording
session>>:

. Create a recording session in <<snapshot-mode,snapshot mode>>:
+
[role="term"]
----
$ lttng create my-session --snapshot
----
+
The <<channel-overwrite-mode-vs-discard-mode,event record loss mode>> of
<<channel,channels>> created in this mode is automatically set to
<<overwrite-mode,_overwrite_>>.

. Configure the recording session as usual with the man:lttng(1)
command-line tool, and <<basic-tracing-session-control,start
recording>>.

. **Optional**: When you need to take a snapshot,
<<basic-tracing-session-control,stop recording>>.
+
You can take a snapshot when the tracers are active, but if you stop
them first, you're guaranteed that the trace data in the sub-buffers
doesn't change before you actually take the snapshot.

. Take a snapshot:
+
[role="term"]
----
$ lttng snapshot record --name=my-first-snapshot
----
+
LTTng writes the current sub-buffers of all the channels of the
<<cur-tracing-session,current recording session>> to
trace files on the local file system. Those trace files have
`my-first-snapshot` in their name.

There's no difference between the format of a normal trace file and the
format of a snapshot: LTTng trace readers also support LTTng snapshots.

By default, LTTng writes snapshot files to the path shown by the
following command:

[role="term"]
----
$ lttng snapshot list-output
----

You can change this path or decide to send snapshots over the network
using either:

. An output path or URL that you specify when you
<<creating-destroying-tracing-sessions,create the recording session>>.

. A snapshot output path or URL that you add using the
`add-output` action of the man:lttng-snapshot(1) command.

. An output path or URL that you provide directly to the
`record` action of the man:lttng-snapshot(1) command.

Method{nbsp}3 overrides method{nbsp}2, which overrides method{nbsp}1.
When you specify a URL, a <<lttng-relayd,relay daemon>> must listen on a
remote system (see ``<<sending-trace-data-over-the-network,Send trace
data over the network>>'').

The `snapshot-session` <<trigger,trigger>> action can also take
a recording session snapshot.
[[session-rotation]]
=== Archive the current trace chunk (rotate a recording session)

The <<taking-a-snapshot,snapshot user guide>> shows how to dump the
current sub-buffers of a recording session to the file system or send
them over the network. When you take a snapshot, LTTng doesn't clear the
ring buffers of the recording session: if you take another snapshot
immediately after, both snapshots could contain overlapping trace data.

Inspired by https://en.wikipedia.org/wiki/Log_rotation[log rotation],
_recording session rotation_ is a feature which appends the content of
the ring buffers to what's already on the file system or sent over the
network since the creation of the recording session or since the last
rotation, and then clears those ring buffers to avoid trace data
overlaps.

What LTTng is about to write when performing a recording session rotation
is called the _current trace chunk_. When LTTng writes this current trace
chunk to the file system or sends it over the network, it becomes a
_trace chunk archive_. Therefore, a recording session rotation operation
_archives_ the current trace chunk.

.A recording session rotation operation _archives_ the current trace chunk.
image::rotation.png[]

A trace chunk archive is a self-contained LTTng trace which LTTng
doesn't manage anymore: you can read it, modify it, move it, or remove
it.

As of LTTng{nbsp}{revision}, there are three methods to perform a
recording session rotation:

* <<immediate-rotation,Immediately>>.

* With a <<rotation-schedule,rotation schedule>>.

* Through the execution of a `rotate-session` <<trigger,trigger>>
action.

[[immediate-rotation]]To perform an immediate rotation of the
<<cur-tracing-session,current recording session>>:

. <<creating-destroying-tracing-sessions,Create a recording session>> in
<<local-mode,local mode>> or <<net-streaming-mode,network streaming
mode>> (only those two recording session modes support recording session
rotation):
+
[role="term"]
----
# lttng create my-session
----

. <<enabling-disabling-events,Create one or more recording event rules>>
and <<basic-tracing-session-control,start recording>>:
+
[role="term"]
----
# lttng enable-event --kernel sched_'*'
# lttng start
----

. When needed, immediately rotate the current recording session:
+
[role="term"]
----
# lttng rotate
----
+
The man:lttng-rotate(1) command prints the path to the created trace
chunk archive. See its manual page to learn about the format of trace
chunk archive directory names.
+
Perform other immediate rotations while the recording session is active.
LTTng guarantees that the resulting trace chunk archives don't contain
overlapping trace data. You can also perform an immediate rotation once
you have <<basic-tracing-session-control,stopped>> the recording session.

. When you're done recording,
<<creating-destroying-tracing-sessions,destroy the current recording
session>>:
+
[role="term"]
----
# lttng destroy
----
+
The recording session destruction operation creates one last trace chunk
archive from the current trace chunk.
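For illustration, here's a hypothetical, standalone C{nbsp}helper (not LTTng code) which splits a trace chunk archive directory name, assuming the `<begin>-<end>-<id>` layout with `YYYYMMDDTHHMMSS±HHMM` timestamps described in man:lttng-rotate(1):

[source,c]
----
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* A `YYYYMMDDTHHMMSS±HHMM` timestamp is exactly 20 characters long */
#define TS_LEN 20

/*
 * Splits an assumed `<begin>-<end>-<id>` trace chunk archive directory
 * name into its begin timestamp, end timestamp, and numeric ID.
 * `begin` and `end_ts` must each hold at least TS_LEN + 1 bytes.
 * Returns 0 on success, -1 on parse failure.
 */
static int parse_chunk_name(const char *name, char *begin, char *end_ts,
        unsigned int *id)
{
    if (strlen(name) < 2 * TS_LEN + 3 || name[TS_LEN] != '-' ||
            name[2 * TS_LEN + 1] != '-') {
        return -1;
    }

    memcpy(begin, name, TS_LEN);
    begin[TS_LEN] = '\0';
    memcpy(end_ts, name + TS_LEN + 1, TS_LEN);
    end_ts[TS_LEN] = '\0';

    if (sscanf(name + 2 * TS_LEN + 2, "%u", id) != 1) {
        return -1;
    }

    return 0;
}

int main(void)
{
    char begin[TS_LEN + 1], end_ts[TS_LEN + 1];
    unsigned int id;

    assert(parse_chunk_name(
        "20190402T005818+0100-20190402T005938+0100-3",
        begin, end_ts, &id) == 0);
    assert(strcmp(begin, "20190402T005818+0100") == 0);
    assert(strcmp(end_ts, "20190402T005938+0100") == 0);
    assert(id == 3);
    printf("chunk #%u: %s -> %s\n", id, begin, end_ts);
    return 0;
}
----

The fixed-width split (rather than scanning for `-`) matters because a timestamp with a negative UTC offset contains a `-` of its own.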
[[rotation-schedule]]A recording session rotation schedule is a planned
rotation which LTTng performs automatically based on one of the
following conditions:

* A timer with a configured period expires.

* The total size of the _flushed_ part of the current trace chunk
becomes greater than or equal to a configured value.

To schedule a rotation of the <<cur-tracing-session,current recording
session>>, set a _rotation schedule_:

. <<creating-destroying-tracing-sessions,Create a recording session>> in
<<local-mode,local mode>> or <<net-streaming-mode,network streaming
mode>> (only those two creation modes support recording session
rotation):
+
[role="term"]
----
# lttng create my-session
----

. <<enabling-disabling-events,Create one or more recording event rules>>:
+
[role="term"]
----
# lttng enable-event --kernel sched_'*'
----

. Set a recording session rotation schedule:
+
[role="term"]
----
# lttng enable-rotation --timer=10s
----
+
In this example, we set a rotation schedule so that LTTng performs a
recording session rotation every ten seconds.
+
See man:lttng-enable-rotation(1) to learn more about other ways to set a
rotation schedule.

. <<basic-tracing-session-control,Start recording>>:
+
[role="term"]
----
# lttng start
----
+
LTTng performs recording session rotations automatically while the
recording session is active thanks to the rotation schedule.

. When you're done recording,
<<creating-destroying-tracing-sessions,destroy the current recording
session>>:
+
[role="term"]
----
# lttng destroy
----
+
The recording session destruction operation creates one last trace chunk
archive from the current trace chunk.

Unset a recording session rotation schedule with the
man:lttng-disable-rotation(1) command.
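The two schedule conditions above can be modeled with a few lines of C (a conceptual sketch, not the actual session daemon logic): a rotation is due when the periodic timer expires or when the flushed size reaches the configured threshold, whichever happens first.

[source,c]
----
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the rotation schedule conditions: LTTng rotates when
 * the periodic timer expires or when the flushed part of the current
 * trace chunk reaches a size threshold. A zero value disables the
 * corresponding condition.
 */
struct schedule {
    unsigned long long timer_period_us; /* 0: no timer condition */
    unsigned long long size_threshold;  /* 0: no size condition */
};

static bool should_rotate(const struct schedule *sched,
        unsigned long long elapsed_us, unsigned long long flushed_size)
{
    if (sched->timer_period_us > 0 &&
            elapsed_us >= sched->timer_period_us) {
        return true;
    }

    if (sched->size_threshold > 0 &&
            flushed_size >= sched->size_threshold) {
        return true;
    }

    return false;
}

int main(void)
{
    /* Like `lttng enable-rotation --timer=10s` */
    const struct schedule timer_sched = { .timer_period_us = 10000000 };

    /* Like `lttng enable-rotation --size=16M` */
    const struct schedule size_sched = {
        .size_threshold = 16 * 1024 * 1024,
    };

    assert(!should_rotate(&timer_sched, 9000000, 0));
    assert(should_rotate(&timer_sched, 10000000, 0));
    assert(!should_rotate(&size_sched, 0, 1024));
    assert(should_rotate(&size_sched, 0, 16 * 1024 * 1024));
    puts("rotation conditions behave as scheduled");
    return 0;
}
----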
[[add-event-rule-matches-trigger]]
=== Add an ``event rule matches'' trigger to a session daemon

With the man:lttng-add-trigger(1) command, you can add a
<<trigger,trigger>> to a <<lttng-sessiond,session daemon>>.

A trigger associates an LTTng tracing condition to one or more actions:
when the condition is satisfied, LTTng attempts to execute the actions.

A trigger doesn't need any <<tracing-session,recording session>> to exist:
it belongs to a session daemon.

As of LTTng{nbsp}{revision}, many condition types are available through
the <<liblttng-ctl-lttng,`liblttng-ctl`>> C{nbsp}API, but the
man:lttng-add-trigger(1) command only accepts the ``event rule matches''
condition type.

An ``event rule matches'' condition is satisfied when its event rule
matches an event.

Unlike a <<event,recording event rule>>, the event rule of an
``event rule matches'' trigger condition has no implicit conditions,
that is:

* It has no enabled/disabled state.
* It has no attached <<channel,channel>>.
* It doesn't belong to a <<tracing-session,recording session>>.

Both the man:lttng-add-trigger(1) and man:lttng-enable-event(1) commands
accept command-line arguments to specify an <<event-rule,event rule>>.
That being said, the former is a more recent command and therefore
follows the common event rule specification format (see
man:lttng-event-rule(7)).
.Start a <<tracing-session,recording session>> when an event rule matches.
This example shows how to add the following trigger to the root
<<lttng-sessiond,session daemon>>:

Condition::
    An event rule matches a Linux kernel system call event of which the
    name starts with `exec` and `*/ls` matches the `filename` payload
    field.
+
With such an event rule, LTTng emits an event when the cmd:ls program
starts.

Action::
    <<basic-tracing-session-control,Start the recording session>> named
    `pitou`.

To add such a trigger to the root session daemon:

. **If there's no currently running LTTng root session daemon**, start
one:
+
[role="term"]
----
# lttng-sessiond --daemonize
----

. <<creating-destroying-tracing-sessions,Create a recording session>>
named `pitou` and
<<enabling-disabling-events,create a recording event rule>> matching
all the system call events:
+
[role="term"]
----
# lttng create pitou
# lttng enable-event --kernel --syscall --all
----

. Add the trigger to the root session daemon:
+
[role="term"]
----
# lttng add-trigger --condition=event-rule-matches \
                    --type=syscall --name='exec*' \
                    --filter='filename == "*/ls"' \
                    --action=start-session pitou
----
+
Confirm that the trigger exists with the man:lttng-list-triggers(1)
command:
+
[role="term"]
----
# lttng list-triggers
----

. Make sure the `pitou` recording session is still inactive (stopped):
+
[role="term"]
----
# lttng list pitou
----
+
The first line should be something like:
+
----
Recording session pitou: [inactive]
----

. Run the cmd:ls program to fire the LTTng trigger above:
+
[role="term"]
----
$ ls
----
+
At this point, the `pitou` recording session should be active
(started). Confirm this with the man:lttng-list(1) command again:
+
[role="term"]
----
# lttng list pitou
----
+
The first line should now look like:
+
----
Recording session pitou: [active]
----
+
This line confirms that the LTTng trigger you added fired, therefore
starting the `pitou` recording session.
.[[trigger-event-notif]]Send a notification to a user application when an event rule matches.
This example shows how to add the following trigger to the root
<<lttng-sessiond,session daemon>>:

Condition::
    An event rule matches a Linux kernel tracepoint event named
    `sched_switch` and of which the value of the `next_comm` payload
    field is `bash`.
+
With such an event rule, LTTng emits an event when Linux gives access to
the processor to a process named `bash`.

Action::
    Send an LTTng notification to a user application.

Moreover, we'll specify a _capture descriptor_ with the
`event-rule-matches` trigger condition so that the user application can
get the value of a specific `sched_switch` event payload field.
First, write and build the user application:

. Create the C{nbsp}source file of the application:
+
[source,c]
.path:{notif-app.c}
----
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lttng/lttng.h>

/*
 * Subscribes to notifications, through the notification channel
 * `notification_channel`, which match the condition of the trigger
 * named `trigger_name`.
 *
 * Returns `true` on success.
 */
static bool subscribe(struct lttng_notification_channel *notification_channel,
        const char *trigger_name)
{
    struct lttng_triggers *triggers = NULL;
    unsigned int trigger_count;
    unsigned int i;
    enum lttng_error_code error_code;
    enum lttng_trigger_status trigger_status;
    bool subscribed = false;

    /* Get all LTTng triggers */
    error_code = lttng_list_triggers(&triggers);
    assert(error_code == LTTNG_OK);

    /* Get the number of triggers */
    trigger_status = lttng_triggers_get_count(triggers, &trigger_count);
    assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);

    /* Find the trigger named `trigger_name` */
    for (i = 0; i < trigger_count; i++) {
        const struct lttng_trigger *trigger;
        const char *this_trigger_name;

        trigger = lttng_triggers_get_at_index(triggers, i);
        trigger_status = lttng_trigger_get_name(trigger, &this_trigger_name);
        assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);

        if (strcmp(this_trigger_name, trigger_name) == 0) {
            /* Trigger found: subscribe with its condition */
            enum lttng_notification_channel_status notification_channel_status;

            notification_channel_status = lttng_notification_channel_subscribe(
                notification_channel,
                lttng_trigger_get_const_condition(trigger));
            assert(notification_channel_status ==
                LTTNG_NOTIFICATION_CHANNEL_STATUS_OK);
            subscribed = true;
            break;
        }
    }

    lttng_triggers_destroy(triggers);
    return subscribed;
}

/*
 * Handles the evaluation `evaluation` of a single notification.
 */
static void handle_evaluation(const struct lttng_evaluation *evaluation)
{
    enum lttng_evaluation_status evaluation_status;
    const struct lttng_event_field_value *array_field_value;
    const struct lttng_event_field_value *string_field_value;
    enum lttng_event_field_value_status event_field_value_status;
    const char *string_field_string_value;

    /* Get the value of the first captured (string) field */
    evaluation_status = lttng_evaluation_event_rule_matches_get_captured_values(
        evaluation, &array_field_value);
    assert(evaluation_status == LTTNG_EVALUATION_STATUS_OK);
    event_field_value_status =
        lttng_event_field_value_array_get_element_at_index(
            array_field_value, 0, &string_field_value);
    assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
    assert(lttng_event_field_value_get_type(string_field_value) ==
        LTTNG_EVENT_FIELD_VALUE_TYPE_STRING);
    event_field_value_status = lttng_event_field_value_string_get_value(
        string_field_value, &string_field_string_value);
    assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);

    /* Print the string value of the field */
    puts(string_field_string_value);
}

int main(int argc, char *argv[])
{
    int exit_status = EXIT_SUCCESS;
    struct lttng_notification_channel *notification_channel;
    const char *trigger_name;

    assert(argc >= 2);
    trigger_name = argv[1];

    /*
     * Create a notification channel.
     *
     * A notification channel connects the user application to the LTTng
     * session daemon.
     *
     * You can use this notification channel to listen to various types
     * of notifications.
     */
    notification_channel = lttng_notification_channel_create(
        lttng_session_daemon_notification_endpoint);
    assert(notification_channel);

    /*
     * Subscribe to notifications which match the condition of the
     * trigger named `trigger_name`.
     */
    if (!subscribe(notification_channel, trigger_name)) {
        fprintf(stderr,
            "Error: Failed to subscribe to notifications (trigger `%s`).\n",
            trigger_name);
        exit_status = EXIT_FAILURE;
        goto end;
    }

    /*
     * Notification loop.
     *
     * Put this in a dedicated thread to avoid blocking the main thread.
     */
    for (;;) {
        struct lttng_notification *notification;
        enum lttng_notification_channel_status status;

        /* Receive the next notification */
        status = lttng_notification_channel_get_next_notification(
            notification_channel, &notification);

        switch (status) {
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
            break;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
            /*
             * The session daemon can drop notifications if a receiving
             * application doesn't consume the notifications fast
             * enough.
             */
            continue;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
            /*
             * The session daemon closed the notification channel.
             *
             * This is typically caused by a session daemon shutting
             * down.
             */
            goto end;
        default:
            /* Unhandled conditions or errors */
            exit_status = EXIT_FAILURE;
            goto end;
        }

        /*
         * Handle the condition evaluation.
         *
         * A notification provides, amongst other things:
         *
         * * The condition that caused LTTng to send this notification.
         *
         * * The condition evaluation, which provides more specific
         *   information on the evaluation of the condition.
         */
        handle_evaluation(lttng_notification_get_evaluation(notification));

        /* Destroy the notification object */
        lttng_notification_destroy(notification);
    }

end:
    lttng_notification_channel_destroy(notification_channel);
    return exit_status;
}
----
+
This application prints the first captured string field value of the
condition evaluation of each LTTng notification it receives.

. Build the `notif-app` application,
using https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
to provide the right compiler and linker flags:
+
[role="term"]
----
$ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
----
Now, to add the trigger to the root session daemon:

. **If there's no currently running LTTng root session daemon**, start
one:
+
[role="term"]
----
# lttng-sessiond --daemonize
----

. Add the trigger, naming it `sched-switch-notif`, to the root
session daemon:
+
[role="term"]
----
# lttng add-trigger --name=sched-switch-notif \
                    --condition=event-rule-matches \
                    --type=kernel --name=sched_switch \
                    --filter='next_comm == "bash"' --capture=prev_comm \
                    --action=notify
----
+
Confirm that the `sched-switch-notif` trigger exists with the
man:lttng-list-triggers(1) command:
+
[role="term"]
----
# lttng list-triggers
----

. Run the cmd:notif-app application, passing the name of the trigger
of which to watch the notifications:
+
[role="term"]
----
# ./notif-app sched-switch-notif
----

. Now, in an interactive Bash, type a few keys to fire the
`sched-switch-notif` trigger. Watch the `notif-app` application print
the previous process names.
=== Use the machine interface

With any command of the man:lttng(1) command-line tool, set the
opt:lttng(1):--mi option to `xml` (before the command name) to get an
XML machine interface output, for example:

[role="term"]
----
$ lttng --mi=xml list my-session
----

A schema definition (XSD) is
https://github.com/lttng/lttng-tools/blob/stable-{revision}/src/common/mi-lttng-4.0.xsd[available]
to ease the integration with external tools as much as possible.
[[metadata-regenerate]]
=== Regenerate the metadata of an LTTng trace

An LTTng trace, which is a https://diamon.org/ctf[CTF] trace, has both
data stream files and a metadata stream file. This metadata file
contains, amongst other things, information about the offset of the
clock sources which LTTng uses to assign timestamps to <<event,event
records>> when recording.

If, once a <<tracing-session,recording session>> is
<<basic-tracing-session-control,started>>, a major
https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
happens, the clock offset of the trace also needs to be updated. Use
the `metadata` item of the man:lttng-regenerate(1) command to do so.

The main use case of this command is to allow a system to boot with
an incorrect wall time and have LTTng trace it before its wall time
is corrected. Once the system is known to be in a state where its
wall time is correct, you can run `lttng regenerate metadata`.

To regenerate the metadata stream files of the
<<cur-tracing-session,current recording session>>:

* Use the `metadata` item of the man:lttng-regenerate(1) command:
+
[role="term"]
----
$ lttng regenerate metadata
----
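To see why regenerating the metadata is enough, consider this conceptual C{nbsp}model (not the actual CTF clock arithmetic) of how a reader computes a timestamp from a clock offset and a cycle count: the data streams only store cycle counts, so rewriting the offset in the metadata corrects every timestamp at read time.

[source,c]
----
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of how a trace reader turns free-running clock cycles into
 * wall-time nanoseconds using the offset recorded in the metadata
 * stream: ns = offset_ns + cycles * 1e9 / freq. Regenerating the
 * metadata only rewrites `offset_ns`; the cycle counts recorded in the
 * data streams stay untouched.
 */
static uint64_t cycles_to_ns(uint64_t offset_ns, uint64_t cycles,
        uint64_t freq_hz)
{
    return offset_ns + cycles * UINT64_C(1000000000) / freq_hz;
}

int main(void)
{
    const uint64_t freq_hz = UINT64_C(1000000000); /* 1 GHz clock */
    const uint64_t cycles = UINT64_C(5000);

    /* Booted with a wall time that's wrong by one hour */
    const uint64_t bad_offset_ns = UINT64_C(0);

    /* Offset rewritten after the NTP correction */
    const uint64_t good_offset_ns = UINT64_C(3600000000000);

    /* Same event record, different metadata offsets */
    assert(cycles_to_ns(bad_offset_ns, cycles, freq_hz) == UINT64_C(5000));
    assert(cycles_to_ns(good_offset_ns, cycles, freq_hz) ==
        UINT64_C(3600000000000) + UINT64_C(5000));
    puts("updating the metadata offset shifts every timestamp uniformly");
    return 0;
}
----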
[[regenerate-statedump]]
=== Regenerate the state dump event records of a recording session

The LTTng kernel and user space tracers generate state dump
<<event,event records>> when the application starts or when you
<<basic-tracing-session-control,start a recording session>>.

An analysis can use the state dump event records to set an initial state
before it builds the rest of the state from the subsequent event
records. http://tracecompass.org/[Trace Compass] is a notable
example of an application which uses the state dump of an LTTng trace.

When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
state dump event records aren't included in the snapshot trace files
because they were recorded to a <<channel,sub-buffer>> that has been
consumed or <<overwrite-mode,overwritten>> already.

Use the `statedump` item of the man:lttng-regenerate(1) command to emit
and record the state dump events again.

To regenerate the state dump of the <<cur-tracing-session,current
recording session>>, provided you created it in <<snapshot-mode,snapshot
mode>>, before you take a snapshot:

. Use the `statedump` item of the man:lttng-regenerate(1) command:
+
[role="term"]
----
$ lttng regenerate statedump
----

. <<basic-tracing-session-control,Stop the recording session>>:
+
[role="term"]
----
$ lttng stop
----

. <<taking-a-snapshot,Take a snapshot>>:
+
[role="term"]
----
$ lttng snapshot record --name=my-snapshot
----

Depending on the event throughput, you should run steps{nbsp}1 and{nbsp}2
as close together in time as possible.

To record the state dump events, you need to
<<enabling-disabling-events,create recording event rules>> which enable
them:

* The names of LTTng-UST state dump tracepoints start with
`lttng_ust_statedump:`.

* The names of LTTng-modules state dump tracepoints start with
`lttng_statedump_`.
[[persistent-memory-file-systems]]
=== Record trace data on persistent memory file systems

https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile
random-access memory] (NVRAM) is random-access memory that retains its
information when power is turned off (non-volatile). Systems with such
memory can store data structures in RAM and retrieve them after a
reboot, without flushing to typical _storage_.

Linux supports NVRAM file systems thanks to either
https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
(requires Linux{nbsp}4.1+) or http://pramfs.sourceforge.net/[PRAMFS] (requires Linux{nbsp}<{nbsp}4).

This section doesn't describe how to operate such file systems; we
assume that you have a working persistent memory file system.

When you <<creating-destroying-tracing-sessions,create a recording
session>>, you can specify the path of the shared memory holding the
sub-buffers. If you specify a location on an NVRAM file system, then you
can retrieve the latest recorded trace data when the system reboots
after a crash.

To record trace data on a persistent memory file system and retrieve the
trace data after a system crash:

. Create a recording session with a <<channel,sub-buffer>> shared memory
path located on an NVRAM file system:
+
[role="term"]
----
$ lttng create my-session --shm-path=/path/to/shm/on/nvram
----

. Configure the recording session as usual with the man:lttng(1)
command-line tool, and <<basic-tracing-session-control,start
recording>>.

. After a system crash, use the man:lttng-crash(1) command-line tool to
read the trace data recorded on the NVRAM file system:
+
[role="term"]
----
$ lttng-crash /path/to/shm/on/nvram
----

The binary layout of the ring buffer files isn't exactly the same as the
trace files layout. This is why you need to use man:lttng-crash(1)
instead of some standard LTTng trace reader.

To convert the ring buffer files to LTTng trace files:

* Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
+
[role="term"]
----
$ lttng-crash --extract=/path/to/trace /path/to/shm/on/nvram
----
[[notif-trigger-api]]
=== Get notified when the buffer usage of a channel is too high or too low

With the notification and <<trigger,trigger>> C{nbsp}API of
<<liblttng-ctl-lttng,`liblttng-ctl`>>, LTTng can notify your user
application when the buffer usage of one or more <<channel,channels>>
becomes too low or too high.

Use this API and enable or disable <<event,recording event rules>> while
a recording session <<basic-tracing-session-control,is active>> to avoid
<<channel-overwrite-mode-vs-discard-mode,discarded event records>>, for
example.
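Conceptually, the session daemon periodically samples the buffer usage ratio of a channel and satisfies a condition when the ratio crosses a threshold. The following toy C{nbsp}sketch (not the `liblttng-ctl` API) models that decision:

[source,c]
----
#include <assert.h>
#include <stdio.h>

/*
 * Toy model of the buffer usage conditions: a "high" condition is
 * satisfied when the sampled usage ratio reaches an upper threshold,
 * and a "low" condition when it falls to a lower one.
 */
enum usage_state { USAGE_OK, USAGE_HIGH, USAGE_LOW };

static enum usage_state classify_usage(double ratio, double low, double high)
{
    if (ratio >= high) {
        return USAGE_HIGH;
    }

    if (ratio <= low) {
        return USAGE_LOW;
    }

    return USAGE_OK;
}

int main(void)
{
    /* Thresholds like the 75 % one used in the example below */
    const double low = 0.25, high = 0.75;
    const double samples[] = { 0.10, 0.40, 0.80, 0.60 };
    int notifications = 0;
    unsigned int i;

    /* Count how many samples would satisfy the "high" condition */
    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        if (classify_usage(samples[i], low, high) == USAGE_HIGH) {
            notifications++;
        }
    }

    assert(classify_usage(0.80, low, high) == USAGE_HIGH);
    assert(classify_usage(0.10, low, high) == USAGE_LOW);
    assert(notifications == 1);
    printf("high-usage notifications: %d\n", notifications);
    return 0;
}
----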
.Send a notification to a user application when the buffer usage of an LTTng channel is too high.
In this example, we create and build an application which gets notified
when the buffer usage of a specific LTTng channel becomes higher than
75{nbsp}%.

Here we only print a message, but we could just as well use the
`liblttng-ctl` C{nbsp}API to <<enabling-disabling-events,disable
recording event rules>> when this happens, for example.
8119 . Create the C{nbsp}source file of the application:
8128 #include <lttng/lttng.h>
8130 int main(int argc, char *argv[])
8132 int exit_status = EXIT_SUCCESS;
8133 struct lttng_notification_channel *notification_channel;
8134 struct lttng_condition *condition;
8135 struct lttng_action *action;
8136 struct lttng_trigger *trigger;
8137 const char *recording_session_name;
8138 const char *channel_name;
8141 recording_session_name = argv[1];
8142 channel_name = argv[2];
8145 * Create a notification channel.
8147 * A notification channel connects the user application to the LTTng
8150 * You can use this notification channel to listen to various types
8153 notification_channel = lttng_notification_channel_create(
8154 lttng_session_daemon_notification_endpoint);
8157 * Create a "buffer usage becomes greater than" condition.
8159 * In this case, the condition is satisfied when the buffer usage
8160 * becomes greater than or equal to 75 %.
8162 * We create the condition for a specific recording session name,
8163 * channel name, and for the user space tracing domain.
8165 * The following condition types also exist:
8167 * * The buffer usage of a channel becomes less than a given value.
8169 * * The consumed data size of a recording session becomes greater
8170 * than a given value.
8172 * * A recording session rotation becomes ongoing.
8174 * * A recording session rotation becomes completed.
8176 * * A given event rule matches an event.
8178 condition = lttng_condition_buffer_usage_high_create();
8179 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
8180 lttng_condition_buffer_usage_set_session_name(condition,
8181 recording_session_name);
8182 lttng_condition_buffer_usage_set_channel_name(condition,
8184 lttng_condition_buffer_usage_set_domain_type(condition,
8188 * Create an action (receive a notification) to execute when the
8189 * condition created above is satisfied.
8191 action = lttng_action_notify_create();
8196 * A trigger associates a condition to an action: LTTng executes
8197 * the action when the condition is satisfied.
8199 trigger = lttng_trigger_create(condition, action);
8201 /* Register the trigger to the LTTng session daemon. */
8202 lttng_register_trigger(trigger);
8205 * Now that we have registered a trigger, LTTng will send a
8206 * notification every time its condition is met through a
8207 * notification channel.
     * To receive this notification, the notification channel must be
     * subscribed to notifications which match the same condition.
     */
    lttng_notification_channel_subscribe(notification_channel,
        condition);

    /*
     * Notification loop.
     *
     * Put this in a dedicated thread to avoid blocking the main thread.
     */
    for (;;) {
        struct lttng_notification *notification;
        enum lttng_notification_channel_status status;
        const struct lttng_evaluation *notification_evaluation;
        const struct lttng_condition *notification_condition;
        double buffer_usage;

        /* Receive the next notification. */
        status = lttng_notification_channel_get_next_notification(
            notification_channel, &notification);

        switch (status) {
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
            break;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
            /*
             * The session daemon can drop notifications if a monitoring
             * application isn't consuming the notifications fast
             * enough.
             */
            continue;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
            /*
             * The session daemon closed the notification channel.
             *
             * This is typically caused by a session daemon shutting
             * down.
             */
            goto end;
        default:
            /* Unhandled conditions or errors. */
            exit_status = EXIT_FAILURE;
            goto end;
        }

        /*
         * A notification provides, amongst other things:
         *
         * * The condition which caused LTTng to send this notification.
         *
         * * The condition evaluation, which provides more specific
         *   information on the evaluation of the condition.
         *
         * The condition evaluation provides the buffer usage
         * value at the moment the condition was satisfied.
         */
        notification_condition = lttng_notification_get_condition(
            notification);
        notification_evaluation = lttng_notification_get_evaluation(
            notification);

        /* We're subscribed to only one condition. */
        assert(lttng_condition_get_type(notification_condition) ==
            LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);

        /*
         * Get the exact sampled buffer usage from the condition
         * evaluation.
         */
        lttng_evaluation_buffer_usage_get_usage_ratio(
            notification_evaluation, &buffer_usage);

        /*
         * At this point, instead of printing a message, we could do
         * something to reduce the buffer usage of the channel, like
         * disabling specific events.
         */
        printf("Buffer usage is %f %% in recording session \"%s\", "
            "user space channel \"%s\".\n", buffer_usage * 100,
            recording_session_name, channel_name);

        /* Destroy the notification object. */
        lttng_notification_destroy(notification);
    }

end:
    lttng_action_destroy(action);
    lttng_condition_destroy(condition);
    lttng_trigger_destroy(trigger);
    lttng_notification_channel_destroy(notification_channel);
    return exit_status;
}
----
. Build the `notif-app` application, linking it with `liblttng-ctl`:
+
[role="term"]
----
$ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
----

. <<creating-destroying-tracing-sessions,Create a recording session>>,
  <<enabling-disabling-events,create a recording event rule>> matching
  all the user space tracepoint events, and
  <<basic-tracing-session-control,start recording>>:
+
[role="term"]
----
$ lttng create my-session
$ lttng enable-event --userspace --all
$ lttng start
----
+
[TIP]
====
If you create the channel manually with the man:lttng-enable-channel(1)
command, you can set its <<channel-monitor-timer,monitor timer>> to
control how frequently LTTng samples the current values of the channel
properties to evaluate user conditions.
====

. Run the `notif-app` application.
+
This program accepts the <<tracing-session,recording session>> and
user space channel names as its first two arguments. The channel
which LTTng automatically creates with the man:lttng-enable-event(1)
command above is named `channel0`:
+
[role="term"]
----
$ ./notif-app my-session channel0
----

. In another terminal, run an application with a very high event
  throughput so that the 75{nbsp}% buffer usage condition is reached.
+
In the first terminal, the application should print lines like this:
+
----
Buffer usage is 81.45197 % in recording session "my-session", user space
channel "channel0".
----
+
If you don't see anything, try to make the threshold of the condition in
path:{notif-app.c} lower (0.1{nbsp}%, for example), and then rebuild the
`notif-app` application (step{nbsp}2) and run it again (step{nbsp}4).
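As a sketch of the manual-channel variant described above, you could create the channel yourself with a faster monitor timer before creating the recording event rule. The `--monitor-timer` period is in microseconds, and the names `my-session` and `my-channel` are arbitrary:

[role="term"]
----
$ lttng create my-session
$ lttng enable-channel --userspace --monitor-timer=10000 my-channel
$ lttng enable-event --userspace --channel=my-channel --all
$ lttng start
----

With a 10{nbsp}ms monitor timer, LTTng evaluates the buffer usage condition more frequently than with the default period, at the cost of more sampling overhead.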
[[lttng-modules-ref]]
=== noch:{LTTng-modules}

[[lttng-tracepoint-enum]]
==== `LTTNG_TRACEPOINT_ENUM()` usage
Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:

[source,c]
----
LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
----

Replace:

* `name` with the name of the enumeration (C identifier, unique
  amongst all the defined enumerations).
* `entries` with a list of enumeration entries.

The available enumeration entry macros are:

+ctf_enum_value(__name__, __value__)+::
    Entry named +__name__+ mapped to the integral value +__value__+.

+ctf_enum_range(__name__, __begin__, __end__)+::
    Entry named +__name__+ mapped to the range of integral values between
    +__begin__+ (included) and +__end__+ (included).

+ctf_enum_auto(__name__)+::
    Entry named +__name__+ mapped to the integral value following the
    last entry's value.
+
The last value of a `ctf_enum_value()` entry is its +__value__+
parameter.
+
The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
+
If `ctf_enum_auto()` is the first entry in the list, its integral
value is 0.

Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
to use a defined enumeration as a tracepoint field.
.Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
====
[source,c]
----
LTTNG_TRACEPOINT_ENUM(
    my_enum,
    TP_ENUM_VALUES(
        ctf_enum_auto("AUTO: EXPECT 0")
        ctf_enum_value("VALUE: 23", 23)
        ctf_enum_value("VALUE: 27", 27)
        ctf_enum_auto("AUTO: EXPECT 28")
        ctf_enum_range("RANGE: 101 TO 303", 101, 303)
        ctf_enum_auto("AUTO: EXPECT 304")
    )
)
----
====
[[lttng-modules-tp-fields]]
==== Tracepoint fields macros (for `TP_FIELDS()`)

[[tp-fast-assign]][[tp-struct-entry]]The available macros to define
tracepoint fields, which must be listed within `TP_FIELDS()` in
`LTTNG_TRACEPOINT_EVENT()`, are:

[role="func-desc growable",cols="asciidoc,asciidoc"]
.Available macros to define LTTng-modules tracepoint fields
|====
|Macro |Description and parameters

|
+ctf_integer(__t__, __n__, __e__)+

+ctf_integer_nowrite(__t__, __n__, __e__)+

+ctf_user_integer(__t__, __n__, __e__)+

+ctf_user_integer_nowrite(__t__, __n__, __e__)+
|
Standard integer, displayed in base{nbsp}10.

+__t__+::
    Integer C type (`int`, `long`, `size_t`, ...).

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_integer_hex(__t__, __n__, __e__)+

+ctf_user_integer_hex(__t__, __n__, __e__)+
|
Standard integer, displayed in base{nbsp}16.

+__t__+::
    Integer C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|+ctf_integer_oct(__t__, __n__, __e__)+
|
Standard integer, displayed in base{nbsp}8.

+__t__+::
    Integer C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_integer_network(__t__, __n__, __e__)+

+ctf_user_integer_network(__t__, __n__, __e__)+
|
Integer in network byte order (big-endian), displayed in base{nbsp}10.

+__t__+::
    Integer C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_integer_network_hex(__t__, __n__, __e__)+

+ctf_user_integer_network_hex(__t__, __n__, __e__)+
|
Integer in network byte order, displayed in base{nbsp}16.

+__t__+::
    Integer C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_enum(__N__, __t__, __n__, __e__)+

+ctf_enum_nowrite(__N__, __t__, __n__, __e__)+

+ctf_user_enum(__N__, __t__, __n__, __e__)+

+ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
|
Enumeration.

+__N__+::
    Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.

+__t__+::
    Integer C type (`int`, `long`, `size_t`, ...).

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_string(__n__, __e__)+

+ctf_string_nowrite(__n__, __e__)+

+ctf_user_string(__n__, __e__)+

+ctf_user_string_nowrite(__n__, __e__)+
|
Null-terminated string; undefined behavior if +__e__+ is `NULL`.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

|
+ctf_array(__t__, __n__, __e__, __s__)+

+ctf_array_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array(__t__, __n__, __e__, __s__)+

+ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of integers.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__s__+::
    Number of elements.

|
+ctf_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of bits.

The type of +__e__+ must be an integer type. +__s__+ is the number
of elements of such type in +__e__+, not the number of bits.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__s__+::
    Number of elements.

|
+ctf_array_text(__t__, __n__, __e__, __s__)+

+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_text(__t__, __n__, __e__, __s__)+

+ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array, printed as text.

The string doesn't need to be null-terminated.

+__t__+::
    Array element C type (always `char`).

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__s__+::
    Number of elements.

|
+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers.

The type of +__E__+ must be unsigned.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.

|
+ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers, displayed in base{nbsp}16.

The type of +__E__+ must be unsigned.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.

|+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers in network byte order (big-endian),
displayed in base{nbsp}10.

The type of +__E__+ must be unsigned.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.

|
+ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of bits.

The type of +__e__+ must be an integer type. The length (+__E__+) is
the number of elements of such type in +__e__+, not the number of bits.

The type of +__E__+ must be unsigned.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.

|
+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array, displayed as text.

The string doesn't need to be null-terminated.

The type of +__E__+ must be unsigned.

The behavior is undefined if +__e__+ is `NULL`.

+__t__+::
    Sequence element C type (always `char`).

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.
|====

Use the `_user` versions when the argument expression, `e`, is
a user space address. In the cases of `ctf_user_integer*()` and
`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
be addressable.

The `_nowrite` versions omit themselves from the trace data, but are
otherwise identical. This means LTTng won't write the `_nowrite` fields
to the recorded trace. Their primary purpose is to make some of the
event context available to the <<enabling-disabling-events,recording
event rule filters>> without having to commit the data to
<<channel,sub-buffers>>.
[[glossary]]
== Glossary

Terms related to LTTng and to tracing in general:
[[def-action]]action::
  The part of a <<def-trigger,trigger>> which LTTng executes when the
  trigger <<def-condition,condition>> is satisfied.

[[def-babeltrace]]Babeltrace::
  The https://diamon.org/babeltrace[Babeltrace] project, which includes:
+
* The
  https://babeltrace.org/docs/v2.0/man1/babeltrace2.1/[cmd:babeltrace2]
  command-line interface.
* The libbabeltrace2 library which offers a
  https://babeltrace.org/docs/v2.0/libbabeltrace2/[C API].
* https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings].
[[def-buffering-scheme]]<<channel-buffering-schemes,buffering scheme>>::
  A layout of <<def-sub-buffer,sub-buffers>> applied to a given channel.

[[def-channel]]<<channel,channel>>::
  An entity which is responsible for a set of
  <<def-ring-buffer,ring buffers>>.
+
<<def-recording-event-rule,Recording event rules>> are always attached
to a specific channel.
[[def-clock]]clock::
  A source of time for a <<def-tracer,tracer>>.

[[def-condition]]condition::
  The part of a <<def-trigger,trigger>> which must be satisfied for
  LTTng to attempt to execute the trigger <<def-action,actions>>.
[[def-consumer-daemon]]<<lttng-consumerd,consumer daemon>>::
  A program which is responsible for consuming the full
  <<def-sub-buffer,sub-buffers>> and writing them to a file system or
  sending them over the network.

[[def-current-trace-chunk]]current trace chunk::
  A <<def-trace-chunk,trace chunk>> which includes the current content
  of all the <<def-sub-buffer,sub-buffers>> of the
  <<def-tracing-session,recording session>> and the stream files
  produced since the latest event amongst:
+
* The creation of the recording session.
* The last <<def-tracing-session-rotation,recording session rotation>>,
  if any.
<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
  The <<def-event-record-loss-mode,event record loss mode>> in which
  the <<def-tracer,tracer>> _discards_ new <<def-event-record,event
  records>> when there's no <<def-sub-buffer,sub-buffer>> space left to
  store them.
[[def-event]]event::
  The execution of an <<def-instrumentation-point,instrumentation
  point>>, like a <<def-tracepoint,tracepoint>> that you manually place
  in some source code, or a Linux kprobe.
+
When an instrumentation point is executed, LTTng creates an event.
+
When an <<def-event-rule,event rule>> matches the event,
<<def-lttng,LTTng>> executes some action, for example:
+
* Record its payload to a <<def-sub-buffer,sub-buffer>> as an
  <<def-event-record,event record>>.
* Attempt to execute the user-defined actions of a
  <<def-trigger,trigger>> with an
  <<add-event-rule-matches-trigger,``event rule matches''>> condition.
[[def-event-name]]event name::
  The name of an <<def-event,event>>, which is also the name of the
  <<def-event-record,event record>>.
+
This is also called the _instrumentation point name_.

[[def-event-record]]event record::
  A record (binary serialization), in a <<def-trace,trace>>, of the
  payload of an <<def-event,event>>.
+
The payload of an event record has zero or more _fields_.

[[def-event-record-loss-mode]]<<channel-overwrite-mode-vs-discard-mode,event record loss mode>>::
  The mechanism by which event records of a given
  <<def-channel,channel>> are lost (not recorded) when there's no
  <<def-sub-buffer,sub-buffer>> space left to store them.

[[def-event-rule]]<<event-rule,event rule>>::
  A set of conditions which an <<def-event,event>> must satisfy
  for LTTng to execute some action.
+
An event rule is said to _match_ events, like a
https://en.wikipedia.org/wiki/Regular_expression[regular expression]
matches strings.
+
A <<def-recording-event-rule,recording event rule>> is a specific type
of event rule of which the action is to <<def-record,record>> the event
to a <<def-sub-buffer,sub-buffer>>.

[[def-incl-set]]inclusion set::
  In the <<pid-tracking,process attribute inclusion set>> context: a
  set of <<def-proc-attr,process attributes>> of a given type.

<<instrumenting,instrumentation>>::
  The use of <<def-lttng,LTTng>> probes to make a kernel or
  <<def-user-application,user application>> traceable.

[[def-instrumentation-point]]instrumentation point::
  A point in the execution path of a kernel or
  <<def-user-application,user application>> which, when executed,
  creates an <<def-event,event>>.

instrumentation point name::
  See _<<def-event-name,event name>>_.
`java.util.logging`::
  The
  https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities]
  of the Java platform.

log4j{nbsp}1.2::
  A https://logging.apache.org/log4j/1.2/[logging library] for Java
  developed by the Apache Software Foundation.

[[def-log-level]]log level::
  Level of severity of a log statement or user space
  <<def-instrumentation-point,instrumentation point>>.
[[def-lttng]]LTTng::
  The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
  A command-line tool provided by the <<def-lttng-tools,LTTng-tools>>
  project which you can use to send and receive control messages to and
  from a <<def-session-daemon,session daemon>>.

cmd:lttng-consumerd::
  The name of the <<def-consumer-daemon,consumer daemon>> program.

cmd:lttng-crash::
  A utility provided by the <<def-lttng-tools,LTTng-tools>> project
  which can convert <<def-ring-buffer,ring buffer>> files (usually
  <<persistent-memory-file-systems,saved on a persistent memory file
  system>>) to <<def-trace,trace>> files.
+
See man:lttng-crash(1).
LTTng Documentation::
  This document.
<<lttng-live,LTTng live>>::
  A communication protocol between the <<lttng-relayd,relay daemon>> and
  live readers which makes it possible to show or analyze
  <<def-event-record,event records>> ``live'', as they're received by
  the <<def-relay-daemon,relay daemon>>.

<<lttng-modules,LTTng-modules>>::
  The https://github.com/lttng/lttng-modules[LTTng-modules] project,
  which contains the Linux kernel modules to make the Linux kernel
  <<def-instrumentation-point,instrumentation points>> available for
  <<def-lttng,LTTng>> tracing.

cmd:lttng-relayd::
  The name of the <<def-relay-daemon,relay daemon>> program.

cmd:lttng-sessiond::
  The name of the <<def-session-daemon,session daemon>> program.

[[def-lttng-tools]]LTTng-tools::
  The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
  contains the various programs and libraries used to
  <<controlling-tracing,control tracing>>.

[[def-lttng-ust]]<<lttng-ust,LTTng-UST>>::
  The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
  contains libraries to instrument
  <<def-user-application,user applications>>.

<<lttng-ust-agents,LTTng-UST Java agent>>::
  A Java package provided by the <<def-lttng-ust,LTTng-UST>> project to
  allow the LTTng instrumentation of `java.util.logging` and Apache
  log4j{nbsp}1.2 logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
  A Python package provided by the <<def-lttng-ust,LTTng-UST>> project
  to allow the <<def-lttng,LTTng>> instrumentation of Python logging
  statements.
<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
  The <<def-event-record-loss-mode,event record loss mode>> in which new
  <<def-event-record,event records>> _overwrite_ older event records
  when there's no <<def-sub-buffer,sub-buffer>> space left to store
  them.

<<channel-buffering-schemes,per-process buffering>>::
  A <<def-buffering-scheme,buffering scheme>> in which each instrumented
  process has its own <<def-sub-buffer,sub-buffers>> for a given user
  space <<def-channel,channel>>.

<<channel-buffering-schemes,per-user buffering>>::
  A <<def-buffering-scheme,buffering scheme>> in which all the processes
  of a Unix user share the same <<def-sub-buffer,sub-buffers>> for a
  given user space <<def-channel,channel>>.

[[def-proc-attr]]process attribute::
  In the <<pid-tracking,process attribute inclusion set>> context:
+
* A process name.
* A virtual process ID.
* A Unix user name.
* A virtual Unix user ID.
* A Unix group name.
* A virtual Unix group ID.

record (_noun_)::
  See <<def-event-record,_event record_>>.
[[def-record]]record (_verb_)::
  Serialize the binary payload of an <<def-event,event>> to a
  <<def-sub-buffer,sub-buffer>>.

[[def-recording-event-rule]]<<event,recording event rule>>::
  A specific type of <<def-event-rule,event rule>> of which the action
  is to <<def-record,record>> the matched event to a
  <<def-sub-buffer,sub-buffer>>.

[[def-tracing-session]][[def-recording-session]]<<tracing-session,recording session>>::
  A stateful dialogue between you and a <<lttng-sessiond,session daemon>>.

[[def-tracing-session-rotation]]<<session-rotation,recording session rotation>>::
  The action of archiving the
  <<def-current-trace-chunk,current trace chunk>> of a
  <<def-tracing-session,recording session>>.

[[def-relay-daemon]]<<lttng-relayd,relay daemon>>::
  A process which is responsible for receiving the <<def-trace,trace>>
  data which a distant <<def-consumer-daemon,consumer daemon>> sends.

[[def-ring-buffer]]ring buffer::
  A set of <<def-sub-buffer,sub-buffers>>.

rotation::
  See _<<def-tracing-session-rotation,recording session rotation>>_.
[[def-session-daemon]]<<lttng-sessiond,session daemon>>::
  A process which receives control commands from you and orchestrates
  the <<def-tracer,tracers>> and various <<def-lttng,LTTng>> daemons.

<<taking-a-snapshot,snapshot>>::
  A copy of the current data of all the <<def-sub-buffer,sub-buffers>>
  of a given <<def-tracing-session,recording session>>, saved as
  <<def-trace,trace>> files.

[[def-sub-buffer]]sub-buffer::
  One part of an <<def-lttng,LTTng>> <<def-ring-buffer,ring buffer>>
  which contains <<def-event-record,event records>>.

[[def-timestamp]]timestamp::
  The time information attached to an <<def-event,event>> when LTTng
  emits it.

[[def-trace]]trace (_noun_)::
  A set of:
+
* One https://diamon.org/ctf/[CTF] metadata stream file.
* One or more CTF data stream files which are the concatenations of one
  or more flushed <<def-sub-buffer,sub-buffers>>.

[[def-trace-verb]]trace (_verb_)::
  From the perspective of a <<def-tracer,tracer>>: attempt to execute
  one or more actions when emitting an <<def-event,event>> in an
  application or in a system.
[[def-trace-chunk]]trace chunk::
  A self-contained <<def-trace,trace>> which is part of a
  <<def-tracing-session,recording session>>. Each
  <<def-tracing-session-rotation,recording session rotation>> produces
  a <<def-trace-chunk-archive,trace chunk archive>>.

[[def-trace-chunk-archive]]trace chunk archive::
  The result of a <<def-tracing-session-rotation,recording session
  rotation>>.
+
<<def-lttng,LTTng>> doesn't manage any trace chunk archive, even if its
containing <<def-tracing-session,recording session>> is still active:
you are free to read it, modify it, move it, or remove it.
Trace Compass::
  The http://tracecompass.org[Trace Compass] project and application.

[[def-tracepoint]]tracepoint::
  An instrumentation point using the tracepoint mechanism of the Linux
  kernel or of <<def-lttng-ust,LTTng-UST>>.

tracepoint definition::
  The definition of a single <<def-tracepoint,tracepoint>>.

tracepoint name::
  The name of a <<def-tracepoint,tracepoint>>.

[[def-tracepoint-provider]]tracepoint provider::
  A set of functions providing <<def-tracepoint,tracepoints>> to an
  instrumented <<def-user-application,user application>>.
+
Not to be confused with a <<def-tracepoint-provider-package,tracepoint
provider package>>: many tracepoint providers can exist within a
tracepoint provider package.

[[def-tracepoint-provider-package]]tracepoint provider package::
  One or more <<def-tracepoint-provider,tracepoint providers>> compiled
  as an https://en.wikipedia.org/wiki/Object_file[object file] or as a
  link:https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries[shared
  library].

[[def-tracer]]tracer::
  A piece of software which executes some action when it emits
  an <<def-event,event>>, like <<def-record,record>> it to some
  buffer.

<<domain,tracing domain>>::
  A type of LTTng <<def-tracer,tracer>>.

<<tracing-group,tracing group>>::
  The Unix group which a Unix user can be part of to be allowed to
  control the Linux kernel LTTng <<def-tracer,tracer>>.

[[def-trigger]]<<trigger,trigger>>::
  A <<def-condition,condition>>-<<def-action,actions>> pair; when the
  condition of a trigger is satisfied, LTTng attempts to execute its
  actions.

[[def-user-application]]user application::
  An application (program or library) running in user space, as opposed
  to a Linux kernel module, for example.