The LTTng Documentation ======================= Philippe Proulx v2.13, 28 November 2023 include::../common/copyright.txt[] include::../common/welcome.txt[] include::../common/audience.txt[] [[chapters]] === What's in this documentation? The LTTng Documentation is divided into the following sections: * ``**<>**'' explains the rudiments of software tracing and the rationale behind the LTTng project. + Skip this section if you’re familiar with software tracing and with the LTTng project. * ``**<>**'' describes the steps to install the LTTng packages on common Linux distributions and from their sources. + Skip this section if you already properly installed LTTng on your target system. * ``**<>**'' is a concise guide to get started quickly with LTTng kernel and user space tracing. + We recommend this section if you're new to LTTng or to software tracing in general. + Skip this section if you're not new to LTTng. * ``**<>**'' explains the concepts at the heart of LTTng. + It's a good idea to become familiar with the core concepts before attempting to use the toolkit. * ``**<>**'' describes the various components of the LTTng machinery, like the daemons, the libraries, and the command-line interface. * ``**<>**'' shows different ways to instrument user applications and the Linux kernel for LTTng tracing. + Instrumenting source code is essential to provide a meaningful source of events. + Skip this section if you don't have a programming background. * ``**<>**'' is divided into topics which demonstrate how to use the vast array of features that LTTng{nbsp}{revision} offers. * ``**<>**'' contains API reference tables. * ``**<>**'' is a specialized dictionary of terms related to LTTng or to the field of software tracing. include::../common/convention.txt[] include::../common/acknowledgements.txt[] [[whats-new]] == What's new in LTTng{nbsp}{revision}? LTTng{nbsp}{revision} bears the name _Nordicité_, the product of a collaboration between https://champlibre.co/[Champ Libre] and https://www.boreale.com/[Boréale]. This farmhouse IPA is brewed with https://en.wikipedia.org/wiki/Kveik[Kveik] yeast and Québec-grown barley, oats, and juniper branches. The result is a remarkable, fruity, hazy golden IPA that offers a balanced touch of resinous and woodsy bitterness. New features and changes in LTTng{nbsp}{revision}: General:: + * The LTTng trigger API of <> now offers the ``__event rule matches__'' condition (an <> matches an event) as well as the following new actions: + -- * <> a recording session. * <> of a recording session (rotate). * <> of a recording session. -- + As a reminder, a <> is a condition-actions pair. When the condition of a trigger is satisfied, LTTng attempts to execute its actions. + This feature is also available with the new man:lttng-add-trigger(1), man:lttng-remove-trigger(1), and man:lttng-list-triggers(1) <> commands. + Starting from LTTng{nbsp}{revision}, a trigger may have more than one action. + See “<>” to learn more. * The LTTng <> and <> tracers offer the new namespace context field `time_ns`, which is the inode number, in the proc file system, of the current clock namespace. + See man:lttng-add-context(1), man:lttng-ust(3), and man:time_namespaces(7). * The link:/man[manual pages] of LTTng-tools now have a terminology and style which match the LTTng Documentation, many fixes, more internal and manual page links, clearer lists and procedures, superior consistency, and usage examples. 
+
The new man:lttng-event-rule(7) manual page explains the new, common way to specify an event rule on the command line.
+
The new man:lttng-concepts(7) manual page explains the core concepts of LTTng. Its content is essentially the ``<>'' section of this documentation, adapted to the manual page style.

User space tracing::
+
[IMPORTANT]
====
The major version part of the `liblttng-ust` https://en.wikipedia.org/wiki/Soname[soname] is bumped, which means you **must recompile** your instrumented applications/libraries and <> to use LTTng-UST{nbsp}{revision}.

This change was necessary to clean up the library and to stop exporting private symbols from `liblttng-ust`.

Also, LTTng{nbsp}{revision} prepends the `lttng_ust_` and `LTTNG_UST_` prefixes to all public macro/definition/function names to offer a consistent API namespace. The LTTng{nbsp}2.12 API is still available; see the ``Compatibility with previous APIs'' section of man:lttng-ust(3).
====
+
Other notable changes:
+
* The `liblttng-ust` C{nbsp}API offers the new man:lttng_ust_vtracef(3) and man:lttng_ust_vtracelog(3) macros which are to man:lttng_ust_tracef(3) and man:lttng_ust_tracelog(3) what man:vprintf(3) is to man:printf(3).
* LTTng-UST now only depends on https://liburcu.org/[`liburcu`] at build time, not at run time.

Kernel tracing::
+
* The preferred display base of event record integer fields which contain memory addresses is now hexadecimal instead of decimal.
* The `pid` field is removed from `lttng_statedump_file_descriptor` event records and the `file_table_address` field is added.
+
This new field is the address of the `files_struct` structure which contains the file descriptor.
+
See the ``https://github.com/lttng/lttng-modules/commit/e7a0ca7205fd4be7c829d171baa8823fe4784c90[statedump: introduce `file_table_address`]'' patch to learn more.
* The `flags` field of `syscall_entry_clone` event records is now a structure containing two enumerations (exit signal and options).
+
This change makes the flag values more readable and meaningful.
+
See the ``https://github.com/lttng/lttng-modules/commit/d775625e2ba4825b73b5897e7701ad6e2bdba115[syscalls: Make `clone()`'s `flags` field a 2 enum struct]'' patch to learn more.
* The memory footprint of the kernel tracer is improved: it now only generates metadata for the specific system call recording event rules that you <>.

[[nuts-and-bolts]]
== Nuts and bolts

What is LTTng? As its name suggests, the _Linux Trace Toolkit: next generation_ is a modern toolkit for tracing Linux systems and applications. So your first question might be: **what is tracing?**

[[what-is-tracing]]
=== What is tracing?

As the history of software engineering progressed and led to what we now take for granted--complex, numerous, and interdependent software applications running in parallel on sophisticated operating systems like Linux--the authors of such components, software developers, began feeling a natural urge to have tools that would ensure the robustness and good performance of their masterpieces.

One major achievement in this field is, inarguably, the https://www.gnu.org/software/gdb/[GNU debugger (GDB)], an essential tool for developers to find and fix bugs. But even the best debugger won't help make your software run faster, and nowadays, faster software means either more work done by the same hardware, or cheaper hardware for the same work.

A _profiler_ is often the tool of choice to identify performance bottlenecks.

Profiling is suitable to identify _where_ performance is lost in a given piece of software. The profiler outputs a profile, a statistical summary of observed events, which you may use to discover which functions took the most time to execute. However, a profiler won't report _why_ some identified functions are the bottleneck. Bottlenecks might only occur when specific conditions are met, conditions that are sometimes impossible to capture by a statistical profiler, or impossible to reproduce with an application altered by the overhead of an event-based profiler. For a thorough investigation of software performance issues, a history of execution is essential, with the recorded values of variables and context fields you choose, and with as little influence as possible on the instrumented application. This is where tracing comes in handy.

_Tracing_ is a technique used to understand what goes on in a running software system. The piece of software used for tracing is called a _tracer_, which is conceptually similar to a tape recorder. When recording, specific instrumentation points placed in the software source code generate events that are saved on a giant tape: a _trace_ file. You can record user application and operating system events at the same time, opening the possibility of resolving a wide range of problems that would otherwise be extremely challenging.

Tracing is often compared to _logging_. However, tracers and loggers are two different tools, serving two different purposes. Tracers are designed to record much lower-level events that occur much more frequently than log messages, often in the range of thousands per second, with very little execution overhead. Logging is more appropriate for a very high-level analysis of less frequent events: user accesses, exceptional conditions (errors and warnings, for example), database transactions, instant messaging communications, and such. Simply put, logging is one of the many use cases that can be satisfied with tracing.

The list of recorded events inside a trace file can be read manually like a log file for the maximum level of detail, but it's generally much more interesting to perform application-specific analyses to produce reduced statistics and graphs that are useful to resolve a given problem. Trace viewers and analyzers are specialized tools designed to do this.

In the end, this is what LTTng is: a powerful, open source set of tools to trace the Linux kernel and user applications at the same time. LTTng is composed of several components actively maintained and developed by its link:/community/#where[community].

[[lttng-alternatives]]
=== Alternatives to noch:{LTTng}

Excluding proprietary solutions, a few competing software tracers exist for Linux:

https://github.com/dtrace4linux/linux[dtrace4linux]::
    A port of Sun Microsystems' DTrace to Linux.
+
The cmd:dtrace tool interprets user scripts and is responsible for loading code into the Linux kernel for further execution and for collecting the output data.

https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF]::
    A subsystem in the Linux kernel in which a virtual machine can execute programs passed from the user space to the kernel.
+
You can attach such programs to tracepoints and kprobes through a system call, and they can output data to the user space when executed through different mechanisms (pipe, VM register values, and eBPF maps, to name a few).

https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]::
    The de facto function tracer of the Linux kernel.
+
Its user interface is a set of special files in tracefs.

https://perf.wiki.kernel.org/[perf]::
    A performance analysis tool for Linux which supports hardware performance counters, tracepoints, as well as other counters and types of probes.
+
The controlling utility of perf is the cmd:perf command line/text UI tool.

https://linux.die.net/man/1/strace[strace]::
    A command-line utility which records system calls made by a user process, as well as signal deliveries and changes of process state.
+
strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace] to fulfill its function.

https://www.sysdig.org/[sysdig]::
    Like SystemTap, sysdig uses scripts to analyze Linux kernel events.
+
You write scripts, or _chisels_ in the jargon of sysdig, in Lua, and sysdig executes them while it traces the system or afterwards. The interface of sysdig is the cmd:sysdig command-line tool as well as the text UI-based cmd:csysdig tool.

https://sourceware.org/systemtap/[SystemTap]::
    A Linux kernel and user space tracer which uses custom user scripts to produce plain text traces.
+
SystemTap converts the scripts to the C language, and then compiles them as Linux kernel modules which are loaded to produce trace data. The primary user interface of SystemTap is the cmd:stap command-line tool.

The main distinctive features of LTTng are that it produces correlated kernel and user space traces and that it does so with the lowest overhead among comparable solutions. It produces trace files in the https://diamon.org/ctf[CTF] format, a file format optimized for the production and analysis of multi-gigabyte data.

LTTng is the result of more than 10{nbsp}years of active open source development by a community of passionate developers. LTTng is currently available on major desktop and server Linux distributions.

The main interface for tracing control is a single command-line tool named cmd:lttng. The latter can create several recording sessions, enable and disable recording event rules on the fly, filter events efficiently with custom user expressions, start and stop tracing, and much more. LTTng can write the traces to the file system or send them over the network, and keep them in whole or in part. You can make LTTng execute user-defined actions when LTTng emits an event. You can view the traces once tracing becomes inactive or as LTTng records events.

<> and <>!

[[installing-lttng]]
== Installation

**LTTng** is a set of software <> which interact to <> the Linux kernel and user applications, and to <> (start and stop recording, create recording event rules, and so on). Those components are bundled into the following packages:

LTTng-tools::
    Libraries and command-line interface to control tracing.

LTTng-modules::
    Linux kernel modules to instrument and trace the kernel.

LTTng-UST::
    Libraries and Java/Python packages to instrument and trace user applications.

Most distributions mark the LTTng-modules and LTTng-UST packages as optional when installing LTTng-tools (which is always required). In the following sections, we always provide the steps to install all three, but note that:

* You only need to install LTTng-modules if you intend to use the Linux kernel LTTng tracer.
* You only need to install LTTng-UST if you intend to use the user space LTTng tracer.

[role="growable"]
.Availability of LTTng{nbsp}{revision} for major Linux distributions as of 17{nbsp}October{nbsp}2023.
|===
|Distribution |Available in releases

|https://www.ubuntu.com/[Ubuntu]
|xref:ubuntu[Ubuntu 22.04 LTS _Jammy Jellyfish_, Ubuntu 23.04 _Lunar Lobster_, and Ubuntu 23.10 _Mantic Minotaur_].

Ubuntu{nbsp}18.04 LTS _Bionic Beaver_ and Ubuntu{nbsp}20.04 LTS _Focal Fossa_: <>.

|https://www.debian.org/[Debian]
|<>.

|https://getfedora.org/[Fedora]
|xref:fedora[Fedora{nbsp}37, Fedora{nbsp}38, and Fedora{nbsp}39].

|https://www.archlinux.org/[Arch Linux]
|<>.

|https://alpinelinux.org/[Alpine Linux]
|xref:alpine-linux[Alpine Linux{nbsp}3.16, Alpine Linux{nbsp}3.17, and Alpine Linux{nbsp}3.18].

|https://buildroot.org/[Buildroot]
|xref:buildroot[Buildroot{nbsp}2022.02, Buildroot{nbsp}2022.05, Buildroot{nbsp}2022.08, Buildroot{nbsp}2022.11, Buildroot{nbsp}2023.02, Buildroot{nbsp}2023.05, and Buildroot{nbsp}2023.08].

|https://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and https://www.yoctoproject.org/[Yocto]
|xref:oe-yocto[Yocto Project{nbsp}3.3 _Honister_, Yocto Project{nbsp}4.0 _Kirkstone_, Yocto Project{nbsp}4.1 _Langdale_, Yocto Project{nbsp}4.2 _Mickledore_, and Yocto Project{nbsp}4.3 _Nanbield_].
|===

[NOTE]
====
For https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES] packages, see https://packages.efficios.com/[EfficiOS Enterprise Packages].

For other distributions, <>.
====

[[ubuntu]]
=== [[ubuntu-official-repository]]Ubuntu

LTTng{nbsp}{revision} is available on Ubuntu 22.04 LTS _Jammy Jellyfish_, Ubuntu 23.04 _Lunar Lobster_, and Ubuntu 23.10 _Mantic Minotaur_. For previous supported releases of Ubuntu, <>.

To install LTTng{nbsp}{revision} on Ubuntu{nbsp}22.04 LTS _Jammy Jellyfish_:

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt-get install lttng-tools
# apt-get install lttng-modules-dkms
# apt-get install liblttng-ust-dev
----
--

. **If you need to instrument and trace <>**, install the LTTng-UST Java agent:
+
--
[role="term"]
----
# apt-get install liblttng-ust-agent-java
----
--

. **If you need to instrument and trace <>**, install the LTTng-UST Python agent:
+
--
[role="term"]
----
# apt-get install python3-lttngust
----
--

[[ubuntu-ppa]]
=== Ubuntu: noch:{LTTng} Stable {revision} PPA

The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng Stable{nbsp}{revision} PPA] offers the latest stable LTTng{nbsp}{revision} packages for Ubuntu{nbsp}18.04 LTS _Bionic Beaver_, Ubuntu{nbsp}20.04 LTS _Focal Fossa_, and Ubuntu{nbsp}22.04 LTS _Jammy Jellyfish_.

To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:

. Add the LTTng Stable{nbsp}{revision} PPA repository and update the list of packages:
+
--
[role="term",subs="attributes"]
----
# apt-add-repository ppa:lttng/stable-{revision}
# apt-get update
----
--

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt-get install lttng-tools
# apt-get install lttng-modules-dkms
# apt-get install liblttng-ust-dev
----
--

. **If you need to instrument and trace <>**, install the LTTng-UST Java agent:
+
--
[role="term"]
----
# apt-get install liblttng-ust-agent-java
----
--

. **If you need to instrument and trace <>**, install the LTTng-UST Python agent:
+
--
[role="term"]
----
# apt-get install python3-lttngust
----
--

[[debian]]
=== Debian

To install LTTng{nbsp}{revision} on Debian{nbsp}12 _bookworm_:

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt install lttng-modules-dkms
# apt install liblttng-ust-dev
# apt install lttng-tools
----
--
.
**If you need to instrument and trace <>**, install the LTTng-UST Java agent: + -- [role="term"] ---- # apt install liblttng-ust-agent-java ---- -- . **If you need to instrument and trace <>**, install the LTTng-UST Python agent: + -- [role="term"] ---- # apt install python3-lttngust ---- -- [[fedora]] === Fedora To install LTTng{nbsp}{revision} on Fedora{nbsp}37, Fedora{nbsp}38, or Fedora{nbsp}39: . Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} packages: + -- [role="term"] ---- # yum install lttng-tools # yum install lttng-ust ---- -- . Download, build, and install the latest LTTng-modules{nbsp}{revision}: + -- [role="term",subs="attributes,specialcharacters"] ---- $ cd $(mktemp -d) && wget http://lttng.org/files/lttng-modules/lttng-modules-latest-{revision}.tar.bz2 && tar -xf lttng-modules-latest-{revision}.tar.bz2 && cd lttng-modules-{revision}.* && make && sudo make modules_install && sudo depmod -a ---- -- [IMPORTANT] .Java and Python application instrumentation and tracing ==== If you need to instrument and trace <> on Fedora, you need to build and install LTTng-UST{nbsp}{revision} <> and pass the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the `configure` script, depending on which Java logging framework you use. If you need to instrument and trace <> on Fedora, you need to build and install LTTng-UST{nbsp}{revision} from source and pass the `--enable-python-agent` option to the `configure` script. ==== [[arch-linux]] === Arch Linux LTTng-UST{nbsp}{revision} is available in the _extra_ repository of Arch Linux, while LTTng-tools{nbsp}{revision} and LTTng-modules{nbsp}{revision} are available in the https://aur.archlinux.org/[AUR]. To install LTTng{nbsp}{revision} on Arch Linux, using https://github.com/Jguer/yay[yay] for the AUR packages: . Install the main LTTng{nbsp}{revision} packages: + -- [role="term"] ---- # pacman -Sy lttng-ust $ yay -Sy lttng-tools $ yay -Sy lttng-modules ---- -- . **If you need to instrument and trace <>**, install the LTTng-UST Python agent: + -- [role="term"] ---- # pacman -Sy python-lttngust ---- -- [[alpine-linux]] === Alpine Linux To install LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} on Alpine Linux{nbsp}3.16, Alpine Linux{nbsp}3.17, or Alpine Linux{nbsp}3.18: . Add the LTTng packages: + -- [role="term"] ---- # apk add lttng-tools # apk add lttng-ust-dev ---- -- . Download, build, and install the latest LTTng-modules{nbsp}{revision}: + -- [role="term",subs="attributes,specialcharacters"] ---- $ cd $(mktemp -d) && wget http://lttng.org/files/lttng-modules/lttng-modules-latest-{revision}.tar.bz2 && tar -xf lttng-modules-latest-{revision}.tar.bz2 && cd lttng-modules-{revision}.* && make && sudo make modules_install && sudo depmod -a ---- -- [[buildroot]] === Buildroot To install LTTng{nbsp}{revision} on Buildroot{nbsp}2022.02, Buildroot{nbsp}2022.05, Buildroot{nbsp}2022.08, Buildroot{nbsp}2022.11, Buildroot{nbsp}2023.02, Buildroot{nbsp}2023.05, or Buildroot{nbsp}2023.08: . Launch the Buildroot configuration tool: + -- [role="term"] ---- $ make menuconfig ---- -- . In **Kernel**, check **Linux kernel**. . In **Toolchain**, check **Enable WCHAR support**. . In **Target packages**{nbsp}→ **Debugging, profiling and benchmark**, check **lttng-modules** and **lttng-tools**. . In **Target packages**{nbsp}→ **Libraries**{nbsp}→ **Other**, check **lttng-libust**. 
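
Alternatively, here's a minimal sketch of the equivalent Buildroot configuration fragment. The `BR2_*` symbol names are assumptions based on common Buildroot naming conventions, not taken from this documentation; verify them against your Buildroot version:

----
# Assumed Buildroot option names; check `make menuconfig` for your release
BR2_LINUX_KERNEL=y
BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
BR2_PACKAGE_LTTNG_MODULES=y
BR2_PACKAGE_LTTNG_TOOLS=y
BR2_PACKAGE_LTTNG_LIBUST=y
----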
[[oe-yocto]] === OpenEmbedded and Yocto LTTng{nbsp}{revision} recipes are available in the https://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`] layer for Yocto Project{nbsp}3.3 _Honister_, Yocto Project{nbsp}4.0 _Kirkstone_, Yocto Project{nbsp}4.1 _Langdale_, Yocto Project{nbsp}4.2 _Mickledore_, and Yocto Project{nbsp}4.3 _Nanbield_ under the following names: * `lttng-tools` * `lttng-modules` * `lttng-ust` With BitBake, the simplest way to include LTTng recipes in your target image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}: ---- IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust" ---- If you use Hob: . Select a machine and an image recipe. . Click **Edit image recipe**. . Under the **All recipes** tab, search for **lttng**. . Check the desired LTTng recipes. [[building-from-source]] === Build from source To build and install LTTng{nbsp}{revision} from source: . Using the package manager of your distribution, or from source, install the following dependencies of LTTng-tools and LTTng-UST: + -- * https://sourceforge.net/projects/libuuid/[libuuid] * https://directory.fsf.org/wiki/Popt[popt] * https://liburcu.org/[Userspace RCU] * http://www.xmlsoft.org/[libxml2] * **Optional**: https://github.com/numactl/numactl[numactl] -- . Download, build, and install the latest LTTng-modules{nbsp}{revision}: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 && tar -xf lttng-modules-latest-2.13.tar.bz2 && cd lttng-modules-2.13.* && make && sudo make modules_install && sudo depmod -a ---- -- . Download, build, and install the latest LTTng-UST{nbsp}{revision}: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 && tar -xf lttng-ust-latest-2.13.tar.bz2 && cd lttng-ust-2.13.* && ./configure && make && sudo make install && sudo ldconfig ---- -- + Add `--disable-numa` to `./configure` if you don't have https://github.com/numactl/numactl[numactl]. + -- [IMPORTANT] .Java and Python application tracing ==== If you need to instrument and have LTTng trace <>, pass the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the `configure` script, depending on which Java logging framework you use. If you need to instrument and have LTTng trace <>, pass the `--enable-python-agent` option to the `configure` script. You can set the env:PYTHON environment variable to the path to the Python interpreter for which to install the LTTng-UST Python agent package. ==== -- + -- [NOTE] ==== By default, LTTng-UST libraries are installed to dir:{/usr/local/lib}, which is the de facto directory in which to keep self-compiled and third-party libraries. When <>: * Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment variable. * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to man:gcc(1), man:g++(1), or man:clang(1). ==== -- . Download, build, and install the latest LTTng-tools{nbsp}{revision}: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 && tar -xf lttng-tools-latest-2.13.tar.bz2 && cd lttng-tools-2.13.* && ./configure && make && sudo make install && sudo ldconfig ---- -- TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the previous steps automatically for a given version of LTTng and confine the installed files to a specific directory. 
This can be useful to try LTTng without installing it on your system.

[[linux-kernel-sig]]
=== Linux kernel module signature

Linux kernel modules require trusted signatures in order to be loaded when any of the following is true:

* The system boots with https://uefi.org/specs/UEFI/2.10/32_Secure_Boot_and_Driver_Signing.html#secure-boot-and-driver-signing[Secure Boot] enabled.
* The Linux kernel which boots is configured with `CONFIG_MODULE_SIG_FORCE`.
* The Linux kernel boots with a command line containing `module.sig_enforce=1`.

.`root` user running <> which fails to load a required <> due to the signature enforcement policies.
====
[role="term"]
----
# lttng-sessiond
Warning: No tracing group detected
modprobe: ERROR: could not insert 'lttng_ring_buffer_client_discard': Key was rejected by service
Error: Unable to load required module lttng-ring-buffer-client-discard
Warning: No kernel tracer available
----
====

There are several methods to enroll trusted keys for signing modules that are built from source. The precise details vary from one Linux version to another, and distributions may have their own mechanisms. For example, https://github.com/dell/dkms[DKMS] may autogenerate a key and sign modules, but the key isn't automatically enrolled.

See https://www.kernel.org/doc/html/latest/admin-guide/module-signing.html[Kernel module signing facility] and the documentation of your distribution to learn more about signing Linux kernel modules.

[[getting-started]]
== Quick start

This is a short guide to get started quickly with LTTng kernel and user space tracing. Before you follow this guide, make sure to <> LTTng.

This tutorial walks you through the steps to:

. <>.
. <> written in C.
. <>.

[[tracing-the-linux-kernel]]
=== Record Linux kernel events

NOTE: The following command lines start with the `#` prompt because you need root privileges to control the Linux kernel LTTng tracer. You can also control the kernel tracer as a regular user if your Unix user is a member of the <>.

. Create a <> to write LTTng traces to dir:{/tmp/my-kernel-trace}:
+
--
[role="term"]
----
# lttng create my-kernel-session --output=/tmp/my-kernel-trace
----
--

. List the available kernel tracepoints and system calls:
+
--
[role="term"]
----
# lttng list --kernel
# lttng list --kernel --syscall
----
--

. Create <> which match events having the desired names, for example the `sched_switch` and `sched_process_fork` tracepoints, and the man:open(2) and man:close(2) system calls:
+
--
[role="term"]
----
# lttng enable-event --kernel sched_switch,sched_process_fork
# lttng enable-event --kernel --syscall open,close
----
--
+
Alternatively, create a recording event rule which matches _all_ the Linux kernel tracepoint events with the opt:lttng-enable-event(1):--all option (recording with such a recording event rule generates a lot of data):
+
--
[role="term"]
----
# lttng enable-event --kernel --all
----
--

. <>:
+
--
[role="term"]
----
# lttng start
----
--

. Do some operation on your system for a few seconds. For example, load a website, or list the files of a directory.

. <> the current recording session:
+
--
[role="term"]
----
# lttng destroy
----
--
+
The man:lttng-destroy(1) command doesn't destroy the trace data; it only destroys the state of the recording session.
+
The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command implicitly (see ``<>''). You need to stop recording to make LTTng flush the remaining trace data and make the trace readable.

. For the sake of this example, make the recorded trace accessible to non-root users:
+
--
[role="term"]
----
# chown -R $(whoami) /tmp/my-kernel-trace
----
--

See ``<>'' to view the recorded events.

[[tracing-your-own-user-application]]
=== Record user application events

This section walks you through a simple example to record the events of a _Hello world_ program written in{nbsp}C.

To create the traceable user application:

. Create the tracepoint provider header file, which defines the tracepoints and the events they can generate:
+
--
[source,c]
.path:{hello-tp.h}
----
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER hello_world

#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

LTTNG_UST_TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    LTTNG_UST_TP_ARGS(
        int, my_integer_arg,
        char *, my_string_arg
    ),
    LTTNG_UST_TP_FIELDS(
        lttng_ust_field_string(my_string_field, my_string_arg)
        lttng_ust_field_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
--

. Create the tracepoint provider package source file:
+
--
[source,c]
.path:{hello-tp.c}
----
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES
#define LTTNG_UST_TRACEPOINT_DEFINE

#include "hello-tp.h"
----
--

. Build the tracepoint provider package:
+
--
[role="term"]
----
$ gcc -c -I. hello-tp.c
----
--

. Create the _Hello World_ application source file:
+
--
[source,c]
.path:{hello.c}
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    unsigned int i;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call only exists for the purpose of this
     * demonstration, to pause the application in order for you to have
     * time to list its tracepoints. You don't need it otherwise.
     */
    getchar();

    /*
     * An lttng_ust_tracepoint() call.
     *
     * Arguments, as defined in `hello-tp.h`:
     *
     * 1. Tracepoint provider name   (required)
     * 2. Tracepoint name            (required)
     * 3. `my_integer_arg`           (first user-defined argument)
     * 4. `my_string_arg`            (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * C identifiers, NOT strings: they're in fact parts of variables
     * that the macros in `hello-tp.h` create.
     */
    lttng_ust_tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (i = 0; i < argc; i++) {
        lttng_ust_tracepoint(hello_world, my_first_tracepoint, i, argv[i]);
    }

    puts("Quitting now!");
    lttng_ust_tracepoint(hello_world, my_first_tracepoint, i * i, "i^2");
    return 0;
}
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -c hello.c
----
--

. Link the application with the tracepoint provider package, `liblttng-ust`, and `libdl`:
+
--
[role="term"]
----
$ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
----
--

Here's the whole build process:

[role="img-100"]
.Build steps of the user space tracing tutorial.
image::ust-flow.png[]

To record the events of the user application:

. Run the application with a few arguments:
+
--
[role="term"]
----
$ ./hello world and beyond
----
--
+
You see:
+
--
----
Hello, World!
Press Enter to continue...
----
--

. Start an LTTng <>:
+
--
[role="term"]
----
$ lttng-sessiond --daemonize
----
--
+
NOTE: A session daemon might already be running, for example as a service that the service manager of your distribution started.
.
List the available user space tracepoints: + -- [role="term"] ---- $ lttng list --userspace ---- -- + You see the `hello_world:my_first_tracepoint` tracepoint listed under the `./hello` process. . Create a <>: + -- [role="term"] ---- $ lttng create my-user-space-session ---- -- . Create a <> which matches user space tracepoint events named `hello_world:my_first_tracepoint`: + -- [role="term"] ---- $ lttng enable-event --userspace hello_world:my_first_tracepoint ---- -- . <>: + -- [role="term"] ---- $ lttng start ---- -- . Go back to the running `hello` application and press **Enter**. + The program executes all `lttng_ust_tracepoint()` instrumentation points, emitting events as the event rule you created in step{nbsp}5 matches them, and exits. . <> the current recording session: + -- [role="term"] ---- $ lttng destroy ---- -- + The man:lttng-destroy(1) command doesn't destroy the trace data; it only destroys the state of the recording session. + The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command implicitly (see ``<>''). You need to stop recording to make LTTng flush the remaining trace data and make the trace readable. By default, LTTng saves the traces to the +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+ directory, where +__NAME__+ is the recording session name. The env:LTTNG_HOME environment variable defaults to `$HOME` if not set. [[viewing-and-analyzing-your-traces]] === View and analyze the recorded events Once you have completed the <> and <> tutorials, you can inspect the recorded events. There are tools you can use to read LTTng traces: https://babeltrace.org/[Babeltrace{nbsp}2]:: A rich, flexible trace manipulation toolkit which includes a versatile command-line interface (man:babeltrace2(1)), a https://babeltrace.org/docs/v2.0/libbabeltrace2/[C{nbsp}library], and https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings] so that you can easily process or convert an LTTng trace with your own script. + The Babeltrace{nbsp}2 project ships with a plugin (man:babeltrace2-plugin-ctf(7)) which supports the format of the traces which LTTng produces, https://diamon.org/ctf/[CTF]. http://tracecompass.org/[Trace Compass]:: A graphical user interface for viewing and analyzing any type of logs or traces, including those of LTTng. NOTE: This section assumes that LTTng wrote the traces it recorded during the previous tutorials to their default location, in the dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME environment variable defaults to `$HOME` if not set. [[viewing-and-analyzing-your-traces-bt]] ==== Use the cmd:babeltrace2 command-line tool The simplest way to list all the recorded events of an LTTng trace is to pass its path to man:babeltrace2(1), without options: [role="term"] ---- $ babeltrace2 ~/lttng-traces/my-user-space-session* ---- The cmd:babeltrace2 command finds all traces recursively within the given path and prints all their events, sorting them chronologically. Pipe the output of cmd:babeltrace2 into a tool like man:grep(1) for further filtering: [role="term"] ---- $ babeltrace2 /tmp/my-kernel-trace | grep _switch ---- Pipe the output of cmd:babeltrace2 into a tool like man:wc(1) to count the recorded events: [role="term"] ---- $ babeltrace2 /tmp/my-kernel-trace | grep _open | wc --lines ---- [[viewing-and-analyzing-your-traces-bt-python]] ==== Use the Babeltrace{nbsp}2 Python bindings The <> is useful to isolate event records by simple matching using man:grep(1) and similar utilities. 

However, more elaborate filters, such as keeping only event records with a field value falling within a specific range, are not trivial to write using a shell. Moreover, reductions and even the most basic computations involving multiple event records are virtually impossible to implement.

Fortunately, Babeltrace{nbsp}2 ships with https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings] which make it easy to read the event records of an LTTng trace sequentially and compute the desired information.

The following script accepts an LTTng Linux kernel trace path as its first argument and prints the short names of the top five running processes on CPU{nbsp}0 during the whole trace:

[source,python]
.path:{top5proc.py}
----
import bt2
import sys
import collections


def top5proc():
    # Get the trace path from the first command-line argument
    it = bt2.TraceCollectionMessageIterator(sys.argv[1])

    # This counter dictionary will hold execution times:
    #
    #     Task command name -> Total execution time (ns)
    exec_times = collections.Counter()

    # This holds the last `sched_switch` timestamp
    last_ts = None

    for msg in it:
        # We only care about event messages
        if type(msg) is not bt2._EventMessageConst:
            continue

        # Event of the event message
        event = msg.event

        # Keep only `sched_switch` events
        if event.cls.name != 'sched_switch':
            continue

        # Keep only records of events which LTTng emitted from CPU 0
        if event.packet.context_field['cpu_id'] != 0:
            continue

        # Event timestamp (ns)
        cur_ts = msg.default_clock_snapshot.ns_from_origin

        if last_ts is None:
            # Start here
            last_ts = cur_ts

        # (Short) name of the previous task command
        prev_comm = str(event.payload_field['prev_comm'])

        # Initialize an entry in our dictionary if not done yet
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Print top 5
    for name, ns in exec_times.most_common(5):
        print('{:20}{} s'.format(name, ns / 1e9))


if __name__ == '__main__':
    top5proc()
----

Run this script:

[role="term"]
----
$ python3 top5proc.py /tmp/my-kernel-trace/kernel
----

Output example:

----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
----

Note that `swapper/0` is the ``idle'' process of CPU{nbsp}0 on Linux; since we weren't using the CPU that much when recording, its first position in the list makes sense.

[[core-concepts]]
== [[understanding-lttng]]Core concepts

From a user's perspective, the LTTng system is built on a few concepts, or objects, on which the <> operates by sending commands to the <> (through <>). Understanding how those objects relate to each other is key to mastering the toolkit.

The core concepts of LTTng are:

* <<"event-rule","Instrumentation point, event rule, and event">>
* <>
* <>
* <>
* <>
* <>

NOTE: The man:lttng-concepts(7) manual page also documents the core concepts of LTTng, with more links to other LTTng-tools manual pages.

[[event-rule]]
=== Instrumentation point, event rule, and event

An _instrumentation point_ is a point, within a piece of software, which, when executed, creates an LTTng _event_. LTTng offers various <>.

An _event rule_ is a set of conditions to match a set of events. When LTTng creates an event{nbsp}__E__, an event rule{nbsp}__ER__ is said to __match__{nbsp}__E__ when{nbsp}__E__ satisfies _all_ the conditions of{nbsp}__ER__.
This concept is similar to a https://en.wikipedia.org/wiki/Regular_expression[regular expression] which matches a set of strings. When an event rule matches an event, LTTng _emits_ the event, therefore attempting to execute one or more actions. [IMPORTANT] ==== [[event-creation-emission-opti]]The event creation and emission processes are documentation concepts to help understand the journey from an instrumentation point to the execution of actions. The actual creation of an event can be costly because LTTng needs to evaluate the arguments of the instrumentation point. In practice, LTTng implements various optimizations for the Linux kernel and user space <> to avoid actually creating an event when the tracer knows, thanks to properties which are independent from the event payload and current context, that it would never emit such an event. Those properties are: * The <>. * The instrumentation point name. * The instrumentation point log level. * For a <>: ** The status of the rule itself. ** The status of the <>. ** The activity of the <>. ** Whether or not the process for which LTTng would create the event is <>. In other words: if, for a given instrumentation point{nbsp}__IP__, the LTTng tracer knows that it would never emit an event, executing{nbsp}__IP__ represents a simple boolean variable check and, for a Linux kernel recording event rule, a few process attribute checks. ==== As of LTTng{nbsp}{revision}, there are two places where you can find an event rule: <>:: A specific type of event rule of which the action is to record the matched event as an event record. + See ``<>'' to learn more. ``Event rule matches'' <> condition (since LTTng{nbsp}2.13):: When the event rule of the trigger condition matches an event, LTTng can execute user-defined actions such as sending an LTTng notification, <>, and more. + See “<>” to learn more. For LTTng to emit an event{nbsp}__E__,{nbsp}__E__ must satisfy _all_ the basic conditions of an event rule{nbsp}__ER__, that is: * The instrumentation point from which LTTng creates{nbsp}__E__ has a specific <>. * A pattern matches the name of{nbsp}__E__ while another pattern doesn't. * The log level of the instrumentation point from which LTTng creates{nbsp}__E__ is at least as severe as some value, or is exactly some value. * The fields of the payload of{nbsp}__E__ and the current context fields satisfy a filter expression. A <> has additional, implicit conditions to satisfy. [[instrumentation-point-types]] ==== Instrumentation point types As of LTTng{nbsp}{revision}, the available instrumentation point types are, depending on the <>: Linux kernel:: LTTng tracepoint::: A statically defined point in the source code of the kernel image or of a kernel module using the <> macros. Linux kernel system call::: Entry, exit, or both of a Linux kernel system call. Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kprobe]::: A single probe dynamically placed in the compiled kernel code. + When you create such an instrumentation point, you set its memory address or symbol name. Linux user space probe::: A single probe dynamically placed at the entry of a compiled user space application/library function through the kernel. + When you create such an instrumentation point, you set: + -- With the ELF method:: Its application/library path and its symbol name. With the USDT method:: Its application/library path, its provider name, and its probe name. 
+
``USDT'' stands for _SystemTap User-level Statically Defined Tracing_, a http://dtrace.org/blogs/about/[DTrace]-style marker.
--
+
As of LTTng{nbsp}{revision}, LTTng only supports USDT probes which are _not_ reference-counted.

Linux https://www.kernel.org/doc/html/latest/trace/kprobes.html[kretprobe]:::
    Entry, exit, or both of a Linux kernel function.
+
When you create such an instrumentation point, you set the memory address or symbol name of its function.

User space::
LTTng tracepoint:::
    A statically defined point in the source code of a C/$$C++$$ application/library using the <> macros.

`java.util.logging`, Apache log4j, and Python::
Java or Python logging statement:::
    A method call on a Java or Python logger attached to an LTTng-UST handler.

See ``<>'' to learn how to list available Linux kernel, user space, and logging instrumentation points.

[[trigger]]
=== Trigger

A _trigger_ associates a condition with one or more actions. When the condition of a trigger is satisfied, LTTng attempts to execute its actions.

As of LTTng{nbsp}{revision}, the available trigger conditions and actions are:

Conditions::
+
* The consumed buffer size of a given <> becomes greater than some value.
* The buffer usage of a given <> becomes greater than some value.
* The buffer usage of a given channel becomes less than some value.
* There's an ongoing <>.
* A recording session rotation becomes completed.
* An <> an event.

Actions::
+
* <> to a user application.
* <> a given recording session.
* <> a given recording session.
* <> of a given recording session (rotate).
* <> of a given recording session.

A trigger belongs to a <>, not to a specific recording session. For a given session daemon, each Unix user has its own, private triggers. Note, however, that the `root` Unix user may, for the root session daemon:

* Add a trigger as another Unix user.
* List all the triggers, regardless of their owner.
* Remove a trigger which belongs to another Unix user.

For a given session daemon and Unix user, a trigger has a unique name.

[[tracing-session]]
=== Recording session

A _recording session_ (named ``tracing session'' prior to LTTng{nbsp}2.13) is a stateful dialogue between you and a <> for everything related to <>. Everything that you do when you control LTTng tracers to record events happens within a recording session. In particular, a recording session:

* Has its own name, unique for a given session daemon.
* Has its own set of trace files, if any.
* Has its own state of activity (started or stopped).
+
An active recording session is an implicit <> condition.
* Has its own <> (local, network streaming, snapshot, or live).
* Has its own <> to which are attached their own recording event rules.
* Has its own <>.

[role="img-100"]
.A _recording session_ contains <> that are members of <> and contain <>.
image::concepts.png[]

Those attributes and objects are completely isolated between different recording sessions. A recording session is like an https://en.wikipedia.org/wiki/Automated_teller_machine[ATM] session: the operations you do on the banking system through the ATM don't alter the data of other users of the same system. In the case of the ATM, a session lasts as long as your bank card is inside. In the case of LTTng, a recording session lasts from the man:lttng-create(1) command to the man:lttng-destroy(1) command.

[role="img-100"]
.Each Unix user has its own set of recording sessions.
image::many-sessions.png[]

A recording session belongs to a <>.
For a given session daemon, each Unix user has its own, private recording sessions. Note, however, that the `root` Unix user may operate on or destroy another user's recording session.

[[tracing-session-mode]]
==== Recording session mode

LTTng offers four recording session modes:

[[local-mode]]Local mode::
    Write the trace data to the local file system.

[[net-streaming-mode]]Network streaming mode::
    Send the trace data over the network to a listening <>.

[[snapshot-mode]]Snapshot mode::
    Only write the trace data to the local file system or send it to a listening relay daemon when LTTng <>.
+
LTTng forces all the <> that you create to be snapshot-ready.
+
LTTng takes a snapshot of such a recording session when:
+
--
* You run the man:lttng-snapshot(1) command.
* LTTng executes a `snapshot-session` <> action.
--

[[live-mode]]Live mode::
    Send the trace data over the network to a listening relay daemon for <>.
+
An LTTng live reader (for example, man:babeltrace2(1)) can connect to the same relay daemon to receive trace data while the recording session is active.

[[domain]]
=== Tracing domain

A _tracing domain_ identifies a type of LTTng tracer. A tracing domain has its own properties and features.

There are currently five available tracing domains:

* Linux kernel
* User space
* `java.util.logging` (JUL)
* log4j
* Python

You must specify a tracing domain to target a type of LTTng tracer when using some <> commands to avoid ambiguity. For example, because the Linux kernel and user space tracing domains support named tracepoints as <>, you need to specify a tracing domain when you <> because both tracing domains could have tracepoints sharing the same name.

You can create <> in the Linux kernel and user space tracing domains. The other tracing domains have a single, default channel.

[[channel]]
=== Channel and ring buffer

A _channel_ is an object which is responsible for a set of _ring buffers_. Each ring buffer is divided into multiple _sub-buffers_. When a <> matches an event, LTTng can record it to one or more sub-buffers of one or more channels.

When you <>, you set its final attributes, that is:

* Its <>.
* What to do <> for a new event record because all sub-buffers are full.
* The <> a ring buffer has.
* The <> of trace files.
* The periods of its <>, <>, and <> timers.
* For a Linux kernel channel: its output type.
+
See the opt:lttng-enable-channel(1):--output option of the man:lttng-enable-channel(1) command.
* For a user space channel: the value of its <>.

A channel is always associated with a <>. The `java.util.logging` (JUL), log4j, and Python tracing domains each have a default channel which you can't configure.

A channel owns <>.

[[channel-buffering-schemes]]
==== Buffering scheme

A channel has at least one ring buffer _per CPU_. LTTng always records an event to the ring buffer dedicated to the CPU which emits it.

The buffering scheme of a user space channel determines what has its own set of per-CPU ring buffers:

Per-user buffering::
    Allocate one set of ring buffers--one per CPU--shared by all the instrumented processes of:

If your Unix user is `root`:::
    Each Unix user.
+
--
[role="img-100"]
.Per-user buffering scheme (recording session belongs to the `root` Unix user).
image::per-user-buffering-root.png[]
--

Otherwise:::
    Your Unix user.
+
--
[role="img-100"]
.Per-user buffering scheme (recording session belongs to the `Bob` Unix user).
image::per-user-buffering.png[] -- Per-process buffering:: Allocate one set of ring buffers--one per CPU--for each instrumented process of: If your Unix user is `root`::: All Unix users. + -- [role="img-100"] .Per-process buffering scheme (recording session belongs to the `root` Unix user). image::per-process-buffering-root.png[] -- Otherwise::: Your Unix user. + -- [role="img-100"] .Per-process buffering scheme (recording session belongs to the `Bob` Unix user). image::per-process-buffering.png[] -- The per-process buffering scheme tends to consume more memory than the per-user option because systems generally have more instrumented processes than Unix users running instrumented processes. However, the per-process buffering scheme ensures that one process having a high event throughput won't fill all the shared sub-buffers of the same Unix user, only its own. The buffering scheme of a Linux kernel channel is always to allocate a single set of ring buffers for the whole system. This scheme is similar to the per-user option, but with a single, global user ``running'' the kernel. [[channel-overwrite-mode-vs-discard-mode]] ==== Event record loss mode When LTTng emits an event, LTTng can record it to a specific, available sub-buffer within the ring buffers of specific channels. When there's no space left in a sub-buffer, the tracer marks it as consumable and another, available sub-buffer starts receiving the following event records. An LTTng <> eventually consumes the marked sub-buffer, which returns to the available state. [NOTE] [role="docsvg-channel-subbuf-anim"] ==== {note-no-anim} ==== In an ideal world, sub-buffers are consumed faster than they're filled, as it's the case in the previous animation. In the real world, however, all sub-buffers can be full at some point, leaving no space to record the following events. By default, <> and <> are _non-blocking_ tracers: when there's no available sub-buffer to record an event, it's acceptable to lose event records when the alternative would be to cause substantial delays in the execution of the instrumented application. LTTng privileges performance over integrity; it aims at perturbing the instrumented application as little as possible in order to make the detection of subtle race conditions and rare interrupt cascades possible. Since LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST, supports a _blocking mode_. See the <> to learn how to use the blocking mode. When it comes to losing event records because there's no available sub-buffer, or because the blocking timeout of the channel is reached, the _event record loss mode_ of the channel determines what to do. The available event record loss modes are: [[discard-mode]]Discard mode:: Drop the newest event records until a sub-buffer becomes available. + This is the only available mode when you specify a blocking timeout. + With this mode, LTTng increments a count of lost event records when an event record is lost and saves this count to the trace. A trace reader can use the saved discarded event record count of the trace to decide whether or not to perform some analysis even if trace data is known to be missing. [[overwrite-mode]]Overwrite mode:: Clear the sub-buffer containing the oldest event records and start writing the newest event records there. + This mode is sometimes called _flight recorder mode_ because it's similar to a https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]: always keep a fixed amount of the latest data. 
It's also similar to the roll mode of an oscilloscope.
+
Since LTTng{nbsp}2.8, with this mode, LTTng writes to a given sub-buffer its sequence number within its data stream. With a <>, <>, or <> recording session, a trace reader can use such sequence numbers to report lost packets. A trace reader can use the saved discarded sub-buffer (packet) count of the trace to decide whether or not to perform some analysis even if trace data is known to be missing.
+
With this mode, LTTng doesn't write to the trace the exact number of lost event records in the lost sub-buffers.

Which mode you should choose depends on your context: do you prioritize the newest or the oldest event records in the ring buffer?

Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_ as soon as there's no space left for a new event record, whereas in discard mode, the tracer only discards the event record that doesn't fit.

There are a few ways to decrease your probability of losing event records. The ``<>'' section shows how to fine-tune the sub-buffer size and count of a channel to virtually stop losing event records, though at the cost of greater memory usage.

[[channel-subbuf-size-vs-subbuf-count]]
==== Sub-buffer size and count

A channel has one or more ring buffers for each CPU of the target system. See the ``<>'' section to learn how many ring buffers of a given channel are dedicated to each CPU depending on its buffering scheme.

Set the size of each sub-buffer the ring buffers of a channel contain and how many there are when you <>.

Note that LTTng switching the current sub-buffer of a ring buffer (marking a full one as consumable and switching to an available one for LTTng to record the next events) introduces noticeable CPU overhead. Knowing this, the following list presents a few practical situations along with how to configure the sub-buffer size and count for them:

High event throughput::
    In general, prefer large sub-buffers to lower the risk of losing event records.
+
Having larger sub-buffers also ensures a lower sub-buffer switching frequency.
+
The sub-buffer count is only meaningful if you create the channel in <>: in this case, if LTTng overwrites a sub-buffer, then the other sub-buffers are left unaltered.

Low event throughput::
    In general, prefer smaller sub-buffers since the risk of losing event records is low.
+
Because LTTng emits events less frequently, the sub-buffer switching frequency should remain low and therefore the overhead of the tracer shouldn't be a problem.

Low memory system::
    If your target system has a low memory limit, prefer fewer sub-buffers first, then smaller ones.
+
Even if the system is limited in memory, you want to keep the sub-buffers as large as possible to avoid a high sub-buffer switching frequency.

Note that LTTng uses https://diamon.org/ctf/[CTF] as its trace format, which means event record data is very compact. For example, the average LTTng kernel event record weighs about 32{nbsp}bytes. Therefore, a sub-buffer size of 1{nbsp}MiB is considered large.

The previous scenarios highlight the major trade-off between a few large sub-buffers and more, smaller sub-buffers: sub-buffer switching frequency vs. how many event records are lost in overwrite mode.
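
For example, here's a minimal sketch of creating a user space channel with eight 1{nbsp}MiB sub-buffers in overwrite mode, using the opt:lttng-enable-channel(1):--subbuf-size and opt:lttng-enable-channel(1):--num-subbuf options (the channel name `my-channel` is arbitrary):

[role="term"]
----
$ lttng enable-channel --userspace --overwrite --subbuf-size=1M --num-subbuf=8 my-channel
----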

Assuming a constant event throughput and using the overwrite mode, the two following configurations have the same ring buffer total size:

[NOTE]
[role="docsvg-channel-subbuf-size-vs-count-anim"]
====
{note-no-anim}
====

Two sub-buffers of 4{nbsp}MiB each::
    Expect a very low sub-buffer switching frequency, but if LTTng ever needs to overwrite a sub-buffer, half of the event records so far (4{nbsp}MiB) are definitely lost.

Eight sub-buffers of 1{nbsp}MiB each::
    Expect four times the tracer overhead of the configuration above, but if LTTng needs to overwrite a sub-buffer, only one eighth of the event records so far (1{nbsp}MiB) is definitely lost.

In <>, the sub-buffer count parameter is pointless: use two sub-buffers and set their size according to your requirements.

[[tracefile-rotation]]
==== Maximum trace file size and count (trace file rotation)

By default, trace files can grow as large as needed. Set the maximum size of each trace file that LTTng writes for a given channel when you <>. When the size of a trace file reaches the fixed maximum size of the channel, LTTng creates another file to contain the next event records. LTTng appends a file count to each trace file name in this case.

If you set the trace file size attribute when you create a channel, the maximum number of trace files that LTTng creates is _unlimited_ by default. To limit them, set a maximum number of trace files. When the number of trace files reaches the fixed maximum count of the channel, LTTng overwrites the oldest trace file. This mechanism is called _trace file rotation_.

[IMPORTANT]
====
Even if you don't limit the trace file count, always assume that LTTng manages all the trace files of the recording session.

In other words, there's no safe way to know if LTTng still holds a given trace file open with the trace file rotation feature.

The only way to obtain an unmanaged, self-contained LTTng trace before you <> is with the <> feature, which is available since LTTng{nbsp}2.11.
====

[[channel-timers]]
==== Timers

Each channel can have up to three optional timers:

[[channel-switch-timer]]Switch timer::
    When this timer expires, a sub-buffer switch happens: for each ring buffer of the channel, LTTng marks the current sub-buffer as consumable and _switches_ to an available one to record the next events.
+
[NOTE]
[role="docsvg-channel-switch-timer"]
====
{note-no-anim}
====
+
A switch timer is useful to ensure that LTTng consumes and commits trace data to trace files or to a distant <> periodically in case of a low event throughput.
+
Such a timer is also convenient when you use large <> to cope with a sporadic high event throughput, even if the throughput is otherwise low.
+
Set the period of the switch timer of a channel when you <> with the opt:lttng-enable-channel(1):--switch-timer option.

[[channel-read-timer]]Read timer::
    When this timer expires, LTTng checks for full, consumable sub-buffers.
+
By default, the LTTng tracers use an asynchronous message mechanism to signal a full sub-buffer so that a <> can consume it.
+
When such messages must be avoided, for example in real-time applications, use this timer instead.
+
Set the period of the read timer of a channel when you <> with the opt:lttng-enable-channel(1):--read-timer option.

[[channel-monitor-timer]]Monitor timer::
    When this timer expires, the consumer daemon samples some channel statistics to evaluate the following <> conditions:
+
--
. The consumed buffer size of a given <> becomes greater than some value.
[[event]] === Recording event rule and event record A _recording event rule_ is a specific type of <> of which the action is to serialize and record the matched event as an _event record_. Set the explicit conditions of a recording event rule when you <>. A recording event rule also has the following implicit conditions: * The recording event rule itself is enabled. + A recording event rule is enabled on creation. * The <> to which the recording event rule is attached is enabled. + A channel is enabled on creation. * The <> of the recording event rule is <> (started). + A recording session is inactive (stopped) on creation. * The process for which LTTng creates an event to match is <>. + All processes are allowed to record events on recording session creation. You always attach a recording event rule to a channel, which belongs to a recording session, when you create it. When a recording event rule{nbsp}__ER__ matches an event{nbsp}__E__, LTTng attempts to serialize and record{nbsp}__E__ to one of the available sub-buffers of the channel to which{nbsp}__ER__ is attached. When multiple matching recording event rules are attached to the same channel, LTTng attempts to serialize and record the matched event _once_. In the following example, the second recording event rule is redundant when both are enabled: [role="term"] ---- $ lttng enable-event --userspace hello:world $ lttng enable-event --userspace hello:world --loglevel=INFO ---- [role="img-100"] .Logical path from an instrumentation point to an event record. image::event-rule.png[] As of LTTng{nbsp}{revision}, you cannot remove a recording event rule: it exists as long as its recording session exists. [[plumbing]] == Components of noch:{LTTng} The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong to call LTTng a simple _tool_ since it's composed of multiple interacting components. This section describes those components, explains their respective roles, and shows how they connect to form the LTTng ecosystem. The following diagram shows how the most important components of LTTng interact with user applications, the Linux kernel, and you: [role="img-100"] .Control and trace data paths between LTTng components. image::plumbing.png[] The LTTng project integrates: LTTng-tools:: Libraries and command-line interface to control recording sessions: + * <> (man:lttng-sessiond(8)). * <> (cmd:lttng-consumerd). * <> (man:lttng-relayd(8)). * <> (`liblttng-ctl`). * <> (man:lttng(1)). * <> (man:lttng-crash(1)). LTTng-UST:: Libraries and Java/Python packages to instrument and trace user applications: + * <> (`liblttng-ust`) and its headers to instrument and trace any native user application.
* <>: ** `liblttng-ust-libc-wrapper` ** `liblttng-ust-pthread-wrapper` ** `liblttng-ust-cyg-profile` ** `liblttng-ust-cyg-profile-fast` ** `liblttng-ust-dl` * <> to instrument and trace Java applications using `java.util.logging` or Apache log4j{nbsp}1.2 logging. * <> to instrument Python applications using the standard `logging` package. LTTng-modules:: <> to instrument and trace the kernel: + * LTTng kernel tracer module. * Recording ring buffer kernel modules. * Probe kernel modules. * LTTng logger kernel module. [[lttng-cli]] === Tracing control command-line interface The _man:lttng(1) command-line tool_ is the standard user interface to control LTTng <>. The cmd:lttng tool is part of LTTng-tools. The cmd:lttng tool is linked with <> to communicate with one or more <> behind the scenes. The cmd:lttng tool has a Git-like interface: [role="term"] ---- $ lttng [GENERAL OPTIONS] <COMMAND> [COMMAND OPTIONS] ---- The ``<>'' section explores the available features of LTTng through its cmd:lttng tool. [[liblttng-ctl-lttng]] === Tracing control library [role="img-100"] .The tracing control library. image::plumbing-liblttng-ctl.png[] The _LTTng control library_, `liblttng-ctl`, is used to communicate with a <> using a C{nbsp}API that hides the underlying details of the protocol. `liblttng-ctl` is part of LTTng-tools. The <> is linked with `liblttng-ctl`. Use `liblttng-ctl` in C or $$C++$$ source code by including its ``master'' header: [source,c] ---- #include <lttng/lttng.h> ---- As of LTTng{nbsp}{revision}, the best available developer documentation for `liblttng-ctl` is its installed header files. Functions and structures are documented with header comments.
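In the meantime, here's a minimal, hedged sketch of a `liblttng-ctl` client which prints the recording sessions of your Unix user; it only assumes the `lttng_list_sessions()` and `lttng_strerror()` declarations of the installed headers, and you'd link such a program with `-llttng-ctl`:

[source,c]
----
#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions = NULL;

    /* Ask the session daemon for the list of recording sessions */
    const int count = lttng_list_sessions(&sessions);

    if (count < 0) {
        /* A negative return value is an LTTng error code */
        fprintf(stderr, "Error: %s\n", lttng_strerror(count));
        return 1;
    }

    for (int i = 0; i < count; i++) {
        printf("%s (output: %s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);
    return 0;
}
----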
[[lttng-ust]] === User space tracing library [role="img-100"] .The user space tracing library. image::plumbing-liblttng-ust.png[] The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)), is the LTTng user space tracer. `liblttng-ust` receives commands from a <>, for example to allow specific instrumentation points to emit LTTng <>, and writes event records to <> shared with a <>. `liblttng-ust` is part of LTTng-UST. `liblttng-ust` can also send asynchronous messages to the session daemon when it emits an event. This supports the ``event rule matches'' <> condition feature (see “<>”). Public C{nbsp}header files are installed beside `liblttng-ust` to instrument any <>. <>, which are regular Java and Python packages, use their own <> which is linked with `liblttng-ust`. An application or library doesn't have to initialize `liblttng-ust` manually: its constructor does the necessary tasks to register the application to a session daemon. The initialization phase also configures instrumentation points depending on the <> that you already created. [[lttng-ust-agents]] === User space tracing agents [role="img-100"] .The user space tracing agents. image::plumbing-lttng-ust-agents.png[] The _LTTng-UST Java and Python agents_ are regular Java and Python packages which add LTTng tracing capabilities to the native logging frameworks. The LTTng-UST agents are part of LTTng-UST. In the case of Java, the https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging` core logging facilities] and https://logging.apache.org/log4j/1.2/[Apache log4j{nbsp}1.2] are supported. Note that Apache Log4j{nbsp}2 isn't supported. In the case of Python, the standard https://docs.python.org/3/library/logging.html[`logging`] package is supported. Both Python{nbsp}2 and Python{nbsp}3 modules can import the LTTng-UST Python agent package. The applications using the LTTng-UST agents are in the `java.util.logging` (JUL), log4j, and Python <>. Both agents use the same mechanism to convert log statements to LTTng events. When an agent initializes, it creates a log handler that attaches to the root logger. The agent also registers to a <>. When the user application executes a log statement, the root logger passes it to the log handler of the agent. The custom log handler of the agent calls a native function in a tracepoint provider package shared library linked with <>, passing the formatted log message and other fields, like its logger name and its log level. This native function contains a user space instrumentation point, therefore tracing the log statement. The log level condition of a <> is considered when tracing a Java or a Python application, and it's compatible with the standard `java.util.logging`, log4j, and Python log levels. [[lttng-modules]] === LTTng kernel modules [role="img-100"] .The LTTng kernel modules. image::plumbing-lttng-modules.png[] The _LTTng kernel modules_ are a set of Linux kernel modules which implement the kernel tracer of the LTTng project. The LTTng kernel modules are part of LTTng-modules. The LTTng kernel modules include: * A set of _probe_ modules. + Each module attaches to a specific subsystem of the Linux kernel using its tracepoint instrumentation points. + There are also modules to attach to the entry and return points of the Linux system call functions. * _Ring buffer_ modules. + A ring buffer implementation is provided as kernel modules. The LTTng kernel tracer writes to ring buffers; a <> reads from ring buffers. * The _LTTng kernel tracer_ module. * The <> module. + The LTTng logger module implements the special path:{/proc/lttng-logger} (and path:{/dev/lttng-logger}, since LTTng{nbsp}2.11) files so that any executable can generate LTTng events by opening those files and writing to them. The LTTng kernel tracer can also send asynchronous messages to the <> when it emits an event. This supports the ``event rule matches'' <> condition feature (see “<>”). Generally, you don't have to load the LTTng kernel modules manually (using man:modprobe(8), for example): a root session daemon loads the necessary modules when starting. If you have extra probe modules, you can tell the session daemon to load them on the command line (see the opt:lttng-sessiond(8):--extra-kmod-probes option). See also <>. The LTTng kernel modules are installed in +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is the kernel release (output of `uname --kernel-release`).
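In passing, here's a hedged illustration of the LTTng logger files described above; it assumes an existing recording event rule which matches the kernel event named `lttng_logger` (the event name is our assumption here). Each such write makes the LTTng kernel tracer emit one event:

[role="term"]
----
$ echo 'Hello, LTTng logger!' > /dev/lttng-logger
----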
[[lttng-sessiond]] === Session daemon [role="img-100"] .The session daemon. image::plumbing-sessiond.png[] The _session daemon_, man:lttng-sessiond(8), is a https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which: * Manages <>. * Controls the various components (like tracers and <>) of LTTng. * Sends <> to user applications. The session daemon is part of LTTng-tools. The session daemon sends control requests to and receives control responses from: * The <>. + Any instance of the user space tracing library first registers to a session daemon. Then, the session daemon can send requests to this instance, such as: + -- ** Get the list of tracepoints. ** Share a <> so that the user space tracing library can decide whether or not a given tracepoint can emit events. Amongst the possible conditions of a recording event rule is a filter expression which `liblttng-ust` evaluates before it emits an event. ** Share <> attributes and ring buffer locations. -- + The session daemon and the user space tracing library use a Unix domain socket to communicate. * The <>. + Any instance of a user space tracing agent first registers to a session daemon. Then, the session daemon can send requests to this instance, such as: + -- ** Get the list of loggers. ** Enable or disable a specific logger. -- + The session daemon and the user space tracing agent use a TCP connection to communicate. * The <>. * The <>. + The session daemon sends requests to the consumer daemon to instruct it where to send the trace data streams, amongst other information. * The <>. The session daemon receives commands from the <>. The session daemon can receive asynchronous messages from the <> and <> tracers when they emit events. This supports the ``event rule matches'' <> condition feature (see “<>”). The root session daemon loads the appropriate <> on startup. It also spawns one or more <> as soon as you create a <>. The session daemon doesn't send or receive trace data: this is the role of the <> and <>. It does, however, generate the https://diamon.org/ctf/[CTF] metadata stream. Each Unix user can have its own session daemon instance. The recording sessions which different session daemons manage are completely independent. The root user's session daemon is the only one which is allowed to control the LTTng kernel tracer, and its spawned consumer daemon is the only one which is allowed to consume trace data from the LTTng kernel tracer. Note, however, that any Unix user which is a member of the <> is allowed to create <> in the Linux kernel <>, and therefore to use the Linux kernel LTTng tracer. The <> automatically starts a session daemon when you use its `create` command if none is currently running. You can also start the session daemon manually. [[lttng-consumerd]] === Consumer daemon [role="img-100"] .The consumer daemon. image::plumbing-consumerd.png[] The _consumer daemon_, cmd:lttng-consumerd, is a https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] which shares ring buffers with user applications or with the LTTng kernel modules to collect trace data and send it to some location (file system or to a <> over the network). The consumer daemon is part of LTTng-tools. You don't start a consumer daemon manually: a consumer daemon is always spawned by a <> as soon as you create a <>, that is, before you start recording. When you kill its owner session daemon, the consumer daemon also exits because it's the child process of the session daemon. Command-line options of man:lttng-sessiond(8) target the consumer daemon process. There are up to two running consumer daemons per Unix user, whereas only one session daemon can run per user. This is because each process can be either 32-bit or 64-bit: if the target system runs a mixture of 32-bit and 64-bit processes, it's more efficient to have separate corresponding 32-bit and 64-bit consumer daemons. The root user is an exception: it can have up to _three_ running consumer daemons: 32-bit and 64-bit instances for its user applications, and one more reserved for collecting kernel trace data. [[lttng-relayd]] === Relay daemon [role="img-100"] .The relay daemon. image::plumbing-relayd.png[] The _relay daemon_, man:lttng-relayd(8), is a https://en.wikipedia.org/wiki/Daemon_(computing)[daemon] acting as a bridge between remote session and consumer daemons, local trace files, and a remote live trace reader. The relay daemon is part of LTTng-tools.
The main purpose of the relay daemon is to implement a receiver of <>. This is useful when the target system doesn't have much file system space to write trace files locally. The relay daemon is also a server to which a <> can connect. The live trace reader sends requests to the relay daemon to receive trace data as the target system records events. The communication protocol is named _LTTng live_; it's used over TCP connections. Note that you can start the relay daemon on the target system directly. This is the setup of choice when the use case is to view/analyze events as the target system records them without the need for a remote system. [[instrumenting]] == [[using-lttng]]Instrumentation There are many examples of tracing and monitoring in our everyday life: * You have access to real-time and historical weather reports and forecasts thanks to weather stations installed around the country. * You know your heart is safe thanks to an electrocardiogram. * You make sure not to drive your car too fast and to have enough fuel to reach your destination thanks to gauges visible on your dashboard. All the previous examples have something in common: they rely on **instruments**. Without the electrodes attached to the surface of your skin, cardiac monitoring is futile. LTTng, as a tracer, is no different from those real life examples. If you're about to trace a software system or, in other words, record its history of execution, you'd better have **instrumentation points** in the subject you're tracing, that is, the actual software system. <> were developed to instrument a piece of software for LTTng tracing. The most straightforward one is to manually place static instrumentation points, called _tracepoints_, in the source code of the application. The Linux kernel <> also makes it possible to dynamically add instrumentation points. If you're only interested in tracing the Linux kernel, your instrumentation needs are probably already covered by the built-in <> of LTTng. You may also wish to have LTTng trace a user application which is already instrumented for LTTng tracing. In such cases, skip this whole section and read the topics of the ``<>'' section. Many methods are available to instrument a piece of software for LTTng tracing: * <>. * <>. * <>. * <>. * <>. * <>. [[c-application]] === [[cxx-application]]Instrument a C/$$C++$$ user application The high-level procedure to instrument a C or $$C++$$ user application with the <>, `liblttng-ust`, is: . <>. . <>. . <>. If you need quick, man:printf(3)-like instrumentation, skip those steps and use <> or <> instead. IMPORTANT: You need to <> LTTng-UST to instrument a user application with `liblttng-ust`. [[tracepoint-provider]] ==== Create the source files of a tracepoint provider package A _tracepoint provider_ is a set of compiled functions which provide **tracepoints** to an application, the type of instrumentation point which LTTng-UST provides. Those functions can make LTTng emit events with user-defined fields and serialize those events as event records to one or more LTTng-UST <> sub-buffers. The `lttng_ust_tracepoint()` macro, which you <>, calls those functions. A _tracepoint provider package_ is an object file (`.o`) or a shared library (`.so`) which contains one or more tracepoint providers. Its source files are: * One or more <> (`.h`). * A <> (`.c`). A tracepoint provider package is dynamically linked with `liblttng-ust`, the LTTng user space tracer, at run time.
[role="img-100"] .User application linked with `liblttng-ust` and containing a tracepoint provider. image::ust-app.png[] NOTE: If you need quick, man:printf(3)-like instrumentation, skip creating and using a tracepoint provider and use <> or <> instead. [[tpp-header]] ===== Create a tracepoint provider header file template A _tracepoint provider header file_ contains the tracepoint definitions of a tracepoint provider. To create a tracepoint provider header file: . Start from this template: + -- [source,c] .Tracepoint provider header file template (`.h` file extension). ---- #undef LTTNG_UST_TRACEPOINT_PROVIDER #define LTTNG_UST_TRACEPOINT_PROVIDER provider_name #undef LTTNG_UST_TRACEPOINT_INCLUDE #define LTTNG_UST_TRACEPOINT_INCLUDE "./tp.h" #if !defined(_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ) #define _TP_H #include <lttng/tracepoint.h> /* * Use LTTNG_UST_TRACEPOINT_EVENT(), LTTNG_UST_TRACEPOINT_EVENT_CLASS(), * LTTNG_UST_TRACEPOINT_EVENT_INSTANCE(), and * LTTNG_UST_TRACEPOINT_LOGLEVEL() here. */ #endif /* _TP_H */ #include <lttng/tracepoint-event.h> ---- -- + Replace: + * +__provider_name__+ with the name of your tracepoint provider. * `"./tp.h"` with the name of your tracepoint provider header file. . Below the `#include <lttng/tracepoint.h>` line, put your <>. Your tracepoint provider name must be unique amongst all the possible tracepoint provider names used on the same target system. We suggest including the name of your project or company in the name, for example, `org_lttng_my_project_tpp`. [[defining-tracepoints]] ===== Create a tracepoint definition A _tracepoint definition_ defines, for a given tracepoint: * Its **input arguments**. + They're the macro parameters that the `lttng_ust_tracepoint()` macro accepts for this particular tracepoint in the source code of the user application. * Its **output event fields**. + They're the sources of event fields that form the payload of any event that the execution of the `lttng_ust_tracepoint()` macro emits for this particular tracepoint. Create a tracepoint definition with the `LTTNG_UST_TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>` line in the <>. The syntax of the `LTTNG_UST_TRACEPOINT_EVENT()` macro is: [source,c] .`LTTNG_UST_TRACEPOINT_EVENT()` macro syntax. ---- LTTNG_UST_TRACEPOINT_EVENT( /* Tracepoint provider name */ provider_name, /* Tracepoint name */ tracepoint_name, /* Input arguments */ LTTNG_UST_TP_ARGS( arguments ), /* Output event fields */ LTTNG_UST_TP_FIELDS( fields ) ) ---- Replace: * +__provider_name__+ with your tracepoint provider name. * +__tracepoint_name__+ with your tracepoint name. * +__arguments__+ with the <>. * +__fields__+ with the <> definitions. The full name of this tracepoint is `provider_name:tracepoint_name`. [IMPORTANT] .Event name length limitation ==== The concatenation of the tracepoint provider name and the tracepoint name must not exceed **254{nbsp}characters**. If it does, the instrumented application compiles and runs, but LTTng throws multiple warnings and you could experience serious issues. ==== [[tpp-def-input-args]]The syntax of the `LTTNG_UST_TP_ARGS()` macro is: [source,c] .`LTTNG_UST_TP_ARGS()` macro syntax. ---- LTTNG_UST_TP_ARGS( type, arg_name ) ---- Replace: * +__type__+ with the C{nbsp}type of the argument. * +__arg_name__+ with the argument name. You can repeat +__type__+ and +__arg_name__+ up to 10{nbsp}times to have more than one argument. .`LTTNG_UST_TP_ARGS()` usage with three arguments.
==== [source,c] ---- LTTNG_UST_TP_ARGS( int, count, float, ratio, const char*, query ) ---- ==== The `LTTNG_UST_TP_ARGS()` and `LTTNG_UST_TP_ARGS(void)` forms are valid to create a tracepoint definition with no input arguments. [[tpp-def-output-fields]]The `LTTNG_UST_TP_FIELDS()` macro contains a list of `lttng_ust_field_*()` macros. Each `lttng_ust_field_*()` macro defines one event field. See man:lttng-ust(3) for a complete description of the available `lttng_ust_field_*()` macros. A `lttng_ust_field_*()` macro specifies the type, size, and byte order of one event field. Each `lttng_ust_field_*()` macro takes an _argument expression_ parameter. This is a C{nbsp}expression that the tracer evaluates at the `lttng_ust_tracepoint()` macro site in the source code of the application. This expression provides the source of data of a field. The argument expression can include input argument names listed in the `LTTNG_UST_TP_ARGS()` macro. Each `lttng_ust_field_*()` macro also takes a _field name_ parameter. Field names must be unique within a given tracepoint definition. Here's a complete tracepoint definition example: .Tracepoint definition. ==== The following tracepoint definition defines a tracepoint which takes three input arguments and has four output event fields. [source,c] ---- #include "my-custom-structure.h" LTTNG_UST_TRACEPOINT_EVENT( my_provider, my_tracepoint, LTTNG_UST_TP_ARGS( const struct my_custom_structure *, my_custom_structure, float, ratio, const char *, query ), LTTNG_UST_TP_FIELDS( lttng_ust_field_string(query_field, query) lttng_ust_field_float(double, ratio_field, ratio) lttng_ust_field_integer(int, recv_size, my_custom_structure->recv_size) lttng_ust_field_integer(int, send_size, my_custom_structure->send_size) ) ) ---- Refer to this tracepoint definition with the `lttng_ust_tracepoint()` macro in the source code of your application like this: [source,c] ---- lttng_ust_tracepoint(my_provider, my_tracepoint, my_structure, some_ratio, the_query); ---- ==== NOTE: The LTTng-UST tracer only evaluates the arguments of a tracepoint at run time when such a tracepoint _could_ emit an event. See <> to learn more. [[using-tracepoint-classes]] ===== Use a tracepoint class A _tracepoint class_ is a class of tracepoints which share the same output event field definitions. A _tracepoint instance_ is one instance of such a defined tracepoint class, with its own tracepoint name. The <> is actually a shorthand which defines both a tracepoint class and a tracepoint instance at the same time. When you build a tracepoint provider package, the C or $$C++$$ compiler creates one serialization function for each **tracepoint class**. A serialization function is responsible for serializing the event fields of a tracepoint to a sub-buffer when recording. For various performance reasons, when your situation requires multiple tracepoint definitions with different names, but with the same event fields, we recommend that you manually create a tracepoint class and instantiate as many tracepoint instances as needed. One positive effect of such a design, amongst other advantages, is that all tracepoint instances of the same tracepoint class reuse the same serialization function, thus reducing https://en.wikipedia.org/wiki/Cache_pollution[cache pollution]. .Use a tracepoint class and tracepoint instances. 
==== Consider the following three tracepoint definitions: [source,c] ---- LTTNG_UST_TRACEPOINT_EVENT( my_app, get_account, LTTNG_UST_TP_ARGS( int, userid, size_t, len ), LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, userid, userid) lttng_ust_field_integer(size_t, len, len) ) ) LTTNG_UST_TRACEPOINT_EVENT( my_app, get_settings, LTTNG_UST_TP_ARGS( int, userid, size_t, len ), LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, userid, userid) lttng_ust_field_integer(size_t, len, len) ) ) LTTNG_UST_TRACEPOINT_EVENT( my_app, get_transaction, LTTNG_UST_TP_ARGS( int, userid, size_t, len ), LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, userid, userid) lttng_ust_field_integer(size_t, len, len) ) ) ---- In this case, we create three tracepoint classes, with one implicit tracepoint instance for each of them: `get_account`, `get_settings`, and `get_transaction`. However, they all share the same event field names and types. Hence three identical, yet independent serialization functions are created when you build the tracepoint provider package. A better design choice is to define a single tracepoint class and three tracepoint instances: [source,c] ---- /* The tracepoint class */ LTTNG_UST_TRACEPOINT_EVENT_CLASS( /* Tracepoint class provider name */ my_app, /* Tracepoint class name */ my_class, /* Input arguments */ LTTNG_UST_TP_ARGS( int, userid, size_t, len ), /* Output event fields */ LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, userid, userid) lttng_ust_field_integer(size_t, len, len) ) ) /* The tracepoint instances */ LTTNG_UST_TRACEPOINT_EVENT_INSTANCE( /* Tracepoint class provider name */ my_app, /* Tracepoint class name */ my_class, /* Instance provider name */ my_app, /* Tracepoint name */ get_account, /* Input arguments */ LTTNG_UST_TP_ARGS( int, userid, size_t, len ) ) LTTNG_UST_TRACEPOINT_EVENT_INSTANCE( my_app, my_class, get_settings, LTTNG_UST_TP_ARGS( int, userid, size_t, len ) ) LTTNG_UST_TRACEPOINT_EVENT_INSTANCE( my_app, my_class, get_transaction, LTTNG_UST_TP_ARGS( int, userid, size_t, len ) ) ---- ==== The tracepoint class and instance provider names must be the same if the `LTTNG_UST_TRACEPOINT_EVENT_CLASS()` and `LTTNG_UST_TRACEPOINT_EVENT_INSTANCE()` expansions are part of the same translation unit. See man:lttng-ust(3) to learn more. [[assigning-log-levels]] ===== Assign a log level to a tracepoint definition Assign a _log level_ to a <> with the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro. Assigning different levels of severity to tracepoint definitions can be useful: when you <>, you can target tracepoints having a log level at least as severe as a specific value. The concept of LTTng-UST log levels is similar to the levels found in typical logging frameworks: * In a logging framework, the log level is given by the function or method name you use at the log statement site: `debug()`, `info()`, `warn()`, `error()`, and so on. * In LTTng-UST, you statically assign the log level to a tracepoint definition; any `lttng_ust_tracepoint()` macro invocation which refers to this definition has this log level. You must use `LTTNG_UST_TRACEPOINT_LOGLEVEL()` _after_ the <> or <> macro for a given tracepoint. The syntax of the `LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro is: [source,c] .`LTTNG_UST_TRACEPOINT_LOGLEVEL()` macro syntax. ---- LTTNG_UST_TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level) ---- Replace: * +__provider_name__+ with the tracepoint provider name. * +__tracepoint_name__+ with the tracepoint name. 
* +__log_level__+ with the log level to assign to the tracepoint definition named +__tracepoint_name__+ in the +__provider_name__+ tracepoint provider. + See man:lttng-ust(3) for a list of available log level names. .Assign the `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT` log level to a tracepoint definition. ==== [source,c] ---- /* Tracepoint definition */ LTTNG_UST_TRACEPOINT_EVENT( my_app, get_transaction, LTTNG_UST_TP_ARGS( int, userid, size_t, len ), LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, userid, userid) lttng_ust_field_integer(size_t, len, len) ) ) /* Log level assignment */ LTTNG_UST_TRACEPOINT_LOGLEVEL(my_app, get_transaction, LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_UNIT) ---- ==== [[tpp-source]] ===== Create a tracepoint provider package source file A _tracepoint provider package source file_ is a C source file which includes a <> to expand its macros into event serialization and other functions. Use the following tracepoint provider package source file template: [source,c] .Tracepoint provider package source file template. ---- #define LTTNG_UST_TRACEPOINT_CREATE_PROBES #include "tp.h" ---- Replace `tp.h` with the name of your <>. You may also include more than one tracepoint provider header file here to create a tracepoint provider package holding more than one tracepoint provider.
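For example, here's a sketch of a package source file which expands the probes of two hypothetical tracepoint provider header files, path:{tp-net.h} and path:{tp-disk.h}:

[source,c]
----
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES

/* Hypothetical tracepoint provider header files */
#include "tp-net.h"
#include "tp-disk.h"
----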
[[probing-the-application-source-code]] ==== Add tracepoints to the source code of an application Once you <>, use the `lttng_ust_tracepoint()` macro in the source code of your application to insert the tracepoints that this header <>. The `lttng_ust_tracepoint()` macro takes at least two parameters: the tracepoint provider name and the tracepoint name. The corresponding tracepoint definition defines the other parameters. .`lttng_ust_tracepoint()` usage. ==== The following <> defines a tracepoint which takes two input arguments and has two output event fields. [source,c] .Tracepoint provider header file. ---- #include "my-custom-structure.h" LTTNG_UST_TRACEPOINT_EVENT( my_provider, my_tracepoint, LTTNG_UST_TP_ARGS( int, argc, const char *, cmd_name ), LTTNG_UST_TP_FIELDS( lttng_ust_field_string(cmd_name, cmd_name) lttng_ust_field_integer(int, number_of_args, argc) ) ) ---- Refer to this tracepoint definition with the `lttng_ust_tracepoint()` macro in the source code of your application like this: [source,c] .Application source file. ---- #include "tp.h" int main(int argc, char* argv[]) { lttng_ust_tracepoint(my_provider, my_tracepoint, argc, argv[0]); return 0; } ---- Note how the source code of the application includes the tracepoint provider header file containing the tracepoint definitions to use, path:{tp.h}. ==== .`lttng_ust_tracepoint()` usage with a complex tracepoint definition. ==== Consider this complex tracepoint definition, where multiple event fields refer to the same input arguments in their argument expression parameter: [source,c] .Tracepoint provider header file. ---- /* For `struct stat` */ #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> LTTNG_UST_TRACEPOINT_EVENT( my_provider, my_tracepoint, LTTNG_UST_TP_ARGS( int, my_int_arg, char *, my_str_arg, struct stat *, st ), LTTNG_UST_TP_FIELDS( lttng_ust_field_integer(int, my_constant_field, 23 + 17) lttng_ust_field_integer(int, my_int_arg_field, my_int_arg) lttng_ust_field_integer(int, my_int_arg_field2, my_int_arg * my_int_arg) lttng_ust_field_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] + my_str_arg[2] + my_str_arg[3]) lttng_ust_field_string(my_str_arg_field, my_str_arg) lttng_ust_field_integer_hex(off_t, size_field, st->st_size) lttng_ust_field_float(double, size_dbl_field, (double) st->st_size) lttng_ust_field_sequence_text(char, half_my_str_arg_field, my_str_arg, size_t, strlen(my_str_arg) / 2) ) ) ---- Refer to this tracepoint definition with the `lttng_ust_tracepoint()` macro in the source code of your application like this: [source,c] .Application source file. ---- #define LTTNG_UST_TRACEPOINT_DEFINE #include "tp.h" int main(void) { struct stat s; stat("/etc/fstab", &s); lttng_ust_tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s); return 0; } ---- If you look at the event record that LTTng writes when recording this program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes, it should look like this: .Event record fields |==== |Field name |Field value |`my_constant_field` |40 |`my_int_arg_field` |23 |`my_int_arg_field2` |529 |`sum4_field` |389 |`my_str_arg_field` |`Hello, World!` |`size_field` |0x12d |`size_dbl_field` |301.0 |`half_my_str_arg_field` |`Hello,` |==== ==== Sometimes, the arguments you pass to `lttng_ust_tracepoint()` are expensive to evaluate--they use the call stack, for example. To avoid this computation when LTTng wouldn't emit any event anyway, use the `lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros. The syntax of the `lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros is: [source,c] .`lttng_ust_tracepoint_enabled()` and `lttng_ust_do_tracepoint()` macros syntax. ---- lttng_ust_tracepoint_enabled(provider_name, tracepoint_name) lttng_ust_do_tracepoint(provider_name, tracepoint_name, ...) ---- Replace: * +__provider_name__+ with the tracepoint provider name. * +__tracepoint_name__+ with the tracepoint name. `lttng_ust_tracepoint_enabled()` returns a non-zero value if executing the tracepoint named `tracepoint_name` from the provider named `provider_name` _could_ make LTTng emit an event, depending on the payload of said event. `lttng_ust_do_tracepoint()` is like `lttng_ust_tracepoint()`, except that it doesn't check what `lttng_ust_tracepoint_enabled()` checks. Using `lttng_ust_tracepoint()` with `lttng_ust_tracepoint_enabled()` is dangerous because `lttng_ust_tracepoint()` also contains the `lttng_ust_tracepoint_enabled()` check; therefore, a race condition is possible in this situation: [source,c] .Possible race condition when using `lttng_ust_tracepoint_enabled()` with `lttng_ust_tracepoint()`. ---- if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) { stuff = prepare_stuff(); } lttng_ust_tracepoint(my_provider, my_tracepoint, stuff); ---- If `lttng_ust_tracepoint_enabled()` is false, but would be true after the conditional block, then `stuff` isn't prepared: the emitted event will either contain wrong data, or the whole application could crash (with a segmentation fault, for example). NOTE: Neither `lttng_ust_tracepoint_enabled()` nor `lttng_ust_do_tracepoint()` has an `STAP_PROBEV()` call. If you need it, you must emit this call yourself.
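To avoid both the expensive argument evaluation and the race condition shown above, one pattern (a sketch reusing the hypothetical names of the previous examples) is to call `lttng_ust_do_tracepoint()` inside the `lttng_ust_tracepoint_enabled()` conditional block:

[source,c]
----
if (lttng_ust_tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Only evaluated when the tracepoint could make LTTng emit an event */
    stuff = prepare_stuff();

    /* lttng_ust_do_tracepoint() performs no second enabled check: no race */
    lttng_ust_do_tracepoint(my_provider, my_tracepoint, stuff);
}
----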
[[building-tracepoint-providers-and-user-application]] ==== Build and link a tracepoint provider package and an application Once you have one or more <> and a <>, create the tracepoint provider package by compiling its source file. From here, multiple build and run scenarios are possible. The following table shows common application and library configurations along with the required command lines to achieve them. In the following diagrams, we use the following file names: `app`:: Executable application. `app.o`:: Application object file. `tpp.o`:: Tracepoint provider package object file. `tpp.a`:: Tracepoint provider package archive file. `libtpp.so`:: Tracepoint provider package shared object file. `emon.o`:: User library object file. `libemon.so`:: User library shared object file. We use the following symbols in the diagrams of the table below: [role="img-100"] .Symbols used in the build scenario diagrams. image::ust-sit-symbols.png[] We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment variable in the following instructions. [role="growable ust-scenarios",cols="asciidoc,asciidoc"] .Common tracepoint provider package scenarios. |==== |Scenario |Instructions | The instrumented application is statically linked with the tracepoint provider package object. image::ust-sit+app-linked-with-tp-o+app-instrumented.png[] | include::../common/ust-sit-step-tp-o.txt[] To build the instrumented application: . In path:{app.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o tpp.o -llttng-ust -ldl ---- -- To run the instrumented application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The instrumented application is statically linked with the tracepoint provider package archive file. image::ust-sit+app-linked-with-tp-a+app-instrumented.png[] | To create the tracepoint provider package archive file: . Compile the <>: + -- [role="term"] ---- $ gcc -I. -c tpp.c ---- -- . Create the tracepoint provider package archive file: + -- [role="term"] ---- $ ar rcs tpp.a tpp.o ---- -- To build the instrumented application: . In path:{app.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o tpp.a -llttng-ust -ldl ---- -- To run the instrumented application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The instrumented application is linked with the tracepoint provider package shared object. image::ust-sit+app-linked-with-tp-so+app-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented application: . In path:{app.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -ldl -L. -ltpp ---- -- To run the instrumented application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The tracepoint provider package shared object is preloaded before the instrumented application starts.
image::ust-sit+tp-so-preloaded+app-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented application: . In path:{app.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -ldl ---- -- To run the instrumented application with tracing support: * Preload the tracepoint provider package shared object and start the application: + -- [role="term"] ---- $ LD_PRELOAD=./libtpp.so ./app ---- -- To run the instrumented application without tracing support: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The instrumented application dynamically loads the tracepoint provider package shared object. image::ust-sit+app-dlopens-tp-so+app-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented application: . In path:{app.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -ldl ---- -- To run the instrumented application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application is linked with the instrumented user library. The instrumented user library is statically linked with the tracepoint provider package object file. image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[] | include::../common/ust-sit-step-tp-o-fpic.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -L. -lemon ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application is linked with the instrumented user library. The instrumented user library is linked with the tracepoint provider package shared object. image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -L. 
-lemon ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The tracepoint provider package shared object is preloaded before the application starts. The application is linked with the instrumented user library. image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -L. -lemon ---- -- To run the application with tracing support: * Preload the tracepoint provider package shared object and start the application: + -- [role="term"] ---- $ LD_PRELOAD=./libtpp.so ./app ---- -- To run the application without tracing support: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application is linked with the instrumented user library. The instrumented user library dynamically loads the tracepoint provider package shared object. image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -L. -lemon ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application dynamically loads the instrumented user library. The instrumented user library is linked with the tracepoint provider package shared object. image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -ldl -L. -lemon ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application dynamically loads the instrumented user library. The instrumented user library dynamically loads the tracepoint provider package shared object. 
image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -ldl -L. -lemon ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The tracepoint provider package shared object is preloaded before the application starts. The application dynamically loads the instrumented user library. image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[] | include::../common/ust-sit-step-tp-so.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following lines: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE #define LTTNG_UST_TRACEPOINT_PROBE_DYNAMIC_LINKAGE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o -ldl ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o -L. -lemon ---- -- To run the application with tracing support: * Preload the tracepoint provider package shared object and start the application: + -- [role="term"] ---- $ LD_PRELOAD=./libtpp.so ./app ---- -- To run the application without tracing support: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application is statically linked with the tracepoint provider package object file. The application is linked with the instrumented user library. image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[] | include::../common/ust-sit-step-tp-o.txt[] To build the instrumented user library: . In path:{emon.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o ---- -- To build the application: . Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon ---- -- To run the instrumented application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- | The application is statically linked with the tracepoint provider package object file. The application dynamically loads the instrumented user library. image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[] | include::../common/ust-sit-step-tp-o.txt[] To build the application: . In path:{app.c}, before including path:{tpp.h}, add the following line: + -- [source,c] ---- #define LTTNG_UST_TRACEPOINT_DEFINE ---- -- . 
Compile the application source file: + -- [role="term"] ---- $ gcc -c app.c ---- -- . Build the application: + -- [role="term"] ---- $ gcc -Wl,--export-dynamic -o app app.o tpp.o \ -llttng-ust -ldl ---- -- + The `--export-dynamic` option passed to the linker is necessary for the dynamically loaded library to ``see'' the tracepoint symbols defined in the application. To build the instrumented user library: . Compile the user library source file: + -- [role="term"] ---- $ gcc -I. -fpic -c emon.c ---- -- . Build the user library shared object: + -- [role="term"] ---- $ gcc -shared -o libemon.so emon.o ---- -- To run the application: * Start the application: + -- [role="term"] ---- $ ./app ---- -- |==== [[using-lttng-ust-with-daemons]] ===== Use noch:{LTTng-UST} with daemons If your instrumented application calls man:fork(2), man:clone(2), or BSD's man:rfork(2), without a following man:exec(3)-family system call, you must preload the path:{liblttng-ust-fork.so} shared object when you start the application. [role="term"] ---- $ LD_PRELOAD=liblttng-ust-fork.so ./my-app ---- If your tracepoint provider package is a shared library which you also preload, you must put both shared objects in env:LD_PRELOAD: [role="term"] ---- $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app ---- [role="since-2.9"] [[liblttng-ust-fd]] ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them If your instrumented application closes one or more file descriptors which it did not open itself, you must preload the path:{liblttng-ust-fd.so} shared object when you start the application: [role="term"] ---- $ LD_PRELOAD=liblttng-ust-fd.so ./my-app ---- Typical use cases include closing all the file descriptors after man:fork(2) or man:rfork(2) and buggy applications doing ``double closes''. [[lttng-ust-pkg-config]] ===== Use noch:{pkg-config} On some distributions, LTTng-UST ships with a https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config] metadata file. If this is your case, then use cmd:pkg-config to build an application on the command line: [role="term"] ---- $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust) ---- [[instrumenting-32-bit-app-on-64-bit-system]] ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system In order to trace a 32-bit application running on a 64-bit system, LTTng must use a dedicated 32-bit <>. The following steps show how to build and install a 32-bit consumer daemon, which is _not_ part of the default 64-bit LTTng build, how to build and install the 32-bit LTTng-UST libraries, and how to build and link an instrumented 32-bit application in that context. To build a 32-bit instrumented application for a 64-bit target system, assuming you have a fresh target system with no installed Userspace RCU or LTTng packages: . Download, build, and install a 32-bit version of Userspace RCU: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/urcu/userspace-rcu-latest-0.13.tar.bz2 && tar -xf userspace-rcu-latest-0.13.tar.bz2 && cd userspace-rcu-0.13.* && ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 && make && sudo make install && sudo ldconfig ---- -- . 
Using the package manager of your distribution, or from source, install the 32-bit versions of the following dependencies of LTTng-tools and LTTng-UST: + -- * https://sourceforge.net/projects/libuuid/[libuuid] * https://directory.fsf.org/wiki/Popt[popt] * https://www.xmlsoft.org/[libxml2] * **Optional**: https://github.com/numactl/numactl[numactl] -- . Download, build, and install a 32-bit version of the latest LTTng-UST{nbsp}{revision}: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-ust/lttng-ust-latest-2.13.tar.bz2 && tar -xf lttng-ust-latest-2.13.tar.bz2 && cd lttng-ust-2.13.* && ./configure --libdir=/usr/local/lib32 \ CFLAGS=-m32 CXXFLAGS=-m32 \ LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' && make && sudo make install && sudo ldconfig ---- -- + Add `--disable-numa` to `./configure` if you don't have https://github.com/numactl/numactl[numactl]. + [NOTE] ==== Depending on your distribution, 32-bit libraries could be installed at a different location than `/usr/lib32`. For example, Debian is known to install some 32-bit libraries in `/usr/lib/i386-linux-gnu`. In this case, make sure to set `LDFLAGS` to all the relevant 32-bit library paths, for example: [role="term"] ---- $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32' ---- ==== . Download the latest LTTng-tools{nbsp}{revision}, build, and install the 32-bit consumer daemon: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 && tar -xf lttng-tools-latest-2.13.tar.bz2 && cd lttng-tools-2.13.* && ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \ LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \ --disable-bin-lttng --disable-bin-lttng-crash \ --disable-bin-lttng-relayd --disable-bin-lttng-sessiond && make && cd src/bin/lttng-consumerd && sudo make install && sudo ldconfig ---- -- . From your distribution or from source, <> the 64-bit versions of LTTng-UST and Userspace RCU. . Download, build, and install the 64-bit version of the latest LTTng-tools{nbsp}{revision}: + -- [role="term"] ---- $ cd $(mktemp -d) && wget https://lttng.org/files/lttng-tools/lttng-tools-latest-2.13.tar.bz2 && tar -xf lttng-tools-latest-2.13.tar.bz2 && cd lttng-tools-2.13.* && ./configure --with-consumerd32-libdir=/usr/local/lib32 \ --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd && make && sudo make install && sudo ldconfig ---- -- . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1) when linking your 32-bit application: + ---- -m32 -L/usr/lib32 -L/usr/local/lib32 \ -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 ---- + For example, let's rebuild the quick start example in ``<>'' as an instrumented 32-bit application: + -- [role="term"] ---- $ gcc -m32 -c -I. hello-tp.c $ gcc -m32 -c hello.c $ gcc -m32 -o hello hello.o hello-tp.o \ -L/usr/lib32 -L/usr/local/lib32 \ -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \ -llttng-ust -ldl ---- -- No special action is required to execute the 32-bit application and for LTTng to trace it: use the command-line man:lttng(1) tool as usual. [role="since-2.5"] [[tracef]] ==== Use `lttng_ust_tracef()` man:lttng_ust_tracef(3) is a small LTTng-UST API designed for quick, man:printf(3)-like instrumentation without the burden of <> and <> a tracepoint provider package. To use `lttng_ust_tracef()` in your application: . In the C or $$C++$$ source files where you need to use `lttng_ust_tracef()`, include `<lttng/tracef.h>`: + -- [source,c] ---- #include <lttng/tracef.h> ---- -- .
In the source code of the application, use `lttng_ust_tracef()` like you would use man:printf(3): + -- [source,c] ---- /* ... */ lttng_ust_tracef("my message: %d (%s)", my_integer, my_string); /* ... */ ---- -- . Link your application with `liblttng-ust`: + -- [role="term"] ---- $ gcc -o app app.c -llttng-ust ---- -- To record the events that `lttng_ust_tracef()` calls emit: * <> which matches user space events named `lttng_ust_tracef:*`: + -- [role="term"] ---- $ lttng enable-event --userspace 'lttng_ust_tracef:*' ---- -- [IMPORTANT] .Limitations of `lttng_ust_tracef()` ==== The `lttng_ust_tracef()` utility function was developed to make user space tracing super simple, albeit with notable disadvantages compared to <>: * All the created events have the same tracepoint provider and tracepoint names, respectively `lttng_ust_tracef` and `event`. * There's no static type checking. * The only event record field you actually get, named `msg`, is a string potentially containing the values you passed to `lttng_ust_tracef()` using your own format string. This also means that you can't filter events with a custom expression at run time because there are no isolated fields. * Since `lttng_ust_tracef()` uses the man:vasprintf(3) function of the C{nbsp}standard library behind the scenes to format the strings at run time, its expected performance is lower than with user-defined tracepoints, which don't require a conversion to a string. Taking this into consideration, `lttng_ust_tracef()` is useful for some quick prototyping and debugging, but you shouldn't consider it for any permanent and serious applicative instrumentation. ==== [role="since-2.7"] [[tracelog]] ==== Use `lttng_ust_tracelog()` The man:lttng_ust_tracelog(3) API is very similar to <>, with the difference that it accepts an additional log level parameter. The goal of `lttng_ust_tracelog()` is to ease the migration from logging to tracing. To use `lttng_ust_tracelog()` in your application: . In the C or $$C++$$ source files where you need to use `lttng_ust_tracelog()`, include `<lttng/tracelog.h>`: + -- [source,c] ---- #include <lttng/tracelog.h> ---- -- . In the source code of the application, use `lttng_ust_tracelog()` like you would use man:printf(3), except for the first parameter which is the log level: + -- [source,c] ---- /* ... */ lttng_ust_tracelog(LTTNG_UST_TRACEPOINT_LOGLEVEL_WARNING, "my message: %d (%s)", my_integer, my_string); /* ... */ ---- -- + See man:lttng-ust(3) for a list of available log level names. . Link your application with `liblttng-ust`: + -- [role="term"] ---- $ gcc -o app app.c -llttng-ust ---- -- To record the events that `lttng_ust_tracelog()` calls emit with a log level _at least as severe as_ a specific log level: * <> which matches user space tracepoint events named `lttng_ust_tracelog:*` and with some minimum level of severity: + -- [role="term"] ---- $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \ --loglevel=WARNING ---- -- To record the events that `lttng_ust_tracelog()` calls emit with a _specific log level_: * Create a recording event rule which matches tracepoint events named `lttng_ust_tracelog:*` and with a specific log level: + -- [role="term"] ---- $ lttng enable-event --userspace 'lttng_ust_tracelog:*' \ --loglevel-only=INFO ---- -- [[prebuilt-ust-helpers]] === Load a prebuilt user space tracing helper The LTTng-UST package provides a few helpers in the form of preloadable shared objects which automatically instrument system functions and calls. The helper shared objects are normally found in dir:{/usr/lib}.
If you built LTTng-UST <>, they're probably located in dir:{/usr/local/lib}.

The installed user space tracing helpers in LTTng-UST{nbsp}{revision} are:

path:{liblttng-ust-libc-wrapper.so}::
path:{liblttng-ust-pthread-wrapper.so}::
    <>.

path:{liblttng-ust-cyg-profile.so}::
path:{liblttng-ust-cyg-profile-fast.so}::
    <>.

path:{liblttng-ust-dl.so}::
    <>.

To use a user space tracing helper with any user application:

* Preload the helper shared object when you start the application:
+
--
[role="term"]
----
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
----
--
+
You can preload more than one helper:
+
--
[role="term"]
----
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
----
--

[role="since-2.3"]
[[liblttng-ust-libc-pthread-wrapper]]
==== Instrument C standard library memory and POSIX threads functions

The path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so} helpers add instrumentation to some C standard library and POSIX threads functions.

[role="growable"]
.Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
|====
|TP provider name |TP name |Instrumented function

.6+|`lttng_ust_libc`
|`malloc`
|man:malloc(3)

|`calloc`
|man:calloc(3)

|`realloc`
|man:realloc(3)

|`free`
|man:free(3)

|`memalign`
|man:memalign(3)

|`posix_memalign`
|man:posix_memalign(3)
|====

[role="growable"]
.Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
|====
|TP provider name |TP name |Instrumented function

.4+|`lttng_ust_pthread`
|`pthread_mutex_lock_req`
|man:pthread_mutex_lock(3p) (request time)

|`pthread_mutex_lock_acq`
|man:pthread_mutex_lock(3p) (acquire time)

|`pthread_mutex_trylock`
|man:pthread_mutex_trylock(3p)

|`pthread_mutex_unlock`
|man:pthread_mutex_unlock(3p)
|====

When you preload the shared object, it replaces the functions listed in the previous tables by wrappers which contain tracepoints and call the replaced functions.

[[liblttng-ust-cyg-profile]]
==== Instrument function entry and exit

The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation to the entry and exit points of functions.

man:gcc(1) and man:clang(1) have an option named https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`] which generates instrumentation calls for entry and exit to functions. The LTTng-UST function tracing helpers, path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature to add tracepoints to the two generated functions (which contain `cyg_profile` in their names, hence the names of the helpers).

To use an LTTng-UST function tracing helper, the source files to instrument must be built using the `-finstrument-functions` compiler flag (see the sketch after the following list).

There are two versions of the LTTng-UST function tracing helper:

* **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that you should only use when it can be _guaranteed_ that the complete event stream is recorded without any lost event record. Any kind of duplicate information is left out.
+
Assuming no event record is lost, having only the function addresses on entry is enough to create a call graph, since an event record always contains the ID of the CPU that generated it.
+
Use a tool like man:addr2line(1) to convert function addresses back to source file names and line numbers.

* **path:{liblttng-ust-cyg-profile.so}** is a more robust variant which also works in use cases where event records might get discarded or not recorded from application startup. In these cases, the trace analyzer needs more information to be able to reconstruct the program flow.
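For example, here's a minimal sketch of the whole workflow, assuming a single, hypothetical source file, path:{app.c}, and a default recording setup. The `lttng_ust_cyg_profile*` event name pattern matches the tracepoints of both helper variants:

[role="term"]
----
$ gcc -finstrument-functions -o app app.c
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_cyg_profile*:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-cyg-profile.so ./app
----

Make sure to single-quote the event name pattern when you run man:lttng(1) from a shell.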
See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation points of this helper.

All the tracepoints that this helper provides have the log level `LTTNG_UST_TRACEPOINT_LOGLEVEL_DEBUG_FUNCTION` (see man:lttng-ust(3)).

TIP: It's sometimes a good idea to limit the number of source files that you compile with the `-finstrument-functions` option to prevent LTTng from writing an excessive amount of trace data at run time. When using man:gcc(1), use the `-finstrument-functions-exclude-function-list` option to avoid instrumenting the entry and exit points of specific functions.

[role="since-2.4"]
[[liblttng-ust-dl]]
==== Instrument the dynamic linker

The path:{liblttng-ust-dl.so} helper adds instrumentation to the man:dlopen(3) and man:dlclose(3) function calls.

See man:lttng-ust-dl(3) to learn more about the instrumentation points of this helper.

[role="since-2.4"]
[[java-application]]
=== Instrument a Java application

You can instrument any Java application which uses one of the following logging frameworks:

* The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**] (JUL) core logging facilities.
* https://logging.apache.org/log4j/1.2/[**Apache log4j{nbsp}1.2**], since LTTng{nbsp}2.6. Note that Apache Log4j{nbsp}2 isn't supported.

[role="img-100"]
.LTTng-UST Java agent imported by a Java application.
image::java-app.png[]

Note that the methods described below are new in LTTng{nbsp}2.8. Previous LTTng versions use another technique.

NOTE: We use https://openjdk.java.net/[OpenJDK]{nbsp}8 for development and https://ci.lttng.org/[continuous integration], thus this version is directly supported. However, the LTTng-UST Java agent is also tested with OpenJDK{nbsp}7.

[role="since-2.8"]
[[jul]]
==== Use the LTTng-UST Java agent for `java.util.logging`

To use the LTTng-UST Java agent in a Java application which uses `java.util.logging` (JUL):

. In the source code of the Java application, import the LTTng-UST log handler package for `java.util.logging`:
+
--
[source,java]
----
import org.lttng.ust.agent.jul.LttngLogHandler;
----
--

. Create an LTTng-UST `java.util.logging` log handler:
+
--
[source,java]
----
Handler lttngUstLogHandler = new LttngLogHandler();
----
--

. Add this handler to the `java.util.logging` loggers which should emit LTTng events:
+
--
[source,java]
----
Logger myLogger = Logger.getLogger("some-logger");

myLogger.addHandler(lttngUstLogHandler);
----
--

. Use `java.util.logging` log statements and configuration as usual. The loggers with an attached LTTng-UST log handler can emit LTTng events.

. Before exiting the application, remove the LTTng-UST log handler from the loggers attached to it and call its `close()` method:
+
--
[source,java]
----
myLogger.removeHandler(lttngUstLogHandler);
lttngUstLogHandler.close();
----
--
+
This isn't strictly necessary, but it's recommended for a clean disposal of the resources of the handler.

. Include the common and JUL-specific JAR files of the LTTng-UST Java agent, path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar}, in the https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class path] when you build the Java application.
+
The JAR files are typically located in dir:{/usr/share/java}.
+
IMPORTANT: The LTTng-UST Java agent must be <> for the logging framework your application uses.

.Use the LTTng-UST Java agent for `java.util.logging`.
====
[source,java]
.path:{Test.java}
----
import java.io.IOException;
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
    }
}
----

Build this example:

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
----

<>, <> matching JUL events named `jello`, and <>:

[role="term"]
----
$ lttng create
$ lttng enable-event --jul jello
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
----

<> and inspect the recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

In the resulting trace, an <> which a Java application using `java.util.logging` generated is named `lttng_jul:event` and has the following fields:

`msg`::
    Log record message.

`logger_name`::
    Logger name.

`class_name`::
    Name of the class in which the log statement was executed.

`method_name`::
    Name of the method in which the log statement was executed.

`long_millis`::
    Logging time (timestamp in milliseconds).

`int_loglevel`::
    Log level integer value.

`int_threadid`::
    ID of the thread in which the log statement was executed.

Use the opt:lttng-enable-event(1):--loglevel or opt:lttng-enable-event(1):--loglevel-only option of the man:lttng-enable-event(1) command to target a range of `java.util.logging` log levels or a specific `java.util.logging` log level.

[role="since-2.8"]
[[log4j]]
==== Use the LTTng-UST Java agent for Apache log4j

To use the LTTng-UST Java agent in a Java application which uses Apache log4j{nbsp}1.2:

. In the source code of the Java application, import the LTTng-UST log appender package for Apache log4j:
+
--
[source,java]
----
import org.lttng.ust.agent.log4j.LttngLogAppender;
----
--

. Create an LTTng-UST log4j log appender:
+
--
[source,java]
----
Appender lttngUstLogAppender = new LttngLogAppender();
----
--

. Add this appender to the log4j loggers which should emit LTTng events:
+
--
[source,java]
----
Logger myLogger = Logger.getLogger("some-logger");

myLogger.addAppender(lttngUstLogAppender);
----
--

. Use Apache log4j log statements and configuration as usual. The loggers with an attached LTTng-UST log appender can emit LTTng events.

. Before exiting the application, remove the LTTng-UST log appender from the loggers attached to it and call its `close()` method:
+
--
[source,java]
----
myLogger.removeAppender(lttngUstLogAppender);
lttngUstLogAppender.close();
----
--
+
This isn't strictly necessary, but it's recommended for a clean disposal of the resources of the appender.
. Include the common and log4j-specific JAR files of the LTTng-UST Java agent, path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-log4j.jar}, in the https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class path] when you build the Java application.
+
The JAR files are typically located in dir:{/usr/share/java}.
+
IMPORTANT: The LTTng-UST Java agent must be <> for the logging framework your application uses.

.Use the LTTng-UST Java agent for Apache log4j.
====
[source,java]
.path:{Test.java}
----
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.lttng.ust.agent.log4j.LttngLogAppender;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log appender
        Appender lttngUstLogAppender = new LttngLogAppender();

        // Add the LTTng-UST log appender to our logger
        logger.addAppender(lttngUstLogAppender);

        // Log at will!
        logger.info("some info");
        logger.warn("some warning");
        Thread.sleep(500);
        logger.debug("debug information; the answer is " + answer);
        Thread.sleep(123);
        logger.fatal("error!");

        // Not mandatory, but cleaner
        logger.removeAppender(lttngUstLogAppender);
        lttngUstLogAppender.close();
    }
}
----

Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR file):

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
----

<>, <> matching log4j events named `jello`, and <>:

[role="term"]
----
$ lttng create
$ lttng enable-event --log4j jello
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
----

<> and inspect the recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

In the resulting trace, an <> which a Java application using log4j generated is named `lttng_log4j:event` and has the following fields:

`msg`::
    Log record message.

`logger_name`::
    Logger name.

`class_name`::
    Name of the class in which the log statement was executed.

`method_name`::
    Name of the method in which the log statement was executed.

`filename`::
    Name of the file in which the executed log statement is located.

`line_number`::
    Line number at which the log statement was executed.

`timestamp`::
    Logging timestamp.

`int_loglevel`::
    Log level integer value.

`thread_name`::
    Name of the Java thread in which the log statement was executed.

Use the opt:lttng-enable-event(1):--loglevel or opt:lttng-enable-event(1):--loglevel-only option of the man:lttng-enable-event(1) command to target a range of Apache log4j log levels or a specific log4j log level.

[role="since-2.8"]
[[java-application-context]]
==== Provide application-specific context fields in a Java application

A Java application-specific context field is a piece of state which the Java application provides. You can <> such a context field to be recorded, using the man:lttng-add-context(1) command, to each <> which the log statements of this application produce.

For example, a given object might have a current request ID variable. You can create a context information retriever for this object and assign a name to this current request ID. You can then, using the man:lttng-add-context(1) command, add this context field by name so that LTTng writes it to the event records of a given `java.util.logging` or log4j <>.
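For instance, assuming a retriever registered under the hypothetical name `myRetriever` which provides a `curRequestId` context field, you'd use a command like the following one (the detailed procedure and a complete example follow):

[role="term"]
----
$ lttng add-context --jul --type='$app.myRetriever:curRequestId'
----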
To provide application-specific context fields in a Java application:

. In the source code of the Java application, import the LTTng-UST Java agent context classes and interfaces:
+
--
[source,java]
----
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;
----
--

. Create a context information retriever class, that is, a class which implements the `IContextInfoRetriever` interface:
+
--
[source,java]
----
class MyContextInfoRetriever implements IContextInfoRetriever
{
    @Override
    public Object retrieveContextInfo(String key)
    {
        if (key.equals("intCtx")) {
            return (short) 17;
        } else if (key.equals("strContext")) {
            return "context value!";
        } else {
            return null;
        }
    }
}
----
--
+
This `retrieveContextInfo()` method is the only member of the `IContextInfoRetriever` interface. Its role is to return the current value of a state by name to create a context field. The names of the context fields and which state variables they return depend on your specific scenario.
+
All primitive types and objects are supported as context fields. When `retrieveContextInfo()` returns an object, the context field serializer calls its `toString()` method to add a string field to event records. The method can also return `null`, which means that no context field is available for the required name.

. Register an instance of your context information retriever class to the context information manager singleton:
+
--
[source,java]
----
IContextInfoRetriever cir = new MyContextInfoRetriever();
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.registerContextInfoRetriever("retrieverName", cir);
----
--

. Before exiting the application, remove your context information retriever from the context information manager singleton:
+
--
[source,java]
----
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.unregisterContextInfoRetriever("retrieverName");
----
--
+
This isn't strictly necessary, but it's recommended for a clean disposal of some resources of the manager.

. Build your Java application with LTTng-UST Java agent support as usual, following the procedure for either the <> or <> framework.

.Provide application-specific context fields in a Java application.
====
[source,java]
.path:{Test.java}
----
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;

public class Test
{
    // Our context information retriever class
    private static class MyContextInfoRetriever
    implements IContextInfoRetriever
    {
        @Override
        public Object retrieveContextInfo(String key)
        {
            if (key.equals("intCtx")) {
                return (short) 17;
            } else if (key.equals("strContext")) {
                return "context value!";
            } else {
                return null;
            }
        }
    }

    private static final int answer = 42;

    public static void main(String args[]) throws Exception
    {
        // Get the context information manager instance
        ContextInfoManager cim = ContextInfoManager.getInstance();

        // Create and register our context information retriever
        IContextInfoRetriever cir = new MyContextInfoRetriever();
        cim.registerContextInfoRetriever("myRetriever", cir);

        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
logger.info("some info"); logger.warning("some warning"); Thread.sleep(500); logger.finer("finer information; the answer is " + answer); Thread.sleep(123); logger.severe("error!"); // Not mandatory, but cleaner logger.removeHandler(lttngUstLogHandler); lttngUstLogHandler.close(); cim.unregisterContextInfoRetriever("myRetriever"); } } ---- Build this example: [role="term"] ---- $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java ---- <> and <> matching `java.util.logging` events named `jello`: [role="term"] ---- $ lttng create $ lttng enable-event --jul jello ---- <> to be recorded to the event records of the `java.util.logging` channel: [role="term"] ---- $ lttng add-context --jul --type='$app.myRetriever:intCtx' $ lttng add-context --jul --type='$app.myRetriever:strContext' ---- <>: [role="term"] ---- $ lttng start ---- Run the compiled class: [role="term"] ---- $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test ---- <> and inspect the recorded events: [role="term"] ---- $ lttng stop $ lttng view ---- ==== [role="since-2.7"] [[python-application]] === Instrument a Python application You can instrument a Python{nbsp}2 or Python{nbsp}3 application which uses the standard https://docs.python.org/3/library/logging.html[`logging`] package. Each log statement creates an LTTng event once the application module imports the <> package. [role="img-100"] .A Python application importing the LTTng-UST Python agent. image::python-app.png[] To use the LTTng-UST Python agent: . In the source code of the Python application, import the LTTng-UST Python agent: + -- [source,python] ---- import lttngust ---- -- + The LTTng-UST Python agent automatically adds its logging handler to the root logger at import time. + A log statement that the application executes before this import doesn't create an LTTng event. + IMPORTANT: The LTTng-UST Python agent must be <>. . Use log statements and logging configuration as usual. Since the LTTng-UST Python agent adds a handler to the _root_ logger, any log statement from any logger can emit an LTTng event. .Use the LTTng-UST Python agent. ==== [source,python] .path:{test.py} ---- import lttngust import logging import time def example(): logging.basicConfig() logger = logging.getLogger('my-logger') while True: logger.debug('debug message') logger.info('info message') logger.warn('warn message') logger.error('error message') logger.critical('critical message') time.sleep(1) if __name__ == '__main__': example() ---- NOTE: `logging.basicConfig()`, which adds to the root logger a basic logging handler which prints to the standard error stream, isn't strictly required for LTTng-UST tracing to work, but in versions of Python preceding{nbsp}3.2, you could see a warning message which indicates that no handler exists for the logger `my-logger`. <>, <> matching Python logging events named `my-logger`, and <>: [role="term"] ---- $ lttng create $ lttng enable-event --python my-logger $ lttng start ---- Run the Python script: [role="term"] ---- $ python test.py ---- <> and inspect the recorded events: [role="term"] ---- $ lttng stop $ lttng view ---- ==== In the resulting trace, an <> which a Python application generated is named `lttng_python:event` and has the following fields: `asctime`:: Logging time (string). `msg`:: Log record message. `logger_name`:: Logger name. `funcName`:: Name of the function in which the log statement was executed. 
`lineno`::
    Line number at which the log statement was executed.

`int_loglevel`::
    Log level integer value.

`thread`::
    ID of the Python thread in which the log statement was executed.

`threadName`::
    Name of the Python thread in which the log statement was executed.

Use the opt:lttng-enable-event(1):--loglevel or opt:lttng-enable-event(1):--loglevel-only option of the man:lttng-enable-event(1) command to target a range of Python log levels or a specific Python log level.

When an application imports the LTTng-UST Python agent, the agent tries to register to a <>. Note that you must <> _before_ you run the Python application. If a session daemon is found, the agent tries to register to it for five seconds, after which the application continues without LTTng tracing support. Override this timeout value with the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable (milliseconds).

If the session daemon stops while a Python application with an imported LTTng-UST Python agent runs, the agent retries connecting and registering to a session daemon every three seconds. Override this delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment variable.

[role="since-2.5"]
[[proc-lttng-logger-abi]]
=== Use the LTTng logger

The `lttng-tracer` Linux kernel module, part of <>, creates the special LTTng logger files path:{/proc/lttng-logger} and path:{/dev/lttng-logger} (since LTTng{nbsp}2.11) when it's loaded. Any application can write text data to any of those files to create one or more LTTng events.

[role="img-100"]
.An application writes to the LTTng logger file to create one or more LTTng events.
image::lttng-logger.png[]

The LTTng logger is the quickest method--not the most efficient, however--to add instrumentation to an application. It's designed mostly to instrument shell scripts:

[role="term"]
----
$ echo "Some message, some $variable" > /dev/lttng-logger
----

Any event that the LTTng logger creates is named `lttng_logger` and belongs to the Linux kernel <>. However, unlike other instrumentation points in the kernel tracing domain, **any Unix user** can <> which matches events named `lttng_logger`, not only the root user or users in the <>.

To use the LTTng logger:

* From any application, write text data to the path:{/dev/lttng-logger} file.

The `msg` field of `lttng_logger` event records contains the recorded message.

NOTE: The maximum message length of an LTTng logger event is 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more than one event to contain the remaining data.

You shouldn't use the LTTng logger to trace a user application which you can instrument in a more efficient way, namely:

* <>.
* <>.
* <>.

.Use the LTTng logger.
====
[source,bash]
.path:{test.bash}
----
echo 'Hello, World!' > /dev/lttng-logger
sleep 2
df --human-readable --print-type / > /dev/lttng-logger
----

<>, <> matching Linux kernel tracepoint events named `lttng_logger`, and <>:

[role="term"]
----
$ lttng create
$ lttng enable-event --kernel lttng_logger
$ lttng start
----

Run the Bash script:

[role="term"]
----
$ bash test.bash
----

<> and inspect the recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

[[instrumenting-linux-kernel]]
=== Instrument a Linux kernel image or module

NOTE: This section shows how to _add_ instrumentation points to the Linux kernel. The subsystems of the kernel are already thoroughly instrumented at strategic points for LTTng when you <> the <> package.
[[linux-add-lttng-layer]]
==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint

This section shows how to add an LTTng layer to existing ftrace instrumentation using the `TRACE_EVENT()` API.

This section doesn't document the `TRACE_EVENT()` macro. Read the following articles to learn more about this API:

* https://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part{nbsp}1)]
* https://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part{nbsp}2)]
* https://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part{nbsp}3)]

The following procedure assumes that your ftrace tracepoints are correctly defined in their own header and that they're created in one source file using the `CREATE_TRACE_POINTS` definition.

To add an LTTng layer over an existing ftrace tracepoint:

. Make sure the following kernel configuration options are enabled:
+
--
* `CONFIG_MODULES`
* `CONFIG_KALLSYMS`
* `CONFIG_HIGH_RES_TIMERS`
* `CONFIG_TRACEPOINTS`
--

. Build the Linux source tree with your custom ftrace tracepoints.
. Boot the resulting Linux image on your target system.
+
Confirm that the tracepoints exist by looking for their names in the dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys` is your subsystem name.

. Get a copy of the latest LTTng-modules{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
    wget https://lttng.org/files/lttng-modules/lttng-modules-latest-2.13.tar.bz2 &&
    tar -xf lttng-modules-latest-2.13.tar.bz2 &&
    cd lttng-modules-2.13.*
----
--

. In dir:{instrumentation/events/lttng-module}, relative to the root of the LTTng-modules source tree, create a header file named +__subsys__.h+ for your custom subsystem +__subsys__+ and write your LTTng-modules tracepoint definitions using the LTTng-modules macros in it.
+
Start with this template:
+
--
[source,c]
.path:{instrumentation/events/lttng-module/my_subsys.h}
----
#undef TRACE_SYSTEM
#define TRACE_SYSTEM my_subsys

#if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _LTTNG_MY_SUBSYS_H

#include "../../../probes/lttng-tracepoint-event.h"
#include <linux/tracepoint.h>

LTTNG_TRACEPOINT_EVENT(
    /*
     * Format is identical to the TRACE_EVENT() version for the three
     * following macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* LTTng-modules specific macros */
    TP_FIELDS(
        ctf_integer(int, my_int_field, my_int)
        ctf_string(my_string_field, my_string)
    )
)

#endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */

#include "../../../probes/define_trace.h"
----
--
+
The entries in the `TP_FIELDS()` section are the list of fields for the LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of the `TRACE_EVENT()` ftrace macro.
+
See ``<>'' for a complete description of the available `ctf_*()` macros.

. Create the kernel module C{nbsp}source file of the LTTng-modules probe, +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your subsystem name:
+
--
[source,c]
.path:{probes/lttng-probe-my-subsys.c}
----
#include <linux/module.h>
#include "../lttng-tracer.h"

/*
 * Build-time verification of mismatch between mainline
 * TRACE_EVENT() arguments and the LTTng-modules adaptation
 * layer LTTNG_TRACEPOINT_EVENT() arguments.
 */
#include <trace/events/my_subsys.h>

/* Create LTTng tracepoint probes */
#define LTTNG_PACKAGE_BUILD
#define CREATE_TRACE_POINTS
#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module

#include "../instrumentation/events/lttng-module/my_subsys.h"

MODULE_LICENSE("GPL and additional rights");
MODULE_AUTHOR("Your name <your-email>");
MODULE_DESCRIPTION("LTTng my_subsys probes");
MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
    __stringify(LTTNG_MODULES_MINOR_VERSION) "."
    __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
    LTTNG_MODULES_EXTRAVERSION);
----
--

. Edit path:{probes/KBuild} and add your new kernel module object next to the existing ones:
+
--
[source,make]
.path:{probes/KBuild}
----
# ...

obj-m += lttng-probe-module.o
obj-m += lttng-probe-power.o

obj-m += lttng-probe-my-subsys.o

# ...
----
--

. Build and install the LTTng kernel modules:
+
--
[role="term"]
----
$ make KERNELDIR=/path/to/linux
# make modules_install && depmod -a
----
--
+
Replace `/path/to/linux` with the path to the Linux source tree where you defined and used tracepoints with the `TRACE_EVENT()` ftrace macro.

Note that you can also use the <> instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and C{nbsp}code that need to be executed before LTTng records the event fields.

The best way to learn how to use the previous LTTng-modules macros is to inspect the existing LTTng-modules tracepoint definitions in the dir:{instrumentation/events/lttng-module} header files. Compare them with the Linux kernel mainline versions in the dir:{include/trace/events} directory of the Linux source tree.

[role="since-2.7"]
[[lttng-tracepoint-event-code]]
===== Use custom C code to access the data for tracepoint fields

Although we recommend that you always use the <> macro to describe the arguments and fields of an LTTng-modules tracepoint when possible, sometimes you need a more complex process to access the data that the tracer records as event record fields. In other words, you need local variables and multiple C{nbsp}statements instead of simple argument-based expressions that you pass to the <>.

Use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define a block of C{nbsp}code to be executed before LTTng records the fields. The structure of this macro is:

[source,c]
.`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
----
LTTNG_TRACEPOINT_EVENT_CODE(
    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for the following three macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* Declarations of custom local variables */
    TP_locvar(
        int a = 0;
        unsigned long b = 0;
        const char *name = "(undefined)";
        struct my_struct *my_struct;
    ),

    /*
     * Custom code which uses both tracepoint arguments
     * (in TP_ARGS()) and local variables (in TP_locvar()).
     *
     * Local variables are actually members of a structure pointed
     * to by the special variable tp_locvar.
     */
    TP_code(
        if (my_int) {
            tp_locvar->a = my_int + 17;
            tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
            tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
            tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
            put_my_struct(tp_locvar->my_struct);

            if (tp_locvar->b) {
                tp_locvar->a = 1;
            }
        }
    ),

    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for this, except that tp_locvar members can be
     * used in the argument expression parameters of
     * the ctf_*() macros.
     */
    TP_FIELDS(
        ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
        ctf_integer(int, my_struct_a, tp_locvar->a)
        ctf_string(my_string_field, my_string)
        ctf_string(my_struct_name, tp_locvar->name)
    )
)
----

IMPORTANT: The C code defined in `TP_code()` must not have any side effects when executed. In particular, the code must not allocate memory or get resources without deallocating this memory or putting those resources afterwards.

[[instrumenting-linux-kernel-tracing]]
==== Load and unload a custom probe kernel module

You must load a <> in the kernel before it can emit LTTng events.

To load the default probe kernel modules and a custom probe kernel module:

* Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra probe modules to load when starting a root <>:
+
--
.Load the `my_subsys`, `usb`, and the default probe modules.
====
[role="term"]
----
# lttng-sessiond --extra-kmod-probes=my_subsys,usb
----
====
--
+
You only need to pass the subsystem name, not the whole kernel module name.

To load _only_ a given custom probe kernel module:

* Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe modules to load when starting a root session daemon:
+
--
.Load only the `my_subsys` and `usb` probe modules.
====
[role="term"]
----
# lttng-sessiond --kmod-probes=my_subsys,usb
----
====
--

To confirm that a probe module is loaded:

* Use man:lsmod(8):
+
--
[role="term"]
----
$ lsmod | grep lttng_probe_usb
----
--

To unload the loaded probe modules:

* Kill the session daemon with `SIGTERM`:
+
--
[role="term"]
----
# pkill lttng-sessiond
----
--
+
You can also use the `--remove` option of man:modprobe(8) if the session daemon terminates abnormally.

[[controlling-tracing]]
== Tracing control

Once an application or a Linux kernel is <> for LTTng tracing, you can _trace_ it.

In the LTTng context, _tracing_ means making sure that LTTng attempts to execute some action(s) when a CPU executes an instrumentation point.

This section is divided into topics on how to use the various <>, in particular the <>, to _control_ the LTTng daemons and tracers.

NOTE: In the following subsections, we refer to an man:lttng(1) command using its man page name. For example, instead of ``Run the `create` command to'', we write ``Run the man:lttng-create(1) command to''.

[[start-sessiond]]
=== Start a session daemon

In some situations, you need to run a <> (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1) command-line tool.

You will see the following error when you run a command while no session daemon is running:

----
Error: No session daemon is available
----

The only command that automatically runs a session daemon is man:lttng-create(1), which you use to <>. While this is often the first operation you perform, sometimes it's not. Some examples are:

* <>.
* <>.
* <>.

None of the examples above requires a recording session to operate on.

[[tracing-group]]Each Unix user can have its own running session daemon to use the user space LTTng tracer. The session daemon that the `root` user starts is the only one allowed to control the LTTng kernel tracer. Members of the Unix _tracing group_ may connect to and control the root session daemon, even for user space tracing. See the ``Session daemon connection'' section of man:lttng(1) to learn more about the Unix tracing group.
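For example, here's a sketch of how a system administrator can allow the Unix user `alice` (a hypothetical name) to connect to the root session daemon, assuming the default tracing group name, `tracing` (a root session daemon may use another name; see the opt:lttng-sessiond(8):--group option of man:lttng-sessiond(8)):

[role="term"]
----
# usermod --append --groups=tracing alice
----

The new group membership only takes effect in the subsequent login sessions of the user.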
To start a user session daemon:

* Run man:lttng-sessiond(8):
+
--
[role="term"]
----
$ lttng-sessiond --daemonize
----
--

To start the root session daemon:

* Run man:lttng-sessiond(8) as the `root` user:
+
--
[role="term"]
----
# lttng-sessiond --daemonize
----
--

In both cases, remove the opt:lttng-sessiond(8):--daemonize option to start the session daemon in the foreground.

To stop a session daemon, kill its process (see man:kill(1)) with the standard `TERM` signal.

Note that some Linux distributions could manage the LTTng session daemon as a service. In this case, we suggest that you use the service manager to start, restart, and stop session daemons.

[[creating-destroying-tracing-sessions]]
=== Create and destroy a recording session

Many LTTng control operations happen in the scope of a <>, which is the dialogue between the <> and you for everything related to <>.

To create a recording session with a generated name:

* Use the man:lttng-create(1) command:
+
--
[role="term"]
----
$ lttng create
----
--

The name of the created recording session is `auto` followed by the creation date.

To create a recording session with a specific name:

* Use the optional argument of the man:lttng-create(1) command:
+
--
[role="term"]
----
$ lttng create SESSION
----
--
+
Replace +__SESSION__+ with your specific recording session name.

In <>, LTTng writes the traces of a recording session to the +$LTTNG_HOME/lttng-traces/__NAME__-__DATE__-__TIME__+ directory by default, where +__NAME__+ is the name of the recording session. Note that the env:LTTNG_HOME environment variable defaults to `$HOME` if not set.

To output LTTng traces to a non-default location:

* Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
+
--
[role="term"]
----
$ lttng create my-session --output=/tmp/some-directory
----
--

You may create as many recording sessions as you wish.

To list all the existing recording sessions for your Unix user, or for all users if your Unix user is `root`:

* Use the man:lttng-list(1) command:
+
--
[role="term"]
----
$ lttng list
----
--

[[cur-tracing-session]]When you create a recording session, the man:lttng-create(1) command sets it as the _current recording session_. The following man:lttng(1) commands operate on the current recording session when you don't specify one:

[role="list-3-cols"]
* man:lttng-add-context(1)
* man:lttng-clear(1)
* man:lttng-destroy(1)
* man:lttng-disable-channel(1)
* man:lttng-disable-event(1)
* man:lttng-disable-rotation(1)
* man:lttng-enable-channel(1)
* man:lttng-enable-event(1)
* man:lttng-enable-rotation(1)
* man:lttng-load(1)
* man:lttng-regenerate(1)
* man:lttng-rotate(1)
* man:lttng-save(1)
* man:lttng-snapshot(1)
* man:lttng-start(1)
* man:lttng-status(1)
* man:lttng-stop(1)
* man:lttng-track(1)
* man:lttng-untrack(1)
* man:lttng-view(1)

To change the current recording session:

* Use the man:lttng-set-session(1) command:
+
--
[role="term"]
----
$ lttng set-session SESSION
----
--
+
Replace +__SESSION__+ with the name of the new current recording session.

When you're done recording in a given recording session, destroy it. This operation frees the resources of the recording session being destroyed; it doesn't destroy the trace data that LTTng wrote for this recording session (see ``<>'' for one way to do this).

To destroy the current recording session:

* Use the man:lttng-destroy(1) command:
+
--
[role="term"]
----
$ lttng destroy
----
--

The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command implicitly (see ``<>''). You need to stop recording to make LTTng flush the remaining trace data and make the trace readable.
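You can also destroy a recording session other than the current one by passing its name to man:lttng-destroy(1). A minimal sketch, assuming a recording session named `my-session`:

[role="term"]
----
$ lttng destroy my-session
----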
[[list-instrumentation-points]]
=== List the available instrumentation points

The <> can query the running instrumented user applications and the Linux kernel to get a list of available instrumentation points:

* LTTng tracepoints and system calls for the Linux kernel <>.
* LTTng tracepoints for the user space tracing domain.

To list the available instrumentation points:

. <> there's a running <> to which your Unix user can connect.
. Use the man:lttng-list(1) command with the option of the requested tracing domain amongst:
+
--
opt:lttng-list(1):--kernel::
    Linux kernel tracepoints.
+
Your Unix user must be `root`, or it must be a member of the Unix <>.

opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall::
    Linux kernel system calls.
+
Your Unix user must be `root`, or it must be a member of the Unix <>.

opt:lttng-list(1):--userspace::
    User space tracepoints.

opt:lttng-list(1):--jul::
    `java.util.logging` loggers.

opt:lttng-list(1):--log4j::
    Apache log4j loggers.

opt:lttng-list(1):--python::
    Python loggers.
--

.List the available user space tracepoints.
====
[role="term"]
----
$ lttng list --userspace
----
====

.List the available Linux kernel system calls.
====
[role="term"]
----
$ lttng list --kernel --syscall
----
====

[[enabling-disabling-events]]
=== Create and enable a recording event rule

Once you <>, you can create <> with the man:lttng-enable-event(1) command.

The man:lttng-enable-event(1) command always attaches an event rule to a <> on creation. The command can create a _default channel_, named `channel0`, for you. The man:lttng-enable-event(1) command reuses the default channel each time you run it for the same tracing domain and session.

A recording event rule is always enabled at creation time.

The following examples show how to combine the command-line arguments of the man:lttng-enable-event(1) command to create simple to more complex recording event rules within the <>.

.Create a recording event rule matching specific Linux kernel tracepoint events (default channel).
====
[role="term"]
----
# lttng enable-event --kernel sched_switch
----
====

.Create a recording event rule matching Linux kernel system call events with four specific names (default channel).
====
[role="term"]
----
# lttng enable-event --kernel --syscall open,write,read,close
----
====

.Create recording event rules matching tracepoint events which satisfy a filter expression (default channel).
====
[role="term"]
----
# lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
----

[role="term"]
----
# lttng enable-event --kernel --all \
        --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
----

[role="term"]
----
$ lttng enable-event --jul my_logger \
        --filter='$app.retriever:cur_msg_id > 3'
----

IMPORTANT: Make sure to always single-quote the filter string when you run man:lttng(1) from a shell.

See also ``<>'' which offers another, more efficient filtering mechanism for process ID, user ID, and group ID attributes.
====

.Create a recording event rule matching any user space event from the `my_app` tracepoint provider and with a log level range (default channel).
====
[role="term"]
----
$ lttng enable-event --userspace my_app:'*' --loglevel=INFO
----

IMPORTANT: Make sure to always single-quote the wildcard character when you run man:lttng(1) from a shell.
====

.Create a recording event rule matching user space events named specifically, but with name exclusions (default channel).
====
[role="term"]
----
$ lttng enable-event --userspace my_app:'*' \
        --exclude=my_app:set_user,my_app:handle_sig
----
====

.Create a recording event rule matching any Apache log4j event with a specific log level (default channel).
====
[role="term"]
----
$ lttng enable-event --log4j --all --loglevel-only=WARN
----
====

.Create a recording event rule, attached to a specific channel, and matching user space tracepoint events named `my_app:my_tracepoint`.
====
[role="term"]
----
$ lttng enable-event --userspace my_app:my_tracepoint \
        --channel=my-channel
----
====

.Create a recording event rule matching user space probe events for the `malloc` function entry in path:{/usr/lib/libc.so.6}.
====
[role="term"]
----
# lttng enable-event --kernel \
        --userspace-probe=/usr/lib/libc.so.6:malloc \
        libc_malloc
----
====

.Create a recording event rule matching user space probe events for the `server`/`accept_request` https://www.sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps[USDT probe] in path:{/usr/bin/serv}.
====
[role="term"]
----
# lttng enable-event --kernel \
        --userspace-probe=sdt:serv:server:accept_request \
        server_accept_request
----
====

The recording event rules of a given channel form a whitelist: as soon as an event rule matches an event, LTTng emits it _once_ and therefore <> record it. For example, the following rules both match user space tracepoint events named `my_app:my_tracepoint` with an `INFO` log level:

[role="term"]
----
$ lttng enable-event --userspace my_app:my_tracepoint
$ lttng enable-event --userspace my_app:my_tracepoint \
        --loglevel=INFO
----

The second recording event rule is redundant: the first one includes the second one.

[[disable-event-rule]]
=== Disable a recording event rule

To disable a <> that you <> previously, use the man:lttng-disable-event(1) command.

man:lttng-disable-event(1) can only find recording event rules to disable by their <> and event name conditions. Therefore, you cannot disable recording event rules having a specific instrumentation point log level condition, for example.

LTTng doesn't emit (and, therefore, won't record) an event which only _disabled_ recording event rules match.

.Disable event rules matching Python logging events from the `my-logger` logger (default <>, <>).
====
[role="term"]
----
$ lttng disable-event --python my-logger
----
====

.Disable event rules matching all `java.util.logging` events (default channel, recording session `my-session`).
====
[role="term"]
----
$ lttng disable-event --jul --session=my-session '*'
----
====

.Disable _all_ the Linux kernel recording event rules (channel `my-chan`, current recording session).
====
The opt:lttng-disable-event(1):--all-events option isn't, like the opt:lttng-enable-event(1):--all option of the man:lttng-enable-event(1) command, an alias for the event name globbing pattern `*`: it disables _all_ the recording event rules of a given channel.

[role="term"]
----
# lttng disable-event --kernel --channel=my-chan --all-events
----
====

NOTE: You can't _remove_ a recording event rule once you create it.
[[status]]
=== Get the status of a recording session

To get the status of the <>, that is, its parameters, its channels, recording event rules, and their attributes:

* Use the man:lttng-status(1) command:
+
--
[role="term"]
----
$ lttng status
----
--

To get the status of any recording session:

* Use the man:lttng-list(1) command with the name of the recording session:
+
--
[role="term"]
----
$ lttng list SESSION
----
--
+
Replace +__SESSION__+ with the recording session name.

[[basic-tracing-session-control]]
=== Start and stop a recording session

Once you <> and <>, you can start and stop the tracers for this recording session.

To start the <>:

* Use the man:lttng-start(1) command:
+
--
[role="term"]
----
$ lttng start
----
--

LTTng is flexible: you can launch user applications before or after you start the tracers. An LTTng tracer only <> if a recording event rule matches it, which means the tracer is active.

The `start-session` <> action can also start a recording session.

To stop the current recording session:

* Use the man:lttng-stop(1) command:
+
--
[role="term"]
----
$ lttng stop
----
--
+
If there were <> or lost sub-buffers since the last time you ran man:lttng-start(1), the man:lttng-stop(1) command prints corresponding warnings.

IMPORTANT: You need to stop recording to make LTTng flush the remaining trace data and make the trace readable.

Note that the man:lttng-destroy(1) command (see ``<>'') also runs the man:lttng-stop(1) command implicitly.

The `stop-session` <> action can also stop a recording session.

[role="since-2.12"]
[[clear]]
=== Clear a recording session

You might need to remove all the current tracing data of one or more <> between multiple attempts to reproduce a problem without interrupting the LTTng recording activity.

To clear the tracing data of the <>:

* Use the man:lttng-clear(1) command:
+
--
[role="term"]
----
$ lttng clear
----
--

To clear the tracing data of all the recording sessions:

* Use the `lttng clear` command with its opt:lttng-clear(1):--all option:
+
--
[role="term"]
----
$ lttng clear --all
----
--

[[enabling-disabling-channels]]
=== Create a channel

Once you <>, you can create a <> with the man:lttng-enable-channel(1) command.

Note that LTTng can automatically create a default channel when you <>. Therefore, you only need to create a channel when you need non-default attributes.

Specify each non-default channel attribute with a command-line option when you run the man:lttng-enable-channel(1) command.

You can only create a custom channel in the Linux kernel and user space <>: the Java/Python logging tracing domains have their own default channel which LTTng automatically creates when you <>.

[IMPORTANT]
====
As of LTTng{nbsp}{revision}, you may _not_ perform the following operations with the man:lttng-enable-channel(1) command:

* Change an attribute of an existing channel.
* Enable a disabled channel once its recording session has been <> at least once.
* Create a channel once its recording session has been active at least once.
* Create a user space channel with a given <> and create a second user space channel with a different buffering scheme in the same recording session.
====

The following examples show how to combine the command-line options of the man:lttng-enable-channel(1) command to create simple to more complex channels within the <>.

.Create a Linux kernel channel with default attributes.
====
[role="term"]
----
# lttng enable-channel --kernel my-channel
----
====

.Create a user space channel with four sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
====
[role="term"]
----
$ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
        --buffers-pid my-channel
----
====

.[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
====
<>, create the channel, <>, and <>:

[role="term"]
----
$ lttng create
$ lttng enable-channel --userspace --blocking-timeout=inf blocking-chan
$ lttng enable-event --userspace --channel=blocking-chan --all
$ lttng start
----

Run an application instrumented with LTTng-UST tracepoints and allow it to block:

[role="term"]
----
$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
----
====

.Create a Linux kernel channel which rotates eight trace files of 4{nbsp}MiB each for each stream.
====
[role="term"]
----
# lttng enable-channel --kernel --tracefile-count=8 \
        --tracefile-size=4194304 my-channel
----
====

.Create a user space channel in <> (or ``flight recorder'') mode.
====
[role="term"]
----
$ lttng enable-channel --userspace --overwrite my-channel
----
====

.<> the same <> attached to two different channels.
====
[role="term"]
----
$ lttng enable-event --userspace --channel=my-channel app:tp
$ lttng enable-event --userspace --channel=other-channel app:tp
----

When a CPU executes the `app:tp` <>, the two recording event rules above match the created event, making LTTng emit the event. Because the recording event rules are not attached to the same channel, LTTng records the event twice.
====

[[disable-channel]]
=== Disable a channel

To disable a specific channel that you <> previously, use the man:lttng-disable-channel(1) command.

.Disable a specific Linux kernel channel (<>).
====
[role="term"]
----
# lttng disable-channel --kernel my-channel
----
====

An enabled channel is an implicit <> condition.

NOTE: As of LTTng{nbsp}{revision}, you may _not_ enable a disabled channel once its recording session has been <> at least once.

[[adding-context]]
=== Add context fields to be recorded to the event records of a channel

<> fields in trace files provide important information about previously emitted events, but sometimes some external context may help you solve a problem faster.

Examples of context fields are:

* The **process ID**, **thread ID**, **process name**, and **process priority** of the thread from which LTTng emits the event.
* The **hostname** of the system on which LTTng emits the event.
* The Linux kernel and user call stacks (since LTTng{nbsp}2.11).
* The current values of many possible **performance counters** using perf, for example:
** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
** Cache misses.
** Branch instructions, misses, and loads.
** CPU faults.
* Any state defined at the application level (supported for the `java.util.logging` and Apache log4j <>).

To get the full list of available context fields:

* Use the opt:lttng-add-context(1):--list option of the man:lttng-add-context(1) command:
+
[role="term"]
----
$ lttng add-context --list
----

.Add context fields to be recorded to the event records of all the <> of the <>.
====
The following command line adds the virtual process identifier and the per-thread CPU cycles count fields to all the user space channels of the current recording session.
[role="term"]
----
$ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
----
====

.Add performance counter context fields by raw ID.
====
See man:lttng-add-context(1) for the exact format of the context field type, which is partly compatible with the format used in man:perf-record(1).

[role="term"]
----
$ lttng add-context --userspace --type=perf:thread:raw:r0110:test
# lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
----
====

.Add context fields to be recorded to the event records of a specific channel.
====
The following command line adds the thread identifier and user call stack context fields to the Linux kernel channel named `my-channel` of the <>.

[role="term"]
----
# lttng add-context --kernel --channel=my-channel \
        --type=tid --type=callstack-user
----
====

.Add an <> to be recorded to the event records of a specific channel.
====
The following command line makes sure LTTng writes the `cur_msg_id` context field of the `retriever` context retriever to all the Java logging <> of the channel named `my-channel`:

[role="term"]
----
$ lttng add-context --jul --channel=my-channel \
        --type='$app.retriever:cur_msg_id'
----

IMPORTANT: Make sure to always single-quote the `$` character when you run man:lttng-add-context(1) from a shell.
====

NOTE: You can't undo what the man:lttng-add-context(1) command does.

[role="since-2.7"]
[[pid-tracking]]
=== Allow specific processes to record events

It's often useful to only allow processes with specific attributes to record events. For example, you may wish to record all the system calls which a given process makes (à la man:strace(1)).

The man:lttng-track(1) and man:lttng-untrack(1) commands serve this purpose. Both commands operate on _inclusion sets_ of process attributes. The available process attribute types are:

Linux kernel <>::
+
* Process ID (PID).
* Virtual process ID (VPID).
+
This is the PID as seen by the application.
* Unix user ID (UID).
* Virtual Unix user ID (VUID).
+
This is the UID as seen by the application.
* Unix group ID (GID).
* Virtual Unix group ID (VGID).
+
This is the GID as seen by the application.

User space tracing domain::
+
* VPID
* VUID
* VGID

A <> has nine process attribute inclusion sets: six for the Linux kernel <> and three for the user space tracing domain.

For a given recording session, a process{nbsp}__P__ is allowed to record LTTng events for a given <>{nbsp}__D__ if _all_ the attributes of{nbsp}__P__ are part of the inclusion sets of{nbsp}__D__.

Whether a process is allowed or not to record LTTng events is an implicit condition of all <>. Therefore, if LTTng creates an event{nbsp}__E__ for a given process, but this process may not record events, then no recording event rule matches{nbsp}__E__, which means LTTng won't emit and record{nbsp}__E__.

When you <>, all its process attribute inclusion sets contain all the possible values. In other words, all processes are allowed to record events.

Add values to an inclusion set with the man:lttng-track(1) command and remove values with the man:lttng-untrack(1) command.

[NOTE]
====
The process attribute values are _numeric_.

Should a process with a given ID (part of an inclusion set), for example, exit, and then a new process be given this same ID, then the latter would also be allowed to record events.

With the man:lttng-track(1) command, you can add Unix user and group _names_ to the user and group inclusion sets: the <> finds the corresponding UID, VUID, GID, or VGID once on _addition_ to the inclusion set.
This means that if you rename the user or group after you run the man:lttng-track(1) command, its user/group ID remains part of the inclusion sets.
====

.Allow processes to record events based on their virtual process ID (VPID).
====
For the sake of the following example, assume the target system has 16{nbsp}possible VPIDs.

When you <>, the user space VPID inclusion set contains _all_ the possible VPIDs:

[role="img-100"]
.The VPID inclusion set is full.
image::track-all.png[]

When the inclusion set is full and you run the man:lttng-track(1) command to specify some VPIDs, LTTng:

. Clears the inclusion set.
. Adds the specific VPIDs to the inclusion set.

After:

[role="term"]
----
$ lttng track --userspace --vpid=3,4,7,10,13
----

the VPID inclusion set is:

[role="img-100"]
.The VPID inclusion set contains the VPIDs 3, 4, 7, 10, and 13.
image::track-3-4-7-10-13.png[]

Add more VPIDs to the inclusion set afterwards:

[role="term"]
----
$ lttng track --userspace --vpid=1,15,16
----

The result is:

[role="img-100"]
.VPIDs 1, 15, and 16 are added to the inclusion set.
image::track-1-3-4-7-10-13-15-16.png[]

The man:lttng-untrack(1) command removes entries from process attribute inclusion sets. Given the previous example, the following command:

[role="term"]
----
$ lttng untrack --userspace --vpid=3,7,10,13
----

leads to this VPID inclusion set:

[role="img-100"]
.VPIDs 3, 7, 10, and 13 are removed from the inclusion set.
image::track-1-4-15-16.png[]

You can make the VPID inclusion set full again with the opt:lttng-track(1):--all option:

[role="term"]
----
$ lttng track --userspace --vpid --all
----

The result is, again:

[role="img-100"]
.The VPID inclusion set is full.
image::track-all.png[]
====

.Allow specific processes to record events based on their user ID (UID).
====
A typical use case with process attribute inclusion sets is to start with an empty inclusion set, then <>, and finally add values manually while the tracers are active.

Use the opt:lttng-untrack(1):--all option of the man:lttng-untrack(1) command to clear the inclusion set after you <>, for example (with UIDs):

[role="term"]
----
# lttng untrack --kernel --uid --all
----

gives:

[role="img-100"]
.The UID inclusion set is empty.
image::untrack-all.png[]

If the LTTng tracer runs with this inclusion set configuration, it records no events within the <> because no process is allowed to do so.

Use the man:lttng-track(1) command as usual to add specific values to the UID inclusion set when you need to, for example:

[role="term"]
----
# lttng track --kernel --uid=http,11
----

Result:

[role="img-100"]
.UIDs 6 (`http`) and 11 are part of the UID inclusion set.
image::track-6-11.png[]
====

[role="since-2.5"]
[[saving-loading-tracing-session]]
=== Save and load recording session configurations

Configuring a <> can be long. Some of the tasks involved are:

* <> with specific attributes.
* <> to be recorded to the <> of specific channels.
* <> with specific log level, filter, and other conditions.

If you use LTTng to solve real world problems, chances are you have to record events using the same recording session setup over and over, modifying a few variables each time in your instrumented program or environment.

To avoid constant recording session reconfiguration, the man:lttng(1) command-line tool can save and load recording session configurations to/from XML files.
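For example, here's a sketch of a typical round trip, assuming a recording session named `my-session` (the individual commands are detailed below): save the configuration of the recording session, destroy it, then recreate it later from the saved XML file:

[role="term"]
----
$ lttng save my-session
$ lttng destroy my-session
$ lttng load my-session
----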
To save a given recording session configuration: * Use the man:lttng-save(1) command: + -- [role="term"] ---- $ lttng save SESSION ---- -- + Replace +__SESSION__+ with the name of the recording session to save. LTTng saves recording session configurations to dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the env:LTTNG_HOME environment variable defaults to `$HOME` if not set. See man:lttng-save(1) to learn more about the recording session configuration output path. LTTng saves all configuration parameters, for example: * The recording session name. * The trace data output path. * The <>, with their state and all their attributes. * The context fields you added to channels. * The <> with their state and conditions. To load a recording session: * Use the man:lttng-load(1) command: + -- [role="term"] ---- $ lttng load SESSION ---- -- + Replace +__SESSION__+ with the name of the recording session to load. When LTTng loads a configuration, it restores your saved recording session as if you just configured it manually. You can also save and load many sessions at a time; see man:lttng-save(1) and man:lttng-load(1) to learn more. [[sending-trace-data-over-the-network]] === Send trace data over the network LTTng can send the recorded trace data of a <> to a remote system over the network instead of writing it to the local file system. To send the trace data over the network: . On the _remote_ system (which can also be the target system), start an LTTng <> (man:lttng-relayd(8)): + -- [role="term"] ---- $ lttng-relayd ---- -- . On the _target_ system, create a recording session <> to send trace data over the network: + -- [role="term"] ---- $ lttng create my-session --set-url=net://remote-system ---- -- + Replace +__remote-system__+ with the host name or IP address of the remote system. See man:lttng-create(1) for the exact URL format. . On the target system, use the man:lttng(1) command-line tool as usual. + When recording is <>, the <> of the target sends the contents of <> to the remote relay daemon instead of flushing them to the local file system. The relay daemon writes the received packets to its local file system. See the ``Output directory'' section of man:lttng-relayd(8) to learn where a relay daemon writes its received trace data. [role="since-2.4"] [[lttng-live]] === View events as LTTng records them (noch:{LTTng} live) _LTTng live_ is a network protocol implemented by the <> (man:lttng-relayd(8)) to allow compatible trace readers to display or analyze <> as LTTng records events on the target system while recording is <>. The relay daemon creates a _tee_: it forwards the trace data to both the local file system and to connected live readers: [role="img-90"] .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live reader. image::live.png[] To use LTTng live: . On the _target system_, create a <> in _live mode_: + -- [role="term"] ---- $ lttng create my-session --live ---- -- + This operation spawns a local relay daemon. . Start the live reader and configure it to connect to the relay daemon. + For example, with man:babeltrace2(1): + -- [role="term"] ---- $ babeltrace2 net://localhost/host/HOSTNAME/my-session ---- -- + Replace +__HOSTNAME__+ with the host name of the target system. . Configure the recording session as usual with the man:lttng(1) command-line tool, and <>. 
List the available live recording sessions with man:babeltrace2(1):

[role="term"]
----
$ babeltrace2 net://localhost
----

You can start the relay daemon on another system. In this case, you need to specify the URL of the relay daemon when you <> with the opt:lttng-create(1):--set-url option of the man:lttng-create(1) command. You also need to replace +__localhost__+ in the procedure above with the host name of the system on which the relay daemon runs.

[role="since-2.3"]
[[taking-a-snapshot]]
=== Take a snapshot of the current sub-buffers of a recording session

The normal behavior of LTTng is to append full sub-buffers to growing trace data files. This is ideal to keep a full history of the events which the target system emitted, but it can represent too much data in some situations. For example, you may wish to have LTTng record your application continuously until some critical situation happens, in which case you only need the latest few recorded events to perform the desired analysis, not multi-gigabyte trace files.

With the man:lttng-snapshot(1) command, you can take a _snapshot_ of the current <> of a given <>. LTTng can write the snapshot to the local file system or send it over the network.

[role="img-100"]
.A snapshot is a copy of the current sub-buffers, which LTTng does _not_ clear after the operation.
image::snapshot.png[]

The snapshot feature of LTTng is similar to how a https://en.wikipedia.org/wiki/Flight_recorder[flight recorder] or the ``roll'' mode of an oscilloscope works.

TIP: If you wish to create unmanaged, self-contained, non-overlapping trace chunk archives instead of a simple copy of the current sub-buffers, see the <> feature (available since LTTng{nbsp}2.11).

To take a snapshot of the <>:

. Create a recording session in <>:
+
--
[role="term"]
----
$ lttng create my-session --snapshot
----
--
+
The <> of <> created in this mode is automatically set to <>.

. Configure the recording session as usual with the man:lttng(1) command-line tool, and <>.

. **Optional**: When you need to take a snapshot, <>.
+
You can take a snapshot when the tracers are active, but if you stop them first, you're guaranteed that the trace data in the sub-buffers doesn't change before you actually take the snapshot.

. Take a snapshot:
+
--
[role="term"]
----
$ lttng snapshot record --name=my-first-snapshot
----
--
+
LTTng writes the current sub-buffers of all the channels of the <> to trace files on the local file system. Those trace files have `my-first-snapshot` in their name.

There's no difference between the format of a normal trace file and the format of a snapshot: LTTng trace readers also support LTTng snapshots.

By default, LTTng writes snapshot files to the path shown by:

[role="term"]
----
$ lttng snapshot list-output
----

You can change this path or decide to send snapshots over the network using one of:

. An output path or URL that you specify when you <>.
. A snapshot output path or URL that you add using the `add-output` action of the man:lttng-snapshot(1) command.
. An output path or URL that you provide directly to the `record` action of the man:lttng-snapshot(1) command.

Method{nbsp}3 overrides method{nbsp}2, which overrides method{nbsp}1. When you specify a URL, a <> must listen on a remote system (see ``<>'').

The `snapshot-session` <> action can also take a recording session snapshot.
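You can also take snapshots programmatically. Here's a minimal sketch, assuming the `lttng_snapshot_record()` and snapshot output functions of `liblttng-ctl` (from the `lttng/snapshot.h` header); note that, in this API family, the value parameter comes before the output object, and signatures may differ between versions:

[source,c]
----
#include <lttng/lttng.h>

/*
 * Takes a named snapshot of the recording session named `my-session`
 * (the command-line equivalent is the `record` action of
 * man:lttng-snapshot(1)).
 *
 * Returns 0 on success.
 */
int take_my_snapshot(void)
{
    int ret = -1;
    struct lttng_snapshot_output * const output =
        lttng_snapshot_output_create();

    if (!output) {
        return ret;
    }

    /* Assumed parameter order: value first, output object second */
    if (lttng_snapshot_output_set_name("my-first-snapshot", output) == 0) {
        ret = lttng_snapshot_record("my-session", output, 0);
    }

    lttng_snapshot_output_destroy(output);
    return ret;
}
----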
[role="since-2.11"] [[session-rotation]] === Archive the current trace chunk (rotate a recording session) The <> shows how to dump the current sub-buffers of a recording session to the file system or send them over the network. When you take a snapshot, LTTng doesn't clear the ring buffers of the recording session: if you take another snapshot immediately after, both snapshots could contain overlapping trace data. Inspired by https://en.wikipedia.org/wiki/Log_rotation[log rotation], _recording session rotation_ is a feature which appends the content of the ring buffers to what's already on the file system or sent over the network since the creation of the recording session or since the last rotation, and then clears those ring buffers to avoid trace data overlaps. What LTTng is about to write when performing a recording session rotation is called the _current trace chunk_. When LTTng writes or sends over the network this current trace chunk, it becomes a _trace chunk archive_. Therefore, a recording session rotation operation _archives_ the current trace chunk. [role="img-100"] .A recording session rotation operation _archives_ the current trace chunk. image::rotation.png[] A trace chunk archive is a self-contained LTTng trace which LTTng doesn't manage anymore: you can read it, modify it, move it, or remove it. As of LTTng{nbsp}{revision}, there are three methods to perform a recording session rotation: * <>. * With a <>. * Through the execution of a `rotate-session` <> action. [[immediate-rotation]]To perform an immediate rotation of the <>: . <> in <> or <> (only those two recording session modes support recording session rotation): + -- [role="term"] ---- # lttng create my-session ---- -- . <> and <>: + -- [role="term"] ---- # lttng enable-event --kernel sched_'*' # lttng start ---- -- . When needed, immediately rotate the current recording session: + -- [role="term"] ---- # lttng rotate ---- -- + The man:lttng-rotate(1) command prints the path to the created trace chunk archive. See its manual page to learn about the format of trace chunk archive directory names. + Perform other immediate rotations while the recording session is active. It's guaranteed that all the trace chunk archives don't contain overlapping trace data. You can also perform an immediate rotation once you have <> the recording session. . When you're done recording, <>: + -- [role="term"] ---- # lttng destroy ---- -- + The recording session destruction operation creates one last trace chunk archive from the current trace chunk. [[rotation-schedule]]A recording session rotation schedule is a planned rotation which LTTng performs automatically based on one of the following conditions: * A timer with a configured period expires. * The total size of the _flushed_ part of the current trace chunk becomes greater than or equal to a configured value. To schedule a rotation of the <>, set a _rotation schedule_: . <> in <> or <> (only those two creation modes support recording session rotation): + -- [role="term"] ---- # lttng create my-session ---- -- . <>: + -- [role="term"] ---- # lttng enable-event --kernel sched_'*' ---- -- . Set a recording session rotation schedule: + -- [role="term"] ---- # lttng enable-rotation --timer=10s ---- -- + In this example, we set a rotation schedule so that LTTng performs a recording session rotation every ten seconds. + See man:lttng-enable-rotation(1) to learn more about other ways to set a rotation schedule. . 
<>: + -- [role="term"] ---- # lttng start ---- -- + LTTng performs recording session rotations automatically while the recording session is active thanks to the rotation schedule. . When you're done recording, <>: + -- [role="term"] ---- # lttng destroy ---- -- + The recording session destruction operation creates one last trace chunk archive from the current trace chunk. Unset a recording session rotation schedule with the man:lttng-disable-rotation(1) command. [role="since-2.13"] [[add-event-rule-matches-trigger]] === Add an ``event rule matches'' trigger to a session daemon With the man:lttng-add-trigger(1) command, you can add a <> to a <>. A trigger associates an LTTng tracing condition to one or more actions: when the condition is satisfied, LTTng attempts to execute the actions. A trigger doesn't need any <> to exist: it belongs to a session daemon. As of LTTng{nbsp}{revision}, many condition types are available through the <> C{nbsp}API, but the man:lttng-add-trigger(1) command only accepts the ``event rule matches'' condition. An ``event rule matches'' condition is satisfied when its event rule matches an event. Unlike a <>, the event rule of an ``event rule matches'' trigger condition has no implicit conditions, that is: * It has no enabled/disabled state. * It has no attached <>. * It doesn't belong to a <>. Both the man:lttng-add-trigger(1) and man:lttng-enable-event(1) commands accept command-line arguments to specify an <>. That being said, the former is a more recent command and therefore follows the common event rule specification format (see man:lttng-event-rule(7)). .Start a <> when an event rule matches. ==== This example shows how to add the following trigger to the root <>: Condition:: An event rule matches a Linux kernel system call event of which the name starts with `exec` and `*/ls` matches the `filename` payload field. + With such an event rule, LTTng emits an event when the cmd:ls program starts. Action:: <> named `pitou`. To add such a trigger to the root session daemon: . **If there's no currently running LTTng root session daemon**, start one: + [role="term"] ---- # lttng-sessiond --daemonize ---- . <> named `pitou` and <> matching all the system call events: + [role="term"] ---- # lttng create pitou # lttng enable-event --kernel --syscall --all ---- . Add the trigger to the root session daemon: + [role="term"] ---- # lttng add-trigger --condition=event-rule-matches \ --type=syscall --name='exec*' \ --filter='filename == "*/ls"' \ --action=start-session pitou ---- + Confirm that the trigger exists with the man:lttng-list-triggers(1) command: + [role="term"] ---- # lttng list-triggers ---- . Make sure the `pitou` recording session is still inactive (stopped): + [role="term"] ---- # lttng list pitou ---- + The first line should be something like: + ---- Recording session pitou: [inactive] ---- Run the cmd:ls program to fire the LTTng trigger above: [role="term"] ---- $ ls ~ ---- At this point, the `pitou` recording session should be active (started). Confirm this with the man:lttng-list(1) command again: [role="term"] ---- # lttng list pitou ---- The first line should now look like: ---- Recording session pitou: [active] ---- This line confirms that the LTTng trigger you added fired, therefore starting the `pitou` recording session. ==== .[[trigger-event-notif]]Send a notification to a user application when an event rule matches. 
====
This example shows how to add the following trigger to the root <>:

Condition::
    An event rule matches a Linux kernel tracepoint event named `sched_switch` of which the value of the `next_comm` payload field is `bash`.
+
With such an event rule, LTTng emits an event when Linux gives access to the processor to a process named `bash`.

Action::
    Send an LTTng notification to a user application.

Moreover, we'll specify a _capture descriptor_ with the `event-rule-matches` trigger condition so that the user application can get the value of a specific `sched_switch` event payload field.

First, write and build the user application:

. Create the C{nbsp}source file of the application:
+
--
[source,c]
.path:{notif-app.c}
----
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <lttng/lttng.h>

/*
 * Subscribes to notifications, through the notification channel
 * `notification_channel`, which match the condition of the trigger
 * named `trigger_name`.
 *
 * Returns `true` on success.
 */
static bool subscribe(struct lttng_notification_channel *notification_channel,
        const char *trigger_name)
{
    struct lttng_triggers *triggers = NULL;
    unsigned int trigger_count;
    unsigned int i;
    enum lttng_error_code error_code;
    enum lttng_trigger_status trigger_status;
    bool ret = false;

    /* Get all LTTng triggers */
    error_code = lttng_list_triggers(&triggers);
    assert(error_code == LTTNG_OK);

    /* Get the number of triggers */
    trigger_status = lttng_triggers_get_count(triggers, &trigger_count);
    assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);

    /* Find the trigger named `trigger_name` */
    for (i = 0; i < trigger_count; i++) {
        const struct lttng_trigger *trigger;
        const char *this_trigger_name;

        trigger = lttng_triggers_get_at_index(triggers, i);
        trigger_status = lttng_trigger_get_name(trigger,
            &this_trigger_name);
        assert(trigger_status == LTTNG_TRIGGER_STATUS_OK);

        if (strcmp(this_trigger_name, trigger_name) == 0) {
            /* Trigger found: subscribe with its condition */
            enum lttng_notification_channel_status notification_channel_status;

            notification_channel_status =
                lttng_notification_channel_subscribe(
                    notification_channel,
                    lttng_trigger_get_const_condition(trigger));
            assert(notification_channel_status ==
                LTTNG_NOTIFICATION_CHANNEL_STATUS_OK);
            ret = true;
            break;
        }
    }

    lttng_triggers_destroy(triggers);
    return ret;
}

/*
 * Handles the evaluation `evaluation` of a single notification.
 */
static void handle_evaluation(const struct lttng_evaluation *evaluation)
{
    enum lttng_evaluation_status evaluation_status;
    const struct lttng_event_field_value *array_field_value;
    const struct lttng_event_field_value *string_field_value;
    enum lttng_event_field_value_status event_field_value_status;
    const char *string_field_string_value;

    /* Get the value of the first captured (string) field */
    evaluation_status =
        lttng_evaluation_event_rule_matches_get_captured_values(
            evaluation, &array_field_value);
    assert(evaluation_status == LTTNG_EVALUATION_STATUS_OK);
    event_field_value_status =
        lttng_event_field_value_array_get_element_at_index(
            array_field_value, 0, &string_field_value);
    assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);
    assert(lttng_event_field_value_get_type(string_field_value) ==
        LTTNG_EVENT_FIELD_VALUE_TYPE_STRING);
    event_field_value_status = lttng_event_field_value_string_get_value(
        string_field_value, &string_field_string_value);
    assert(event_field_value_status == LTTNG_EVENT_FIELD_VALUE_STATUS_OK);

    /* Print the string value of the field */
    puts(string_field_string_value);
}

int main(int argc, char *argv[])
{
    int exit_status = EXIT_SUCCESS;
    struct lttng_notification_channel *notification_channel;
    const char *trigger_name;

    assert(argc >= 2);
    trigger_name = argv[1];

    /*
     * Create a notification channel.
     *
     * A notification channel connects the user application to the LTTng
     * session daemon.
     *
     * You can use this notification channel to listen to various types
     * of notifications.
     */
    notification_channel = lttng_notification_channel_create(
        lttng_session_daemon_notification_endpoint);
    assert(notification_channel);

    /*
     * Subscribe to notifications which match the condition of the
     * trigger named `trigger_name`.
     */
    if (!subscribe(notification_channel, trigger_name)) {
        fprintf(stderr,
            "Error: Failed to subscribe to notifications (trigger `%s`).\n",
            trigger_name);
        exit_status = EXIT_FAILURE;
        goto end;
    }

    /*
     * Notification loop.
     *
     * Put this in a dedicated thread to avoid blocking the main thread.
     */
    while (true) {
        struct lttng_notification *notification;
        enum lttng_notification_channel_status status;

        /* Receive the next notification */
        status = lttng_notification_channel_get_next_notification(
            notification_channel, &notification);

        switch (status) {
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
            break;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
            /*
             * The session daemon can drop notifications if a receiving
             * application doesn't consume the notifications fast
             * enough.
             */
            continue;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
            /*
             * The session daemon closed the notification channel.
             *
             * This is typically caused by a session daemon shutting
             * down.
             */
            goto end;
        default:
            /* Unhandled conditions or errors */
            exit_status = EXIT_FAILURE;
            goto end;
        }

        /*
         * Handle the condition evaluation.
         *
         * A notification provides, amongst other things:
         *
         * * The condition that caused LTTng to send this notification.
         *
         * * The condition evaluation, which provides more specific
         *   information on the evaluation of the condition.
         */
        handle_evaluation(lttng_notification_get_evaluation(notification));

        /* Destroy the notification object */
        lttng_notification_destroy(notification);
    }

end:
    lttng_notification_channel_destroy(notification_channel);
    return exit_status;
}
----
--
+
This application prints the first captured string field value of the condition evaluation of each LTTng notification it receives.

. Build the `notif-app` application, using https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config] to provide the right compiler and linker flags:
+
--
[role="term"]
----
$ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
----
--

Now, to add the trigger to the root session daemon:

[start=3]
. **If there's no currently running LTTng root session daemon**, start one:
+
[role="term"]
----
# lttng-sessiond --daemonize
----

. Add the trigger, naming it `sched-switch-notif`, to the root session daemon:
+
[role="term"]
----
# lttng add-trigger --name=sched-switch-notif \
                    --condition=event-rule-matches \
                    --type=kernel --name=sched_switch \
                    --filter='next_comm == "bash"' --capture=prev_comm \
                    --action=notify
----
+
Confirm that the `sched-switch-notif` trigger exists with the man:lttng-list-triggers(1) command:
+
[role="term"]
----
# lttng list-triggers
----

Run the cmd:notif-app application, passing the name of the trigger whose notifications to watch:

[role="term"]
----
# ./notif-app sched-switch-notif
----

Now, in an interactive Bash, type a few keys to fire the `sched-switch-notif` trigger. Watch the `notif-app` application print the previous process names.
====

[role="since-2.6"]
[[mi]]
=== Use the machine interface

With any command of the man:lttng(1) command-line tool, set the opt:lttng(1):--mi option to `xml` (before the command name) to get an XML machine interface output, for example:

[role="term"]
----
$ lttng --mi=xml list my-session
----

A schema definition (XSD) is https://github.com/lttng/lttng-tools/blob/stable-{revision}/src/common/mi-lttng-4.0.xsd[available] to ease the integration with external tools as much as possible.

[role="since-2.8"]
[[metadata-regenerate]]
=== Regenerate the metadata of an LTTng trace

An LTTng trace, which is a https://diamon.org/ctf[CTF] trace, has both data stream files and a metadata stream file. This metadata file contains, amongst other things, information about the offset of the clock sources which LTTng uses to assign timestamps to <> when recording.

If, once a <> is <>, a major https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction happens, the clock offset of the trace also needs to be updated. Use the `metadata` item of the man:lttng-regenerate(1) command to do so.

The main use case of this command is to allow a system to boot with an incorrect wall time and have LTTng trace it before its wall time is corrected. Once the system is known to be in a state where its wall time is correct, you can run `lttng regenerate metadata`.

To regenerate the metadata stream files of the <>:

* Use the `metadata` item of the man:lttng-regenerate(1) command:
+
--
[role="term"]
----
$ lttng regenerate metadata
----
--

[role="since-2.9"]
[[regenerate-statedump]]
=== Regenerate the state dump event records of a recording session

The LTTng kernel and user space tracers generate state dump <> when the application starts or when you <>. An analysis can use the state dump event records to set an initial state before it builds the rest of the state from the subsequent event records.
http://tracecompass.org/[Trace Compass] is a notable example of an application which uses the state dump of an LTTng trace.

When you <>, it's possible that the state dump event records aren't included in the snapshot trace files because they were recorded to a <> that has been consumed or <> already.

Use the `statedump` item of the man:lttng-regenerate(1) command to emit and record the state dump events again.

To regenerate the state dump of the <> (provided you created it in <>) before you take a snapshot:

. Use the `statedump` item of the man:lttng-regenerate(1) command:
+
--
[role="term"]
----
$ lttng regenerate statedump
----
--

. <>:
+
--
[role="term"]
----
$ lttng stop
----
--

. <>:
+
--
[role="term"]
----
$ lttng snapshot record --name=my-snapshot
----
--

Depending on the event throughput, you should run steps{nbsp}1 and{nbsp}2 as closely together as possible.

[NOTE]
====
To record the state dump events, you need to <> which enable them:

* The names of LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
* The names of LTTng-modules state dump tracepoints start with `lttng_statedump_`.
====

[role="since-2.7"]
[[persistent-memory-file-systems]]
=== Record trace data on persistent memory file systems

https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory] (NVRAM) is random-access memory that retains its information when power is turned off (non-volatile). Systems with such memory can store data structures in RAM and retrieve them after a reboot, without flushing to typical _storage_.

Linux supports NVRAM file systems thanks to either https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem] (requires Linux{nbsp}4.1+) or http://pramfs.sourceforge.net/[PRAMFS] (requires Linux{nbsp}<{nbsp}4).

This section doesn't describe how to operate such file systems; we assume that you have a working persistent memory file system.

When you <>, you can specify the path of the shared memory holding the sub-buffers. If you specify a location on an NVRAM file system, then you can retrieve the latest recorded trace data when the system reboots after a crash.

To record trace data on a persistent memory file system and retrieve the trace data after a system crash:

. Create a recording session with a <> shared memory path located on an NVRAM file system:
+
--
[role="term"]
----
$ lttng create my-session --shm-path=/path/to/shm/on/nvram
----
--

. Configure the recording session as usual with the man:lttng(1) command-line tool, and <>.

. After a system crash, use the man:lttng-crash(1) command-line tool to read the trace data recorded on the NVRAM file system:
+
--
[role="term"]
----
$ lttng-crash /path/to/shm/on/nvram
----
--

The binary layout of the ring buffer files isn't exactly the same as the layout of trace files. This is why you need to use man:lttng-crash(1) instead of some standard LTTng trace reader.

To convert the ring buffer files to LTTng trace files:

* Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
+
--
[role="term"]
----
$ lttng-crash --extract=/path/to/trace /path/to/shm/on/nvram
----
--

[role="since-2.10"]
[[notif-trigger-api]]
=== Get notified when the buffer usage of a channel is too high or too low

With the notification and <> C{nbsp}API of <>, LTTng can notify your user application when the buffer usage of one or more <> becomes too low or too high.
Use this API and enable or disable <> while a recording session <> to avoid <>, for example.

.Send a notification to a user application when the buffer usage of an LTTng channel is too high.
====
In this example, we create and build an application which gets notified when the buffer usage of a specific LTTng channel is higher than 75{nbsp}%. This example only prints a message when that's the case, but you could as well use the `liblttng-ctl` C{nbsp}API to <> when this happens, for example.

. Create the C{nbsp}source file of the application:
+
--
[source,c]
.path:{notif-app.c}
----
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(int argc, char *argv[])
{
    int exit_status = EXIT_SUCCESS;
    struct lttng_notification_channel *notification_channel;
    struct lttng_condition *condition;
    struct lttng_action *action;
    struct lttng_trigger *trigger;
    const char *recording_session_name;
    const char *channel_name;

    assert(argc >= 3);
    recording_session_name = argv[1];
    channel_name = argv[2];

    /*
     * Create a notification channel.
     *
     * A notification channel connects the user application to the LTTng
     * session daemon.
     *
     * You can use this notification channel to listen to various types
     * of notifications.
     */
    notification_channel = lttng_notification_channel_create(
        lttng_session_daemon_notification_endpoint);

    /*
     * Create a "buffer usage becomes greater than" condition.
     *
     * In this case, the condition is satisfied when the buffer usage
     * becomes greater than or equal to 75 %.
     *
     * We create the condition for a specific recording session name,
     * channel name, and for the user space tracing domain.
     *
     * The following condition types also exist:
     *
     * * The buffer usage of a channel becomes less than a given value.
     *
     * * The consumed data size of a recording session becomes greater
     *   than a given value.
     *
     * * A recording session rotation becomes ongoing.
     *
     * * A recording session rotation becomes completed.
     *
     * * A given event rule matches an event.
     */
    condition = lttng_condition_buffer_usage_high_create();
    lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
    lttng_condition_buffer_usage_set_session_name(condition,
        recording_session_name);
    lttng_condition_buffer_usage_set_channel_name(condition, channel_name);
    lttng_condition_buffer_usage_set_domain_type(condition,
        LTTNG_DOMAIN_UST);

    /*
     * Create an action (receive a notification) to execute when the
     * condition created above is satisfied.
     */
    action = lttng_action_notify_create();

    /*
     * Create a trigger.
     *
     * A trigger associates a condition to an action: LTTng executes
     * the action when the condition is satisfied.
     */
    trigger = lttng_trigger_create(condition, action);

    /* Register the trigger to the LTTng session daemon. */
    lttng_register_trigger(trigger);

    /*
     * Now that we have registered a trigger, LTTng will send a
     * notification every time its condition is met through a
     * notification channel.
     *
     * To receive this notification, we must subscribe to notifications
     * which match the same condition.
     */
    lttng_notification_channel_subscribe(notification_channel, condition);

    /*
     * Notification loop.
     *
     * Put this in a dedicated thread to avoid blocking the main thread.
     */
    for (;;) {
        struct lttng_notification *notification;
        enum lttng_notification_channel_status status;
        const struct lttng_evaluation *notification_evaluation;
        const struct lttng_condition *notification_condition;
        double buffer_usage;

        /* Receive the next notification. */
        status = lttng_notification_channel_get_next_notification(
            notification_channel, &notification);

        switch (status) {
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
            break;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
            /*
             * The session daemon can drop notifications if a monitoring
             * application isn't consuming the notifications fast
             * enough.
             */
            continue;
        case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
            /*
             * The session daemon closed the notification channel.
             *
             * This is typically caused by a session daemon shutting
             * down.
             */
            goto end;
        default:
            /* Unhandled conditions or errors. */
            exit_status = EXIT_FAILURE;
            goto end;
        }

        /*
         * A notification provides, amongst other things:
         *
         * * The condition that caused LTTng to send this notification.
         *
         * * The condition evaluation, which provides more specific
         *   information on the evaluation of the condition.
         *
         *   The condition evaluation provides the buffer usage
         *   value at the moment the condition was satisfied.
         */
        notification_condition = lttng_notification_get_condition(
            notification);
        notification_evaluation = lttng_notification_get_evaluation(
            notification);

        /* We're subscribed to only one condition. */
        assert(lttng_condition_get_type(notification_condition) ==
            LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);

        /*
         * Get the exact sampled buffer usage from the condition
         * evaluation.
         */
        lttng_evaluation_buffer_usage_get_usage_ratio(
            notification_evaluation, &buffer_usage);

        /*
         * At this point, instead of printing a message, we could do
         * something to reduce the buffer usage of the channel, like
         * disable specific events, for example.
         */
        printf("Buffer usage is %f %% in recording session \"%s\", "
            "user space channel \"%s\".\n", buffer_usage * 100,
            recording_session_name, channel_name);

        /* Destroy the notification object. */
        lttng_notification_destroy(notification);
    }

end:
    lttng_action_destroy(action);
    lttng_condition_destroy(condition);
    lttng_trigger_destroy(trigger);
    lttng_notification_channel_destroy(notification_channel);
    return exit_status;
}
----
--

. Build the `notif-app` application, linking it with `liblttng-ctl`:
+
--
[role="term"]
----
$ gcc -o notif-app notif-app.c $(pkg-config --cflags --libs lttng-ctl)
----
--

. <>, <> matching all the user space tracepoint events, and <>:
+
--
[role="term"]
----
$ lttng create my-session
$ lttng enable-event --userspace --all
$ lttng start
----
--
+
If you create the channel manually with the man:lttng-enable-channel(1) command, you can set its <> to control how frequently LTTng samples the current values of the channel properties to evaluate user conditions.

. Run the `notif-app` application.
+
This program accepts the <> and user space channel names as its first two arguments. The channel which LTTng automatically creates with the man:lttng-enable-event(1) command above is named `channel0`:
+
--
[role="term"]
----
$ ./notif-app my-session channel0
----
--

. In another terminal, run an application with a very high event throughput so that the 75{nbsp}% buffer usage condition is reached.
+
In the first terminal, the application should print lines like this:
+
----
Buffer usage is 81.45197 % in recording session "my-session", user space channel "channel0".
----
+
If you don't see anything, try to make the threshold of the condition in path:{notif-app.c} lower (for example, 0.1, which corresponds to 10{nbsp}%), and then rebuild the `notif-app` application (step{nbsp}2) and run it again (step{nbsp}4).
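Note that when cmd:notif-app exits, the trigger it registered still exists in the session daemon: destroying the local `trigger` object doesn't unregister it. Assuming the `lttng_unregister_trigger()` function of `liblttng-ctl` (the counterpart of `lttng_register_trigger()`), you could remove it during cleanup:

[source,c]
----
/*
 * Remove the trigger from the session daemon before destroying the
 * local objects (assumed liblttng-ctl counterpart of
 * lttng_register_trigger()).
 */
lttng_unregister_trigger(trigger);
----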
==== [[reference]] == Reference [[lttng-modules-ref]] === noch:{LTTng-modules} [role="since-2.9"] [[lttng-tracepoint-enum]] ==== `LTTNG_TRACEPOINT_ENUM()` usage Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration: [source,c] ---- LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries)) ---- Replace: * `name` with the name of the enumeration (C identifier, unique amongst all the defined enumerations). * `entries` with a list of enumeration entries. The available enumeration entry macros are: +ctf_enum_value(__name__, __value__)+:: Entry named +__name__+ mapped to the integral value +__value__+. +ctf_enum_range(__name__, __begin__, __end__)+:: Entry named +__name__+ mapped to the range of integral values between +__begin__+ (included) and +__end__+ (included). +ctf_enum_auto(__name__)+:: Entry named +__name__+ mapped to the integral value following the last mapping value. + The last value of a `ctf_enum_value()` entry is its +__value__+ parameter. + The last value of a `ctf_enum_range()` entry is its +__end__+ parameter. + If `ctf_enum_auto()` is the first entry in the list, its integral value is 0. Use the `ctf_enum()` <> to use a defined enumeration as a tracepoint field. .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`. ==== [source,c] ---- LTTNG_TRACEPOINT_ENUM( my_enum, TP_ENUM_VALUES( ctf_enum_auto("AUTO: EXPECT 0") ctf_enum_value("VALUE: 23", 23) ctf_enum_value("VALUE: 27", 27) ctf_enum_auto("AUTO: EXPECT 28") ctf_enum_range("RANGE: 101 TO 303", 101, 303) ctf_enum_auto("AUTO: EXPECT 304") ) ) ---- ==== [role="since-2.7"] [[lttng-modules-tp-fields]] ==== Tracepoint fields macros (for `TP_FIELDS()`) [[tp-fast-assign]][[tp-struct-entry]]The available macros to define tracepoint fields, which must be listed within `TP_FIELDS()` in `LTTNG_TRACEPOINT_EVENT()`, are: [role="func-desc growable",cols="asciidoc,asciidoc"] .Available macros to define LTTng-modules tracepoint fields |==== |Macro |Description and parameters | +ctf_integer(__t__, __n__, __e__)+ +ctf_integer_nowrite(__t__, __n__, __e__)+ +ctf_user_integer(__t__, __n__, __e__)+ +ctf_user_integer_nowrite(__t__, __n__, __e__)+ | Standard integer, displayed in base{nbsp}10. +__t__+:: Integer C type (`int`, `long`, `size_t`, ...). +__n__+:: Field name. +__e__+:: Argument expression. | +ctf_integer_hex(__t__, __n__, __e__)+ +ctf_user_integer_hex(__t__, __n__, __e__)+ | Standard integer, displayed in base{nbsp}16. +__t__+:: Integer C type. +__n__+:: Field name. +__e__+:: Argument expression. |+ctf_integer_oct(__t__, __n__, __e__)+ | Standard integer, displayed in base{nbsp}8. +__t__+:: Integer C type. +__n__+:: Field name. +__e__+:: Argument expression. | +ctf_integer_network(__t__, __n__, __e__)+ +ctf_user_integer_network(__t__, __n__, __e__)+ | Integer in network byte order (big-endian), displayed in base{nbsp}10. +__t__+:: Integer C type. +__n__+:: Field name. +__e__+:: Argument expression. | +ctf_integer_network_hex(__t__, __n__, __e__)+ +ctf_user_integer_network_hex(__t__, __n__, __e__)+ | Integer in network byte order, displayed in base{nbsp}16. +__t__+:: Integer C type. +__n__+:: Field name. +__e__+:: Argument expression. | +ctf_enum(__N__, __t__, __n__, __e__)+ +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+ +ctf_user_enum(__N__, __t__, __n__, __e__)+ +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+ | Enumeration. +__N__+:: Name of a <>. +__t__+:: Integer C type (`int`, `long`, `size_t`, ...). +__n__+:: Field name. +__e__+:: Argument expression. 
|
+ctf_string(__n__, __e__)+

+ctf_string_nowrite(__n__, __e__)+

+ctf_user_string(__n__, __e__)+

+ctf_user_string_nowrite(__n__, __e__)+
|
Null-terminated string; undefined behavior if +__e__+ is `NULL`.

+__n__+:: Field name.

+__e__+:: Argument expression.

|
+ctf_array(__t__, __n__, __e__, __s__)+

+ctf_array_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array(__t__, __n__, __e__, __s__)+

+ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of integers.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__s__+:: Number of elements.

|
+ctf_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of bits.

The type of +__e__+ must be an integer type. +__s__+ is the number of elements of such type in +__e__+, not the number of bits.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__s__+:: Number of elements.

|
+ctf_array_text(__t__, __n__, __e__, __s__)+

+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_text(__t__, __n__, __e__, __s__)+

+ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array, printed as text.

The string doesn't need to be null-terminated.

+__t__+:: Array element C type (always `char`).

+__n__+:: Field name.

+__e__+:: Argument expression.

+__s__+:: Number of elements.

|
+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers.

The type of +__E__+ must be unsigned.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__T__+:: Length expression C type.

+__E__+:: Length expression.

|
+ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers, displayed in base{nbsp}16.

The type of +__E__+ must be unsigned.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__T__+:: Length expression C type.

+__E__+:: Length expression.

|+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers in network byte order (big-endian), displayed in base{nbsp}10.

The type of +__E__+ must be unsigned.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__T__+:: Length expression C type.

+__E__+:: Length expression.

|
+ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of bits.

The type of +__e__+ must be an integer type. +__E__+ is the number of elements of such type in +__e__+, not the number of bits.

The type of +__E__+ must be unsigned.

+__t__+:: Array element C type.

+__n__+:: Field name.

+__e__+:: Argument expression.

+__T__+:: Length expression C type.

+__E__+:: Length expression.
|
+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array, displayed as text.

The string doesn't need to be null-terminated.

The type of +__E__+ must be unsigned.

The behavior is undefined if +__e__+ is `NULL`.

+__t__+:: Sequence element C type (always `char`).

+__n__+:: Field name.

+__e__+:: Argument expression.

+__T__+:: Length expression C type.

+__E__+:: Length expression.
|====

Use the `_user` versions when the argument expression, `e`, is a user space address. In the cases of `ctf_user_integer*()` and `ctf_user_float*()`, `&e` must be a user space address, thus `e` must be addressable.

The `_nowrite` versions omit themselves from the trace data, but are otherwise identical. This means LTTng won't write the `_nowrite` fields to the recorded trace. Their primary purpose is to make some of the event context available to the <> without having to commit the data to <>.
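To illustrate how the field macros fit together, here's a hypothetical tracepoint definition combining a few of them with the `my_enum` enumeration of the previous section (all `my_*` names are illustrative, not part of LTTng):

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    /* Tracepoint name (hypothetical) */
    my_subsys_my_event,

    /* Tracepoint prototype and argument names */
    TP_PROTO(int my_int, const char *my_str),
    TP_ARGS(my_int, my_str),

    /* Event record fields */
    TP_FIELDS(
        /* `my_int`, displayed in base 10, then in base 16 */
        ctf_integer(int, my_int_field, my_int)
        ctf_integer_hex(int, my_int_hex_field, my_int)

        /* Null-terminated string */
        ctf_string(my_str_field, my_str)

        /* `my_int` mapped through the `my_enum` enumeration */
        ctf_enum(my_enum, int, my_enum_field, my_int)
    )
)
----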
[[glossary]]
== Glossary

Terms related to LTTng and to tracing in general:

[[def-action]]action::
    The part of a <> which LTTng executes when the trigger <> is satisfied.

Babeltrace::
    The https://diamon.org/babeltrace[Babeltrace] project, which includes:
+
* The https://babeltrace.org/docs/v2.0/man1/babeltrace2.1/[cmd:babeltrace2] command-line interface.
* The libbabeltrace2 library which offers a https://babeltrace.org/docs/v2.0/libbabeltrace2/[C API].
* https://babeltrace.org/docs/v2.0/python/bt2/[Python{nbsp}3 bindings].
* Plugins.

[[def-buffering-scheme]]<>::
    A layout of <> applied to a given channel.

[[def-channel]]<>::
    An entity which is responsible for a set of <>.
+
<> are always attached to a specific channel.

clock::
    A source of time for a <>.

[[def-condition]]condition::
    The part of a <> which must be satisfied for LTTng to attempt to execute the trigger <>.

[[def-consumer-daemon]]<>::
    A program which is responsible for consuming the full <> and writing them to a file system or sending them over the network.

[[def-current-trace-chunk]]current trace chunk::
    A <> which includes the current content of all the <> of the <> and the stream files produced since the latest event amongst:
+
* The creation of the recording session.
* The last <>, if any.

<>::
    The <> in which the <> _discards_ new <> when there's no <> space left to store them.

[[def-event]]event::
    The execution of an <>, like a <> that you manually place in some source code, or a Linux kprobe.
+
When an instrumentation point is executed, LTTng creates an event.
+
When an <> matches the event, <> executes some action, for example:
+
* Record its payload to a <> as an <>.
* Attempt to execute the user-defined actions of a <> with an <> condition.

[[def-event-name]]event name::
    The name of an <>, which is also the name of the <>.
+
This is also called the _instrumentation point name_.

[[def-event-record]]event record::
    A record (binary serialization), in a <>, of the payload of an <>.
+
The payload of an event record has zero or more _fields_.

[[def-event-record-loss-mode]]<>::
    The mechanism by which event records of a given <> are lost (not recorded) when there's no <> space left to store them.

[[def-event-rule]]<>::
    Set of conditions which an <> must satisfy for LTTng to execute some action.
+
An event rule is said to _match_ events, like a https://en.wikipedia.org/wiki/Regular_expression[regular expression] matches strings.
+
A <> is a specific type of event rule of which the action is to <> the event to a <>.

[[def-incl-set]]inclusion set::
    In the <> context: a set of <> of a given type.

<>::
    The use of <> probes to make a kernel or <> traceable.

[[def-instrumentation-point]]instrumentation point::
    A point in the execution path of a kernel or <> which, when executed, creates an <>.

instrumentation point name::
    See _<>_.

`java.util.logging`::
    The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities] of the Java platform.

log4j::
    A https://logging.apache.org/log4j/1.2/[logging library] for Java developed by the Apache Software Foundation.

log level::
    Level of severity of a log statement or user space <>.

[[def-lttng]]LTTng::
    The _Linux Trace Toolkit: next generation_ project.

<>::
    A command-line tool provided by the <> project which you can use to send and receive control messages to and from a <>.

cmd:lttng-consumerd::
    The name of the <> program.

cmd:lttng-crash::
    A utility provided by the <> project which can convert <> files (usually <>) to <> files.
+
See man:lttng-crash(1).

LTTng Documentation::
    This document.

<>::
    A communication protocol between the <> and live readers which makes it possible to show or analyze <> ``live'', as they're received by the <>.

<>::
    The https://github.com/lttng/lttng-modules[LTTng-modules] project, which contains the Linux kernel modules to make the Linux kernel <> available for <> tracing.

cmd:lttng-relayd::
    The name of the <> program.

cmd:lttng-sessiond::
    The name of the <> program.

[[def-lttng-tools]]LTTng-tools::
    The https://github.com/lttng/lttng-tools[LTTng-tools] project, which contains the various programs and libraries used to <>.

[[def-lttng-ust]]<>::
    The https://github.com/lttng/lttng-ust[LTTng-UST] project, which contains libraries to instrument <>.

<>::
    A Java package provided by the <> project to allow the LTTng instrumentation of `java.util.logging` and Apache log4j{nbsp}1.2 logging statements.

<>::
    A Python package provided by the <> project to allow the <> instrumentation of Python logging statements.

<>::
    The <> in which new <> _overwrite_ older event records when there's no <> space left to store them.

<>::
    A <> in which each instrumented process has its own <> for a given user space <>.

<>::
    A <> in which all the processes of a Unix user share the same <> for a given user space <>.

[[def-proc-attr]]process attribute::
    In the <> context:
+
* A process ID.
* A virtual process ID.
* A Unix user ID.
* A virtual Unix user ID.
* A Unix group ID.
* A virtual Unix group ID.

record (_noun_)::
    See <>.

[[def-record]]record (_verb_)::
    Serialize the binary payload of an <> to a <>.

[[def-recording-event-rule]]<>::
    Specific type of <> of which the action is to <> the matched event to a <>.

[[def-tracing-session]][[def-recording-session]]<>::
    A stateful dialogue between you and a <>.

[[def-tracing-session-rotation]]<>::
    The action of archiving the <> of a <>.

[[def-relay-daemon]]<>::
    A process which is responsible for receiving the <> data which a distant <> sends.

[[def-ring-buffer]]ring buffer::
    A set of <>.

rotation::
    See _<>_.

[[def-session-daemon]]<>::
    A process which receives control commands from you and orchestrates the <> and various <> daemons.

<>::
    A copy of the current data of all the <> of a given <>, saved as <> files.

[[def-sub-buffer]]sub-buffer::
    One part of an <> <> which contains <>.

timestamp::
    The time information attached to an <> when LTTng creates it.
[[def-trace]]trace (_noun_):: A set of: + * One https://diamon.org/ctf/[CTF] metadata stream file. * One or more CTF data stream files which are the concatenations of one or more flushed <>. [[def-trace-verb]]trace (_verb_):: From the perspective of a <>: attempt to execute one or more actions when emitting an <> in an application or in a system. [[def-trace-chunk]]trace chunk:: A self-contained <> which is part of a <>. Each <> produces a <>. [[def-trace-chunk-archive]]trace chunk archive:: The result of a <>. + <> doesn't manage any trace chunk archive, even if its containing <> is still active: you are free to read it, modify it, move it, or remove it. Trace Compass:: The http://tracecompass.org[Trace Compass] project and application. [[def-tracepoint]]tracepoint:: An instrumentation point using the tracepoint mechanism of the Linux kernel or of <>. tracepoint definition:: The definition of a single <>. tracepoint name:: The name of a <>. [[def-tracepoint-provider]]tracepoint provider:: A set of functions providing <> to an instrumented <>. + Not to be confused with a <>: many tracepoint providers can exist within a tracepoint provider package. [[def-tracepoint-provider-package]]tracepoint provider package:: One or more <> compiled as an https://en.wikipedia.org/wiki/Object_file[object file] or as a link:https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries[shared library]. [[def-tracer]]tracer:: A piece of software which executes some action when it emits an <>, like <> it to some buffer. <>:: A type of LTTng <>. <>:: The Unix group which a Unix user can be part of to be allowed to control the Linux kernel LTTng <>. [[def-trigger]]<>:: A <>-<> pair; when the condition of a trigger is satisfied, LTTng attempts to execute its actions. [[def-user-application]]user application:: An application (program or library) running in user space, as opposed to a Linux kernel module, for example.