The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
v2.10, 1 August 2017
5
6
7include::../common/copyright.txt[]
8
9
10include::../common/welcome.txt[]
11
12
13include::../common/audience.txt[]
14
15
16[[chapters]]
17=== What's in this documentation?
18
19The LTTng Documentation is divided into the following sections:
20
21* **<<nuts-and-bolts,Nuts and bolts>>** explains the
22 rudiments of software tracing and the rationale behind the
23 LTTng project.
24+
25You can skip this section if you’re familiar with software tracing and
26with the LTTng project.
27
28* **<<installing-lttng,Installation>>** describes the steps to
29 install the LTTng packages on common Linux distributions and from
30 their sources.
31+
32You can skip this section if you already properly installed LTTng on
33your target system.
34
35* **<<getting-started,Quick start>>** is a concise guide to
36 getting started quickly with LTTng kernel and user space tracing.
37+
38We recommend this section if you're new to LTTng or to software tracing
39in general.
40+
41You can skip this section if you're not new to LTTng.
42
43* **<<core-concepts,Core concepts>>** explains the concepts at
44 the heart of LTTng.
45+
46It's a good idea to become familiar with the core concepts
47before attempting to use the toolkit.
48
49* **<<plumbing,Components of LTTng>>** describes the various components
50 of the LTTng machinery, like the daemons, the libraries, and the
51 command-line interface.
52* **<<instrumenting,Instrumentation>>** shows different ways to
53 instrument user applications and the Linux kernel.
54+
55Instrumenting source code is essential to provide a meaningful
56source of events.
57+
58You can skip this section if you do not have a programming background.
59
60* **<<controlling-tracing,Tracing control>>** is divided into topics
61 which demonstrate how to use the vast array of features that
62 LTTng{nbsp}{revision} offers.
63* **<<reference,Reference>>** contains reference tables.
64* **<<glossary,Glossary>>** is a specialized dictionary of terms related
65 to LTTng or to the field of software tracing.
66
67
68include::../common/convention.txt[]
69
70
71include::../common/acknowledgements.txt[]
72
73
74[[whats-new]]
75== What's new in LTTng {revision}?
76
77LTTng{nbsp}{revision} bears the name _KeKriek_. From
78http://brasseriedunham.com/[Brasserie Dunham], the _**KeKriek**_ is a
79sour mashed golden wheat ale fermented with local sour cherries from
80Tougas orchards. Fresh sweet cherry notes with some tartness, lively
81carbonation with a dry finish.
82
83New features and changes in LTTng{nbsp}{revision}:
84
85* **Tracing control**:
86** You can put more than one wildcard special character (`*`), and not
87 only at the end, when you <<enabling-disabling-events,create an event
88 rule>>, in both the instrumentation point name and the literal
89 strings of
  link:/man/1/lttng-enable-event/v{revision}/#doc-filter-syntax[filter expressions]:
91+
92--
93[role="term"]
94----
95# lttng enable-event --kernel 'x86_*_local_timer_*' \
96 --filter='name == "*a*b*c*d*e" && count >= 23'
97----
98--
99+
100--
101[role="term"]
102----
103$ lttng enable-event --userspace '*_my_org:*msg*'
104----
105--
106
107** New trigger and notification API for
108 <<liblttng-ctl-lttng,`liblttng-ctl`>>. This new subsystem allows you
109 to register triggers which emit a notification when a given
110 condition is satisfied. As of LTTng{nbsp}{revision}, only
111 <<channel,channel>> buffer usage conditions are available.
112 Documentation is available in the
113 https://github.com/lttng/lttng-tools/tree/stable-{revision}/include/lttng[`liblttng-ctl`
114 header files] and in
115 <<notif-trigger-api,Get notified when a channel's buffer usage is too
116 high or too low>>.
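+
--
The following is a minimal, condensed sketch of how you can use this API to
get notified when the buffer usage of a given user space channel crosses
75{nbsp}%. The tracing session and channel names are placeholders, error
checking and cleanup are omitted, and you need to link the program with
`liblttng-ctl`. See the complete, commented example in
<<notif-trigger-api,Get notified when a channel's buffer usage is too high
or too low>>.

[source,c]
----
#include <stdio.h>

#include <lttng/action/action.h>
#include <lttng/action/notify.h>
#include <lttng/condition/buffer-usage.h>
#include <lttng/condition/condition.h>
#include <lttng/condition/evaluation.h>
#include <lttng/domain.h>
#include <lttng/endpoint.h>
#include <lttng/notification/channel.h>
#include <lttng/notification/notification.h>
#include <lttng/trigger/trigger.h>

int main(void)
{
    /*
     * Condition: the buffer usage of channel `my-channel` (placeholder)
     * within tracing session `my-session` (placeholder), in the user
     * space tracing domain, is greater than or equal to 75 %.
     */
    struct lttng_condition *condition =
        lttng_condition_buffer_usage_high_create();

    lttng_condition_buffer_usage_set_threshold_ratio(condition, 0.75);
    lttng_condition_buffer_usage_set_session_name(condition, "my-session");
    lttng_condition_buffer_usage_set_channel_name(condition, "my-channel");
    lttng_condition_buffer_usage_set_domain_type(condition,
                                                 LTTNG_DOMAIN_UST);

    /* Action: emit a notification when the condition is satisfied. */
    struct lttng_action *action = lttng_action_notify_create();

    /* Trigger: associate the condition with the action and register it. */
    struct lttng_trigger *trigger = lttng_trigger_create(condition, action);

    lttng_register_trigger(trigger);

    /*
     * Notification channel: connect to the session daemon and subscribe
     * to the notifications which match the condition above.
     */
    struct lttng_notification_channel *notification_channel =
        lttng_notification_channel_create(
            lttng_session_daemon_notification_endpoint);

    lttng_notification_channel_subscribe(notification_channel, condition);

    /* Receive notifications and print the evaluated buffer usage. */
    for (;;) {
        struct lttng_notification *notification;
        const struct lttng_evaluation *evaluation;
        double usage_ratio;

        lttng_notification_channel_get_next_notification(
            notification_channel, &notification);
        evaluation = lttng_notification_get_evaluation(notification);
        lttng_evaluation_buffer_usage_get_usage_ratio(evaluation,
                                                      &usage_ratio);
        printf("Buffer usage: %f %%\n", usage_ratio * 100);
        lttng_notification_destroy(notification);
    }

    return 0;
}
----
--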
117
118** You can now embed the whole textual LTTng-tools man pages into the
119 executables at build time with the `--enable-embedded-help`
120 configuration option. Thanks to this option, you don't need the
121 http://www.methods.co.nz/asciidoc/[AsciiDoc] and
122 https://directory.fsf.org/wiki/Xmlto[xmlto] tools at build time, and
123 a manual pager at run time, to get access to this documentation.
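+
--
For example, when <<building-from-source,building LTTng-tools from source>>:

[role="term"]
----
$ ./configure --enable-embedded-help
----
--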
124
125* **User space tracing**:
126** New blocking mode: an LTTng-UST tracepoint can now block until
127 <<channel,sub-buffer>> space is available instead of discarding event
128 records in <<channel-overwrite-mode-vs-discard-mode,discard mode>>.
129 With this feature, you can be sure that no event records are
130 discarded during your application's execution at the expense of
131 performance.
132+
133For example, the following command lines create a user space tracing
134channel with an infinite blocking timeout and run an application
135instrumented with LTTng-UST which is explicitly allowed to block:
136+
137--
138[role="term"]
139----
140$ lttng create
141$ lttng enable-channel --userspace --blocking-timeout=-1 blocking-channel
142$ lttng enable-event --userspace --channel=blocking-channel --all
143$ lttng start
144$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
145----
146--
147+
148See the complete <<blocking-timeout-example,blocking timeout example>>.
149
150* **Linux kernel tracing**:
151** Linux 4.10, 4.11, and 4.12 support.
152** The thread state dump events recorded by LTTng-modules now contain
153 the task's CPU identifier. This improves the precision of the
154 scheduler model for analyses.
155** Extended man:socketpair(2) system call tracing data.
156
157
158[[nuts-and-bolts]]
159== Nuts and bolts
160
161What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
162generation_ is a modern toolkit for tracing Linux systems and
163applications. So your first question might be:
164**what is tracing?**
165
166
167[[what-is-tracing]]
168=== What is tracing?
169
170As the history of software engineering progressed and led to what
171we now take for granted--complex, numerous and
172interdependent software applications running in parallel on
173sophisticated operating systems like Linux--the authors of such
174components, software developers, began feeling a natural
175urge to have tools that would ensure the robustness and good performance
176of their masterpieces.
177
178One major achievement in this field is, inarguably, the
179https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
180an essential tool for developers to find and fix bugs. But even the best
181debugger won't help make your software run faster, and nowadays, faster
182software means either more work done by the same hardware, or cheaper
183hardware for the same work.
184
185A _profiler_ is often the tool of choice to identify performance
186bottlenecks. Profiling is suitable to identify _where_ performance is
187lost in a given software. The profiler outputs a profile, a statistical
188summary of observed events, which you may use to discover which
189functions took the most time to execute. However, a profiler won't
190report _why_ some identified functions are the bottleneck. Bottlenecks
191might only occur when specific conditions are met, conditions that are
192sometimes impossible to capture by a statistical profiler, or impossible
193to reproduce with an application altered by the overhead of an
194event-based profiler. For a thorough investigation of software
195performance issues, a history of execution is essential, with the
196recorded values of variables and context fields you choose, and
197with as little influence as possible on the instrumented software. This
198is where tracing comes in handy.
199
200_Tracing_ is a technique used to understand what goes on in a running
201software system. The software used for tracing is called a _tracer_,
202which is conceptually similar to a tape recorder. When recording,
203specific instrumentation points placed in the software source code
204generate events that are saved on a giant tape: a _trace_ file. You
205can trace user applications and the operating system at the same time,
206opening the possibility of resolving a wide range of problems that would
207otherwise be extremely challenging.
208
209Tracing is often compared to _logging_. However, tracers and loggers are
210two different tools, serving two different purposes. Tracers are
211designed to record much lower-level events that occur much more
212frequently than log messages, often in the range of thousands per
213second, with very little execution overhead. Logging is more appropriate
214for a very high-level analysis of less frequent events: user accesses,
215exceptional conditions (errors and warnings, for example), database
216transactions, instant messaging communications, and such. Simply put,
217logging is one of the many use cases that can be satisfied with tracing.
218
219The list of recorded events inside a trace file can be read manually
220like a log file for the maximum level of detail, but it is generally
221much more interesting to perform application-specific analyses to
222produce reduced statistics and graphs that are useful to resolve a
223given problem. Trace viewers and analyzers are specialized tools
224designed to do this.
225
226In the end, this is what LTTng is: a powerful, open source set of
227tools to trace the Linux kernel and user applications at the same time.
228LTTng is composed of several components actively maintained and
229developed by its link:/community/#where[community].
230
231
232[[lttng-alternatives]]
233=== Alternatives to noch:{LTTng}
234
235Excluding proprietary solutions, a few competing software tracers
236exist for Linux:
237
238* https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
239 Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
240 user scripts and is responsible for loading code into the
241 Linux kernel for further execution and collecting the outputted data.
242* https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
243 subsystem in the Linux kernel in which a virtual machine can execute
244 programs passed from the user space to the kernel. You can attach
245 such programs to tracepoints and KProbes thanks to a system call, and
246 they can output data to the user space when executed thanks to
247 different mechanisms (pipe, VM register values, and eBPF maps, to name
248 a few).
249* https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
250 is the de facto function tracer of the Linux kernel. Its user
251 interface is a set of special files in sysfs.
252* https://perf.wiki.kernel.org/[perf] is
253 a performance analyzing tool for Linux which supports hardware
254 performance counters, tracepoints, as well as other counters and
255 types of probes. perf's controlling utility is the cmd:perf command
256 line/curses tool.
257* http://linux.die.net/man/1/strace[strace]
258 is a command-line utility which records system calls made by a
259 user process, as well as signal deliveries and changes of process
260 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
261 to fulfill its function.
262* http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
263 analyze Linux kernel events. You write scripts, or _chisels_ in
264 sysdig's jargon, in Lua and sysdig executes them while the system is
265 being traced or afterwards. sysdig's interface is the cmd:sysdig
266 command-line tool as well as the curses-based cmd:csysdig tool.
267* https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
268 user space tracer which uses custom user scripts to produce plain text
269 traces. SystemTap converts the scripts to the C language, and then
270 compiles them as Linux kernel modules which are loaded to produce
271 trace data. SystemTap's primary user interface is the cmd:stap
272 command-line tool.
273
The main distinctive features of LTTng are that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead amongst the other solutions. It produces trace files in the
277http://diamon.org/ctf[CTF] format, a file format optimized
278for the production and analyses of multi-gigabyte data.
279
280LTTng is the result of more than 10 years of active open source
281development by a community of passionate developers.
282LTTng{nbsp}{revision} is currently available on major desktop and server
283Linux distributions.
284
285The main interface for tracing control is a single command-line tool
286named cmd:lttng. The latter can create several tracing sessions, enable
287and disable events on the fly, filter events efficiently with custom
288user expressions, start and stop tracing, and much more. LTTng can
289record the traces on the file system or send them over the network, and
keep them totally or partially. You can view the traces once tracing
becomes inactive, or in real time while tracing is active.
292
293<<installing-lttng,Install LTTng now>> and
294<<getting-started,start tracing>>!
295
296
297[[installing-lttng]]
298== Installation
299
300**LTTng** is a set of software <<plumbing,components>> which interact to
301<<instrumenting,instrument>> the Linux kernel and user applications, and
302to <<controlling-tracing,control tracing>> (start and stop
303tracing, enable and disable event rules, and the rest). Those
304components are bundled into the following packages:
305
306* **LTTng-tools**: Libraries and command-line interface to
307 control tracing.
308* **LTTng-modules**: Linux kernel modules to instrument and
309 trace the kernel.
310* **LTTng-UST**: Libraries and Java/Python packages to instrument and
311 trace user applications.
312
313Most distributions mark the LTTng-modules and LTTng-UST packages as
314optional when installing LTTng-tools (which is always required). In the
315following sections, we always provide the steps to install all three,
316but note that:
317
318* You only need to install LTTng-modules if you intend to trace the
319 Linux kernel.
320* You only need to install LTTng-UST if you intend to trace user
321 applications.
322
323[role="growable"]
324.Availability of LTTng{nbsp}{revision} for major Linux distributions as of 25 July 2017.
325|====
326|Distribution |Available in releases |Alternatives
327
328|https://www.ubuntu.com/[Ubuntu]
329|Ubuntu{nbsp}14.04 _Trusty Tahr_ and Ubuntu{nbsp}16.04 _Xenial Xerus_:
330<<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
331|link:/docs/v2.9#doc-ubuntu[LTTng{nbsp}2.9 for Ubuntu{nbsp}17.04 _Zesty Zapus_].
332
333<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
334other Ubuntu releases.
335
336|https://getfedora.org/[Fedora]
337|_Not available_
338|link:/docs/v2.9#doc-fedora[LTTng{nbsp}2.9 for Fedora 26].
339
340<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
341
342|https://www.debian.org/[Debian]
343|_Not available_
344|link:/docs/v2.9#doc-debian[LTTng{nbsp}2.9 for Debian "stretch"
345(stable), Debian "buster" (testing), and Debian "sid" (unstable)].
346
347<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
348
349|https://www.archlinux.org/[Arch Linux]
350|_Not available_
351|link:/docs/v2.9#doc-arch-linux[LTTng{nbsp}2.9 in the latest AUR packages].
352
353|https://alpinelinux.org/[Alpine Linux]
354|_Not available_
355|link:/docs/v2.9#doc-alpine-linux[LTTng{nbsp}2.9 for Alpine Linux "edge"].
356
357<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
358
359|https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
360|See http://packages.efficios.com/[EfficiOS Enterprise Packages].
361|
362
363|https://buildroot.org/[Buildroot]
364|_Not available_
365|link:/docs/v2.9#doc-buildroot[LTTng{nbsp}2.9 for Buildroot{nbsp}2017.02 and
366Buildroot{nbsp}2017.05].
367
368<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
369
370|http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
371https://www.yoctoproject.org/[Yocto]
372|_Not available_
373|link:/docs/v2.9#doc-oe-yocto[LTTng{nbsp}2.9 for Yocto Project{nbsp}2.3 _Pyro_]
374(`openembedded-core` layer).
375
376<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
377|====
378
379
380[[ubuntu]]
381=== [[ubuntu-official-repositories]]Ubuntu
382
383[[ubuntu-ppa]]
384==== noch:{LTTng} Stable {revision} PPA
385
386The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
387Stable{nbsp}{revision} PPA] offers the latest stable
388LTTng{nbsp}{revision} packages for:
389
390* Ubuntu{nbsp}14.04 _Trusty Tahr_
391* Ubuntu{nbsp}16.04 _Xenial Xerus_
392
393To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:
394
395. Add the LTTng Stable{nbsp}{revision} PPA repository and update the
396 list of packages:
397+
398--
399[role="term"]
400----
401# apt-add-repository ppa:lttng/stable-2.10
402# apt-get update
403----
404--
405
406. Install the main LTTng{nbsp}{revision} packages:
407+
408--
409[role="term"]
410----
411# apt-get install lttng-tools
412# apt-get install lttng-modules-dkms
413# apt-get install liblttng-ust-dev
414----
415--
416
417. **If you need to instrument and trace
418 <<java-application,Java applications>>**, install the LTTng-UST
419 Java agent:
420+
421--
422[role="term"]
423----
424# apt-get install liblttng-ust-agent-java
425----
426--
427
428. **If you need to instrument and trace
429 <<python-application,Python{nbsp}3 applications>>**, install the
430 LTTng-UST Python agent:
431+
432--
433[role="term"]
434----
435# apt-get install python3-lttngust
436----
437--
438
439
440[[enterprise-distributions]]
441=== RHEL, SUSE, and other enterprise distributions
442
443To install LTTng on enterprise Linux distributions, such as Red Hat
444Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
445see http://packages.efficios.com/[EfficiOS Enterprise Packages].
446
447
448[[building-from-source]]
449=== Build from source
450
451To build and install LTTng{nbsp}{revision} from source:
452
453. Using your distribution's package manager, or from source, install
454 the following dependencies of LTTng-tools and LTTng-UST:
455+
456--
457* https://sourceforge.net/projects/libuuid/[libuuid]
458* http://directory.fsf.org/wiki/Popt[popt]
459* http://liburcu.org/[Userspace RCU]
460* http://www.xmlsoft.org/[libxml2]
461--
462
463. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
464+
465--
466[role="term"]
467----
468$ cd $(mktemp -d) &&
469wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
470tar -xf lttng-modules-latest-2.10.tar.bz2 &&
471cd lttng-modules-2.10.* &&
472make &&
473sudo make modules_install &&
474sudo depmod -a
475----
476--
477
478. Download, build, and install the latest LTTng-UST{nbsp}{revision}:
479+
480--
481[role="term"]
482----
483$ cd $(mktemp -d) &&
484wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
485tar -xf lttng-ust-latest-2.10.tar.bz2 &&
486cd lttng-ust-2.10.* &&
487./configure &&
488make &&
489sudo make install &&
490sudo ldconfig
491----
492--
493+
494--
495[IMPORTANT]
496.Java and Python application tracing
497====
498If you need to instrument and trace <<java-application,Java
499applications>>, pass the `--enable-java-agent-jul`,
500`--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
501`configure` script, depending on which Java logging framework you use.
502
503If you need to instrument and trace <<python-application,Python
504applications>>, pass the `--enable-python-agent` option to the
505`configure` script. You can set the `PYTHON` environment variable to the
506path to the Python interpreter for which to install the LTTng-UST Python
507agent package.
508====
509--
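+
--
For example, a sketch of a configure invocation which enables the
`java.util.logging` Java agent and the Python agent (combine the options
that you need):

[role="term"]
----
$ ./configure --enable-java-agent-jul --enable-python-agent
----
--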
510+
511--
512[NOTE]
513====
514By default, LTTng-UST libraries are installed to
515dir:{/usr/local/lib}, which is the de facto directory in which to
516keep self-compiled and third-party libraries.
517
518When <<building-tracepoint-providers-and-user-application,linking an
519instrumented user application with `liblttng-ust`>>:
520
521* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
522 variable.
523* Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
524 man:gcc(1), man:g++(1), or man:clang(1).
525====
526--
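+
--
For example, a sketch of the link command of the
<<tracing-your-own-user-application,_Hello world_ example>> of the
<<getting-started,Quick start>> when `liblttng-ust` is installed in
dir:{/usr/local/lib}:

[role="term"]
----
$ gcc -o hello hello.o hello-tp.o -L/usr/local/lib \
      -Wl,-rpath,/usr/local/lib -llttng-ust -ldl
----
--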
527
528. Download, build, and install the latest LTTng-tools{nbsp}{revision}:
529+
530--
531[role="term"]
532----
533$ cd $(mktemp -d) &&
534wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
535tar -xf lttng-tools-latest-2.10.tar.bz2 &&
536cd lttng-tools-2.10.* &&
537./configure &&
538make &&
539sudo make install &&
540sudo ldconfig
541----
542--
543
544TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
545previous steps automatically for a given version of LTTng and confine
546the installed files in a specific directory. This can be useful to test
547LTTng without installing it on your system.
548
549
550[[getting-started]]
551== Quick start
552
553This is a short guide to get started quickly with LTTng kernel and user
554space tracing.
555
556Before you follow this guide, make sure to <<installing-lttng,install>>
557LTTng.
558
559This tutorial walks you through the steps to:
560
561. <<tracing-the-linux-kernel,Trace the Linux kernel>>.
562. <<tracing-your-own-user-application,Trace a user application>> written
563 in C.
564. <<viewing-and-analyzing-your-traces,View and analyze the
565 recorded events>>.
566
567
568[[tracing-the-linux-kernel]]
569=== Trace the Linux kernel
570
571The following command lines start with the `#` prompt because you need
572root privileges to trace the Linux kernel. You can also trace the kernel
573as a regular user if your Unix user is a member of the
574<<tracing-group,tracing group>>.
575
576. Create a <<tracing-session,tracing session>> which writes its traces
577 to dir:{/tmp/my-kernel-trace}:
578+
579--
580[role="term"]
581----
582# lttng create my-kernel-session --output=/tmp/my-kernel-trace
583----
584--
585
586. List the available kernel tracepoints and system calls:
587+
588--
589[role="term"]
590----
591# lttng list --kernel
592# lttng list --kernel --syscall
593----
594--
595
596. Create <<event,event rules>> which match the desired instrumentation
597 point names, for example the `sched_switch` and `sched_process_fork`
598 tracepoints, and the man:open(2) and man:close(2) system calls:
599+
600--
601[role="term"]
602----
603# lttng enable-event --kernel sched_switch,sched_process_fork
604# lttng enable-event --kernel --syscall open,close
605----
606--
607+
608You can also create an event rule which matches _all_ the Linux kernel
609tracepoints (this will generate a lot of data when tracing):
610+
611--
612[role="term"]
613----
614# lttng enable-event --kernel --all
615----
616--
617
618. <<basic-tracing-session-control,Start tracing>>:
619+
620--
621[role="term"]
622----
623# lttng start
624----
625--
626
627. Do some operation on your system for a few seconds. For example,
628 load a website, or list the files of a directory.
629. <<basic-tracing-session-control,Stop tracing>> and destroy the
630 tracing session:
631+
632--
633[role="term"]
634----
635# lttng stop
636# lttng destroy
637----
638--
639+
640The man:lttng-destroy(1) command does not destroy the trace data; it
641only destroys the state of the tracing session.
642
643. For the sake of this example, make the recorded trace accessible to
644 the non-root users:
645+
646--
647[role="term"]
648----
649# chown -R $(whoami) /tmp/my-kernel-trace
650----
651--
652
653See <<viewing-and-analyzing-your-traces,View and analyze the
654recorded events>> to view the recorded events.
655
656
657[[tracing-your-own-user-application]]
658=== Trace a user application
659
660This section steps you through a simple example to trace a
661_Hello world_ program written in C.
662
663To create the traceable user application:
664
665. Create the tracepoint provider header file, which defines the
666 tracepoints and the events they can generate:
667+
668--
669[source,c]
670.path:{hello-tp.h}
671----
672#undef TRACEPOINT_PROVIDER
673#define TRACEPOINT_PROVIDER hello_world
674
675#undef TRACEPOINT_INCLUDE
676#define TRACEPOINT_INCLUDE "./hello-tp.h"
677
678#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
679#define _HELLO_TP_H
680
681#include <lttng/tracepoint.h>
682
683TRACEPOINT_EVENT(
684 hello_world,
685 my_first_tracepoint,
686 TP_ARGS(
687 int, my_integer_arg,
688 char*, my_string_arg
689 ),
690 TP_FIELDS(
691 ctf_string(my_string_field, my_string_arg)
692 ctf_integer(int, my_integer_field, my_integer_arg)
693 )
694)
695
696#endif /* _HELLO_TP_H */
697
698#include <lttng/tracepoint-event.h>
699----
700--
701
702. Create the tracepoint provider package source file:
703+
704--
705[source,c]
706.path:{hello-tp.c}
707----
708#define TRACEPOINT_CREATE_PROBES
709#define TRACEPOINT_DEFINE
710
711#include "hello-tp.h"
712----
713--
714
715. Build the tracepoint provider package:
716+
717--
718[role="term"]
719----
720$ gcc -c -I. hello-tp.c
721----
722--
723
724. Create the _Hello World_ application source file:
725+
726--
727[source,c]
728.path:{hello.c}
729----
730#include <stdio.h>
731#include "hello-tp.h"
732
733int main(int argc, char *argv[])
734{
735 int x;
736
737 puts("Hello, World!\nPress Enter to continue...");
738
739 /*
740 * The following getchar() call is only placed here for the purpose
741 * of this demonstration, to pause the application in order for
742 * you to have time to list its tracepoints. It is not
743 * needed otherwise.
744 */
745 getchar();
746
747 /*
748 * A tracepoint() call.
749 *
750 * Arguments, as defined in hello-tp.h:
751 *
752 * 1. Tracepoint provider name (required)
753 * 2. Tracepoint name (required)
754 * 3. my_integer_arg (first user-defined argument)
755 * 4. my_string_arg (second user-defined argument)
756 *
757 * Notice the tracepoint provider and tracepoint names are
758 * NOT strings: they are in fact parts of variables that the
759 * macros in hello-tp.h create.
760 */
761 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
762
763 for (x = 0; x < argc; ++x) {
764 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
765 }
766
767 puts("Quitting now!");
768 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
769
770 return 0;
771}
772----
773--
774
775. Build the application:
776+
777--
778[role="term"]
779----
780$ gcc -c hello.c
781----
782--
783
784. Link the application with the tracepoint provider package,
785 `liblttng-ust`, and `libdl`:
786+
787--
788[role="term"]
789----
790$ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
791----
792--
793
794Here's the whole build process:
795
796[role="img-100"]
797.User space tracing tutorial's build steps.
798image::ust-flow.png[]
799
800To trace the user application:
801
802. Run the application with a few arguments:
803+
804--
805[role="term"]
806----
807$ ./hello world and beyond
808----
809--
810+
811You see:
812+
813--
814----
815Hello, World!
816Press Enter to continue...
817----
818--
819
820. Start an LTTng <<lttng-sessiond,session daemon>>:
821+
822--
823[role="term"]
824----
825$ lttng-sessiond --daemonize
826----
827--
828+
829Note that a session daemon might already be running, for example as
830a service that the distribution's service manager started.
831
832. List the available user space tracepoints:
833+
834--
835[role="term"]
836----
837$ lttng list --userspace
838----
839--
840+
841You see the `hello_world:my_first_tracepoint` tracepoint listed
842under the `./hello` process.
843
844. Create a <<tracing-session,tracing session>>:
845+
846--
847[role="term"]
848----
849$ lttng create my-user-space-session
850----
851--
852
853. Create an <<event,event rule>> which matches the
854 `hello_world:my_first_tracepoint` event name:
855+
856--
857[role="term"]
858----
859$ lttng enable-event --userspace hello_world:my_first_tracepoint
860----
861--
862
863. <<basic-tracing-session-control,Start tracing>>:
864+
865--
866[role="term"]
867----
868$ lttng start
869----
870--
871
872. Go back to the running `hello` application and press Enter. The
873 program executes all `tracepoint()` instrumentation points and exits.
874. <<basic-tracing-session-control,Stop tracing>> and destroy the
875 tracing session:
876+
877--
878[role="term"]
879----
880$ lttng stop
881$ lttng destroy
882----
883--
884+
885The man:lttng-destroy(1) command does not destroy the trace data; it
886only destroys the state of the tracing session.
887
888By default, LTTng saves the traces in
889+$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
890where +__name__+ is the tracing session name. The
891env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
892
893See <<viewing-and-analyzing-your-traces,View and analyze the
894recorded events>> to view the recorded events.
895
896
897[[viewing-and-analyzing-your-traces]]
898=== View and analyze the recorded events
899
900Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
901kernel>> and <<tracing-your-own-user-application,Trace a user
902application>> tutorials, you can inspect the recorded events.
903
904Many tools are available to read LTTng traces:
905
906* **cmd:babeltrace** is a command-line utility which converts trace
907 formats; it supports the format that LTTng produces, CTF, as well as a
908 basic text output which can be ++grep++ed. The cmd:babeltrace command
909 is part of the http://diamon.org/babeltrace[Babeltrace] project.
910* Babeltrace also includes
911 **https://www.python.org/[Python] bindings** so
912 that you can easily open and read an LTTng trace with your own script,
913 benefiting from the power of Python.
914* http://tracecompass.org/[**Trace Compass**]
915 is a graphical user interface for viewing and analyzing any type of
916 logs or traces, including LTTng's.
917* https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
918 project which includes many high-level analyses of LTTng kernel
919 traces, like scheduling statistics, interrupt frequency distribution,
920 top CPU usage, and more.
921
922NOTE: This section assumes that the traces recorded during the previous
923tutorials were saved to their default location, in the
924dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
925environment variable defaults to `$HOME` if not set.
926
927
928[[viewing-and-analyzing-your-traces-bt]]
929==== Use the cmd:babeltrace command-line tool
930
931The simplest way to list all the recorded events of a trace is to pass
932its path to cmd:babeltrace with no options:
933
934[role="term"]
935----
936$ babeltrace ~/lttng-traces/my-user-space-session*
937----
938
939cmd:babeltrace finds all traces recursively within the given path and
940prints all their events, merging them in chronological order.
941
942You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
943further filtering:
944
945[role="term"]
946----
947$ babeltrace /tmp/my-kernel-trace | grep _switch
948----
949
950You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
951count the recorded events:
952
953[role="term"]
954----
955$ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
956----
957
958
959[[viewing-and-analyzing-your-traces-bt-python]]
960==== Use the Babeltrace Python bindings
961
962The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
963is useful to isolate events by simple matching using man:grep(1) and
964similar utilities. However, more elaborate filters, such as keeping only
965event records with a field value falling within a specific range, are
966not trivial to write using a shell. Moreover, reductions and even the
967most basic computations involving multiple event records are virtually
968impossible to implement.
969
970Fortunately, Babeltrace ships with Python 3 bindings which makes it easy
971to read the event records of an LTTng trace sequentially and compute the
972desired information.
973
974The following script accepts an LTTng Linux kernel trace path as its
975first argument and prints the short names of the top 5 running processes
976on CPU 0 during the whole trace:
977
978[source,python]
979.path:{top5proc.py}
980----
981from collections import Counter
982import babeltrace
983import sys
984
985
986def top5proc():
987 if len(sys.argv) != 2:
988 msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
989 print(msg, file=sys.stderr)
990 return False
991
992 # A trace collection contains one or more traces
993 col = babeltrace.TraceCollection()
994
995 # Add the trace provided by the user (LTTng traces always have
996 # the 'ctf' format)
997 if col.add_trace(sys.argv[1], 'ctf') is None:
998 raise RuntimeError('Cannot add trace')
999
1000 # This counter dict contains execution times:
1001 #
1002 # task command name -> total execution time (ns)
1003 exec_times = Counter()
1004
1005 # This contains the last `sched_switch` timestamp
1006 last_ts = None
1007
1008 # Iterate on events
1009 for event in col.events:
1010 # Keep only `sched_switch` events
1011 if event.name != 'sched_switch':
1012 continue
1013
1014 # Keep only events which happened on CPU 0
1015 if event['cpu_id'] != 0:
1016 continue
1017
1018 # Event timestamp
1019 cur_ts = event.timestamp
1020
1021 if last_ts is None:
1022 # We start here
1023 last_ts = cur_ts
1024
1025 # Previous task command (short) name
1026 prev_comm = event['prev_comm']
1027
1028 # Initialize entry in our dict if not yet done
1029 if prev_comm not in exec_times:
1030 exec_times[prev_comm] = 0
1031
1032 # Compute previous command execution time
1033 diff = cur_ts - last_ts
1034
1035 # Update execution time of this command
1036 exec_times[prev_comm] += diff
1037
1038 # Update last timestamp
1039 last_ts = cur_ts
1040
1041 # Display top 5
1042 for name, ns in exec_times.most_common(5):
1043 s = ns / 1000000000
1044 print('{:20}{} s'.format(name, s))
1045
1046 return True
1047
1048
1049if __name__ == '__main__':
1050 sys.exit(0 if top5proc() else 1)
1051----
1052
1053Run this script:
1054
1055[role="term"]
1056----
1057$ python3 top5proc.py /tmp/my-kernel-trace/kernel
1058----
1059
1060Output example:
1061
1062----
1063swapper/0 48.607245889 s
1064chromium 7.192738188 s
1065pavucontrol 0.709894415 s
1066Compositor 0.660867933 s
1067Xorg.bin 0.616753786 s
1068----
1069
1070Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
1071weren't using the CPU that much when tracing, its first position in the
1072list makes sense.
1073
1074
1075[[core-concepts]]
1076== [[understanding-lttng]]Core concepts
1077
1078From a user's perspective, the LTTng system is built on a few concepts,
1079or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1080operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key in mastering
1082the toolkit.
1083
1084The core concepts are:
1085
1086* <<tracing-session,Tracing session>>
1087* <<domain,Tracing domain>>
1088* <<channel,Channel and ring buffer>>
1089* <<"event","Instrumentation point, event rule, event, and event record">>
1090
1091
1092[[tracing-session]]
1093=== Tracing session
1094
1095A _tracing session_ is a stateful dialogue between you and
1096a <<lttng-sessiond,session daemon>>. You can
1097<<creating-destroying-tracing-sessions,create a new tracing
1098session>> with the `lttng create` command.
1099
1100Anything that you do when you control LTTng tracers happens within a
1101tracing session. In particular, a tracing session:
1102
1103* Has its own name.
1104* Has its own set of trace files.
1105* Has its own state of activity (started or stopped).
1106* Has its own <<tracing-session-mode,mode>> (local, network streaming,
1107 snapshot, or live).
1108* Has its own <<channel,channels>> which have their own
1109 <<event,event rules>>.
1110
1111[role="img-100"]
1112.A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1113image::concepts.png[]
1114
1115Those attributes and objects are completely isolated between different
1116tracing sessions.
1117
1118A tracing session is analogous to a cash machine session:
1119the operations you do on the banking system through the cash machine do
1120not alter the data of other users of the same system. In the case of
1121the cash machine, a session lasts as long as your bank card is inside.
1122In the case of LTTng, a tracing session lasts from the `lttng create`
1123command to the `lttng destroy` command.
1124
1125[role="img-100"]
1126.Each Unix user has its own set of tracing sessions.
1127image::many-sessions.png[]
1128
1129
1130[[tracing-session-mode]]
1131==== Tracing session mode
1132
1133LTTng can send the generated trace data to different locations. The
1134_tracing session mode_ dictates where to send it. The following modes
1135are available in LTTng{nbsp}{revision}:
1136
1137Local mode::
1138 LTTng writes the traces to the file system of the machine being traced
1139 (target system).
1140
1141Network streaming mode::
1142 LTTng sends the traces over the network to a
1143 <<lttng-relayd,relay daemon>> running on a remote system.
1144
1145Snapshot mode::
1146 LTTng does not write the traces by default. Instead, you can request
1147 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1148 current tracing buffers, and to write it to the target's file system
1149 or to send it over the network to a <<lttng-relayd,relay daemon>>
1150 running on a remote system.
1151
1152Live mode::
1153 This mode is similar to the network streaming mode, but a live
1154 trace viewer can connect to the distant relay daemon to
  <<lttng-live,view event records as the tracers generate them>>.
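
You select the mode when you create the tracing session, for example with
options of man:lttng-create(1). A sketch (the session names and the remote
host are placeholders):

[role="term"]
----
$ lttng create my-local-session
$ lttng create my-streaming-session --set-url=net://remote-system
$ lttng create my-snapshot-session --snapshot
$ lttng create my-live-session --live
----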
1157
1158
1159[[domain]]
1160=== Tracing domain
1161
1162A _tracing domain_ is a namespace for event sources. A tracing domain
1163has its own properties and features.
1164
1165There are currently five available tracing domains:
1166
1167* Linux kernel
1168* User space
1169* `java.util.logging` (JUL)
1170* log4j
1171* Python
1172
1173You must specify a tracing domain when using some commands to avoid
1174ambiguity. For example, since all the domains support named tracepoints
1175as event sources (instrumentation points that you manually insert in the
1176source code), you need to specify a tracing domain when
1177<<enabling-disabling-events,creating an event rule>> because all the
1178tracing domains could have tracepoints with the same names.
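
For example, the following commands create event rules which match an
instrumentation point named `my_tracepoint` in two different tracing
domains (a sketch; the name is a placeholder):

[role="term"]
----
# lttng enable-event --kernel my_tracepoint
$ lttng enable-event --userspace my_tracepoint
----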
1179
Some features are reserved for specific tracing domains. Dynamic function
1181entry and return instrumentation points, for example, are currently only
1182supported in the Linux kernel tracing domain, but support for other
1183tracing domains could be added in the future.
1184
1185You can create <<channel,channels>> in the Linux kernel and user space
1186tracing domains. The other tracing domains have a single default
1187channel.
1188
1189
1190[[channel]]
1191=== Channel and ring buffer
1192
1193A _channel_ is an object which is responsible for a set of ring buffers.
1194Each ring buffer is divided into multiple sub-buffers. When an LTTng
1195tracer emits an event, it can record it to one or more
1196sub-buffers. The attributes of a channel determine what to do when
1197there's no space left for a new event record because all sub-buffers
1198are full, where to send a full sub-buffer, and other behaviours.
1199
1200A channel is always associated to a <<domain,tracing domain>>. The
1201`java.util.logging` (JUL), log4j, and Python tracing domains each have
1202a default channel which you cannot configure.
1203
1204A channel also owns <<event,event rules>>. When an LTTng tracer emits
1205an event, it records it to the sub-buffers of all
1206the enabled channels with a satisfied event rule, as long as those
1207channels are part of active <<tracing-session,tracing sessions>>.
1208
1209
1210[[channel-buffering-schemes]]
1211==== Per-user vs. per-process buffering schemes
1212
1213A channel has at least one ring buffer _per CPU_. LTTng always
1214records an event to the ring buffer associated to the CPU on which it
1215occurred.
1216
1217Two _buffering schemes_ are available when you
1218<<enabling-disabling-channels,create a channel>> in the
1219user space <<domain,tracing domain>>:
1220
1221Per-user buffering::
1222 Allocate one set of ring buffers--one per CPU--shared by all the
1223 instrumented processes of each Unix user.
1224+
1225--
1226[role="img-100"]
1227.Per-user buffering scheme.
1228image::per-user-buffering.png[]
1229--
1230
1231Per-process buffering::
1232 Allocate one set of ring buffers--one per CPU--for each
1233 instrumented process.
1234+
1235--
1236[role="img-100"]
1237.Per-process buffering scheme.
1238image::per-process-buffering.png[]
1239--
1240+
1241The per-process buffering scheme tends to consume more memory than the
1242per-user option because systems generally have more instrumented
1243processes than Unix users running instrumented processes. However, the
1244per-process buffering scheme ensures that one process having a high
1245event throughput won't fill all the shared sub-buffers of the same
1246user, only its own.
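
For example, a sketch of creating a user space channel with each buffering
scheme (the channel names are placeholders):

[role="term"]
----
$ lttng enable-channel --userspace --buffers-uid my-per-user-channel
$ lttng enable-channel --userspace --buffers-pid my-per-process-channel
----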
1247
1248The Linux kernel tracing domain has only one available buffering scheme
1249which is to allocate a single set of ring buffers for the whole system.
1250This scheme is similar to the per-user option, but with a single, global
1251user "running" the kernel.
1252
1253
1254[[channel-overwrite-mode-vs-discard-mode]]
1255==== Overwrite vs. discard event loss modes
1256
1257When an event occurs, LTTng records it to a specific sub-buffer (yellow
1258arc in the following animation) of a specific channel's ring buffer.
1259When there's no space left in a sub-buffer, the tracer marks it as
1260consumable (red) and another, empty sub-buffer starts receiving the
1261following event records. A <<lttng-consumerd,consumer daemon>>
1262eventually consumes the marked sub-buffer (returns to white).
1263
1264[NOTE]
1265[role="docsvg-channel-subbuf-anim"]
1266====
1267{note-no-anim}
1268====
1269
1270In an ideal world, sub-buffers are consumed faster than they are filled,
1271as is the case in the previous animation. In the real world,
1272however, all sub-buffers can be full at some point, leaving no space to
1273record the following events.
1274
1275By default, LTTng-modules and LTTng-UST are _non-blocking_ tracers: when
1276no empty sub-buffer is available, it is acceptable to lose event records
1277when the alternative would be to cause substantial delays in the
1278instrumented application's execution. LTTng privileges performance over
1279integrity; it aims at perturbing the traced system as little as possible
1280in order to make tracing of subtle race conditions and rare interrupt
1281cascades possible.
1282
1283Starting from LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST,
1284supports a _blocking mode_. See the <<blocking-timeout-example,blocking
1285timeout example>> to learn how to use the blocking mode.
1286
1287When it comes to losing event records because no empty sub-buffer is
1288available, or because the <<opt-blocking-timeout,blocking timeout>> is
1289reached, the channel's _event loss mode_ determines what to do. The
1290available event loss modes are:
1291
1292Discard mode::
  Drop the newest event records until the tracer releases a sub-buffer.
1295
1296Overwrite mode::
1297 Clear the sub-buffer containing the oldest event records and start
1298 writing the newest event records there.
1299+
1300This mode is sometimes called _flight recorder mode_ because it's
1301similar to a
1302https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1303always keep a fixed amount of the latest data.
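
You select the event loss mode when you
<<enabling-disabling-channels,create a channel>>. For example, a sketch of
creating a Linux kernel channel in overwrite mode (discard mode is the
default; the channel name is a placeholder):

[role="term"]
----
# lttng enable-channel --kernel --overwrite my-flight-recorder-channel
----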
1304
1305Which mechanism you should choose depends on your context: prioritize
1306the newest or the oldest event records in the ring buffer?
1307
1308Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
as soon as there's no space left for a new event record, whereas in
1310discard mode, the tracer only discards the event record that doesn't
1311fit.
1312
1313In discard mode, LTTng increments a count of lost event records when
1314an event record is lost and saves this count to the trace. In
1315overwrite mode, LTTng keeps no information when it overwrites a
1316sub-buffer before consuming it.
1317
1318There are a few ways to decrease your probability of losing event
1319records.
1320<<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
1322virtually stop losing event records, though at the cost of greater
1323memory usage.
1324
1325
1326[[channel-subbuf-size-vs-subbuf-count]]
1327==== Sub-buffer count and size
1328
1329When you <<enabling-disabling-channels,create a channel>>, you can
1330set its number of sub-buffers and their size.
1331
1332Note that there is noticeable CPU overhead introduced when
1333switching sub-buffers (marking a full one as consumable and switching
1334to an empty one for the following events to be recorded). Knowing this,
1335the following list presents a few practical situations along with how
1336to configure the sub-buffer count and size for them:
1337
1338* **High event throughput**: In general, prefer bigger sub-buffers to
1339 lower the risk of losing event records.
1340+
1341Having bigger sub-buffers also ensures a lower
1342<<channel-switch-timer,sub-buffer switching frequency>>.
1343+
1344The number of sub-buffers is only meaningful if you create the channel
1345in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1346other sub-buffers are left unaltered.
1347
1348* **Low event throughput**: In general, prefer smaller sub-buffers
1349 since the risk of losing event records is low.
1350+
1351Because events occur less frequently, the sub-buffer switching frequency
1352should remain low and thus the tracer's overhead should not be a
1353problem.
1354
1355* **Low memory system**: If your target system has a low memory
  limit, prefer fewer sub-buffers first, then smaller ones.
1357+
1358Even if the system is limited in memory, you want to keep the
1359sub-buffers as big as possible to avoid a high sub-buffer switching
1360frequency.
1361
1362Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1363which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
1365sub-buffer size of 1{nbsp}MiB is considered big.
1366
1367The previous situations highlight the major trade-off between a few big
1368sub-buffers and more, smaller sub-buffers: sub-buffer switching
1369frequency vs. how much data is lost in overwrite mode. Assuming a
1370constant event throughput and using the overwrite mode, the two
1371following configurations have the same ring buffer total size:
1372
1373[NOTE]
1374[role="docsvg-channel-subbuf-size-vs-count-anim"]
1375====
1376{note-no-anim}
1377====
1378
1379* **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1380 switching frequency, but if a sub-buffer overwrite happens, half of
1381 the event records so far (4{nbsp}MiB) are definitely lost.
1382* **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
  overhead of the previous configuration, but if a sub-buffer
  overwrite happens, only one eighth of the event records so far are
1385 definitely lost.
1386
In discard mode, the sub-buffer count parameter is pointless: use two
1388sub-buffers and set their size according to the requirements of your
1389situation.
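
For example, a sketch of creating a Linux kernel channel with 8 sub-buffers
of 1{nbsp}MiB each (the channel name is a placeholder):

[role="term"]
----
# lttng enable-channel --kernel --num-subbuf=8 --subbuf-size=1M my-channel
----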
1390
1391
1392[[channel-switch-timer]]
1393==== Switch timer period
1394
1395The _switch timer period_ is an important configurable attribute of
1396a channel to ensure periodic sub-buffer flushing.
1397
1398When the _switch timer_ expires, a sub-buffer switch happens. You can
1399set the switch timer period attribute when you
1400<<enabling-disabling-channels,create a channel>> to ensure that event
1401data is consumed and committed to trace files or to a distant relay
1402daemon periodically in case of a low event throughput.
1403
1404[NOTE]
1405[role="docsvg-channel-switch-timer"]
1406====
1407{note-no-anim}
1408====
1409
1410This attribute is also convenient when you use big sub-buffers to cope
1411with a sporadic high event throughput, even if the throughput is
1412normally low.
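
For example, a sketch of creating a Linux kernel channel with a one-second
switch timer period (the value is assumed to be expressed in microseconds;
the channel name is a placeholder):

[role="term"]
----
# lttng enable-channel --kernel --switch-timer=1000000 my-channel
----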
1413
1414
1415[[channel-read-timer]]
1416==== Read timer period
1417
1418By default, the LTTng tracers use a notification mechanism to signal a
1419full sub-buffer so that a consumer daemon can consume it. When such
1420notifications must be avoided, for example in real-time applications,
1421you can use the channel's _read timer_ instead. When the read timer
1422fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1423consumable sub-buffers.
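
For example, a sketch of creating a user space channel with a 200{nbsp}ms
read timer period (the value is assumed to be expressed in microseconds;
the channel name is a placeholder):

[role="term"]
----
$ lttng enable-channel --userspace --read-timer=200000 my-channel
----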
1424
1425
1426[[tracefile-rotation]]
1427==== Trace file count and size
1428
1429By default, trace files can grow as large as needed. You can set the
1430maximum size of each trace file that a channel writes when you
1431<<enabling-disabling-channels,create a channel>>. When the size of
1432a trace file reaches the channel's fixed maximum size, LTTng creates
1433another file to contain the next event records. LTTng appends a file
1434count to each trace file name in this case.
1435
1436If you set the trace file size attribute when you create a channel, the
1437maximum number of trace files that LTTng creates is _unlimited_ by
1438default. To limit them, you can also set a maximum number of trace
1439files. When the number of trace files reaches the channel's fixed
1440maximum count, the oldest trace file is overwritten. This mechanism is
1441called _trace file rotation_.
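
For example, a sketch of creating a Linux kernel channel which keeps at most
10 trace files of 5{nbsp}MiB each (the channel name is a placeholder):

[role="term"]
----
# lttng enable-channel --kernel --tracefile-size=5M --tracefile-count=10 my-channel
----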
1442
1443
1444[[event]]
1445=== Instrumentation point, event rule, event, and event record
1446
An _event rule_ is a set of conditions which must **all** be satisfied
for LTTng to record an occurring event.
1449
1450You set the conditions when you <<enabling-disabling-events,create
1451an event rule>>.
1452
You always attach an event rule to a <<channel,channel>> when you create
it.
1455
1456When an event passes the conditions of an event rule, LTTng records it
1457in one of the attached channel's sub-buffers.
1458
1459The available conditions, as of LTTng{nbsp}{revision}, are:
1460
1461* The event rule _is enabled_.
1462* The instrumentation point's type _is{nbsp}T_.
1463* The instrumentation point's name (sometimes called _event name_)
1464 _matches{nbsp}N_, but _is not{nbsp}E_.
1465* The instrumentation point's log level _is as severe as{nbsp}L_, or
1466 _is exactly{nbsp}L_.
1467* The fields of the event's payload _satisfy_ a filter
1468 expression{nbsp}__F__.
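
For example, a sketch of an event rule which combines a name match, an
exclusion, a log level condition, and a filter expression (the instrumentation
point and field names are placeholders):

[role="term"]
----
$ lttng enable-event --userspace 'my_provider:*' \
        --exclude=my_provider:internal \
        --loglevel=TRACE_WARNING \
        --filter='my_field > 10'
----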
1469
1470As you can see, all the conditions but the dynamic filter are related to
1471the event rule's status or to the instrumentation point, not to the
1472occurring events. This is why, without a filter, checking if an event
1473passes an event rule is not a dynamic task: when you create or modify an
1474event rule, all the tracers of its tracing domain enable or disable the
1475instrumentation points themselves once. This is possible because the
1476attributes of an instrumentation point (type, name, and log level) are
1477defined statically. In other words, without a dynamic filter, the tracer
1478_does not evaluate_ the arguments of an instrumentation point unless it
1479matches an enabled event rule.
1480
1481Note that, for LTTng to record an event, the <<channel,channel>> to
1482which a matching event rule is attached must also be enabled, and the
1483tracing session owning this channel must be active.
1484
1485[role="img-100"]
1486.Logical path from an instrumentation point to an event record.
1487image::event-rule.png[]
1488
1489.Event, event record, or event rule?
1490****
1491With so many similar terms, it's easy to get confused.
1492
1493An **event** is the consequence of the execution of an _instrumentation
1494point_, like a tracepoint that you manually place in some source code,
1495or a Linux kernel KProbe. An event is said to _occur_ at a specific
1496time. Different actions can be taken upon the occurrence of an event,
1497like record the event's payload to a buffer.
1498
1499An **event record** is the representation of an event in a sub-buffer. A
1500tracer is responsible for capturing the payload of an event, current
1501context variables, the event's ID, and the event's timestamp. LTTng
1502can append this sub-buffer to a trace file.
1503
1504An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
1506satisfying event rules, but LTTng does not record them.
1507****
1508
1509
1510[[plumbing]]
1511== Components of noch:{LTTng}
1512
1513The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1514to call LTTng a simple _tool_ since it is composed of multiple
1515interacting components. This section describes those components,
1516explains their respective roles, and shows how they connect together to
1517form the LTTng ecosystem.
1518
1519The following diagram shows how the most important components of LTTng
1520interact with user applications, the Linux kernel, and you:
1521
1522[role="img-100"]
1523.Control and trace data paths between LTTng components.
1524image::plumbing.png[]
1525
1526The LTTng project incorporates:
1527
1528* **LTTng-tools**: Libraries and command-line interface to
1529 control tracing sessions.
1530** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1531** <<lttng-consumerd,Consumer daemon>> (man:lttng-consumerd(8)).
1532** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1533** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1534** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1535* **LTTng-UST**: Libraries and Java/Python packages to trace user
1536 applications.
1537** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1538 headers to instrument and trace any native user application.
1539** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1540*** `liblttng-ust-libc-wrapper`
1541*** `liblttng-ust-pthread-wrapper`
1542*** `liblttng-ust-cyg-profile`
1543*** `liblttng-ust-cyg-profile-fast`
1544*** `liblttng-ust-dl`
1545** User space tracepoint provider source files generator command-line
1546 tool (man:lttng-gen-tp(1)).
1547** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1548 Java applications using `java.util.logging` or
1549 Apache log4j 1.2 logging.
1550** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1551 Python applications using the standard `logging` package.
1552* **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1553 the kernel.
1554** LTTng kernel tracer module.
1555** Tracing ring buffer kernel modules.
1556** Probe kernel modules.
1557** LTTng logger kernel module.
1558
1559
1560[[lttng-cli]]
1561=== Tracing control command-line interface
1562
1563[role="img-100"]
1564.The tracing control command-line interface.
1565image::plumbing-lttng-cli.png[]
1566
1567The _man:lttng(1) command-line tool_ is the standard user interface to
1568control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1569is part of LTTng-tools.
1570
1571The cmd:lttng tool is linked with
1572<<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1573one or more <<lttng-sessiond,session daemons>> behind the scenes.
1574
1575The cmd:lttng tool has a Git-like interface:
1576
1577[role="term"]
1578----
1579$ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1580----
1581
1582The <<controlling-tracing,Tracing control>> section explores the
1583available features of LTTng using the cmd:lttng tool.
1584
1585
1586[[liblttng-ctl-lttng]]
1587=== Tracing control library
1588
1589[role="img-100"]
1590.The tracing control library.
1591image::plumbing-liblttng-ctl.png[]
1592
1593The _LTTng control library_, `liblttng-ctl`, is used to communicate
1594with a <<lttng-sessiond,session daemon>> using a C API that hides the
1595underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1596
1597The <<lttng-cli,cmd:lttng command-line tool>>
1598is linked with `liblttng-ctl`.
1599
1600You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1601"master" header:
1602
1603[source,c]
1604----
1605#include <lttng/lttng.h>
1606----
1607
1608Some objects are referenced by name (C string), such as tracing
sessions, but most of them require you to create a handle first using
1610`lttng_create_handle()`.
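
For example, a minimal sketch which lists the names and output paths of the
existing tracing sessions (link the program with `liblttng-ctl`):

[source,c]
----
#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    int count;
    int i;

    /* lttng_list_sessions() allocates and fills the session array. */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        fprintf(stderr, "Error: %s\n", lttng_strerror(count));
        return EXIT_FAILURE;
    }

    for (i = 0; i < count; i++) {
        printf("%s (%s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);
    return EXIT_SUCCESS;
}
----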
1611
1612The best available developer documentation for `liblttng-ctl` is, as of
1613LTTng{nbsp}{revision}, its installed header files. Every function and
1614structure is thoroughly documented.
1615
1616
1617[[lttng-ust]]
1618=== User space tracing library
1619
1620[role="img-100"]
1621.The user space tracing library.
1622image::plumbing-liblttng-ust.png[]
1623
1624The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1625is the LTTng user space tracer. It receives commands from a
1626<<lttng-sessiond,session daemon>>, for example to
1627enable and disable specific instrumentation points, and writes event
1628records to ring buffers shared with a
1629<<lttng-consumerd,consumer daemon>>.
1630`liblttng-ust` is part of LTTng-UST.
1631
1632Public C header files are installed beside `liblttng-ust` to
1633instrument any <<c-application,C or $$C++$$ application>>.
1634
<<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
packages, use their own library, which provides tracepoints and is
linked with `liblttng-ust`.
1638
1639An application or library does not have to initialize `liblttng-ust`
1640manually: its constructor does the necessary tasks to properly register
1641to a session daemon. The initialization phase also enables the
1642instrumentation points matching the <<event,event rules>> that you
1643already created.
1644
1645
1646[[lttng-ust-agents]]
1647=== User space tracing agents
1648
1649[role="img-100"]
1650.The user space tracing agents.
1651image::plumbing-lttng-ust-agents.png[]
1652
1653The _LTTng-UST Java and Python agents_ are regular Java and Python
1654packages which add LTTng tracing capabilities to the
1655native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1656
1657In the case of Java, the
1658https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1659core logging facilities] and
1660https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1662
1663In the case of Python, the standard
1664https://docs.python.org/3/library/logging.html[`logging`] package
1665is supported. Both Python 2 and Python 3 modules can import the
1666LTTng-UST Python agent package.
1667
1668The applications using the LTTng-UST agents are in the
1669`java.util.logging` (JUL),
1670log4j, and Python <<domain,tracing domains>>.
1671
1672Both agents use the same mechanism to trace the log statements. When an
1673agent is initialized, it creates a log handler that attaches to the root
1674logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1675When the application executes a log statement, it is passed to the
1676agent's log handler by the root logger. The agent's log handler calls a
1677native function in a tracepoint provider package shared library linked
1678with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1679other fields, like its logger name and its log level. This native
function contains a user space instrumentation point, which traces the
log statement.
1682
1683The log level condition of an
1684<<event,event rule>> is considered when tracing
1685a Java or a Python application, and it's compatible with the standard
1686JUL, log4j, and Python log levels.
1687
1688
1689[[lttng-modules]]
1690=== LTTng kernel modules
1691
1692[role="img-100"]
1693.The LTTng kernel modules.
1694image::plumbing-lttng-modules.png[]
1695
1696The _LTTng kernel modules_ are a set of Linux kernel modules
1697which implement the kernel tracer of the LTTng project. The LTTng
1698kernel modules are part of LTTng-modules.
1699
1700The LTTng kernel modules include:
1701
1702* A set of _probe_ modules.
1703+
Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points. There
are also modules which attach to the entry and return points of the
Linux system call functions.
1708
1709* _Ring buffer_ modules.
1710+
1711A ring buffer implementation is provided as kernel modules. The LTTng
1712kernel tracer writes to the ring buffer; a
1713<<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1714
1715* The _LTTng kernel tracer_ module.
1716* The _LTTng logger_ module.
1717+
1718The LTTng logger module implements the special path:{/proc/lttng-logger}
1719file so that any executable can generate LTTng events by opening and
1720writing to this file.
1721+
1722See <<proc-lttng-logger-abi,LTTng logger>>.
1723
1724Generally, you do not have to load the LTTng kernel modules manually
1725(using man:modprobe(8), for example): a root <<lttng-sessiond,session
daemon>> loads the necessary modules when starting. If you have extra
probe modules, you can specify them on the session daemon's command
line so that it loads them.
1729
1730The LTTng kernel modules are installed in
1731+/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1732the kernel release (see `uname --kernel-release`).
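
For example, the following command line lists the kernel modules
installed in this directory for the currently running kernel (the
exact list depends on your LTTng-modules build configuration):

[role="term"]
----
$ ls /usr/lib/modules/$(uname --kernel-release)/extra
----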
1733
1734
1735[[lttng-sessiond]]
1736=== Session daemon
1737
1738[role="img-100"]
1739.The session daemon.
1740image::plumbing-sessiond.png[]
1741
1742The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
1743managing tracing sessions and for controlling the various components of
1744LTTng. The session daemon is part of LTTng-tools.
1745
1746The session daemon sends control requests to and receives control
1747responses from:
1748
1749* The <<lttng-ust,user space tracing library>>.
1750+
1751Any instance of the user space tracing library first registers to
1752a session daemon. Then, the session daemon can send requests to
1753this instance, such as:
1754+
1755--
1756** Get the list of tracepoints.
1757** Share an <<event,event rule>> so that the user space tracing library
1758 can enable or disable tracepoints. Amongst the possible conditions
   of an event rule is a filter expression which `liblttng-ust` evaluates
1760 when an event occurs.
1761** Share <<channel,channel>> attributes and ring buffer locations.
1762--
1763+
1764The session daemon and the user space tracing library use a Unix
1765domain socket for their communication.
1766
1767* The <<lttng-ust-agents,user space tracing agents>>.
1768+
1769Any instance of a user space tracing agent first registers to
1770a session daemon. Then, the session daemon can send requests to
1771this instance, such as:
1772+
1773--
1774** Get the list of loggers.
1775** Enable or disable a specific logger.
1776--
1777+
1778The session daemon and the user space tracing agent use a TCP connection
1779for their communication.
1780
1781* The <<lttng-modules,LTTng kernel tracer>>.
1782* The <<lttng-consumerd,consumer daemon>>.
1783+
1784The session daemon sends requests to the consumer daemon to instruct
1785it where to send the trace data streams, amongst other information.
1786
1787* The <<lttng-relayd,relay daemon>>.
1788
1789The session daemon receives commands from the
1790<<liblttng-ctl-lttng,tracing control library>>.
1791
1792The root session daemon loads the appropriate
1793<<lttng-modules,LTTng kernel modules>> on startup. It also spawns
1794a <<lttng-consumerd,consumer daemon>> as soon as you create
1795an <<event,event rule>>.
1796
1797The session daemon does not send and receive trace data: this is the
1798role of the <<lttng-consumerd,consumer daemon>> and
1799<<lttng-relayd,relay daemon>>. It does, however, generate the
1800http://diamon.org/ctf/[CTF] metadata stream.
1801
1802Each Unix user can have its own session daemon instance. The
1803tracing sessions managed by different session daemons are completely
1804independent.
1805
1806The root user's session daemon is the only one which is
1807allowed to control the LTTng kernel tracer, and its spawned consumer
1808daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user who is a member
1810of the <<tracing-group,tracing group>> is allowed
1811to create <<channel,channels>> in the
1812Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
1813kernel.
1814
1815The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
1816session daemon when using its `create` command if none is currently
1817running. You can also start the session daemon manually.
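
For example, the following command line starts a session daemon for
your Unix user as a daemonized process:

[role="term"]
----
$ lttng-sessiond --daemonize
----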
1818
1819
1820[[lttng-consumerd]]
1821=== Consumer daemon
1822
1823[role="img-100"]
1824.The consumer daemon.
1825image::plumbing-consumerd.png[]
1826
1827The _consumer daemon_, man:lttng-consumerd(8), is a daemon which shares
1828ring buffers with user applications or with the LTTng kernel modules to
1829collect trace data and send it to some location (on disk or to a
1830<<lttng-relayd,relay daemon>> over the network). The consumer daemon
1831is part of LTTng-tools.
1832
1833You do not start a consumer daemon manually: a consumer daemon is always
1834spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
1835<<event,event rule>>, that is, before you start tracing. When you kill
1836its owner session daemon, the consumer daemon also exits because it is
1837the session daemon's child process. Command-line options of
1838man:lttng-sessiond(8) target the consumer daemon process.
1839
1840There are up to two running consumer daemons per Unix user, whereas only
1841one session daemon can run per user. This is because each process can be
1842either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
1843and 64-bit processes, it is more efficient to have separate
1844corresponding 32-bit and 64-bit consumer daemons. The root user is an
1845exception: it can have up to _three_ running consumer daemons: 32-bit
1846and 64-bit instances for its user applications, and one more
1847reserved for collecting kernel trace data.
1848
1849
1850[[lttng-relayd]]
1851=== Relay daemon
1852
1853[role="img-100"]
1854.The relay daemon.
1855image::plumbing-relayd.png[]
1856
1857The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
1858between remote session and consumer daemons, local trace files, and a
1859remote live trace viewer. The relay daemon is part of LTTng-tools.
1860
1861The main purpose of the relay daemon is to implement a receiver of
1862<<sending-trace-data-over-the-network,trace data over the network>>.
1863This is useful when the target system does not have much file system
1864space to record trace files locally.
1865
1866The relay daemon is also a server to which a
1867<<lttng-live,live trace viewer>> can
1868connect. The live trace viewer sends requests to the relay daemon to
1869receive trace data as the target system emits events. The
1870communication protocol is named _LTTng live_; it is used over TCP
1871connections.
1872
1873Note that you can start the relay daemon on the target system directly.
1874This is the setup of choice when the use case is to view events as
the target system emits them, without the need for a remote system.
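
For example, the following command line starts a relay daemon on the
target system as a daemonized process, listening on its default
network ports:

[role="term"]
----
$ lttng-relayd --daemonize
----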
1876
1877
1878[[instrumenting]]
1879== [[using-lttng]]Instrumentation
1880
1881There are many examples of tracing and monitoring in our everyday life:
1882
1883* You have access to real-time and historical weather reports and
1884 forecasts thanks to weather stations installed around the country.
1885* You know your heart is safe thanks to an electrocardiogram.
1886* You make sure not to drive your car too fast and to have enough fuel
1887 to reach your destination thanks to gauges visible on your dashboard.
1888
All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to your skin, cardiac
monitoring is futile.
1892
LTTng, as a tracer, is no different from those real-life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you'd better have **instrumentation points** in
the subject you're tracing, that is, the actual software.
1897
1898Various ways were developed to instrument a piece of software for LTTng
1899tracing. The most straightforward one is to manually place
1900instrumentation points, called _tracepoints_, in the software's source
1901code. It is also possible to add instrumentation points dynamically in
1902the Linux kernel <<domain,tracing domain>>.
1903
1904If you're only interested in tracing the Linux kernel, your
1905instrumentation needs are probably already covered by LTTng's built-in
1906<<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
1907user application which is already instrumented for LTTng tracing.
1908In such cases, you can skip this whole section and read the topics of
1909the <<controlling-tracing,Tracing control>> section.
1910
1911Many methods are available to instrument a piece of software for LTTng
1912tracing. They are:
1913
1914* <<c-application,User space instrumentation for C and $$C++$$
1915 applications>>.
1916* <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
1917* <<java-application,User space Java agent>>.
1918* <<python-application,User space Python agent>>.
1919* <<proc-lttng-logger-abi,LTTng logger>>.
1920* <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
1921
1922
1923[[c-application]]
1924=== [[cxx-application]]User space instrumentation for C and $$C++$$ applications
1925
1926The procedure to instrument a C or $$C++$$ user application with
1927the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
1928
1929. <<tracepoint-provider,Create the source files of a tracepoint provider
1930 package>>.
1931. <<probing-the-application-source-code,Add tracepoints to
1932 the application's source code>>.
1933. <<building-tracepoint-providers-and-user-application,Build and link
1934 a tracepoint provider package and the user application>>.
1935
1936If you need quick, man:printf(3)-like instrumentation, you can skip
1937those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
1938instead.
1939
1940IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
1941instrument a user application with `liblttng-ust`.
1942
1943
1944[[tracepoint-provider]]
1945==== Create the source files of a tracepoint provider package
1946
1947A _tracepoint provider_ is a set of compiled functions which provide
1948**tracepoints** to an application, the type of instrumentation point
1949supported by LTTng-UST. Those functions can emit events with
1950user-defined fields and serialize those events as event records to one
1951or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
1952macro, which you <<probing-the-application-source-code,insert in a user
1953application's source code>>, calls those functions.
1954
1955A _tracepoint provider package_ is an object file (`.o`) or a shared
1956library (`.so`) which contains one or more tracepoint providers.
1957Its source files are:
1958
1959* One or more <<tpp-header,tracepoint provider header>> (`.h`).
1960* A <<tpp-source,tracepoint provider package source>> (`.c`).
1961
1962A tracepoint provider package is dynamically linked with `liblttng-ust`,
1963the LTTng user space tracer, at run time.
1964
1965[role="img-100"]
1966.User application linked with `liblttng-ust` and containing a tracepoint provider.
1967image::ust-app.png[]
1968
1969NOTE: If you need quick, man:printf(3)-like instrumentation, you can
1970skip creating and using a tracepoint provider and use
1971<<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
1972
1973
1974[[tpp-header]]
1975===== Create a tracepoint provider header file template
1976
1977A _tracepoint provider header file_ contains the tracepoint
1978definitions of a tracepoint provider.
1979
1980To create a tracepoint provider header file:
1981
1982. Start from this template:
1983+
1984--
1985[source,c]
1986.Tracepoint provider header file template (`.h` file extension).
1987----
1988#undef TRACEPOINT_PROVIDER
1989#define TRACEPOINT_PROVIDER provider_name
1990
1991#undef TRACEPOINT_INCLUDE
1992#define TRACEPOINT_INCLUDE "./tp.h"
1993
1994#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1995#define _TP_H
1996
1997#include <lttng/tracepoint.h>
1998
1999/*
2000 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
2001 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
2002 */
2003
2004#endif /* _TP_H */
2005
2006#include <lttng/tracepoint-event.h>
2007----
2008--
2009
2010. Replace:
2011+
2012* `provider_name` with the name of your tracepoint provider.
2013* `"tp.h"` with the name of your tracepoint provider header file.
2014
2015. Below the `#include <lttng/tracepoint.h>` line, put your
2016 <<defining-tracepoints,tracepoint definitions>>.
2017
2018Your tracepoint provider name must be unique amongst all the possible
tracepoint provider names used on the same target system. We suggest
that you include the name of your project or company in the name, for
example, `org_lttng_my_project_tpp`.
2022
2023TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
2024this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
2025write are the <<defining-tracepoints,tracepoint definitions>>.
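
For example, assuming a template file named path:{my-provider.tp}
which contains only tracepoint definitions, the following command line
generates the corresponding path:{my-provider.h}, path:{my-provider.c},
and path:{my-provider.o} files:

[role="term"]
----
$ lttng-gen-tp my-provider.tp
----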
2026
2027
2028[[defining-tracepoints]]
2029===== Create a tracepoint definition
2030
2031A _tracepoint definition_ defines, for a given tracepoint:
2032
2033* Its **input arguments**. They are the macro parameters that the
2034 `tracepoint()` macro accepts for this particular tracepoint
2035 in the user application's source code.
2036* Its **output event fields**. They are the sources of event fields
2037 that form the payload of any event that the execution of the
2038 `tracepoint()` macro emits for this particular tracepoint.
2039
2040You can create a tracepoint definition by using the
2041`TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2042line in the
2043<<tpp-header,tracepoint provider header file template>>.
2044
2045The syntax of the `TRACEPOINT_EVENT()` macro is:
2046
2047[source,c]
2048.`TRACEPOINT_EVENT()` macro syntax.
2049----
2050TRACEPOINT_EVENT(
2051 /* Tracepoint provider name */
2052 provider_name,
2053
2054 /* Tracepoint name */
2055 tracepoint_name,
2056
2057 /* Input arguments */
2058 TP_ARGS(
2059 arguments
2060 ),
2061
2062 /* Output event fields */
2063 TP_FIELDS(
2064 fields
2065 )
2066)
2067----
2068
2069Replace:
2070
2071* `provider_name` with your tracepoint provider name.
2072* `tracepoint_name` with your tracepoint name.
2073* `arguments` with the <<tpp-def-input-args,input arguments>>.
2074* `fields` with the <<tpp-def-output-fields,output event field>>
2075 definitions.
2076
2077This tracepoint emits events named `provider_name:tracepoint_name`.
2078
2079[IMPORTANT]
2080.Event name's length limitation
2081====
2082The concatenation of the tracepoint provider name and the
2083tracepoint name must not exceed **254 characters**. If it does, the
2084instrumented application compiles and runs, but LTTng throws multiple
2085warnings and you could experience serious issues.
2086====
2087
2088[[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
2089
2090[source,c]
2091.`TP_ARGS()` macro syntax.
2092----
2093TP_ARGS(
2094 type, arg_name
2095)
2096----
2097
2098Replace:
2099
2100* `type` with the C type of the argument.
2101* `arg_name` with the argument name.
2102
2103You can repeat `type` and `arg_name` up to 10 times to have
2104more than one argument.
2105
2106.`TP_ARGS()` usage with three arguments.
2107====
2108[source,c]
2109----
2110TP_ARGS(
2111 int, count,
2112 float, ratio,
2113 const char*, query
2114)
2115----
2116====
2117
2118The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2119tracepoint definition with no input arguments.
2120
2121[[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2122`ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2123man:lttng-ust(3) for a complete description of the available `ctf_*()`
2124macros. A `ctf_*()` macro specifies the type, size, and byte order of
2125one event field.
2126
2127Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
C expression that the tracer evaluates at the `tracepoint()` macro site
2129in the application's source code. This expression provides a field's
2130source of data. The argument expression can include input argument names
2131listed in the `TP_ARGS()` macro.
2132
2133Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2134must be unique within a given tracepoint definition.
2135
2136Here's a complete tracepoint definition example:
2137
2138.Tracepoint definition.
2139====
2140The following tracepoint definition defines a tracepoint which takes
2141three input arguments and has four output event fields.
2142
2143[source,c]
2144----
2145#include "my-custom-structure.h"
2146
2147TRACEPOINT_EVENT(
2148 my_provider,
2149 my_tracepoint,
2150 TP_ARGS(
2151 const struct my_custom_structure*, my_custom_structure,
2152 float, ratio,
2153 const char*, query
2154 ),
2155 TP_FIELDS(
2156 ctf_string(query_field, query)
2157 ctf_float(double, ratio_field, ratio)
2158 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2159 ctf_integer(int, send_size, my_custom_structure->send_size)
2160 )
2161)
2162----
2163
2164You can refer to this tracepoint definition with the `tracepoint()`
2165macro in your application's source code like this:
2166
2167[source,c]
2168----
2169tracepoint(my_provider, my_tracepoint,
2170 my_structure, some_ratio, the_query);
2171----
2172====
2173
2174NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2175if they satisfy an enabled <<event,event rule>>.
2176
2177
2178[[using-tracepoint-classes]]
2179===== Use a tracepoint class
2180
2181A _tracepoint class_ is a class of tracepoints which share the same
2182output event field definitions. A _tracepoint instance_ is one
2183instance of such a defined tracepoint class, with its own tracepoint
2184name.
2185
2186The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2187shorthand which defines both a tracepoint class and a tracepoint
2188instance at the same time.
2189
2190When you build a tracepoint provider package, the C or $$C++$$ compiler
2191creates one serialization function for each **tracepoint class**. A
2192serialization function is responsible for serializing the event fields
2193of a tracepoint to a sub-buffer when tracing.
2194
2195For various performance reasons, when your situation requires multiple
2196tracepoint definitions with different names, but with the same event
2197fields, we recommend that you manually create a tracepoint class
2198and instantiate as many tracepoint instances as needed. One positive
2199effect of such a design, amongst other advantages, is that all
2200tracepoint instances of the same tracepoint class reuse the same
2201serialization function, thus reducing
2202https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2203
2204.Use a tracepoint class and tracepoint instances.
2205====
2206Consider the following three tracepoint definitions:
2207
2208[source,c]
2209----
2210TRACEPOINT_EVENT(
2211 my_app,
2212 get_account,
2213 TP_ARGS(
2214 int, userid,
2215 size_t, len
2216 ),
2217 TP_FIELDS(
2218 ctf_integer(int, userid, userid)
2219 ctf_integer(size_t, len, len)
2220 )
2221)
2222
2223TRACEPOINT_EVENT(
2224 my_app,
2225 get_settings,
2226 TP_ARGS(
2227 int, userid,
2228 size_t, len
2229 ),
2230 TP_FIELDS(
2231 ctf_integer(int, userid, userid)
2232 ctf_integer(size_t, len, len)
2233 )
2234)
2235
2236TRACEPOINT_EVENT(
2237 my_app,
2238 get_transaction,
2239 TP_ARGS(
2240 int, userid,
2241 size_t, len
2242 ),
2243 TP_FIELDS(
2244 ctf_integer(int, userid, userid)
2245 ctf_integer(size_t, len, len)
2246 )
2247)
2248----
2249
2250In this case, we create three tracepoint classes, with one implicit
2251tracepoint instance for each of them: `get_account`, `get_settings`, and
2252`get_transaction`. However, they all share the same event field names
and types. Hence, three identical yet independent serialization
functions are created when you build the tracepoint provider package.
2255
2256A better design choice is to define a single tracepoint class and three
2257tracepoint instances:
2258
2259[source,c]
2260----
2261/* The tracepoint class */
2262TRACEPOINT_EVENT_CLASS(
2263 /* Tracepoint provider name */
2264 my_app,
2265
2266 /* Tracepoint class name */
2267 my_class,
2268
2269 /* Input arguments */
2270 TP_ARGS(
2271 int, userid,
2272 size_t, len
2273 ),
2274
2275 /* Output event fields */
2276 TP_FIELDS(
2277 ctf_integer(int, userid, userid)
2278 ctf_integer(size_t, len, len)
2279 )
2280)
2281
2282/* The tracepoint instances */
2283TRACEPOINT_EVENT_INSTANCE(
2284 /* Tracepoint provider name */
2285 my_app,
2286
2287 /* Tracepoint class name */
2288 my_class,
2289
2290 /* Tracepoint name */
2291 get_account,
2292
2293 /* Input arguments */
2294 TP_ARGS(
2295 int, userid,
2296 size_t, len
2297 )
2298)
2299TRACEPOINT_EVENT_INSTANCE(
2300 my_app,
2301 my_class,
2302 get_settings,
2303 TP_ARGS(
2304 int, userid,
2305 size_t, len
2306 )
2307)
2308TRACEPOINT_EVENT_INSTANCE(
2309 my_app,
2310 my_class,
2311 get_transaction,
2312 TP_ARGS(
2313 int, userid,
2314 size_t, len
2315 )
2316)
2317----
2318====
2319
2320
2321[[assigning-log-levels]]
2322===== Assign a log level to a tracepoint definition
2323
2324You can assign an optional _log level_ to a
2325<<defining-tracepoints,tracepoint definition>>.
2326
2327Assigning different levels of severity to tracepoint definitions can
2328be useful: when you <<enabling-disabling-events,create an event rule>>,
2329you can target tracepoints having a log level as severe as a specific
2330value.
2331
2332The concept of LTTng-UST log levels is similar to the levels found
2333in typical logging frameworks:
2334
2335* In a logging framework, the log level is given by the function
2336 or method name you use at the log statement site: `debug()`,
2337 `info()`, `warn()`, `error()`, and so on.
2338* In LTTng-UST, you statically assign the log level to a tracepoint
2339 definition; any `tracepoint()` macro invocation which refers to
2340 this definition has this log level.
2341
2342You can assign a log level to a tracepoint definition with the
2343`TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2344<<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a given
2346tracepoint.
2347
2348The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2349
2350[source,c]
2351.`TRACEPOINT_LOGLEVEL()` macro syntax.
2352----
2353TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2354----
2355
2356Replace:
2357
2358* `provider_name` with the tracepoint provider name.
2359* `tracepoint_name` with the tracepoint name.
2360* `log_level` with the log level to assign to the tracepoint
2361 definition named `tracepoint_name` in the `provider_name`
2362 tracepoint provider.
2363+
2364See man:lttng-ust(3) for a list of available log level names.
2365
2366.Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2367====
2368[source,c]
2369----
2370/* Tracepoint definition */
2371TRACEPOINT_EVENT(
2372 my_app,
2373 get_transaction,
2374 TP_ARGS(
2375 int, userid,
2376 size_t, len
2377 ),
2378 TP_FIELDS(
2379 ctf_integer(int, userid, userid)
2380 ctf_integer(size_t, len, len)
2381 )
2382)
2383
2384/* Log level assignment */
2385TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2386----
2387====
2388
2389
2390[[tpp-source]]
2391===== Create a tracepoint provider package source file
2392
2393A _tracepoint provider package source file_ is a C source file which
2394includes a <<tpp-header,tracepoint provider header file>> to expand its
2395macros into event serialization and other functions.
2396
2397You can always use the following tracepoint provider package source
2398file template:
2399
2400[source,c]
2401.Tracepoint provider package source file template.
2402----
2403#define TRACEPOINT_CREATE_PROBES
2404
2405#include "tp.h"
2406----
2407
Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You may also include more than one tracepoint provider
header file here to create a tracepoint provider package holding more
than one tracepoint provider.
2412
2413
2414[[probing-the-application-source-code]]
2415==== Add tracepoints to an application's source code
2416
2417Once you <<tpp-header,create a tracepoint provider header file>>, you
2418can use the `tracepoint()` macro in your application's
2419source code to insert the tracepoints that this header
2420<<defining-tracepoints,defines>>.
2421
2422The `tracepoint()` macro takes at least two parameters: the tracepoint
2423provider name and the tracepoint name. The corresponding tracepoint
2424definition defines the other parameters.
2425
2426.`tracepoint()` usage.
2427====
2428The following <<defining-tracepoints,tracepoint definition>> defines a
2429tracepoint which takes two input arguments and has two output event
2430fields.
2431
2432[source,c]
2433.Tracepoint provider header file.
2434----
2435#include "my-custom-structure.h"
2436
2437TRACEPOINT_EVENT(
2438 my_provider,
2439 my_tracepoint,
2440 TP_ARGS(
2441 int, argc,
2442 const char*, cmd_name
2443 ),
2444 TP_FIELDS(
2445 ctf_string(cmd_name, cmd_name)
2446 ctf_integer(int, number_of_args, argc)
2447 )
2448)
2449----
2450
2451You can refer to this tracepoint definition with the `tracepoint()`
2452macro in your application's source code like this:
2453
2454[source,c]
2455.Application's source file.
2456----
2457#include "tp.h"
2458
2459int main(int argc, char* argv[])
2460{
2461 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2462
2463 return 0;
2464}
2465----
2466
2467Note how the application's source code includes
2468the tracepoint provider header file containing the tracepoint
2469definitions to use, path:{tp.h}.
2470====
2471
2472.`tracepoint()` usage with a complex tracepoint definition.
2473====
2474Consider this complex tracepoint definition, where multiple event
2475fields refer to the same input arguments in their argument expression
2476parameter:
2477
2478[source,c]
2479.Tracepoint provider header file.
2480----
2481/* For `struct stat` */
2482#include <sys/types.h>
2483#include <sys/stat.h>
2484#include <unistd.h>
2485
2486TRACEPOINT_EVENT(
2487 my_provider,
2488 my_tracepoint,
2489 TP_ARGS(
2490 int, my_int_arg,
2491 char*, my_str_arg,
2492 struct stat*, st
2493 ),
2494 TP_FIELDS(
2495 ctf_integer(int, my_constant_field, 23 + 17)
2496 ctf_integer(int, my_int_arg_field, my_int_arg)
2497 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2498 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2499 my_str_arg[2] + my_str_arg[3])
2500 ctf_string(my_str_arg_field, my_str_arg)
2501 ctf_integer_hex(off_t, size_field, st->st_size)
2502 ctf_float(double, size_dbl_field, (double) st->st_size)
2503 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2504 size_t, strlen(my_str_arg) / 2)
2505 )
2506)
2507----
2508
2509You can refer to this tracepoint definition with the `tracepoint()`
2510macro in your application's source code like this:
2511
2512[source,c]
2513.Application's source file.
2514----
2515#define TRACEPOINT_DEFINE
2516#include "tp.h"
2517
2518int main(void)
2519{
2520 struct stat s;
2521
2522 stat("/etc/fstab", &s);
2523 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2524
2525 return 0;
2526}
2527----
2528
2529If you look at the event record that LTTng writes when tracing this
2530program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2531it should look like this:
2532
2533.Event record fields
2534|====
2535|Field's name |Field's value
2536|`my_constant_field` |40
2537|`my_int_arg_field` |23
2538|`my_int_arg_field2` |529
2539|`sum4_field` |389
2540|`my_str_arg_field` |`Hello, World!`
2541|`size_field` |0x12d
2542|`size_dbl_field` |301.0
2543|`half_my_str_arg_field` |`Hello,`
2544|====
2545====
2546
2547Sometimes, the arguments you pass to `tracepoint()` are expensive to
2548compute--they use the call stack, for example. To avoid this
2549computation when the tracepoint is disabled, you can use the
2550`tracepoint_enabled()` and `do_tracepoint()` macros.
2551
2552The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2553is:
2554
2555[source,c]
2556.`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2557----
2558tracepoint_enabled(provider_name, tracepoint_name)
2559do_tracepoint(provider_name, tracepoint_name, ...)
2560----
2561
2562Replace:
2563
2564* `provider_name` with the tracepoint provider name.
2565* `tracepoint_name` with the tracepoint name.
2566
2567`tracepoint_enabled()` returns a non-zero value if the tracepoint named
2568`tracepoint_name` from the provider named `provider_name` is enabled
2569**at run time**.
2570
2571`do_tracepoint()` is like `tracepoint()`, except that it doesn't check
2572if the tracepoint is enabled. Using `tracepoint()` with
`tracepoint_enabled()` is dangerous, since `tracepoint()` also contains
the `tracepoint_enabled()` check; a race condition is therefore
possible in this situation:
2576
2577[source,c]
2578.Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2579----
2580if (tracepoint_enabled(my_provider, my_tracepoint)) {
2581 stuff = prepare_stuff();
2582}
2583
2584tracepoint(my_provider, my_tracepoint, stuff);
2585----
2586
If the tracepoint becomes enabled after the condition is evaluated,
then `stuff` is not prepared: the emitted event could contain wrong
data, or the whole application could crash (with a segmentation fault,
for example).
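
To avoid this race, keep the expensive preparation and the event
emission under a single `tracepoint_enabled()` check, using
`do_tracepoint()` instead of `tracepoint()`. Here's a minimal sketch
which reuses the names of the previous example:

[source,c]
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Expensive computation: only performed when the tracepoint is enabled */
    stuff = prepare_stuff();

    /* Emit the event without checking the enabled state a second time */
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----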
2590
NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` has an
2592`STAP_PROBEV()` call. If you need it, you must emit
2593this call yourself.
2594
2595
2596[[building-tracepoint-providers-and-user-application]]
2597==== Build and link a tracepoint provider package and an application
2598
2599Once you have one or more <<tpp-header,tracepoint provider header
2600files>> and a <<tpp-source,tracepoint provider package source file>>,
2601you can create the tracepoint provider package by compiling its source
2602file. From here, multiple build and run scenarios are possible. The
2603following table shows common application and library configurations
2604along with the required command lines to achieve them.
2605
The following diagrams and instructions use these file names:
2607
2608`app`::
2609 Executable application.
2610
2611`app.o`::
2612 Application's object file.
2613
2614`tpp.o`::
2615 Tracepoint provider package object file.
2616
2617`tpp.a`::
2618 Tracepoint provider package archive file.
2619
2620`libtpp.so`::
2621 Tracepoint provider package shared object file.
2622
2623`emon.o`::
2624 User library object file.
2625
2626`libemon.so`::
2627 User library shared object file.
2628
We use the following symbols in the diagrams of the table below:
2630
2631[role="img-100"]
2632.Symbols used in the build scenario diagrams.
2633image::ust-sit-symbols.png[]
2634
2635We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2636variable in the following instructions.
2637
2638[role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2639.Common tracepoint provider package scenarios.
2640|====
2641|Scenario |Instructions
2642
2643|
2644The instrumented application is statically linked with
2645the tracepoint provider package object.
2646
2647image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2648
2649|
2650include::../common/ust-sit-step-tp-o.txt[]
2651
2652To build the instrumented application:
2653
2654. In path:{app.c}, before including path:{tpp.h}, add the following line:
2655+
2656--
2657[source,c]
2658----
2659#define TRACEPOINT_DEFINE
2660----
2661--
2662
2663. Compile the application source file:
2664+
2665--
2666[role="term"]
2667----
2668$ gcc -c app.c
2669----
2670--
2671
2672. Build the application:
2673+
2674--
2675[role="term"]
2676----
2677$ gcc -o app app.o tpp.o -llttng-ust -ldl
2678----
2679--
2680
2681To run the instrumented application:
2682
2683* Start the application:
2684+
2685--
2686[role="term"]
2687----
2688$ ./app
2689----
2690--
2691
2692|
2693The instrumented application is statically linked with the
2694tracepoint provider package archive file.
2695
2696image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2697
2698|
2699To create the tracepoint provider package archive file:
2700
2701. Compile the <<tpp-source,tracepoint provider package source file>>:
2702+
2703--
2704[role="term"]
2705----
2706$ gcc -I. -c tpp.c
2707----
2708--
2709
2710. Create the tracepoint provider package archive file:
2711+
2712--
2713[role="term"]
2714----
2715$ ar rcs tpp.a tpp.o
2716----
2717--
2718
2719To build the instrumented application:
2720
2721. In path:{app.c}, before including path:{tpp.h}, add the following line:
2722+
2723--
2724[source,c]
2725----
2726#define TRACEPOINT_DEFINE
2727----
2728--
2729
2730. Compile the application source file:
2731+
2732--
2733[role="term"]
2734----
2735$ gcc -c app.c
2736----
2737--
2738
2739. Build the application:
2740+
2741--
2742[role="term"]
2743----
2744$ gcc -o app app.o tpp.a -llttng-ust -ldl
2745----
2746--
2747
2748To run the instrumented application:
2749
2750* Start the application:
2751+
2752--
2753[role="term"]
2754----
2755$ ./app
2756----
2757--
2758
2759|
2760The instrumented application is linked with the tracepoint provider
2761package shared object.
2762
2763image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
2764
2765|
2766include::../common/ust-sit-step-tp-so.txt[]
2767
2768To build the instrumented application:
2769
2770. In path:{app.c}, before including path:{tpp.h}, add the following line:
2771+
2772--
2773[source,c]
2774----
2775#define TRACEPOINT_DEFINE
2776----
2777--
2778
2779. Compile the application source file:
2780+
2781--
2782[role="term"]
2783----
2784$ gcc -c app.c
2785----
2786--
2787
2788. Build the application:
2789+
2790--
2791[role="term"]
2792----
2793$ gcc -o app app.o -ldl -L. -ltpp
2794----
2795--
2796
2797To run the instrumented application:
2798
2799* Start the application:
2800+
2801--
2802[role="term"]
2803----
2804$ ./app
2805----
2806--
2807
2808|
2809The tracepoint provider package shared object is preloaded before the
2810instrumented application starts.
2811
2812image::ust-sit+tp-so-preloaded+app-instrumented.png[]
2813
2814|
2815include::../common/ust-sit-step-tp-so.txt[]
2816
2817To build the instrumented application:
2818
2819. In path:{app.c}, before including path:{tpp.h}, add the
2820 following lines:
2821+
2822--
2823[source,c]
2824----
2825#define TRACEPOINT_DEFINE
2826#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2827----
2828--
2829
2830. Compile the application source file:
2831+
2832--
2833[role="term"]
2834----
2835$ gcc -c app.c
2836----
2837--
2838
2839. Build the application:
2840+
2841--
2842[role="term"]
2843----
2844$ gcc -o app app.o -ldl
2845----
2846--
2847
2848To run the instrumented application with tracing support:
2849
2850* Preload the tracepoint provider package shared object and
2851 start the application:
2852+
2853--
2854[role="term"]
2855----
2856$ LD_PRELOAD=./libtpp.so ./app
2857----
2858--
2859
2860To run the instrumented application without tracing support:
2861
2862* Start the application:
2863+
2864--
2865[role="term"]
2866----
2867$ ./app
2868----
2869--
2870
2871|
2872The instrumented application dynamically loads the tracepoint provider
2873package shared object.
2874
2875See the <<dlclose-warning,warning about `dlclose()`>>.
2876
2877image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
2878
2879|
2880include::../common/ust-sit-step-tp-so.txt[]
2881
2882To build the instrumented application:
2883
2884. In path:{app.c}, before including path:{tpp.h}, add the
2885 following lines:
2886+
2887--
2888[source,c]
2889----
2890#define TRACEPOINT_DEFINE
2891#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2892----
2893--
2894
2895. Compile the application source file:
2896+
2897--
2898[role="term"]
2899----
2900$ gcc -c app.c
2901----
2902--
2903
2904. Build the application:
2905+
2906--
2907[role="term"]
2908----
2909$ gcc -o app app.o -ldl
2910----
2911--
2912
2913To run the instrumented application:
2914
2915* Start the application:
2916+
2917--
2918[role="term"]
2919----
2920$ ./app
2921----
2922--
2923
2924|
2925The application is linked with the instrumented user library.
2926
2927The instrumented user library is statically linked with the tracepoint
2928provider package object file.
2929
2930image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
2931
2932|
2933include::../common/ust-sit-step-tp-o-fpic.txt[]
2934
2935To build the instrumented user library:
2936
2937. In path:{emon.c}, before including path:{tpp.h}, add the
2938 following line:
2939+
2940--
2941[source,c]
2942----
2943#define TRACEPOINT_DEFINE
2944----
2945--
2946
2947. Compile the user library source file:
2948+
2949--
2950[role="term"]
2951----
2952$ gcc -I. -fpic -c emon.c
2953----
2954--
2955
2956. Build the user library shared object:
2957+
2958--
2959[role="term"]
2960----
2961$ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
2962----
2963--
2964
2965To build the application:
2966
2967. Compile the application source file:
2968+
2969--
2970[role="term"]
2971----
2972$ gcc -c app.c
2973----
2974--
2975
2976. Build the application:
2977+
2978--
2979[role="term"]
2980----
2981$ gcc -o app app.o -L. -lemon
2982----
2983--
2984
2985To run the application:
2986
2987* Start the application:
2988+
2989--
2990[role="term"]
2991----
2992$ ./app
2993----
2994--
2995
2996|
2997The application is linked with the instrumented user library.
2998
2999The instrumented user library is linked with the tracepoint provider
3000package shared object.
3001
3002image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3003
3004|
3005include::../common/ust-sit-step-tp-so.txt[]
3006
3007To build the instrumented user library:
3008
3009. In path:{emon.c}, before including path:{tpp.h}, add the
3010 following line:
3011+
3012--
3013[source,c]
3014----
3015#define TRACEPOINT_DEFINE
3016----
3017--
3018
3019. Compile the user library source file:
3020+
3021--
3022[role="term"]
3023----
3024$ gcc -I. -fpic -c emon.c
3025----
3026--
3027
3028. Build the user library shared object:
3029+
3030--
3031[role="term"]
3032----
3033$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3034----
3035--
3036
3037To build the application:
3038
3039. Compile the application source file:
3040+
3041--
3042[role="term"]
3043----
3044$ gcc -c app.c
3045----
3046--
3047
3048. Build the application:
3049+
3050--
3051[role="term"]
3052----
3053$ gcc -o app app.o -L. -lemon
3054----
3055--
3056
3057To run the application:
3058
3059* Start the application:
3060+
3061--
3062[role="term"]
3063----
3064$ ./app
3065----
3066--
3067
3068|
3069The tracepoint provider package shared object is preloaded before the
3070application starts.
3071
3072The application is linked with the instrumented user library.
3073
3074image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3075
3076|
3077include::../common/ust-sit-step-tp-so.txt[]
3078
3079To build the instrumented user library:
3080
3081. In path:{emon.c}, before including path:{tpp.h}, add the
3082 following lines:
3083+
3084--
3085[source,c]
3086----
3087#define TRACEPOINT_DEFINE
3088#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3089----
3090--
3091
3092. Compile the user library source file:
3093+
3094--
3095[role="term"]
3096----
3097$ gcc -I. -fpic -c emon.c
3098----
3099--
3100
3101. Build the user library shared object:
3102+
3103--
3104[role="term"]
3105----
3106$ gcc -shared -o libemon.so emon.o -ldl
3107----
3108--
3109
3110To build the application:
3111
3112. Compile the application source file:
3113+
3114--
3115[role="term"]
3116----
3117$ gcc -c app.c
3118----
3119--
3120
3121. Build the application:
3122+
3123--
3124[role="term"]
3125----
3126$ gcc -o app app.o -L. -lemon
3127----
3128--
3129
3130To run the application with tracing support:
3131
3132* Preload the tracepoint provider package shared object and
3133 start the application:
3134+
3135--
3136[role="term"]
3137----
3138$ LD_PRELOAD=./libtpp.so ./app
3139----
3140--
3141
3142To run the application without tracing support:
3143
3144* Start the application:
3145+
3146--
3147[role="term"]
3148----
3149$ ./app
3150----
3151--
3152
3153|
3154The application is linked with the instrumented user library.
3155
3156The instrumented user library dynamically loads the tracepoint provider
3157package shared object.
3158
3159See the <<dlclose-warning,warning about `dlclose()`>>.
3160
3161image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3162
3163|
3164include::../common/ust-sit-step-tp-so.txt[]
3165
3166To build the instrumented user library:
3167
3168. In path:{emon.c}, before including path:{tpp.h}, add the
3169 following lines:
3170+
3171--
3172[source,c]
3173----
3174#define TRACEPOINT_DEFINE
3175#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3176----
3177--
3178
3179. Compile the user library source file:
3180+
3181--
3182[role="term"]
3183----
3184$ gcc -I. -fpic -c emon.c
3185----
3186--
3187
3188. Build the user library shared object:
3189+
3190--
3191[role="term"]
3192----
3193$ gcc -shared -o libemon.so emon.o -ldl
3194----
3195--
3196
3197To build the application:
3198
3199. Compile the application source file:
3200+
3201--
3202[role="term"]
3203----
3204$ gcc -c app.c
3205----
3206--
3207
3208. Build the application:
3209+
3210--
3211[role="term"]
3212----
3213$ gcc -o app app.o -L. -lemon
3214----
3215--
3216
3217To run the application:
3218
3219* Start the application:
3220+
3221--
3222[role="term"]
3223----
3224$ ./app
3225----
3226--
3227
3228|
3229The application dynamically loads the instrumented user library.
3230
3231The instrumented user library is linked with the tracepoint provider
3232package shared object.
3233
3234See the <<dlclose-warning,warning about `dlclose()`>>.
3235
3236image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3237
3238|
3239include::../common/ust-sit-step-tp-so.txt[]
3240
3241To build the instrumented user library:
3242
3243. In path:{emon.c}, before including path:{tpp.h}, add the
3244 following line:
3245+
3246--
3247[source,c]
3248----
3249#define TRACEPOINT_DEFINE
3250----
3251--
3252
3253. Compile the user library source file:
3254+
3255--
3256[role="term"]
3257----
3258$ gcc -I. -fpic -c emon.c
3259----
3260--
3261
3262. Build the user library shared object:
3263+
3264--
3265[role="term"]
3266----
3267$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3268----
3269--
3270
3271To build the application:
3272
3273. Compile the application source file:
3274+
3275--
3276[role="term"]
3277----
3278$ gcc -c app.c
3279----
3280--
3281
3282. Build the application:
3283+
3284--
3285[role="term"]
3286----
3287$ gcc -o app app.o -ldl -L. -lemon
3288----
3289--
3290
3291To run the application:
3292
3293* Start the application:
3294+
3295--
3296[role="term"]
3297----
3298$ ./app
3299----
3300--
3301
3302|
3303The application dynamically loads the instrumented user library.
3304
3305The instrumented user library dynamically loads the tracepoint provider
3306package shared object.
3307
3308See the <<dlclose-warning,warning about `dlclose()`>>.
3309
3310image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3311
3312|
3313include::../common/ust-sit-step-tp-so.txt[]
3314
3315To build the instrumented user library:
3316
3317. In path:{emon.c}, before including path:{tpp.h}, add the
3318 following lines:
3319+
3320--
3321[source,c]
3322----
3323#define TRACEPOINT_DEFINE
3324#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3325----
3326--
3327
3328. Compile the user library source file:
3329+
3330--
3331[role="term"]
3332----
3333$ gcc -I. -fpic -c emon.c
3334----
3335--
3336
3337. Build the user library shared object:
3338+
3339--
3340[role="term"]
3341----
3342$ gcc -shared -o libemon.so emon.o -ldl
3343----
3344--
3345
3346To build the application:
3347
3348. Compile the application source file:
3349+
3350--
3351[role="term"]
3352----
3353$ gcc -c app.c
3354----
3355--
3356
3357. Build the application:
3358+
3359--
3360[role="term"]
3361----
3362$ gcc -o app app.o -ldl -L. -lemon
3363----
3364--
3365
3366To run the application:
3367
3368* Start the application:
3369+
3370--
3371[role="term"]
3372----
3373$ ./app
3374----
3375--
3376
3377|
3378The tracepoint provider package shared object is preloaded before the
3379application starts.
3380
3381The application dynamically loads the instrumented user library.
3382
3383image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3384
3385|
3386include::../common/ust-sit-step-tp-so.txt[]
3387
3388To build the instrumented user library:
3389
3390. In path:{emon.c}, before including path:{tpp.h}, add the
3391 following lines:
3392+
3393--
3394[source,c]
3395----
3396#define TRACEPOINT_DEFINE
3397#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3398----
3399--
3400
3401. Compile the user library source file:
3402+
3403--
3404[role="term"]
3405----
3406$ gcc -I. -fpic -c emon.c
3407----
3408--
3409
3410. Build the user library shared object:
3411+
3412--
3413[role="term"]
3414----
3415$ gcc -shared -o libemon.so emon.o -ldl
3416----
3417--
3418
3419To build the application:
3420
3421. Compile the application source file:
3422+
3423--
3424[role="term"]
3425----
3426$ gcc -c app.c
3427----
3428--
3429
3430. Build the application:
3431+
3432--
3433[role="term"]
3434----
3435$ gcc -o app app.o -L. -lemon
3436----
3437--
3438
3439To run the application with tracing support:
3440
3441* Preload the tracepoint provider package shared object and
3442 start the application:
3443+
3444--
3445[role="term"]
3446----
3447$ LD_PRELOAD=./libtpp.so ./app
3448----
3449--
3450
3451To run the application without tracing support:
3452
3453* Start the application:
3454+
3455--
3456[role="term"]
3457----
3458$ ./app
3459----
3460--
3461
3462|
3463The application is statically linked with the tracepoint provider
3464package object file.
3465
3466The application is linked with the instrumented user library.
3467
3468image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3469
3470|
3471include::../common/ust-sit-step-tp-o.txt[]
3472
3473To build the instrumented user library:
3474
3475. In path:{emon.c}, before including path:{tpp.h}, add the
3476 following line:
3477+
3478--
3479[source,c]
3480----
3481#define TRACEPOINT_DEFINE
3482----
3483--
3484
3485. Compile the user library source file:
3486+
3487--
3488[role="term"]
3489----
3490$ gcc -I. -fpic -c emon.c
3491----
3492--
3493
3494. Build the user library shared object:
3495+
3496--
3497[role="term"]
3498----
3499$ gcc -shared -o libemon.so emon.o
3500----
3501--
3502
3503To build the application:
3504
3505. Compile the application source file:
3506+
3507--
3508[role="term"]
3509----
3510$ gcc -c app.c
3511----
3512--
3513
3514. Build the application:
3515+
3516--
3517[role="term"]
3518----
3519$ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3520----
3521--
3522
3523To run the instrumented application:
3524
3525* Start the application:
3526+
3527--
3528[role="term"]
3529----
3530$ ./app
3531----
3532--
3533
3534|
3535The application is statically linked with the tracepoint provider
3536package object file.
3537
3538The application dynamically loads the instrumented user library.
3539
3540image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3541
3542|
3543include::../common/ust-sit-step-tp-o.txt[]
3544
3545To build the application:
3546
3547. In path:{app.c}, before including path:{tpp.h}, add the following line:
3548+
3549--
3550[source,c]
3551----
3552#define TRACEPOINT_DEFINE
3553----
3554--
3555
3556. Compile the application source file:
3557+
3558--
3559[role="term"]
3560----
3561$ gcc -c app.c
3562----
3563--
3564
3565. Build the application:
3566+
3567--
3568[role="term"]
3569----
3570$ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3571 -llttng-ust -ldl
3572----
3573--
3574+
3575The `--export-dynamic` option passed to the linker is necessary for the
3576dynamically loaded library to ``see'' the tracepoint symbols defined in
3577the application.
3578
3579To build the instrumented user library:
3580
3581. Compile the user library source file:
3582+
3583--
3584[role="term"]
3585----
3586$ gcc -I. -fpic -c emon.c
3587----
3588--
3589
3590. Build the user library shared object:
3591+
3592--
3593[role="term"]
3594----
3595$ gcc -shared -o libemon.so emon.o
3596----
3597--
3598
3599To run the application:
3600
3601* Start the application:
3602+
3603--
3604[role="term"]
3605----
3606$ ./app
3607----
3608--
3609|====
3610
3611[[dlclose-warning]]
3612[IMPORTANT]
3613.Do not use man:dlclose(3) on a tracepoint provider package
3614====
3615Never use man:dlclose(3) on any shared object which:
3616
3617* Is linked with, statically or dynamically, a tracepoint provider
3618 package.
3619* Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3620 package shared object.
3621
3622This is currently considered **unsafe** due to a lack of reference
3623counting from LTTng-UST to the shared object.
3624
3625A known workaround (available since glibc 2.2) is to use the
3626`RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3627effect of not unloading the loaded shared object, even if man:dlclose(3)
3628is called.
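
For example, here's a minimal sketch which uses the path:{libtpp.so}
name from the scenarios above:

[source,c]
----
/* RTLD_NODELETE is a GNU extension */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Keep the tracepoint provider package loaded, even after dlclose() */
    void *handle = dlopen("libtpp.so", RTLD_NOW | RTLD_NODELETE);

    if (!handle) {
        fprintf(stderr, "Cannot load libtpp.so: %s\n", dlerror());
        return 1;
    }

    /* Use the instrumented code as usual here */

    dlclose(handle);
    return 0;
}
----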
3629
3630You can also preload the tracepoint provider package shared object with
3631the env:LD_PRELOAD environment variable to overcome this limitation.
3632====
3633
3634
3635[[using-lttng-ust-with-daemons]]
3636===== Use noch:{LTTng-UST} with daemons
3637
3638If your instrumented application calls man:fork(2), man:clone(2),
3639or BSD's man:rfork(2), without a following man:exec(3)-family
3640system call, you must preload the path:{liblttng-ust-fork.so} shared
3641object when you start the application.
3642
3643[role="term"]
3644----
3645$ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3646----
3647
3648If your tracepoint provider package is
3649a shared library which you also preload, you must put both
3650shared objects in env:LD_PRELOAD:
3651
3652[role="term"]
3653----
3654$ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3655----
3656
3657
3658[role="since-2.9"]
3659[[liblttng-ust-fd]]
3660===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3661
3662If your instrumented application closes one or more file descriptors
3663which it did not open itself, you must preload the
3664path:{liblttng-ust-fd.so} shared object when you start the application:
3665
3666[role="term"]
3667----
3668$ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3669----
3670
3671Typical use cases include closing all the file descriptors after
3672man:fork(2) or man:rfork(2) and buggy applications doing
3673``double closes''.
3674
3675
3676[[lttng-ust-pkg-config]]
3677===== Use noch:{pkg-config}
3678
3679On some distributions, LTTng-UST ships with a
3680https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3681metadata file. If this is your case, then you can use cmd:pkg-config to
3682build an application on the command line:
3683
3684[role="term"]
3685----
3686$ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3687----
3688
3689
3690[[instrumenting-32-bit-app-on-64-bit-system]]
3691===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3692
3693In order to trace a 32-bit application running on a 64-bit system,
3694LTTng must use a dedicated 32-bit
3695<<lttng-consumerd,consumer daemon>>.
3696
3697The following steps show how to build and install a 32-bit consumer
3698daemon, which is _not_ part of the default 64-bit LTTng build, how to
3699build and install the 32-bit LTTng-UST libraries, and how to build and
3700link an instrumented 32-bit application in that context.
3701
3702To build a 32-bit instrumented application for a 64-bit target system,
3703assuming you have a fresh target system with no installed Userspace RCU
3704or LTTng packages:
3705
3706. Download, build, and install a 32-bit version of Userspace RCU:
3707+
3708--
3709[role="term"]
3710----
3711$ cd $(mktemp -d) &&
3712wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3713tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3714cd userspace-rcu-0.9.* &&
3715./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3716make &&
3717sudo make install &&
3718sudo ldconfig
3719----
3720--
3721
. Using your distribution's package manager, or from source, install
  the 32-bit versions of the following dependencies of LTTng-tools
  and LTTng-UST:
3725+
3726--
3727* https://sourceforge.net/projects/libuuid/[libuuid]
3728* http://directory.fsf.org/wiki/Popt[popt]
3729* http://www.xmlsoft.org/[libxml2]
3730--
3731
3732. Download, build, and install a 32-bit version of the latest
3733 LTTng-UST{nbsp}{revision}:
3734+
3735--
3736[role="term"]
3737----
3738$ cd $(mktemp -d) &&
3739wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
3740tar -xf lttng-ust-latest-2.10.tar.bz2 &&
3741cd lttng-ust-2.10.* &&
3742./configure --libdir=/usr/local/lib32 \
3743 CFLAGS=-m32 CXXFLAGS=-m32 \
3744 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
3745make &&
3746sudo make install &&
3747sudo ldconfig
3748----
3749--
3750+
3751[NOTE]
3752====
3753Depending on your distribution,
375432-bit libraries could be installed at a different location than
3755`/usr/lib32`. For example, Debian is known to install
3756some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
3757
3758In this case, make sure to set `LDFLAGS` to all the
3759relevant 32-bit library paths, for example:
3760
3761[role="term"]
3762----
3763$ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
3764----
3765====
3766
3767. Download the latest LTTng-tools{nbsp}{revision}, build, and install
3768 the 32-bit consumer daemon:
3769+
3770--
3771[role="term"]
3772----
3773$ cd $(mktemp -d) &&
3774wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3775tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3776cd lttng-tools-2.10.* &&
3777./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3778 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
3779 --disable-bin-lttng --disable-bin-lttng-crash \
3780 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
3781make &&
3782cd src/bin/lttng-consumerd &&
3783sudo make install &&
3784sudo ldconfig
3785----
3786--
3787
3788. From your distribution or from source,
3789 <<installing-lttng,install>> the 64-bit versions of
3790 LTTng-UST and Userspace RCU.
3791. Download, build, and install the 64-bit version of the
3792 latest LTTng-tools{nbsp}{revision}:
3793+
3794--
3795[role="term"]
3796----
3797$ cd $(mktemp -d) &&
3798wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3799tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3800cd lttng-tools-2.10.* &&
3801./configure --with-consumerd32-libdir=/usr/local/lib32 \
3802 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
3803make &&
3804sudo make install &&
3805sudo ldconfig
3806----
3807--
3808
3809. Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
3810 when linking your 32-bit application:
3811+
3812----
3813-m32 -L/usr/lib32 -L/usr/local/lib32 \
3814-Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
3815----
3816+
3817For example, let's rebuild the quick start example in
3818<<tracing-your-own-user-application,Trace a user application>> as an
3819instrumented 32-bit application:
3820+
3821--
3822[role="term"]
3823----
3824$ gcc -m32 -c -I. hello-tp.c
3825$ gcc -m32 -c hello.c
3826$ gcc -m32 -o hello hello.o hello-tp.o \
3827 -L/usr/lib32 -L/usr/local/lib32 \
3828 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
3829 -llttng-ust -ldl
3830----
3831--
3832
3833No special action is required to execute the 32-bit application and
3834to trace it: use the command-line man:lttng(1) tool as usual.
3835
3836
3837[role="since-2.5"]
3838[[tracef]]
3839==== Use `tracef()`
3840
3841man:tracef(3) is a small LTTng-UST API designed for quick,
3842man:printf(3)-like instrumentation without the burden of
3843<<tracepoint-provider,creating>> and
3844<<building-tracepoint-providers-and-user-application,building>>
3845a tracepoint provider package.
3846
3847To use `tracef()` in your application:
3848
3849. In the C or C++ source files where you need to use `tracef()`,
3850 include `<lttng/tracef.h>`:
3851+
3852--
3853[source,c]
3854----
3855#include <lttng/tracef.h>
3856----
3857--
3858
3859. In the application's source code, use `tracef()` like you would use
3860 man:printf(3):
3861+
3862--
3863[source,c]
3864----
3865 /* ... */
3866
3867 tracef("my message: %d (%s)", my_integer, my_string);
3868
3869 /* ... */
3870----
3871--
3872
3873. Link your application with `liblttng-ust`:
3874+
3875--
3876[role="term"]
3877----
3878$ gcc -o app app.c -llttng-ust
3879----
3880--
3881
3882To trace the events that `tracef()` calls emit:
3883
3884* <<enabling-disabling-events,Create an event rule>> which matches the
3885 `lttng_ust_tracef:*` event name:
3886+
3887--
3888[role="term"]
3889----
3890$ lttng enable-event --userspace 'lttng_ust_tracef:*'
3891----
3892--
3893
3894[IMPORTANT]
3895.Limitations of `tracef()`
3896====
3897The `tracef()` utility function was developed to make user space tracing
3898super simple, albeit with notable disadvantages compared to
3899<<defining-tracepoints,user-defined tracepoints>>:
3900
3901* All the emitted events have the same tracepoint provider and
3902 tracepoint names, respectively `lttng_ust_tracef` and `event`.
3903* There is no static type checking.
3904* The only event record field you actually get, named `msg`, is a string
3905 potentially containing the values you passed to `tracef()`
3906 using your own format string. This also means that you cannot filter
3907 events with a custom expression at run time because there are no
3908 isolated fields.
3909* Since `tracef()` uses the C standard library's man:vasprintf(3)
3910 function behind the scenes to format the strings at run time, its
3911 expected performance is lower than with user-defined tracepoints,
3912 which do not require a conversion to a string.
3913
Taking this into consideration, `tracef()` is useful for quick
prototyping and debugging, but you should not consider it for
permanent, serious application instrumentation.
3917====
3918
3919
3920[role="since-2.7"]
3921[[tracelog]]
3922==== Use `tracelog()`
3923
3924The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
3925the difference that it accepts an additional log level parameter.
3926
3927The goal of `tracelog()` is to ease the migration from logging to
3928tracing.
3929
3930To use `tracelog()` in your application:
3931
3932. In the C or C++ source files where you need to use `tracelog()`,
3933 include `<lttng/tracelog.h>`:
3934+
3935--
3936[source,c]
3937----
3938#include <lttng/tracelog.h>
3939----
3940--
3941
3942. In the application's source code, use `tracelog()` like you would use
3943 man:printf(3), except for the first parameter which is the log
3944 level:
3945+
3946--
3947[source,c]
3948----
3949 /* ... */
3950
3951 tracelog(TRACE_WARNING, "my message: %d (%s)",
3952 my_integer, my_string);
3953
3954 /* ... */
3955----
3956--
3957+
3958See man:lttng-ust(3) for a list of available log level names.
3959
3960. Link your application with `liblttng-ust`:
3961+
3962--
3963[role="term"]
3964----
3965$ gcc -o app app.c -llttng-ust
3966----
3967--
3968
To trace the events that `tracelog()` calls emit with a log level
_at least as severe as_ a specific log level:
3971
3972* <<enabling-disabling-events,Create an event rule>> which matches the
3973 `lttng_ust_tracelog:*` event name and a minimum level
3974 of severity:
3975+
3976--
3977[role="term"]
3978----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
      --loglevel=TRACE_WARNING
3981----
3982--
3983
3984To trace the events that `tracelog()` calls emit with a
3985_specific log level_:
3986
3987* Create an event rule which matches the `lttng_ust_tracelog:*`
3988 event name and a specific log level:
3989+
3990--
3991[role="term"]
3992----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
      --loglevel-only=TRACE_INFO
3995----
3996--
3997
3998
3999[[prebuilt-ust-helpers]]
4000=== Prebuilt user space tracing helpers
4001
The LTTng-UST package provides a few helpers in the form of preloadable
shared objects which automatically instrument system functions and
calls.
4005
4006The helper shared objects are normally found in dir:{/usr/lib}. If you
4007built LTTng-UST <<building-from-source,from source>>, they are probably
4008located in dir:{/usr/local/lib}.
4009
4010The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4011are:
4012
4013path:{liblttng-ust-libc-wrapper.so}::
4014path:{liblttng-ust-pthread-wrapper.so}::
4015 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4016 memory and POSIX threads function tracing>>.
4017
4018path:{liblttng-ust-cyg-profile.so}::
4019path:{liblttng-ust-cyg-profile-fast.so}::
4020 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4021
4022path:{liblttng-ust-dl.so}::
4023 <<liblttng-ust-dl,Dynamic linker tracing>>.
4024
4025To use a user space tracing helper with any user application:
4026
4027* Preload the helper shared object when you start the application:
4028+
4029--
4030[role="term"]
4031----
4032$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4033----
4034--
4035+
4036You can preload more than one helper:
4037+
4038--
4039[role="term"]
4040----
4041$ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4042----
4043--
4044
4045
4046[role="since-2.3"]
4047[[liblttng-ust-libc-pthread-wrapper]]
4048==== Instrument C standard library memory and POSIX threads functions
4049
4050The path:{liblttng-ust-libc-wrapper.so} and
4051path:{liblttng-ust-pthread-wrapper.so} helpers
4052add instrumentation to some C standard library and POSIX
4053threads functions.
4054
4055[role="growable"]
4056.Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4057|====
4058|TP provider name |TP name |Instrumented function
4059
4060.6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4061 |`calloc` |man:calloc(3)
4062 |`realloc` |man:realloc(3)
4063 |`free` |man:free(3)
4064 |`memalign` |man:memalign(3)
4065 |`posix_memalign` |man:posix_memalign(3)
4066|====
4067
4068[role="growable"]
4069.Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4070|====
4071|TP provider name |TP name |Instrumented function
4072
4073.4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4074 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4075 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4076 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4077|====
4078
4079When you preload the shared object, it replaces the functions listed
4080in the previous tables by wrappers which contain tracepoints and call
4081the replaced functions.
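
For example, assuming an existing <<tracing-session,tracing session>>,
you can record the C{nbsp}standard library memory events of a
hypothetical `my-app` program like this (a sketch; `my-app` is a
placeholder, and the tracepoint names are listed in the table above):

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_libc:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
----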
4082
4083
4084[[liblttng-ust-cyg-profile]]
4085==== Instrument function entry and exit
4086
4087The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4088to the entry and exit points of functions.
4089
man:gcc(1) and man:clang(1) have an option named
https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
which generates instrumentation calls for function entry and exit.
The LTTng-UST function tracing helpers,
path:{liblttng-ust-cyg-profile.so} and
path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
to add tracepoints to the two generated profiling functions (whose
names contain `cyg_profile`, hence the helper names).
4098
4099To use the LTTng-UST function tracing helper, the source files to
4100instrument must be built using the `-finstrument-functions` compiler
4101flag.
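
For example, assuming a hypothetical source file named path:{my-file.c},
you can compile it with function instrumentation like this:

[role="term"]
----
$ gcc -finstrument-functions -c my-file.c
----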
4102
4103There are two versions of the LTTng-UST function tracing helper:
4104
4105* **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4106 that you should only use when it can be _guaranteed_ that the
4107 complete event stream is recorded without any lost event record.
4108 Any kind of duplicate information is left out.
4109+
4110Assuming no event record is lost, having only the function addresses on
4111entry is enough to create a call graph, since an event record always
4112contains the ID of the CPU that generated it.
4113+
4114You can use a tool like man:addr2line(1) to convert function addresses
4115back to source file names and line numbers.
4116
4117* **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4118which also works in use cases where event records might get discarded or
4119not recorded from application startup.
4120In these cases, the trace analyzer needs more information to be
4121able to reconstruct the program flow.
4122
4123See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4124points of this helper.
4125
All the tracepoints that these helpers provide have the
log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
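
For example, you can match only those tracepoints with an event rule
such as the following sketch, which combines the
opt:lttng-enable-event(1):--all option with a log level condition:

[role="term"]
----
$ lttng enable-event --userspace --all --loglevel-only=TRACE_DEBUG_FUNCTION
----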
4128
TIP: It's sometimes a good idea to limit the number of source files that
you compile with the `-finstrument-functions` option to prevent LTTng
from writing an excessive amount of trace data at run time. When using
man:gcc(1), you can use the
`-finstrument-functions-exclude-function-list` option to avoid
instrumenting the entries and exits of specific functions.
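
For example (a sketch; `main` and `my_fn` are placeholder function
names):

[role="term"]
----
$ gcc -finstrument-functions \
      -finstrument-functions-exclude-function-list=main,my_fn \
      -c my-file.c
----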
4135
4136
4137[role="since-2.4"]
4138[[liblttng-ust-dl]]
4139==== Instrument the dynamic linker
4140
4141The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4142man:dlopen(3) and man:dlclose(3) function calls.
4143
4144See man:lttng-ust-dl(3) to learn more about the instrumentation points
4145of this helper.
4146
4147
4148[role="since-2.4"]
4149[[java-application]]
4150=== User space Java agent
4151
4152You can instrument any Java application which uses one of the following
4153logging frameworks:
4154
4155* The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4156 (JUL) core logging facilities.
4157* http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4158 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4159
4160[role="img-100"]
4161.LTTng-UST Java agent imported by a Java application.
4162image::java-app.png[]
4163
4164Note that the methods described below are new in LTTng{nbsp}{revision}.
4165Previous LTTng versions use another technique.
4166
4167NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4168and https://ci.lttng.org/[continuous integration], thus this version is
4169directly supported. However, the LTTng-UST Java agent is also tested
4170with OpenJDK{nbsp}7.
4171
4172
4173[role="since-2.8"]
4174[[jul]]
4175==== Use the LTTng-UST Java agent for `java.util.logging`
4176
4177To use the LTTng-UST Java agent in a Java application which uses
4178`java.util.logging` (JUL):
4179
4180. In the Java application's source code, import the LTTng-UST
4181 log handler package for `java.util.logging`:
4182+
4183--
4184[source,java]
4185----
4186import org.lttng.ust.agent.jul.LttngLogHandler;
4187----
4188--
4189
4190. Create an LTTng-UST JUL log handler:
4191+
4192--
4193[source,java]
4194----
4195Handler lttngUstLogHandler = new LttngLogHandler();
4196----
4197--
4198
4199. Add this handler to the JUL loggers which should emit LTTng events:
4200+
4201--
4202[source,java]
4203----
4204Logger myLogger = Logger.getLogger("some-logger");
4205
4206myLogger.addHandler(lttngUstLogHandler);
4207----
4208--
4209
4210. Use `java.util.logging` log statements and configuration as usual.
4211 The loggers with an attached LTTng-UST log handler can emit
4212 LTTng events.
4213
4214. Before exiting the application, remove the LTTng-UST log handler from
4215 the loggers attached to it and call its `close()` method:
4216+
4217--
4218[source,java]
4219----
4220myLogger.removeHandler(lttngUstLogHandler);
4221lttngUstLogHandler.close();
4222----
4223--
4224+
4225This is not strictly necessary, but it is recommended for a clean
4226disposal of the handler's resources.
4227
4228. Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4229 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4230 in the
4231 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4232 path] when you build the Java application.
4233+
4234The JAR files are typically located in dir:{/usr/share/java}.
4235+
4236IMPORTANT: The LTTng-UST Java agent must be
4237<<installing-lttng,installed>> for the logging framework your
4238application uses.
4239
4240.Use the LTTng-UST Java agent for `java.util.logging`.
4241====
4242[source,java]
4243.path:{Test.java}
4244----
4245import java.io.IOException;
4246import java.util.logging.Handler;
4247import java.util.logging.Logger;
4248import org.lttng.ust.agent.jul.LttngLogHandler;
4249
4250public class Test
4251{
4252 private static final int answer = 42;
4253
4254 public static void main(String[] argv) throws Exception
4255 {
4256 // Create a logger
4257 Logger logger = Logger.getLogger("jello");
4258
4259 // Create an LTTng-UST log handler
4260 Handler lttngUstLogHandler = new LttngLogHandler();
4261
4262 // Add the LTTng-UST log handler to our logger
4263 logger.addHandler(lttngUstLogHandler);
4264
4265 // Log at will!
4266 logger.info("some info");
4267 logger.warning("some warning");
4268 Thread.sleep(500);
4269 logger.finer("finer information; the answer is " + answer);
4270 Thread.sleep(123);
4271 logger.severe("error!");
4272
4273 // Not mandatory, but cleaner
4274 logger.removeHandler(lttngUstLogHandler);
4275 lttngUstLogHandler.close();
4276 }
4277}
4278----
4279
4280Build this example:
4281
4282[role="term"]
4283----
4284$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4285----
4286
4287<<creating-destroying-tracing-sessions,Create a tracing session>>,
4288<<enabling-disabling-events,create an event rule>> matching the
4289`jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4290
4291[role="term"]
4292----
4293$ lttng create
4294$ lttng enable-event --jul jello
4295$ lttng start
4296----
4297
4298Run the compiled class:
4299
4300[role="term"]
4301----
4302$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4303----
4304
4305<<basic-tracing-session-control,Stop tracing>> and inspect the
4306recorded events:
4307
4308[role="term"]
4309----
4310$ lttng stop
4311$ lttng view
4312----
4313====
4314
4315In the resulting trace, an <<event,event record>> generated by a Java
4316application using `java.util.logging` is named `lttng_jul:event` and
4317has the following fields:
4318
4319`msg`::
4320 Log record's message.
4321
4322`logger_name`::
4323 Logger name.
4324
4325`class_name`::
4326 Name of the class in which the log statement was executed.
4327
4328`method_name`::
4329 Name of the method in which the log statement was executed.
4330
4331`long_millis`::
4332 Logging time (timestamp in milliseconds).
4333
4334`int_loglevel`::
4335 Log level integer value.
4336
4337`int_threadid`::
4338 ID of the thread in which the log statement was executed.
4339
4340You can use the opt:lttng-enable-event(1):--loglevel or
4341opt:lttng-enable-event(1):--loglevel-only option of the
4342man:lttng-enable-event(1) command to target a range of JUL log levels
4343or a specific JUL log level.
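
For example, to match only the log statements of the `jello` logger
above with a level at least as severe as a warning (a sketch; see
man:lttng-enable-event(1) for the available JUL log level names):

[role="term"]
----
$ lttng enable-event --jul jello --loglevel=JUL_WARNING
----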
4344
4345
4346[role="since-2.8"]
4347[[log4j]]
4348==== Use the LTTng-UST Java agent for Apache log4j
4349
4350To use the LTTng-UST Java agent in a Java application which uses
4351Apache log4j 1.2:
4352
4353. In the Java application's source code, import the LTTng-UST
4354 log appender package for Apache log4j:
4355+
4356--
4357[source,java]
4358----
4359import org.lttng.ust.agent.log4j.LttngLogAppender;
4360----
4361--
4362
4363. Create an LTTng-UST log4j log appender:
4364+
4365--
4366[source,java]
4367----
4368Appender lttngUstLogAppender = new LttngLogAppender();
4369----
4370--
4371
4372. Add this appender to the log4j loggers which should emit LTTng events:
4373+
4374--
4375[source,java]
4376----
4377Logger myLogger = Logger.getLogger("some-logger");
4378
4379myLogger.addAppender(lttngUstLogAppender);
4380----
4381--
4382
4383. Use Apache log4j log statements and configuration as usual. The
4384 loggers with an attached LTTng-UST log appender can emit LTTng events.
4385
4386. Before exiting the application, remove the LTTng-UST log appender from
4387 the loggers attached to it and call its `close()` method:
4388+
4389--
4390[source,java]
4391----
4392myLogger.removeAppender(lttngUstLogAppender);
4393lttngUstLogAppender.close();
4394----
4395--
4396+
4397This is not strictly necessary, but it is recommended for a clean
4398disposal of the appender's resources.
4399
4400. Include the LTTng-UST Java agent's common and log4j-specific JAR
4401 files, path:{lttng-ust-agent-common.jar} and
4402 path:{lttng-ust-agent-log4j.jar}, in the
4403 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4404 path] when you build the Java application.
4405+
4406The JAR files are typically located in dir:{/usr/share/java}.
4407+
4408IMPORTANT: The LTTng-UST Java agent must be
4409<<installing-lttng,installed>> for the logging framework your
4410application uses.
4411
4412.Use the LTTng-UST Java agent for Apache log4j.
4413====
4414[source,java]
4415.path:{Test.java}
4416----
4417import org.apache.log4j.Appender;
4418import org.apache.log4j.Logger;
4419import org.lttng.ust.agent.log4j.LttngLogAppender;
4420
4421public class Test
4422{
4423 private static final int answer = 42;
4424
4425 public static void main(String[] argv) throws Exception
4426 {
4427 // Create a logger
4428 Logger logger = Logger.getLogger("jello");
4429
4430 // Create an LTTng-UST log appender
4431 Appender lttngUstLogAppender = new LttngLogAppender();
4432
4433 // Add the LTTng-UST log appender to our logger
4434 logger.addAppender(lttngUstLogAppender);
4435
4436 // Log at will!
4437 logger.info("some info");
4438 logger.warn("some warning");
4439 Thread.sleep(500);
4440 logger.debug("debug information; the answer is " + answer);
4441 Thread.sleep(123);
4442 logger.fatal("error!");
4443
4444 // Not mandatory, but cleaner
4445 logger.removeAppender(lttngUstLogAppender);
4446 lttngUstLogAppender.close();
4447 }
4448}
----
4451
4452Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4453file):
4454
4455[role="term"]
4456----
4457$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4458----
4459
4460<<creating-destroying-tracing-sessions,Create a tracing session>>,
4461<<enabling-disabling-events,create an event rule>> matching the
4462`jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4463
4464[role="term"]
4465----
4466$ lttng create
4467$ lttng enable-event --log4j jello
4468$ lttng start
4469----
4470
4471Run the compiled class:
4472
4473[role="term"]
4474----
4475$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4476----
4477
4478<<basic-tracing-session-control,Stop tracing>> and inspect the
4479recorded events:
4480
4481[role="term"]
4482----
4483$ lttng stop
4484$ lttng view
4485----
4486====
4487
4488In the resulting trace, an <<event,event record>> generated by a Java
4489application using log4j is named `lttng_log4j:event` and
4490has the following fields:
4491
4492`msg`::
4493 Log record's message.
4494
4495`logger_name`::
4496 Logger name.
4497
4498`class_name`::
4499 Name of the class in which the log statement was executed.
4500
4501`method_name`::
4502 Name of the method in which the log statement was executed.
4503
4504`filename`::
4505 Name of the file in which the executed log statement is located.
4506
4507`line_number`::
4508 Line number at which the log statement was executed.
4509
4510`timestamp`::
4511 Logging timestamp.
4512
4513`int_loglevel`::
4514 Log level integer value.
4515
4516`thread_name`::
4517 Name of the Java thread in which the log statement was executed.
4518
4519You can use the opt:lttng-enable-event(1):--loglevel or
4520opt:lttng-enable-event(1):--loglevel-only option of the
4521man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4522or a specific log4j log level.
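
For example, to match only the log statements of the `jello` logger
above with a level at least as severe as a warning (a sketch; see
man:lttng-enable-event(1) for the available log4j log level names):

[role="term"]
----
$ lttng enable-event --log4j jello --loglevel=LOG4J_WARN
----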
4523
4524
4525[role="since-2.8"]
4526[[java-application-context]]
4527==== Provide application-specific context fields in a Java application
4528
4529A Java application-specific context field is a piece of state provided
4530by the application which <<adding-context,you can add>>, using the
4531man:lttng-add-context(1) command, to each <<event,event record>>
4532produced by the log statements of this application.
4533
4534For example, a given object might have a current request ID variable.
4535You can create a context information retriever for this object and
4536assign a name to this current request ID. You can then, using the
4537man:lttng-add-context(1) command, add this context field by name to
4538the JUL or log4j <<channel,channel>>.
4539
4540To provide application-specific context fields in a Java application:
4541
4542. In the Java application's source code, import the LTTng-UST
4543 Java agent context classes and interfaces:
4544+
4545--
4546[source,java]
4547----
4548import org.lttng.ust.agent.context.ContextInfoManager;
4549import org.lttng.ust.agent.context.IContextInfoRetriever;
4550----
4551--
4552
4553. Create a context information retriever class, that is, a class which
4554 implements the `IContextInfoRetriever` interface:
4555+
4556--
4557[source,java]
4558----
4559class MyContextInfoRetriever implements IContextInfoRetriever
4560{
4561 @Override
4562 public Object retrieveContextInfo(String key)
4563 {
4564 if (key.equals("intCtx")) {
4565 return (short) 17;
4566 } else if (key.equals("strContext")) {
4567 return "context value!";
4568 } else {
4569 return null;
4570 }
4571 }
4572}
4573----
4574--
4575+
4576This `retrieveContextInfo()` method is the only member of the
4577`IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields and which state variables they return depend on your
specific scenario.
4581+
4582All primitive types and objects are supported as context fields.
4583When `retrieveContextInfo()` returns an object, the context field
4584serializer calls its `toString()` method to add a string field to
4585event records. The method can also return `null`, which means that
4586no context field is available for the required name.
4587
4588. Register an instance of your context information retriever class to
4589 the context information manager singleton:
4590+
4591--
4592[source,java]
4593----
4594IContextInfoRetriever cir = new MyContextInfoRetriever();
4595ContextInfoManager cim = ContextInfoManager.getInstance();
4596cim.registerContextInfoRetriever("retrieverName", cir);
4597----
4598--
4599
4600. Before exiting the application, remove your context information
4601 retriever from the context information manager singleton:
4602+
4603--
4604[source,java]
4605----
4606ContextInfoManager cim = ContextInfoManager.getInstance();
4607cim.unregisterContextInfoRetriever("retrieverName");
4608----
4609--
4610+
This is not strictly necessary, but it is recommended for a clean
disposal of the manager's resources.
4613
4614. Build your Java application with LTTng-UST Java agent support as
4615 usual, following the procedure for either the <<jul,JUL>> or
4616 <<log4j,Apache log4j>> framework.
4617
4618
4619.Provide application-specific context fields in a Java application.
4620====
4621[source,java]
4622.path:{Test.java}
4623----
4624import java.util.logging.Handler;
4625import java.util.logging.Logger;
4626import org.lttng.ust.agent.jul.LttngLogHandler;
4627import org.lttng.ust.agent.context.ContextInfoManager;
4628import org.lttng.ust.agent.context.IContextInfoRetriever;
4629
4630public class Test
4631{
4632 // Our context information retriever class
4633 private static class MyContextInfoRetriever
4634 implements IContextInfoRetriever
4635 {
4636 @Override
4637 public Object retrieveContextInfo(String key) {
4638 if (key.equals("intCtx")) {
4639 return (short) 17;
4640 } else if (key.equals("strContext")) {
4641 return "context value!";
4642 } else {
4643 return null;
4644 }
4645 }
4646 }
4647
4648 private static final int answer = 42;
4649
4650 public static void main(String args[]) throws Exception
4651 {
4652 // Get the context information manager instance
4653 ContextInfoManager cim = ContextInfoManager.getInstance();
4654
4655 // Create and register our context information retriever
4656 IContextInfoRetriever cir = new MyContextInfoRetriever();
4657 cim.registerContextInfoRetriever("myRetriever", cir);
4658
4659 // Create a logger
4660 Logger logger = Logger.getLogger("jello");
4661
4662 // Create an LTTng-UST log handler
4663 Handler lttngUstLogHandler = new LttngLogHandler();
4664
4665 // Add the LTTng-UST log handler to our logger
4666 logger.addHandler(lttngUstLogHandler);
4667
4668 // Log at will!
4669 logger.info("some info");
4670 logger.warning("some warning");
4671 Thread.sleep(500);
4672 logger.finer("finer information; the answer is " + answer);
4673 Thread.sleep(123);
4674 logger.severe("error!");
4675
4676 // Not mandatory, but cleaner
4677 logger.removeHandler(lttngUstLogHandler);
4678 lttngUstLogHandler.close();
4679 cim.unregisterContextInfoRetriever("myRetriever");
4680 }
4681}
4682----
4683
4684Build this example:
4685
4686[role="term"]
4687----
4688$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4689----
4690
4691<<creating-destroying-tracing-sessions,Create a tracing session>>
4692and <<enabling-disabling-events,create an event rule>> matching the
4693`jello` JUL logger:
4694
4695[role="term"]
4696----
4697$ lttng create
4698$ lttng enable-event --jul jello
4699----
4700
4701<<adding-context,Add the application-specific context fields>> to the
4702JUL channel:
4703
4704[role="term"]
4705----
4706$ lttng add-context --jul --type='$app.myRetriever:intCtx'
4707$ lttng add-context --jul --type='$app.myRetriever:strContext'
4708----
4709
4710<<basic-tracing-session-control,Start tracing>>:
4711
4712[role="term"]
4713----
4714$ lttng start
4715----
4716
4717Run the compiled class:
4718
4719[role="term"]
4720----
4721$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4722----
4723
4724<<basic-tracing-session-control,Stop tracing>> and inspect the
4725recorded events:
4726
4727[role="term"]
4728----
4729$ lttng stop
4730$ lttng view
4731----
4732====
4733
4734
4735[role="since-2.7"]
4736[[python-application]]
4737=== User space Python agent
4738
4739You can instrument a Python 2 or Python 3 application which uses the
4740standard https://docs.python.org/3/library/logging.html[`logging`]
4741package.
4742
4743Each log statement emits an LTTng event once the
4744application module imports the
4745<<lttng-ust-agents,LTTng-UST Python agent>> package.
4746
4747[role="img-100"]
4748.A Python application importing the LTTng-UST Python agent.
4749image::python-app.png[]
4750
4751To use the LTTng-UST Python agent:
4752
4753. In the Python application's source code, import the LTTng-UST Python
4754 agent:
4755+
4756--
4757[source,python]
4758----
4759import lttngust
4760----
4761--
4762+
4763The LTTng-UST Python agent automatically adds its logging handler to the
4764root logger at import time.
4765+
4766Any log statement that the application executes before this import does
4767not emit an LTTng event.
4768+
4769IMPORTANT: The LTTng-UST Python agent must be
4770<<installing-lttng,installed>>.
4771
4772. Use log statements and logging configuration as usual.
4773 Since the LTTng-UST Python agent adds a handler to the _root_
4774 logger, you can trace any log statement from any logger.
4775
4776.Use the LTTng-UST Python agent.
4777====
4778[source,python]
4779.path:{test.py}
4780----
4781import lttngust
4782import logging
4783import time
4784
4785
4786def example():
4787 logging.basicConfig()
4788 logger = logging.getLogger('my-logger')
4789
4790 while True:
4791 logger.debug('debug message')
4792 logger.info('info message')
4793 logger.warn('warn message')
4794 logger.error('error message')
4795 logger.critical('critical message')
4796 time.sleep(1)
4797
4798
4799if __name__ == '__main__':
4800 example()
4801----
4802
4803NOTE: `logging.basicConfig()`, which adds to the root logger a basic
4804logging handler which prints to the standard error stream, is not
4805strictly required for LTTng-UST tracing to work, but in versions of
4806Python preceding 3.2, you could see a warning message which indicates
4807that no handler exists for the logger `my-logger`.
4808
4809<<creating-destroying-tracing-sessions,Create a tracing session>>,
4810<<enabling-disabling-events,create an event rule>> matching the
4811`my-logger` Python logger, and <<basic-tracing-session-control,start
4812tracing>>:
4813
4814[role="term"]
4815----
4816$ lttng create
4817$ lttng enable-event --python my-logger
4818$ lttng start
4819----
4820
4821Run the Python script:
4822
4823[role="term"]
4824----
4825$ python test.py
4826----
4827
4828<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4829events:
4830
4831[role="term"]
4832----
4833$ lttng stop
4834$ lttng view
4835----
4836====
4837
4838In the resulting trace, an <<event,event record>> generated by a Python
4839application is named `lttng_python:event` and has the following fields:
4840
4841`asctime`::
4842 Logging time (string).
4843
4844`msg`::
4845 Log record's message.
4846
4847`logger_name`::
4848 Logger name.
4849
4850`funcName`::
4851 Name of the function in which the log statement was executed.
4852
4853`lineno`::
4854 Line number at which the log statement was executed.
4855
4856`int_loglevel`::
4857 Log level integer value.
4858
4859`thread`::
4860 ID of the Python thread in which the log statement was executed.
4861
4862`threadName`::
4863 Name of the Python thread in which the log statement was executed.
4864
4865You can use the opt:lttng-enable-event(1):--loglevel or
4866opt:lttng-enable-event(1):--loglevel-only option of the
4867man:lttng-enable-event(1) command to target a range of Python log levels
4868or a specific Python log level.
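
For example, to match only the log statements of the `my-logger` logger
above with a level at least as severe as a warning (a sketch; see
man:lttng-enable-event(1) for the available Python log level names):

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----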
4869
When an application imports the LTTng-UST Python agent, the agent tries
to register with a <<lttng-sessiond,session daemon>>. Note that you must
<<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
with it for up to 5{nbsp}seconds, after which the application continues
without LTTng tracing support. You can override this timeout value with
the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
(milliseconds).
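
For example, to let the agent try to register for up to 10{nbsp}seconds
when running the earlier path:{test.py} script:

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=10000 python test.py
----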
4878
If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent retries to connect and register
with a session daemon every 3{nbsp}seconds. You can override this delay
with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment variable.
4884
4885
4886[role="since-2.5"]
4887[[proc-lttng-logger-abi]]
4888=== LTTng logger
4889
4890The `lttng-tracer` Linux kernel module, part of
4891<<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
4892path:{/proc/lttng-logger} when it's loaded. Any application can write
4893text data to this file to emit an LTTng event.
4894
4895[role="img-100"]
4896.An application writes to the LTTng logger file to emit an LTTng event.
4897image::lttng-logger.png[]
4898
4899The LTTng logger is the quickest method--not the most efficient,
4900however--to add instrumentation to an application. It is designed
4901mostly to instrument shell scripts:
4902
4903[role="term"]
4904----
4905$ echo "Some message, some $variable" > /proc/lttng-logger
4906----
4907
4908Any event that the LTTng logger emits is named `lttng_logger` and
4909belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
4910other instrumentation points in the kernel tracing domain, **any Unix
4911user** can <<enabling-disabling-events,create an event rule>> which
4912matches its event name, not only the root user or users in the
4913<<tracing-group,tracing group>>.
4914
4915To use the LTTng logger:
4916
4917* From any application, write text data to the path:{/proc/lttng-logger}
4918 file.
4919
4920The `msg` field of `lttng_logger` event records contains the
4921recorded message.
4922
4923NOTE: The maximum message length of an LTTng logger event is
49241024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
4925than one event to contain the remaining data.
4926
4927You should not use the LTTng logger to trace a user application which
4928can be instrumented in a more efficient way, namely:
4929
4930* <<c-application,C and $$C++$$ applications>>.
4931* <<java-application,Java applications>>.
4932* <<python-application,Python applications>>.
4933
4934.Use the LTTng logger.
4935====
4936[source,bash]
4937.path:{test.bash}
4938----
4939echo 'Hello, World!' > /proc/lttng-logger
4940sleep 2
4941df --human-readable --print-type / > /proc/lttng-logger
4942----
4943
4944<<creating-destroying-tracing-sessions,Create a tracing session>>,
4945<<enabling-disabling-events,create an event rule>> matching the
4946`lttng_logger` Linux kernel tracepoint, and
4947<<basic-tracing-session-control,start tracing>>:
4948
4949[role="term"]
4950----
4951$ lttng create
4952$ lttng enable-event --kernel lttng_logger
4953$ lttng start
4954----
4955
4956Run the Bash script:
4957
4958[role="term"]
4959----
4960$ bash test.bash
4961----
4962
4963<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4964events:
4965
4966[role="term"]
4967----
4968$ lttng stop
4969$ lttng view
4970----
4971====
4972
4973
4974[[instrumenting-linux-kernel]]
4975=== LTTng kernel tracepoints
4976
4977NOTE: This section shows how to _add_ instrumentation points to the
4978Linux kernel. The kernel's subsystems are already thoroughly
4979instrumented at strategic places for LTTng when you
4980<<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
4981package.
4982
4983////
4984There are two methods to instrument the Linux kernel:
4985
4986. <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
4987 tracepoint which uses the `TRACE_EVENT()` API.
4988+
Choose this if you want to instrument a Linux kernel tree with an
instrumentation point compatible with ftrace, perf, and SystemTap.
4991
4992. Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
4993 instrument an out-of-tree kernel module.
4994+
4995Choose this if you don't need ftrace, perf, or SystemTap support.
4996////
4997
4998
4999[[linux-add-lttng-layer]]
5000==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5001
5002This section shows how to add an LTTng layer to existing ftrace
5003instrumentation using the `TRACE_EVENT()` API.
5004
5005This section does not document the `TRACE_EVENT()` macro. You can
5006read the following articles to learn more about this API:
5007
5008* http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
5009* http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
5010* http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
5011
5012The following procedure assumes that your ftrace tracepoints are
5013correctly defined in their own header and that they are created in
5014one source file using the `CREATE_TRACE_POINTS` definition.
5015
5016To add an LTTng layer over an existing ftrace tracepoint:
5017
5018. Make sure the following kernel configuration options are
5019 enabled:
5020+
5021--
5022* `CONFIG_MODULES`
5023* `CONFIG_KALLSYMS`
5024* `CONFIG_HIGH_RES_TIMERS`
5025* `CONFIG_TRACEPOINTS`
5026--
5027
5028. Build the Linux source tree with your custom ftrace tracepoints.
5029. Boot the resulting Linux image on your target system.
5030+
5031Confirm that the tracepoints exist by looking for their names in the
5032dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5033is your subsystem's name.
5034
5035. Get a copy of the latest LTTng-modules{nbsp}{revision}:
5036+
5037--
5038[role="term"]
5039----
5040$ cd $(mktemp -d) &&
5041wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
5042tar -xf lttng-modules-latest-2.10.tar.bz2 &&
5043cd lttng-modules-2.10.*
5044----
5045--
5046
5047. In dir:{instrumentation/events/lttng-module}, relative to the root
5048 of the LTTng-modules source tree, create a header file named
5049 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5050 LTTng-modules tracepoint definitions using the LTTng-modules
5051 macros in it.
5052+
5053Start with this template:
5054+
5055--
5056[source,c]
5057.path:{instrumentation/events/lttng-module/my_subsys.h}
5058----
5059#undef TRACE_SYSTEM
5060#define TRACE_SYSTEM my_subsys
5061
5062#if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5063#define _LTTNG_MY_SUBSYS_H
5064
5065#include "../../../probes/lttng-tracepoint-event.h"
5066#include <linux/tracepoint.h>
5067
5068LTTNG_TRACEPOINT_EVENT(
5069 /*
5070 * Format is identical to TRACE_EVENT()'s version for the three
5071 * following macro parameters:
5072 */
5073 my_subsys_my_event,
5074 TP_PROTO(int my_int, const char *my_string),
5075 TP_ARGS(my_int, my_string),
5076
5077 /* LTTng-modules specific macros */
5078 TP_FIELDS(
5079 ctf_integer(int, my_int_field, my_int)
        ctf_string(my_string_field, my_string)
5081 )
5082)
5083
5084#endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5085
5086#include "../../../probes/define_trace.h"
5087----
5088--
5089+
5090The entries in the `TP_FIELDS()` section are the list of fields for the
5091LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5092ftrace's `TRACE_EVENT()` macro.
5093+
5094See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
5095complete description of the available `ctf_*()` macros.
5096
5097. Create the LTTng-modules probe's kernel module C source file,
5098 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5099 subsystem name:
5100+
5101--
5102[source,c]
5103.path:{probes/lttng-probe-my-subsys.c}
5104----
5105#include <linux/module.h>
5106#include "../lttng-tracer.h"
5107
5108/*
5109 * Build-time verification of mismatch between mainline
5110 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5111 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5112 */
5113#include <trace/events/my_subsys.h>
5114
5115/* Create LTTng tracepoint probes */
5116#define LTTNG_PACKAGE_BUILD
5117#define CREATE_TRACE_POINTS
5118#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5119
5120#include "../instrumentation/events/lttng-module/my_subsys.h"
5121
5122MODULE_LICENSE("GPL and additional rights");
5123MODULE_AUTHOR("Your name <your-email>");
5124MODULE_DESCRIPTION("LTTng my_subsys probes");
5125MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5126 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5127 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5128 LTTNG_MODULES_EXTRAVERSION);
5129----
5130--
5131
5132. Edit path:{probes/KBuild} and add your new kernel module object
5133 next to the existing ones:
5134+
5135--
5136[source,make]
5137.path:{probes/KBuild}
5138----
5139# ...
5140
5141obj-m += lttng-probe-module.o
5142obj-m += lttng-probe-power.o
5143
5144obj-m += lttng-probe-my-subsys.o
5145
5146# ...
5147----
5148--
5149
5150. Build and install the LTTng kernel modules:
5151+
5152--
5153[role="term"]
5154----
5155$ make KERNELDIR=/path/to/linux
5156# make modules_install && depmod -a
5157----
5158--
5159+
5160Replace `/path/to/linux` with the path to the Linux source tree where
5161you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5162
5163Note that you can also use the
5164<<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5165instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5166C code that need to be executed before the event fields are recorded.
5167
5168The best way to learn how to use the previous LTTng-modules macros is to
5169inspect the existing LTTng-modules tracepoint definitions in the
5170dir:{instrumentation/events/lttng-module} header files. Compare them
5171with the Linux kernel mainline versions in the
5172dir:{include/trace/events} directory of the Linux source tree.
5173
5174
5175[role="since-2.7"]
5176[[lttng-tracepoint-event-code]]
5177===== Use custom C code to access the data for tracepoint fields
5178
Although we recommend that you always use the
<<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
the arguments and fields of an LTTng-modules tracepoint when possible,
5182sometimes you need a more complex process to access the data that the
5183tracer records as event record fields. In other words, you need local
5184variables and multiple C{nbsp}statements instead of simple
5185argument-based expressions that you pass to the
5186<<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5187
5188You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5189`LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5190a block of C{nbsp}code to be executed before LTTng records the fields.
5191The structure of this macro is:
5192
5193[source,c]
5194.`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5195----
5196LTTNG_TRACEPOINT_EVENT_CODE(
5197 /*
5198 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5199 * version for the following three macro parameters:
5200 */
5201 my_subsys_my_event,
5202 TP_PROTO(int my_int, const char *my_string),
5203 TP_ARGS(my_int, my_string),
5204
5205 /* Declarations of custom local variables */
5206 TP_locvar(
5207 int a = 0;
5208 unsigned long b = 0;
5209 const char *name = "(undefined)";
5210 struct my_struct *my_struct;
5211 ),
5212
5213 /*
5214 * Custom code which uses both tracepoint arguments
5215 * (in TP_ARGS()) and local variables (in TP_locvar()).
5216 *
5217 * Local variables are actually members of a structure pointed
5218 * to by the special variable tp_locvar.
5219 */
5220 TP_code(
5221 if (my_int) {
5222 tp_locvar->a = my_int + 17;
5223 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5224 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5225 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5226 put_my_struct(tp_locvar->my_struct);
5227
5228 if (tp_locvar->b) {
5229 tp_locvar->a = 1;
5230 }
5231 }
5232 ),
5233
5234 /*
5235 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5236 * version for this, except that tp_locvar members can be
5237 * used in the argument expression parameters of
5238 * the ctf_*() macros.
5239 */
5240 TP_FIELDS(
5241 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5242 ctf_integer(int, my_struct_a, tp_locvar->a)
5243 ctf_string(my_string_field, my_string)
5244 ctf_string(my_struct_name, tp_locvar->name)
5245 )
5246)
5247----
5248
5249IMPORTANT: The C code defined in `TP_code()` must not have any side
5250effects when executed. In particular, the code must not allocate
5251memory or get resources without deallocating this memory or putting
5252those resources afterwards.
5253
5254
5255[[instrumenting-linux-kernel-tracing]]
5256==== Load and unload a custom probe kernel module
5257
You must load a custom <<lttng-adaptation-layer,LTTng-modules probe
kernel module>> into the kernel before it can emit LTTng events.
5260
5261To load the default probe kernel modules and a custom probe kernel
5262module:
5263
5264* Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5265 probe modules to load when starting a root <<lttng-sessiond,session
5266 daemon>>:
5267+
5268--
5269.Load the `my_subsys`, `usb`, and the default probe modules.
5270====
5271[role="term"]
5272----
5273# lttng-sessiond --extra-kmod-probes=my_subsys,usb
5274----
5275====
5276--
5277+
5278You only need to pass the subsystem name, not the whole kernel module
5279name.
5280
5281To load _only_ a given custom probe kernel module:
5282
5283* Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5284 modules to load when starting a root session daemon:
5285+
5286--
5287.Load only the `my_subsys` and `usb` probe modules.
5288====
5289[role="term"]
5290----
5291# lttng-sessiond --kmod-probes=my_subsys,usb
5292----
5293====
5294--
5295
5296To confirm that a probe module is loaded:
5297
5298* Use man:lsmod(8):
5299+
5300--
5301[role="term"]
5302----
5303$ lsmod | grep lttng_probe_usb
5304----
5305--
5306
5307To unload the loaded probe modules:
5308
5309* Kill the session daemon with `SIGTERM`:
5310+
5311--
5312[role="term"]
5313----
5314# pkill lttng-sessiond
5315----
5316--
5317+
5318You can also use man:modprobe(8)'s `--remove` option if the session
5319daemon terminates abnormally.
5320
5321
5322[[controlling-tracing]]
5323== Tracing control
5324
5325Once an application or a Linux kernel is
5326<<instrumenting,instrumented>> for LTTng tracing,
5327you can _trace_ it.
5328
This section is divided into topics on how to use the various
<<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
command-line tool>>, to _control_ the LTTng daemons and tracers.
5332
5333NOTE: In the following subsections, we refer to an man:lttng(1) command
5334using its man page name. For example, instead of _Run the `create`
5335command to..._, we use _Run the man:lttng-create(1) command to..._.
5336
5337
5338[[start-sessiond]]
5339=== Start a session daemon
5340
5341In some situations, you need to run a <<lttng-sessiond,session daemon>>
5342(man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5343command-line tool.
5344
5345You will see the following error when you run a command while no session
5346daemon is running:
5347
5348----
5349Error: No session daemon is available
5350----
5351
5352The only command that automatically runs a session daemon is
5353man:lttng-create(1), which you use to
<<creating-destroying-tracing-sessions,create a tracing session>>. While
this is usually the first operation that you perform, sometimes it's
not. Some examples are:
5357
5358* <<list-instrumentation-points,List the available instrumentation points>>.
5359* <<saving-loading-tracing-session,Load a tracing session configuration>>.
5360
[[tracing-group]] Each Unix user must have their own running session
daemon to trace user applications. The session daemon that the root user
starts is the only one allowed to control the LTTng kernel tracer. Users
who are part of the _tracing group_ can control the root session
daemon. The default tracing group name is `tracing`; you can set it to
something else with the opt:lttng-sessiond(8):--group option when you
start the root session daemon.
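
For example, to add your Unix user to the default `tracing` group (a
sketch; `your-user` is a placeholder, and the group must already exist
on your system):

[role="term"]
----
# usermod --append --groups tracing your-user
----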
5368
5369To start a user session daemon:
5370
5371* Run man:lttng-sessiond(8):
5372+
5373--
5374[role="term"]
5375----
5376$ lttng-sessiond --daemonize
5377----
5378--
5379
5380To start the root session daemon:
5381
5382* Run man:lttng-sessiond(8) as the root user:
5383+
5384--
5385[role="term"]
5386----
5387# lttng-sessiond --daemonize
5388----
5389--
5390
5391In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5392start the session daemon in foreground.
5393
5394To stop a session daemon, use man:kill(1) on its process ID (standard
5395`TERM` signal).
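
For example, you can use man:pkill(1), which sends the `TERM` signal by
default (a sketch; this matches any `lttng-sessiond` process that your
Unix user is allowed to signal):

[role="term"]
----
$ pkill lttng-sessiond
----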
5396
5397Note that some Linux distributions could manage the LTTng session daemon
5398as a service. In this case, you should use the service manager to
5399start, restart, and stop session daemons.
5400
5401
5402[[creating-destroying-tracing-sessions]]
5403=== Create and destroy a tracing session
5404
5405Almost all the LTTng control operations happen in the scope of
5406a <<tracing-session,tracing session>>, which is the dialogue between the
5407<<lttng-sessiond,session daemon>> and you.
5408
5409To create a tracing session with a generated name:
5410
5411* Use the man:lttng-create(1) command:
5412+
5413--
5414[role="term"]
5415----
5416$ lttng create
5417----
5418--
5419
5420The created tracing session's name is `auto` followed by the
5421creation date.
5422
5423To create a tracing session with a specific name:
5424
5425* Use the optional argument of the man:lttng-create(1) command:
5426+
5427--
5428[role="term"]
5429----
5430$ lttng create my-session
5431----
5432--
5433+
5434Replace `my-session` with the specific tracing session name.
5435
5436LTTng appends the creation date to the created tracing session's name.
5437
LTTng writes the traces of a tracing session in
+$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
name of the tracing session. Note that the env:LTTNG_HOME environment
variable defaults to `$HOME` if not set.
5442
5443To output LTTng traces to a non-default location:
5444
5445* Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5446+
5447--
5448[role="term"]
5449----
5450$ lttng create my-session --output=/tmp/some-directory
5451----
5452--
5453
5454You may create as many tracing sessions as you wish.
5455
5456To list all the existing tracing sessions for your Unix user:
5457
5458* Use the man:lttng-list(1) command:
5459+
5460--
5461[role="term"]
5462----
5463$ lttng list
5464----
5465--
5466
5467When you create a tracing session, it is set as the _current tracing
5468session_. The following man:lttng(1) commands operate on the current
5469tracing session when you don't specify one:
5470
5471[role="list-3-cols"]
5472* `add-context`
5473* `destroy`
5474* `disable-channel`
5475* `disable-event`
5476* `enable-channel`
5477* `enable-event`
5478* `load`
5479* `regenerate`
5480* `save`
5481* `snapshot`
5482* `start`
5483* `stop`
5484* `track`
5485* `untrack`
5486* `view`
5487
5488To change the current tracing session:
5489
5490* Use the man:lttng-set-session(1) command:
5491+
5492--
5493[role="term"]
5494----
5495$ lttng set-session new-session
5496----
5497--
5498+
5499Replace `new-session` by the name of the new current tracing session.
5500
5501When you are done tracing in a given tracing session, you can destroy
it. This operation frees the resources taken by the tracing session;
it does not destroy the trace data that LTTng wrote for this tracing
session.
5505
5506To destroy the current tracing session:
5507
5508* Use the man:lttng-destroy(1) command:
5509+
5510--
5511[role="term"]
5512----
5513$ lttng destroy
5514----
5515--
5516
5517
5518[[list-instrumentation-points]]
5519=== List the available instrumentation points
5520
5521The <<lttng-sessiond,session daemon>> can query the running instrumented
5522user applications and the Linux kernel to get a list of available
5523instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5524they are tracepoints and system calls. For the user space tracing
5525domain, they are tracepoints. For the other tracing domains, they are
5526logger names.
5527
5528To list the available instrumentation points:
5529
5530* Use the man:lttng-list(1) command with the requested tracing domain's
5531 option amongst:
5532+
5533--
5534* opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5535 must be a root user, or it must be a member of the
5536 <<tracing-group,tracing group>>).
5537* opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5538 kernel system calls (your Unix user must be a root user, or it must be
5539 a member of the tracing group).
5540* opt:lttng-list(1):--userspace: user space tracepoints.
5541* opt:lttng-list(1):--jul: `java.util.logging` loggers.
5542* opt:lttng-list(1):--log4j: Apache log4j loggers.
5543* opt:lttng-list(1):--python: Python loggers.
5544--
5545
5546.List the available user space tracepoints.
5547====
5548[role="term"]
5549----
5550$ lttng list --userspace
5551----
5552====
5553
5554.List the available Linux kernel system call tracepoints.
5555====
5556[role="term"]
5557----
5558$ lttng list --kernel --syscall
5559----
5560====
5561
5562
5563[[enabling-disabling-events]]
5564=== Create and enable an event rule
5565
5566Once you <<creating-destroying-tracing-sessions,create a tracing
5567session>>, you can create <<event,event rules>> with the
5568man:lttng-enable-event(1) command.
5569
5570You specify each condition with a command-line option. The available
5571condition options are shown in the following table.
5572
5573[role="growable",cols="asciidoc,asciidoc,default"]
5574.Condition command-line options for the man:lttng-enable-event(1) command.
5575|====
5576|Option |Description |Applicable tracing domains
5577
5578|
5579One of:
5580
5581. `--syscall`
5582. +--probe=__ADDR__+
5583. +--function=__ADDR__+
5584
5585|
5586Instead of using the default _tracepoint_ instrumentation type, use:
5587
5588. A Linux system call.
5589. A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5590. The entry and return points of a Linux function (symbol or address).
5591
5592|Linux kernel.
5593
5594|First positional argument.
5595
5596|
5597Tracepoint or system call name. In the case of a Linux KProbe or
5598function, this is a custom name given to the event rule. With the
5599JUL, log4j, and Python domains, this is a logger name.
5600
5601With a tracepoint, logger, or system call name, the last character
5602can be `*` to match anything that remains.
5603
5604|All.
5605
5606|
5607One of:
5608
5609. +--loglevel=__LEVEL__+
5610. +--loglevel-only=__LEVEL__+
5611
5612|
5613. Match only tracepoints or log statements with a logging level at
5614 least as severe as +__LEVEL__+.
5615. Match only tracepoints or log statements with a logging level
5616 equal to +__LEVEL__+.
5617
5618See man:lttng-enable-event(1) for the list of available logging level
5619names.
5620
5621|User space, JUL, log4j, and Python.
5622
5623|+--exclude=__EXCLUSIONS__+
5624
5625|
5626When you use a `*` character at the end of the tracepoint or logger
5627name (first positional argument), exclude the specific names in the
5628comma-delimited list +__EXCLUSIONS__+.
5629
5630|
5631User space, JUL, log4j, and Python.
5632
5633|+--filter=__EXPR__+
5634
5635|
5636Match only events which satisfy the expression +__EXPR__+.
5637
5638See man:lttng-enable-event(1) to learn more about the syntax of a
5639filter expression.
5640
5641|All.
5642
5643|====
5644
5645You attach an event rule to a <<channel,channel>> on creation. If you do
5646not specify the channel with the opt:lttng-enable-event(1):--channel
5647option, and if the event rule to create is the first in its
5648<<domain,tracing domain>> for a given tracing session, then LTTng
5649creates a _default channel_ for you. This default channel is reused in
5650subsequent invocations of the man:lttng-enable-event(1) command for the
5651same tracing domain.
5652
5653An event rule is always enabled at creation time.
5654
5655The following examples show how you can combine the previous
5656command-line options to create simple to more complex event rules.
5657
6658.Create an event rule targeting a Linux kernel tracepoint (default channel).
5659====
5660[role="term"]
5661----
5662$ lttng enable-event --kernel sched_switch
5663----
5664====
5665
5666.Create an event rule matching four Linux kernel system calls (default channel).
5667====
5668[role="term"]
5669----
5670$ lttng enable-event --kernel --syscall open,write,read,close
5671----
5672====
5673
5674.Create event rules matching tracepoints with filter expressions (default channel).
5675====
5676[role="term"]
5677----
5678$ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5679----
5680
5681[role="term"]
5682----
5683$ lttng enable-event --kernel --all \
5684 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5685----
5686
5687[role="term"]
5688----
5689$ lttng enable-event --jul my_logger \
5690 --filter='$app.retriever:cur_msg_id > 3'
5691----
5692
5693IMPORTANT: Make sure to always quote the filter string when you
5694use man:lttng(1) from a shell.
5695====
5696
5697.Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5698====
5699[role="term"]
5700----
5701$ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5702----
5703
5704IMPORTANT: Make sure to always quote the wildcard character when you
5705use man:lttng(1) from a shell.
5706====
5707
5708.Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5709====
5710[role="term"]
5711----
5712$ lttng enable-event --python my-app.'*' \
5713 --exclude='my-app.module,my-app.hello'
5714----
5715====
5716
5717.Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5718====
5719[role="term"]
5720----
5721$ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5722----
5723====
5724
5725.Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5726====
5727[role="term"]
5728----
5729$ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5730----
5731====
5732
5733The event rules of a given channel form a whitelist: as soon as an
5734emitted event passes one of them, LTTng can record the event. For
5735example, an event named `my_app:my_tracepoint` emitted from a user space
5736tracepoint with a `TRACE_ERROR` log level passes both of the following
5737rules:
5738
5739[role="term"]
5740----
5741$ lttng enable-event --userspace my_app:my_tracepoint
5742$ lttng enable-event --userspace my_app:my_tracepoint \
5743 --loglevel=TRACE_INFO
5744----
5745
5746The second event rule is redundant: the first one includes
5747the second one.
5748
5749
5750[[disable-event-rule]]
5751=== Disable an event rule
5752
5753To disable an event rule that you <<enabling-disabling-events,created>>
5754previously, use the man:lttng-disable-event(1) command. This command
5755disables _all_ the event rules (of a given tracing domain and channel)
5756which match an instrumentation point. The other conditions are not
5757supported as of LTTng{nbsp}{revision}.
5758
5759The LTTng tracer does not record an emitted event which passes
5760a _disabled_ event rule.
5761
5762.Disable an event rule matching a Python logger (default channel).
5763====
5764[role="term"]
5765----
5766$ lttng disable-event --python my-logger
5767----
5768====
5769
5770.Disable an event rule matching all `java.util.logging` loggers (default channel).
5771====
5772[role="term"]
5773----
5774$ lttng disable-event --jul '*'
5775----
5776====
5777
5778.Disable _all_ the event rules of the default channel.
5779====
5780Unlike the opt:lttng-enable-event(1):--all option of
5781man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
5782option is not the equivalent of the event name `*` (wildcard): it
5783disables _all_ the event rules of a given channel.
5784
5785[role="term"]
5786----
5787$ lttng disable-event --jul --all-events
5788----
5789====
5790
5791NOTE: You cannot delete an event rule once you create it.
5792
5793
5794[[status]]
5795=== Get the status of a tracing session
5796
5797To get the status of the current tracing session, that is, its
5798parameters, its channels, event rules, and their attributes:
5799
5800* Use the man:lttng-status(1) command:
5801+
5802--
5803[role="term"]
5804----
5805$ lttng status
5806----
5807--
5809
5810To get the status of any tracing session:
5811
5812* Use the man:lttng-list(1) command with the tracing session's name:
5813+
5814--
5815[role="term"]
5816----
5817$ lttng list my-session
5818----
5819--
5820+
5821Replace `my-session` with the desired tracing session's name.
5822
5823
5824[[basic-tracing-session-control]]
5825=== Start and stop a tracing session
5826
5827Once you <<creating-destroying-tracing-sessions,create a tracing
5828session>> and
5829<<enabling-disabling-events,create one or more event rules>>,
5830you can start and stop the tracers for this tracing session.
5831
5832To start tracing in the current tracing session:
5833
5834* Use the man:lttng-start(1) command:
5835+
5836--
5837[role="term"]
5838----
5839$ lttng start
5840----
5841--
5842
5843LTTng is very flexible: you can launch user applications before
5844or after you start the tracers. The tracers only record the events
5845if they pass enabled event rules and if they occur while the tracers are
5846started.
5847
5848To stop tracing in the current tracing session:
5849
5850* Use the man:lttng-stop(1) command:
5851+
5852--
5853[role="term"]
5854----
5855$ lttng stop
5856----
5857--
5858+
5859If there were <<channel-overwrite-mode-vs-discard-mode,lost event
5860records>> or lost sub-buffers since the last time you ran
5861man:lttng-start(1), warnings are printed when you run the
5862man:lttng-stop(1) command.
5863
5864
5865[[enabling-disabling-channels]]
5866=== Create a channel
5867
5868Once you create a tracing session, you can create a <<channel,channel>>
5869with the man:lttng-enable-channel(1) command.
5870
5871Note that LTTng automatically creates a default channel when, for a
5872given <<domain,tracing domain>>, no channels exist and you
5873<<enabling-disabling-events,create>> the first event rule. This default
5874channel is named `channel0` and its attributes are set to reasonable
5875values. Therefore, you only need to create a channel when you need
5876non-default attributes.
5877
5878You specify each non-default channel attribute with a command-line
5879option when you use the man:lttng-enable-channel(1) command. The
5880available command-line options are:
5881
5882[role="growable",cols="asciidoc,asciidoc"]
5883.Command-line options for the man:lttng-enable-channel(1) command.
5884|====
5885|Option |Description
5886
5887|`--overwrite`
5888
5889|
5890Use the _overwrite_
5891<<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
5892the default _discard_ mode.
5893
5894|`--buffers-pid` (user space tracing domain only)
5895
5896|
5897Use the per-process <<channel-buffering-schemes,buffering scheme>>
5898instead of the default per-user buffering scheme.
5899
5900|+--subbuf-size=__SIZE__+
5901
5902|
5903Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
5904either for each Unix user (default), or for each instrumented process.
5905
5906See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5907
5908|+--num-subbuf=__COUNT__+
5909
5910|
5911Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
5912for each Unix user (default), or for each instrumented process.
5913
5914See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5915
5916|+--tracefile-size=__SIZE__+
5917
5918|
5919Set the maximum size of each trace file that this channel writes within
5920a stream to +__SIZE__+ bytes instead of no maximum.
5921
5922See <<tracefile-rotation,Trace file count and size>>.
5923
5924|+--tracefile-count=__COUNT__+
5925
5926|
5927Limit the number of trace files that this channel creates to
5928+__COUNT__+ files instead of no limit.
5929
5930See <<tracefile-rotation,Trace file count and size>>.
5931
5932|+--switch-timer=__PERIODUS__+
5933
5934|
5935Set the <<channel-switch-timer,switch timer period>>
5936to +__PERIODUS__+{nbsp}µs.
5937
5938|+--read-timer=__PERIODUS__+
5939
5940|
5941Set the <<channel-read-timer,read timer period>>
5942to +__PERIODUS__+{nbsp}µs.
5943
5944|[[opt-blocking-timeout]]+--blocking-timeout=__TIMEOUTUS__+
5945
5946|
5947Set the timeout of user space applications which load LTTng-UST
5948in blocking mode to +__TIMEOUTUS__+:
5949
59500 (default)::
5951 Never block (non-blocking mode).
5952
5953-1::
5954 Block forever until space is available in a sub-buffer to record
5955 the event.
5956
5957__n__, a positive value::
5958 Wait for at most __n__ µs when trying to write into a sub-buffer.
5959
5960Note that, for this option to have any effect on an instrumented
5961user space application, you need to run the application with a set
5962env:LTTNG_UST_ALLOW_BLOCKING environment variable.
5963
5964|+--output=__TYPE__+ (Linux kernel tracing domain only)
5965
5966|
5967Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
5968
5969|====
5970
5971You can only create a channel in the Linux kernel and user space
5972<<domain,tracing domains>>: other tracing domains have their own channel
5973created on the fly when <<enabling-disabling-events,creating event
5974rules>>.
5975
5976[IMPORTANT]
5977====
5978Because of a current LTTng limitation, you must create all channels
5979_before_ you <<basic-tracing-session-control,start tracing>> in a given
5980tracing session, that is, before the first time you run
5981man:lttng-start(1).
5982
5983Since LTTng automatically creates a default channel when you use the
5984man:lttng-enable-event(1) command with a specific tracing domain, you
5985cannot, for example, create a Linux kernel event rule, start tracing,
5986and then create a user space event rule, because no user space channel
5987exists yet and it's too late to create one.
5988
5989For this reason, make sure to configure your channels properly
5990before starting the tracers for the first time!
5991====
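
For example, here is a minimal sketch of a safe ordering (the channel
names and the `my_app:*` event rule below are placeholders): create
every channel you need, then the event rules, and only then start
tracing:

[role="term"]
----
$ lttng create
$ lttng enable-channel --kernel my-kernel-channel
$ lttng enable-channel --userspace my-ust-channel
$ lttng enable-event --kernel --channel=my-kernel-channel sched_switch
$ lttng enable-event --userspace --channel=my-ust-channel 'my_app:*'
$ lttng start
----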
5992
5993The following examples show how you can combine the previous
5994command-line options to create simple to more complex channels.
5995
5996.Create a Linux kernel channel with default attributes.
5997====
5998[role="term"]
5999----
6000$ lttng enable-channel --kernel my-channel
6001----
6002====
6003
6004.Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
6005====
6006[role="term"]
6007----
6008$ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6009 --buffers-pid my-channel
6010----
6011====
6012
6013.[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
6014====
6015<<creating-destroying-tracing-sessions,Create a tracing session>>,
6016create the channel, <<enabling-disabling-events,create an event rule>>,
6017and <<basic-tracing-session-control,start tracing>>:
6018
6019[role="term"]
6020----
6021$ lttng create
6022$ lttng enable-channel --userspace --blocking-timeout=-1 blocking-channel
6023$ lttng enable-event --userspace --channel=blocking-channel --all
6024$ lttng start
6025----
6026
6027Run an application instrumented with LTTng-UST and allow it to block:
6028
6029[role="term"]
6030----
6031$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
6032----
6033====
6034
6035.Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
6036====
6037[role="term"]
6038----
6039$ lttng enable-channel --kernel --tracefile-count=8 \
6040 --tracefile-size=4194304 my-channel
6041----
6042====
6043
6044.Create a user space channel in overwrite (or _flight recorder_) mode.
6045====
6046[role="term"]
6047----
6048$ lttng enable-channel --userspace --overwrite my-channel
6049----
6050====
6051
6052You can <<enabling-disabling-events,create>> the same event rule in
6053two different channels:
6054
6055[role="term"]
6056----
6057$ lttng enable-event --userspace --channel=my-channel app:tp
6058$ lttng enable-event --userspace --channel=other-channel app:tp
6059----
6060
6061If both channels are enabled, when a tracepoint named `app:tp` is
6062reached, LTTng records two events, one for each channel.
6063
6064
6065[[disable-channel]]
6066=== Disable a channel
6067
6068To disable a specific channel that you <<enabling-disabling-channels,created>>
6069previously, use the man:lttng-disable-channel(1) command.
6070
6071.Disable a specific Linux kernel channel.
6072====
6073[role="term"]
6074----
6075$ lttng disable-channel --kernel my-channel
6076----
6077====
6078
6079The state of a channel precedes the individual states of event rules
6080attached to it: event rules which belong to a disabled channel, even if
6081they are enabled, are also considered disabled.
6082
6083
6084[[adding-context]]
6085=== Add context fields to a channel
6086
6087Event record fields in trace files provide important information about
6088events that occurred previously, but sometimes some external context may
6089help you solve a problem faster. Examples of context fields are:
6090
6091* The **process ID**, **thread ID**, **process name**, and
6092 **process priority** of the thread in which the event occurs.
6093* The **hostname** of the system on which the event occurs.
6094* The current values of many possible **performance counters** using
6095 perf, for example:
6096** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6097** Cache misses.
6098** Branch instructions, misses, and loads.
6099** CPU faults.
6100* Any context defined at the application level (supported for the
6101 JUL and log4j <<domain,tracing domains>>).
6102
6103To get the full list of available context fields, see
6104`lttng add-context --list`. Some context fields are reserved for a
6105specific <<domain,tracing domain>> (Linux kernel or user space).
6106
6107You add context fields to <<channel,channels>>. All the events
6108that a channel with added context fields records contain those fields.
6109
6110To add context fields to one or all the channels of a given tracing
6111session:
6112
6113* Use the man:lttng-add-context(1) command.
6114
6115.Add context fields to all the channels of the current tracing session.
6116====
6117The following command line adds the virtual process identifier and
6118the per-thread CPU cycles count fields to all the user space channels
6119of the current tracing session.
6120
6121[role="term"]
6122----
6123$ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6124----
6125====
6126
6127.Add performance counter context fields by raw ID
6128====
6129See man:lttng-add-context(1) for the exact format of the context field
6130type, which is partly compatible with the format used in
6131man:perf-record(1).
6132
6133[role="term"]
6134----
6135$ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6136$ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6137----
6138====
6139
6140.Add a context field to a specific channel.
6141====
6142The following command line adds the thread identifier context field
6143to the Linux kernel channel named `my-channel` in the current
6144tracing session.
6145
6146[role="term"]
6147----
6148$ lttng add-context --kernel --channel=my-channel --type=tid
6149----
6150====
6151
6152.Add an application-specific context field to a specific channel.
6153====
6154The following command line adds the `cur_msg_id` context field of the
6155`retriever` context retriever for all the instrumented
6156<<java-application,Java applications>> recording <<event,event records>>
6157in the channel named `my-channel`:
6158
6159[role="term"]
6160----
6161$ lttng add-context --kernel --channel=my-channel \
6162 --type='$app:retriever:cur_msg_id'
6163----
6164
6165IMPORTANT: Make sure to always quote the `$` character when you
6166use man:lttng-add-context(1) from a shell.
6167====
6168
6169NOTE: You cannot remove context fields from a channel once you add them.
6170
6171
6172[role="since-2.7"]
6173[[pid-tracking]]
6174=== Track process IDs
6175
6176It's often useful to allow only specific process IDs (PIDs) to emit
6177events. For example, you may wish to record all the system calls made by
6178a given process (Ă  la http://linux.die.net/man/1/strace[strace]).
6179
6180The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6181purpose. Both commands operate on a whitelist of process IDs. You _add_
6182entries to this whitelist with the man:lttng-track(1) command and remove
6183entries with the man:lttng-untrack(1) command. Any process which has one
6184of the PIDs in the whitelist is allowed to emit LTTng events which pass
6185an enabled <<event,event rule>>.
6186
6187NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6188process with a given tracked ID exit and another process be given this
6189ID, then the latter would also be allowed to emit events.
6190
6191.Track and untrack process IDs.
6192====
6193For the sake of the following example, assume the target system has 16
6194possible PIDs.
6195
6196When you
6197<<creating-destroying-tracing-sessions,create a tracing session>>,
6198the whitelist contains all the possible PIDs:
6199
6200[role="img-100"]
6201.All PIDs are tracked.
6202image::track-all.png[]
6203
6204When the whitelist is full and you use the man:lttng-track(1) command to
6205specify some PIDs to track, LTTng first clears the whitelist, then it
6206tracks the specific PIDs. After:
6207
6208[role="term"]
6209----
6210$ lttng track --pid=3,4,7,10,13
6211----
6212
6213the whitelist is:
6214
6215[role="img-100"]
6216.PIDs 3, 4, 7, 10, and 13 are tracked.
6217image::track-3-4-7-10-13.png[]
6218
6219You can add more PIDs to the whitelist afterwards:
6220
6221[role="term"]
6222----
6223$ lttng track --pid=1,15,16
6224----
6225
6226The result is:
6227
6228[role="img-100"]
6229.PIDs 1, 15, and 16 are added to the whitelist.
6230image::track-1-3-4-7-10-13-15-16.png[]
6231
6232The man:lttng-untrack(1) command removes entries from the PID tracker's
6233whitelist. Given the previous example, the following command:
6234
6235[role="term"]
6236----
6237$ lttng untrack --pid=3,7,10,13
6238----
6239
6240leads to this whitelist:
6241
6242[role="img-100"]
6243.PIDs 3, 7, 10, and 13 are removed from the whitelist.
6244image::track-1-4-15-16.png[]
6245
6246LTTng can track all possible PIDs again using the opt:lttng-track(1):--all
6247option:
6248
6249[role="term"]
6250----
6251$ lttng track --pid --all
6252----
6253
6254The result is, again:
6255
6256[role="img-100"]
6257.All PIDs are tracked.
6258image::track-all.png[]
6259====
6260
6261.Track only specific PIDs
6262====
6263A very typical use case with PID tracking is to start with an empty
6264whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6265then add PIDs manually while tracers are active. You can accomplish this
6266by using the opt:lttng-untrack(1):--all option of the
6267man:lttng-untrack(1) command to clear the whitelist after you
6268<<creating-destroying-tracing-sessions,create a tracing session>>:
6269
6270[role="term"]
6271----
6272$ lttng untrack --pid --all
6273----
6274
6275gives:
6276
6277[role="img-100"]
6278.No PIDs are tracked.
6279image::untrack-all.png[]
6280
6281If you trace with this whitelist configuration, the tracer records no
6282events for this <<domain,tracing domain>> because no processes are
6283tracked. You can use the man:lttng-track(1) command as usual to track
6284specific PIDs, for example:
6285
6286[role="term"]
6287----
6288$ lttng track --pid=6,11
6289----
6290
6291Result:
6292
6293[role="img-100"]
6294.PIDs 6 and 11 are tracked.
6295image::track-6-11.png[]
6296====
6297
6298
6299[role="since-2.5"]
6300[[saving-loading-tracing-session]]
6301=== Save and load tracing session configurations
6302
6303Configuring a <<tracing-session,tracing session>> can be long. Some of
6304the tasks involved are:
6305
6306* <<enabling-disabling-channels,Create channels>> with
6307 specific attributes.
6308* <<adding-context,Add context fields>> to specific channels.
6309* <<enabling-disabling-events,Create event rules>> with specific log
6310 level and filter conditions.
6311
6312If you use LTTng to solve real world problems, chances are you have to
6313record events using the same tracing session setup over and over,
6314modifying a few variables each time in your instrumented program
6315or environment. To avoid constant tracing session reconfiguration,
6316the man:lttng(1) command-line tool can save and load tracing session
6317configurations to/from XML files.
6318
6319To save a given tracing session configuration:
6320
6321* Use the man:lttng-save(1) command:
6322+
6323--
6324[role="term"]
6325----
6326$ lttng save my-session
6327----
6328--
6329+
6330Replace `my-session` with the name of the tracing session to save.
6331
6332LTTng saves tracing session configurations to
6333dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6334env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6335the opt:lttng-save(1):--output-path option to change this destination
6336directory.
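
For example, the following sketch saves the configuration of a
hypothetical `my-session` tracing session to a custom directory (the
path is a placeholder):

[role="term"]
----
$ lttng save --output-path=/path/to/sessions my-session
----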
6337
6338LTTng saves all configuration parameters, for example:
6339
6340* The tracing session name.
6341* The trace data output path.
6342* The channels with their state and all their attributes.
6343* The context fields you added to channels.
6344* The event rules with their state, log level and filter conditions.
6345
6346To load a tracing session:
6347
6348* Use the man:lttng-load(1) command:
6349+
6350--
6351[role="term"]
6352----
6353$ lttng load my-session
6354----
6355--
6356+
6357Replace `my-session` with the name of the tracing session to load.
6358
6359When LTTng loads a configuration, it restores your saved tracing session
6360as if you just configured it manually.
6361
6362See man:lttng(1) for the complete list of command-line options. You
6363can also save and load many sessions at a time, and decide in which
6364directory to output the XML files.
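
For example, the following sketch loads a configuration previously
saved to a custom directory (the path is a placeholder):

[role="term"]
----
$ lttng load --input-path=/path/to/sessions my-session
----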
6365
6366
6367[[sending-trace-data-over-the-network]]
6368=== Send trace data over the network
6369
6370LTTng can send the recorded trace data to a remote system over the
6371network instead of writing it to the local file system.
6372
6373To send the trace data over the network:
6374
6375. On the _remote_ system (which can also be the target system),
6376 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6377+
6378--
6379[role="term"]
6380----
6381$ lttng-relayd
6382----
6383--
6384
6385. On the _target_ system, create a tracing session configured to
6386 send trace data over the network:
6387+
6388--
6389[role="term"]
6390----
6391$ lttng create my-session --set-url=net://remote-system
6392----
6393--
6394+
6395Replace `remote-system` with the host name or IP address of the
6396remote system. See man:lttng-create(1) for the exact URL format.
6397
6398. On the target system, use the man:lttng(1) command-line tool as usual.
6399 When tracing is active, the target's consumer daemon sends sub-buffers
6400 to the relay daemon running on the remote system instead of flushing
6401 them to the local file system. The relay daemon writes the received
6402 packets to the local file system.
6403
6404The relay daemon writes trace files to
6405+$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6406+__hostname__+ is the host name of the target system and +__session__+
6407is the tracing session name. Note that the env:LTTNG_HOME environment
6408variable defaults to `$HOME` if not set. Use the
6409opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6410trace files to another base directory.
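
For example, the following sketch starts a relay daemon which writes
the received trace data under a custom base directory (the path is a
placeholder):

[role="term"]
----
$ lttng-relayd --output=/path/to/trace-base-dir
----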
6411
6412
6413[role="since-2.4"]
6414[[lttng-live]]
6415=== View events as LTTng emits them (noch:{LTTng} live)
6416
6417LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6418daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6419display events as LTTng emits them on the target system while tracing is
6420active.
6421
6422The relay daemon creates a _tee_: it forwards the trace data to both
6423the local file system and to connected live viewers:
6424
6425[role="img-90"]
6426.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6427image::live.png[]
6428
6429To use LTTng live:
6430
6431. On the _target system_, create a <<tracing-session,tracing session>>
6432 in _live mode_:
6433+
6434--
6435[role="term"]
6436----
6437$ lttng create my-session --live
6438----
6439--
6440+
6441This spawns a local relay daemon.
6442
6443. Start the live viewer and configure it to connect to the relay
6444 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6445+
6446--
6447[role="term"]
6448----
6449$ babeltrace --input-format=lttng-live \
6450 net://localhost/host/hostname/my-session
6451----
6452--
6453+
6454Replace:
6455+
6456--
6457* `hostname` with the host name of the target system.
6458* `my-session` with the name of the tracing session to view.
6459--
6460
6461. Configure the tracing session as usual with the man:lttng(1)
6462 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6463
6464You can list the available live tracing sessions with Babeltrace:
6465
6466[role="term"]
6467----
6468$ babeltrace --input-format=lttng-live net://localhost
6469----
6470
6471You can start the relay daemon on another system. In this case, you need
6472to specify the relay daemon's URL when you create the tracing session
6473with the opt:lttng-create(1):--set-url option. You also need to replace
6474`localhost` in the procedure above with the host name of the system on
6475which the relay daemon is running.
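
For example, assuming a relay daemon already runs on a remote system
named `remote-system` (a placeholder), you could create the live
tracing session like this:

[role="term"]
----
$ lttng create my-session --live --set-url=net://remote-system
----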
6476
6477See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6478command-line options.
6479
6480
6481[role="since-2.3"]
6482[[taking-a-snapshot]]
6483=== Take a snapshot of the current sub-buffers of a tracing session
6484
6485The normal behavior of LTTng is to append full sub-buffers to growing
6486trace data files. This is ideal to keep a full history of the events
6487that occurred on the target system, but it can
6488represent too much data in some situations. For example, you may wish
6489to trace your application continuously until some critical situation
6490happens, in which case you only need the latest few recorded
6491events to perform the desired analysis, not multi-gigabyte trace files.
6492
6493With the man:lttng-snapshot(1) command, you can take a snapshot of the
6494current sub-buffers of a given <<tracing-session,tracing session>>.
6495LTTng can write the snapshot to the local file system or send it over
6496the network.
6497
6498To take a snapshot:
6499
6500. Create a tracing session in _snapshot mode_:
6501+
6502--
6503[role="term"]
6504----
6505$ lttng create my-session --snapshot
6506----
6507--
6508+
6509The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6510<<channel,channels>> created in this mode is automatically set to
6511_overwrite_ (flight recorder mode).
6512
6513. Configure the tracing session as usual with the man:lttng(1)
6514 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6515
6516. **Optional**: When you need to take a snapshot,
6517 <<basic-tracing-session-control,stop tracing>>.
6518+
6519You can take a snapshot when the tracers are active, but if you stop
6520them first, you are sure that the data in the sub-buffers does not
6521change before you actually take the snapshot.
6522
6523. Take a snapshot:
6524+
6525--
6526[role="term"]
6527----
6528$ lttng snapshot record --name=my-first-snapshot
6529----
6530--
6531+
6532LTTng writes the current sub-buffers of all the current tracing
6533session's channels to trace files on the local file system. Those trace
6534files have `my-first-snapshot` in their name.
6535
6536There is no difference between the format of a normal trace file and the
6537format of a snapshot: viewers of LTTng traces also support LTTng
6538snapshots.
6539
6540By default, LTTng writes snapshot files to the path shown by
6541`lttng snapshot list-output`. You can change this path or decide to send
6542snapshots over the network using either:
6543
6544. An output path or URL that you specify when you create the
6545 tracing session.
6546. A snapshot output path or URL that you add using
6547  `lttng snapshot add-output`.
6548. An output path or URL that you provide directly to the
6549 `lttng snapshot record` command.
6550
6551Method 3 overrides method 2, which overrides method 1. When you
6552specify a URL, a relay daemon must listen on a remote system (see
6553<<sending-trace-data-over-the-network,Send trace data over the network>>).
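
For example, the following sketch of method 2 adds a snapshot output
to the current tracing session, then records a snapshot to it (the
output path is a placeholder):

[role="term"]
----
$ lttng snapshot add-output /path/to/snapshots
$ lttng snapshot record --name=my-first-snapshot
----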
6554
6555
6556[role="since-2.6"]
6557[[mi]]
6558=== Use the machine interface
6559
6560With any command of the man:lttng(1) command-line tool, you can set the
6561opt:lttng(1):--mi option to `xml` (before the command name) to get an
6562XML machine interface output, for example:
6563
6564[role="term"]
6565----
6566$ lttng --mi=xml enable-event --kernel --syscall open
6567----
6568
6569A schema definition (XSD) is
6570https://github.com/lttng/lttng-tools/blob/stable-2.10/src/common/mi-lttng-3.0.xsd[available]
6571to ease the integration with external tools as much as possible.
6572
6573
6574[role="since-2.8"]
6575[[metadata-regenerate]]
6576=== Regenerate the metadata of an LTTng trace
6577
6578An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6579data stream files and a metadata file. This metadata file contains,
6580amongst other things, information about the offset of the clock sources
6581used to timestamp <<event,event records>> when tracing.
6582
6583If, once a <<tracing-session,tracing session>> is
6584<<basic-tracing-session-control,started>>, a major
6585https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6586happens, the trace's clock offset also needs to be updated. You
6587can use the `metadata` item of the man:lttng-regenerate(1) command
6588to do so.
6589
6590The main use case of this command is to allow a system to boot with
6591an incorrect wall time and trace it with LTTng before its wall time
6592is corrected. Once the system is known to be in a state where its
6593wall time is correct, it can run `lttng regenerate metadata`.
6594
6595To regenerate the metadata of an LTTng trace:
6596
6597* Use the `metadata` item of the man:lttng-regenerate(1) command:
6598+
6599--
6600[role="term"]
6601----
6602$ lttng regenerate metadata
6603----
6604--
6605
6606[IMPORTANT]
6607====
6608`lttng regenerate metadata` has the following limitations:
6609
6610* Tracing session <<creating-destroying-tracing-sessions,created>>
6611 in non-live mode.
6612* User space <<channel,channels>>, if any, are using
6613 <<channel-buffering-schemes,per-user buffering>>.
6614====
6615
6616
6617[role="since-2.9"]
6618[[regenerate-statedump]]
6619=== Regenerate the state dump of a tracing session
6620
6621The LTTng kernel and user space tracers generate state dump
6622<<event,event records>> when the application starts or when you
6623<<basic-tracing-session-control,start a tracing session>>. An analysis
6624can use the state dump event records to set an initial state before it
6625builds the rest of the state from the following event records.
6626http://tracecompass.org/[Trace Compass] is a notable example of an
6627application which uses the state dump of an LTTng trace.
6628
6629When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6630state dump event records are not included in the snapshot because they
6631were recorded to a sub-buffer that has been consumed or overwritten
6632already.
6633
6634You can use the `lttng regenerate statedump` command to emit the state
6635dump event records again.
6636
6637To regenerate the state dump of the current tracing session, provided
6638you created it in snapshot mode, before you take a snapshot:
6639
6640. Use the `statedump` item of the man:lttng-regenerate(1) command:
6641+
6642--
6643[role="term"]
6644----
6645$ lttng regenerate statedump
6646----
6647--
6648
6649. <<basic-tracing-session-control,Stop the tracing session>>:
6650+
6651--
6652[role="term"]
6653----
6654$ lttng stop
6655----
6656--
6657
6658. <<taking-a-snapshot,Take a snapshot>>:
6659+
6660--
6661[role="term"]
6662----
6663$ lttng snapshot record --name=my-snapshot
6664----
6665--
6666
6667Depending on the event throughput, you should run steps 1 and 2
6668as close together in time as possible.
6669
6670NOTE: To record the state dump events, you need to
6671<<enabling-disabling-events,create event rules>> which enable them.
6672LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6673LTTng-modules state dump tracepoints start with `lttng_statedump_`.
6674
6675
6676[role="since-2.7"]
6677[[persistent-memory-file-systems]]
6678=== Record trace data on persistent memory file systems
6679
6680https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6681(NVRAM) is random-access memory that retains its information when power
6682is turned off (non-volatile). Systems with such memory can store data
6683structures in RAM and retrieve them after a reboot, without flushing
6684to typical _storage_.
6685
6686Linux supports NVRAM file systems thanks to either
6687http://pramfs.sourceforge.net/[PRAMFS] or
6688https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6689(requires Linux 4.1+).
6690
6691This section does not describe how to operate such file systems;
6692we assume that you have a working persistent memory file system.
6693
6694When you create a <<tracing-session,tracing session>>, you can specify
6695the path of the shared memory holding the sub-buffers. If you specify a
6696location on an NVRAM file system, then you can retrieve the latest
6697recorded trace data when the system reboots after a crash.
6698
6699To record trace data on a persistent memory file system and retrieve the
6700trace data after a system crash:
6701
6702. Create a tracing session with a sub-buffer shared memory path located
6703 on an NVRAM file system:
6704+
6705--
6706[role="term"]
6707----
6708$ lttng create my-session --shm-path=/path/to/shm
6709----
6710--
6711
6712. Configure the tracing session as usual with the man:lttng(1)
6713 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6714
6715. After a system crash, use the man:lttng-crash(1) command-line tool to
6716 view the trace data recorded on the NVRAM file system:
6717+
6718--
6719[role="term"]
6720----
6721$ lttng-crash /path/to/shm
6722----
6723--
6724
6725The binary layout of the ring buffer files is not exactly the same as
6726the trace files layout. This is why you need to use man:lttng-crash(1)
6727instead of your preferred trace viewer directly.
6728
6729To convert the ring buffer files to LTTng trace files:
6730
6731* Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6732+
6733--
6734[role="term"]
6735----
6736$ lttng-crash --extract=/path/to/trace /path/to/shm
6737----
6738--
6739
6740
6741[role="since-2.10"]
6742[[notif-trigger-api]]
6743=== Get notified when a channel's buffer usage is too high or too low
6744
6745With LTTng's $$C/C++$$ notification and trigger API, your user
6746application can get notified when the buffer usage of one or more
6747<<channel,channels>> becomes too low or too high. You can use this API
6748and enable or disable <<event,event rules>> during tracing to avoid
6749<<channel-overwrite-mode-vs-discard-mode,discarded event records>>.
6750
6751.Have a user application get notified when an LTTng channel's buffer usage is too high.
6752====
6753In this example, we create and build an application which gets notified
6754when the buffer usage of a specific LTTng channel is higher than
675575{nbsp}%. The example only prints a message when this is the case,
6756but it could also use the API of <<liblttng-ctl-lttng,`liblttng-ctl`>> to
6757disable event rules when this happens.
6758
6759. Create the application's C source file:
6760+
6761--
6762[source,c]
6763.path:{notif-app.c}
6764----
6765#include <stdio.h>
6766#include <assert.h>
6767#include <lttng/domain.h>
6768#include <lttng/action/action.h>
6769#include <lttng/action/notify.h>
6770#include <lttng/condition/condition.h>
6771#include <lttng/condition/buffer-usage.h>
6772#include <lttng/condition/evaluation.h>
6773#include <lttng/notification/channel.h>
6774#include <lttng/notification/notification.h>
6775#include <lttng/trigger/trigger.h>
6776#include <lttng/endpoint.h>
6777
6778int main(int argc, char *argv[])
6779{
6780 int exit_status = 0;
6781 struct lttng_notification_channel *notification_channel;
6782 struct lttng_condition *condition;
6783 struct lttng_action *action;
6784 struct lttng_trigger *trigger;
6785 const char *tracing_session_name;
6786 const char *channel_name;
6787
6788 assert(argc >= 3);
6789 tracing_session_name = argv[1];
6790 channel_name = argv[2];
6791
6792 /*
6793 * Create a notification channel. A notification channel
6794 * connects the user application to the LTTng session daemon.
6795	 * This notification channel can be used to listen to various
6796 * types of notifications.
6797 */
6798 notification_channel = lttng_notification_channel_create(
6799 lttng_session_daemon_notification_endpoint);
6800
6801 /*
6802 * Create a "high buffer usage" condition. In this case, the
6803 * condition is reached when the buffer usage is greater than or
6804 * equal to 75 %. We create the condition for a specific tracing
6805 * session name, channel name, and for the user space tracing
6806 * domain.
6807	 *
6808 * The "low buffer usage" condition type also exists.
6809 */
6810 condition = lttng_condition_buffer_usage_high_create();
6811 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
6812 lttng_condition_buffer_usage_set_session_name(
6813 condition, tracing_session_name);
6814 lttng_condition_buffer_usage_set_channel_name(condition,
6815 channel_name);
6816 lttng_condition_buffer_usage_set_domain_type(condition,
6817 LTTNG_DOMAIN_UST);
6818
6819 /*
6820 * Create an action (get a notification) to take when the
6821 * condition created above is reached.
6822 */
6823 action = lttng_action_notify_create();
6824
6825 /*
6826 * Create a trigger. A trigger associates a condition to an
6827 * action: the action is executed when the condition is reached.
6828	 */
6829	trigger = lttng_trigger_create(condition, action);
6830
6831 /* Register the trigger to LTTng. */
6832 lttng_register_trigger(trigger);
6833
6834 /*
6835 * Now that we have registered a trigger, a notification will be
6836	 * emitted every time its condition is met. To receive this
6837 * notification, we must subscribe to notifications that match
6838 * the same condition.
6839	 */
6840 lttng_notification_channel_subscribe(notification_channel,
6841 condition);
6842
6843 /*
6844 * Notification loop. You can put this in a dedicated thread to
6845 * avoid blocking the main thread.
6846	 */
6847 for (;;) {
6848 struct lttng_notification *notification;
6849 enum lttng_notification_channel_status status;
6850 const struct lttng_evaluation *notification_evaluation;
6851 const struct lttng_condition *notification_condition;
6852 double buffer_usage;
6853
6854 /* Receive the next notification. */
6855 status = lttng_notification_channel_get_next_notification(
6856			notification_channel, &notification);
6857
6858 switch (status) {
6859 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
6860 break;
6861 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
6862 /*
6863 * The session daemon can drop notifications if
6864 * a monitoring application is not consuming the
6865 * notifications fast enough.
6866 */
6867 continue;
6868 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
6869 /*
6870 * The notification channel has been closed by the
6871 * session daemon. This is typically caused by a session
6872 * daemon shutting down.
6873 */
6874 goto end;
6875 default:
6876 /* Unhandled conditions or errors. */
6877 exit_status = 1;
6878 goto end;
6879 }
6880
6881 /*
6882 * A notification provides, amongst other things:
6883 *
6884 * * The condition that caused this notification to be
6885 * emitted.
6886 * * The condition evaluation, which provides more
6887 * specific information on the evaluation of the
6888 * condition.
6889 *
6890 * The condition evaluation provides the buffer usage
6891		 * value at the moment the condition was reached.
6892 */
6893 notification_condition = lttng_notification_get_condition(
6894 notification);
6895 notification_evaluation = lttng_notification_get_evaluation(
6896 notification);
6897
6898 /* We're subscribed to only one condition. */
6899 assert(lttng_condition_get_type(notification_condition) ==
6900 LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);
6901
6902 /*
6903 * Get the exact sampled buffer usage from the
6904 * condition evaluation.
6905 */
6906 lttng_evaluation_buffer_usage_get_usage_ratio(
6907 notification_evaluation, &buffer_usage);
6908
6909 /*
6910 * At this point, instead of printing a message, we
6911 * could do something to reduce the channel's buffer
6912 * usage, like disable specific events.
6913 */
6914 printf("Buffer usage is %f %% in tracing session \"%s\", "
6915 "user space channel \"%s\".\n", buffer_usage * 100,
6916 tracing_session_name, channel_name);
6917 lttng_notification_destroy(notification);
6918 }
6919
6920end:
6921 lttng_action_destroy(action);
6922 lttng_condition_destroy(condition);
6923 lttng_trigger_destroy(trigger);
6924 lttng_notification_channel_destroy(notification_channel);
6925 return exit_status;
6926}
6927----
6928--
6929
6930. Build the `notif-app` application, linking it to `liblttng-ctl`:
6931+
6932--
6933[role="term"]
6934----
6935$ gcc -o notif-app notif-app.c -llttng-ctl
6936----
6937--
6938
6939. <<creating-destroying-tracing-sessions,Create a tracing session>>,
6940 <<enabling-disabling-events,create an event rule>> matching all the
6941 user space tracepoints, and
6942 <<basic-tracing-session-control,start tracing>>:
6943+
6944--
6945[role="term"]
6946----
6947$ lttng create my-session
6948$ lttng enable-event --userspace --all
6949$ lttng start
6950----
6951--
6952+
6953If you create the channel manually with the man:lttng-enable-channel(1)
6954command, you can control how frequently the current values of the
6955channel's properties are sampled to evaluate user conditions with the
6956opt:lttng-enable-channel(1):--monitor-timer option.
6957
6958. Run the `notif-app` application. This program accepts the
6959 <<tracing-session,tracing session>> name and the user space channel
6960 name as its first two arguments. The channel which LTTng automatically
6961 creates with the man:lttng-enable-event(1) command above is named
6962 `channel0`:
6963+
6964--
6965[role="term"]
6966----
6967$ ./notif-app my-session channel0
6968----
6969--
6970
6971. In another terminal, run an application with a very high event
6972 throughput so that the 75{nbsp}% buffer usage condition is reached.
6973+
6974In the first terminal, the application should print lines like this:
6975+
6976----
6977Buffer usage is 81.45197 % in tracing session "my-session", user space
6978channel "channel0".
6979----
6980+
6981If you don't see anything, try modifying the condition in
6982path:{notif-app.c} to a lower value (0.1, for example), rebuilding it
6983(step 2) and running it again (step 4).
6984====
6985
6986
6987[[reference]]
6988== Reference
6989
6990[[lttng-modules-ref]]
6991=== noch:{LTTng-modules}
6992
6993
6994[role="since-2.9"]
6995[[lttng-tracepoint-enum]]
6996==== `LTTNG_TRACEPOINT_ENUM()` usage
6997
6998Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6999
7000[source,c]
7001----
7002LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
7003----
7004
7005Replace:
7006
7007* `name` with the name of the enumeration (C identifier, unique
7008 amongst all the defined enumerations).
7009* `entries` with a list of enumeration entries.
7010
7011The available enumeration entry macros are:
7012
7013+ctf_enum_value(__name__, __value__)+::
7014 Entry named +__name__+ mapped to the integral value +__value__+.
7015
7016+ctf_enum_range(__name__, __begin__, __end__)+::
7017 Entry named +__name__+ mapped to the range of integral values between
7018 +__begin__+ (included) and +__end__+ (included).
7019
7020+ctf_enum_auto(__name__)+::
7021 Entry named +__name__+ mapped to the integral value following the
7022 last mapping's value.
7023+
7024The last value of a `ctf_enum_value()` entry is its +__value__+
7025parameter.
7026+
7027The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
7028+
7029If `ctf_enum_auto()` is the first entry in the list, its integral
7030value is 0.
7031
7032Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
7033to use a defined enumeration as a tracepoint field.
7034
7035.Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
7036====
7037[source,c]
7038----
7039LTTNG_TRACEPOINT_ENUM(
7040 my_enum,
7041 TP_ENUM_VALUES(
7042 ctf_enum_auto("AUTO: EXPECT 0")
7043 ctf_enum_value("VALUE: 23", 23)
7044 ctf_enum_value("VALUE: 27", 27)
7045 ctf_enum_auto("AUTO: EXPECT 28")
7046 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
7047 ctf_enum_auto("AUTO: EXPECT 304")
7048 )
7049)
7050----
7051====
7052
7053
7054[role="since-2.7"]
7055[[lttng-modules-tp-fields]]
7056==== Tracepoint fields macros (for `TP_FIELDS()`)
7057
7058[[tp-fast-assign]][[tp-struct-entry]]The available macros to define
7059tracepoint fields, which must be listed within `TP_FIELDS()` in
7060`LTTNG_TRACEPOINT_EVENT()`, are:
7061
7062[role="func-desc growable",cols="asciidoc,asciidoc"]
7063.Available macros to define LTTng-modules tracepoint fields
7064|====
7065|Macro |Description and parameters
7066
7067|
7068+ctf_integer(__t__, __n__, __e__)+
7069
7070+ctf_integer_nowrite(__t__, __n__, __e__)+
7071
7072+ctf_user_integer(__t__, __n__, __e__)+
7073
7074+ctf_user_integer_nowrite(__t__, __n__, __e__)+
7075|
7076Standard integer, displayed in base 10.
7077
7078+__t__+::
7079 Integer C type (`int`, `long`, `size_t`, ...).
7080
7081+__n__+::
7082 Field name.
7083
7084+__e__+::
7085 Argument expression.
7086
7087|
7088+ctf_integer_hex(__t__, __n__, __e__)+
7089
7090+ctf_user_integer_hex(__t__, __n__, __e__)+
7091|
7092Standard integer, displayed in base 16.
7093
7094+__t__+::
7095 Integer C type.
7096
7097+__n__+::
7098 Field name.
7099
7100+__e__+::
7101 Argument expression.
7102
7103|+ctf_integer_oct(__t__, __n__, __e__)+
7104|
7105Standard integer, displayed in base 8.
7106
7107+__t__+::
7108 Integer C type.
7109
7110+__n__+::
7111 Field name.
7112
7113+__e__+::
7114 Argument expression.
7115
7116|
7117+ctf_integer_network(__t__, __n__, __e__)+
7118
7119+ctf_user_integer_network(__t__, __n__, __e__)+
7120|
7121Integer in network byte order (big-endian), displayed in base 10.
7122
7123+__t__+::
7124 Integer C type.
7125
7126+__n__+::
7127 Field name.
7128
7129+__e__+::
7130 Argument expression.
7131
7132|
7133+ctf_integer_network_hex(__t__, __n__, __e__)+
7134
7135+ctf_user_integer_network_hex(__t__, __n__, __e__)+
7136|
7137Integer in network byte order, displayed in base 16.
7138
7139+__t__+::
7140 Integer C type.
7141
7142+__n__+::
7143 Field name.
7144
7145+__e__+::
7146 Argument expression.
7147
7148|
7149+ctf_enum(__N__, __t__, __n__, __e__)+
7150
7151+ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7152
7153+ctf_user_enum(__N__, __t__, __n__, __e__)+
7154
7155+ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7156|
7157Enumeration.
7158
7159+__N__+::
7160 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7161
7162+__t__+::
7163 Integer C type (`int`, `long`, `size_t`, ...).
7164
7165+__n__+::
7166 Field name.
7167
7168+__e__+::
7169 Argument expression.
7170
7171|
7172+ctf_string(__n__, __e__)+
7173
7174+ctf_string_nowrite(__n__, __e__)+
7175
7176+ctf_user_string(__n__, __e__)+
7177
7178+ctf_user_string_nowrite(__n__, __e__)+
7179|
7180Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7181
7182+__n__+::
7183 Field name.
7184
7185+__e__+::
7186 Argument expression.
7187
7188|
7189+ctf_array(__t__, __n__, __e__, __s__)+
7190
7191+ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7192
7193+ctf_user_array(__t__, __n__, __e__, __s__)+
7194
7195+ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7196|
7197Statically-sized array of integers.
7198
7199+__t__+::
7200 Array element C type.
7201
7202+__n__+::
7203 Field name.
7204
7205+__e__+::
7206 Argument expression.
7207
7208+__s__+::
7209 Number of elements.
7210
7211|
7212+ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7213
7214+ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7215
7216+ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7217
7218+ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7219|
7220Statically-sized array of bits.
7221
7222The type of +__e__+ must be an integer type. +__s__+ is the number
7223of elements of such type in +__e__+, not the number of bits.
7224
7225+__t__+::
7226 Array element C type.
7227
7228+__n__+::
7229 Field name.
7230
7231+__e__+::
7232 Argument expression.
7233
7234+__s__+::
7235 Number of elements.
7236
7237|
7238+ctf_array_text(__t__, __n__, __e__, __s__)+
7239
7240+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7241
7242+ctf_user_array_text(__t__, __n__, __e__, __s__)+
7243
7244+ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7245|
7246Statically-sized array, printed as text.
7247
7248The string does not need to be null-terminated.
7249
7250+__t__+::
7251 Array element C type (always `char`).
7252
7253+__n__+::
7254 Field name.
7255
7256+__e__+::
7257 Argument expression.
7258
7259+__s__+::
7260 Number of elements.
7261
7262|
7263+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7264
7265+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7266
7267+ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7268
7269+ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7270|
7271Dynamically-sized array of integers.
7272
7273The type of +__E__+ must be unsigned.
7274
7275+__t__+::
7276 Array element C type.
7277
7278+__n__+::
7279 Field name.
7280
7281+__e__+::
7282 Argument expression.
7283
7284+__T__+::
7285 Length expression C type.
7286
7287+__E__+::
7288 Length expression.
7289
7290|
7291+ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7292
7293+ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7294|
7295Dynamically-sized array of integers, displayed in base 16.
7296
7297The type of +__E__+ must be unsigned.
7298
7299+__t__+::
7300 Array element C type.
7301
7302+__n__+::
7303 Field name.
7304
7305+__e__+::
7306 Argument expression.
7307
7308+__T__+::
7309 Length expression C type.
7310
7311+__E__+::
7312 Length expression.
7313
7314|+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7315|
7316Dynamically-sized array of integers in network byte order (big-endian),
7317displayed in base 10.
7318
7319The type of +__E__+ must be unsigned.
7320
7321+__t__+::
7322 Array element C type.
7323
7324+__n__+::
7325 Field name.
7326
7327+__e__+::
7328 Argument expression.
7329
7330+__T__+::
7331 Length expression C type.
7332
7333+__E__+::
7334 Length expression.
7335
7336|
7337+ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7338
7339+ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7340
7341+ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7342
7343+ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7344|
7345Dynamically-sized array of bits.
7346
7347The type of +__e__+ must be an integer type. +__E__+ is the number
7348of elements of such type in +__e__+, not the number of bits.
7349
7350The type of +__E__+ must be unsigned.
7351
7352+__t__+::
7353 Array element C type.
7354
7355+__n__+::
7356 Field name.
7357
7358+__e__+::
7359 Argument expression.
7360
7361+__T__+::
7362 Length expression C type.
7363
7364+__E__+::
7365 Length expression.
7366
7367|
7368+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7369
7370+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7371
7372+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7373
7374+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7375|
7376Dynamically-sized array, displayed as text.
7377
7378The string does not need to be null-terminated.
7379
7380The type of +__E__+ must be unsigned.
7381
7382The behaviour is undefined if +__e__+ is `NULL`.
7383
7384+__t__+::
7385 Sequence element C type (always `char`).
7386
7387+__n__+::
7388 Field name.
7389
7390+__e__+::
7391 Argument expression.
7392
7393+__T__+::
7394 Length expression C type.
7395
7396+__E__+::
7397 Length expression.
7398|====
7399
7400Use the `_user` versions when the argument expression, `e`, is
7401a user space address. In the cases of `ctf_user_integer*()` and
7402`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
7403be addressable.
7404
7405The `_nowrite` versions omit themselves from the session trace, but are
7406otherwise identical. This means the `_nowrite` fields won't be written
7407in the recorded trace. Their primary purpose is to make some
7408of the event context available to the
7409<<enabling-disabling-events,event filters>> without having to
7410commit the data to sub-buffers.
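
The following sketch shows how a few of those macros could be combined
within `TP_FIELDS()`; the tracepoint provider, event, and field names
are placeholders:

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
	my_subsys_my_event,
	TP_PROTO(int request_id, const char *path, size_t len),
	TP_ARGS(request_id, path, len),
	TP_FIELDS(
		/* Standard integer field, displayed in base 10. */
		ctf_integer(int, request_id, request_id)

		/* Same value, displayed in base 16. */
		ctf_integer_hex(int, request_id_hex, request_id)

		/* Null-terminated string field. */
		ctf_string(path, path)

		/* Integer available to event filters, but not recorded. */
		ctf_integer_nowrite(size_t, len, len)
	)
)
----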
7411
7412
7413[[glossary]]
7414== Glossary
7415
7416Terms related to LTTng and to tracing in general:
7417
7418Babeltrace::
7419 The http://diamon.org/babeltrace[Babeltrace] project, which includes
7420 the cmd:babeltrace command, some libraries, and Python bindings.
7421
7422<<channel-buffering-schemes,buffering scheme>>::
7423 A layout of sub-buffers applied to a given channel.
7424
7425<<channel,channel>>::
7426 An entity which is responsible for a set of ring buffers.
7427+
7428<<event,Event rules>> are always attached to a specific channel.
7429
7430clock::
7431 A reference of time for a tracer.
7432
7433<<lttng-consumerd,consumer daemon>>::
7434 A process which is responsible for consuming the full sub-buffers
7435 and writing them to a file system or sending them over the network.

<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
 The event loss mode in which the tracer _discards_ new event records
 when there's no sub-buffer space left to store them.

event::
 The consequence of the execution of an instrumentation
 point, like a tracepoint that you manually place in some source code,
 or a Linux kernel KProbe.
+
An event is said to _occur_ at a specific time. Different actions can
be taken upon the occurrence of an event, like recording the event's
payload to a sub-buffer.

<<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
 The mechanism by which event records of a given channel are lost
 (not recorded) when there is no sub-buffer space left to store them.

[[def-event-name]]event name::
 The name of an event, which is also the name of the event record.
 This is also called the _instrumentation point name_.

event record::
 A record, in a trace, of the payload of an event which occurred.

<<event,event rule>>::
 A set of conditions which must be satisfied for one or more occurring
 events to be recorded.

`java.util.logging`::
 Java platform's
 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
 The use of LTTng probes to make a piece of software traceable.

instrumentation point::
 A point in the execution path of a piece of software that, when
 reached by this execution, can emit an event.

instrumentation point name::
 See _<<def-event-name,event name>>_.

log4j::
 A http://logging.apache.org/log4j/1.2/[logging library] for Java
 developed by the Apache Software Foundation.

log level::
 Level of severity of a log statement or user space
 instrumentation point.

LTTng::
 The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
 A command-line tool provided by the LTTng-tools project which you
 can use to send and receive control messages to and from a
 session daemon.

LTTng analyses::
 The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
 which is a set of analysis programs used to obtain a higher-level
 view of an LTTng trace.

cmd:lttng-consumerd::
 The name of the consumer daemon program.

cmd:lttng-crash::
 A utility provided by the LTTng-tools project which can convert
 ring buffer files (usually
 <<persistent-memory-file-systems,saved on a persistent memory file system>>)
 to trace files.

LTTng Documentation::
 This document.

<<lttng-live,LTTng live>>::
 A communication protocol between the relay daemon and live viewers
 which makes it possible to see events "live", as they are received by
 the relay daemon.

<<lttng-modules,LTTng-modules>>::
 The https://github.com/lttng/lttng-modules[LTTng-modules] project,
 which contains the Linux kernel modules to make the Linux kernel
 instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
 The name of the relay daemon program.

cmd:lttng-sessiond::
 The name of the session daemon program.

LTTng-tools::
 The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
 contains the various programs and libraries used to
 <<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
 The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
 contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
 A Java package provided by the LTTng-UST project to allow the
 LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
 logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
 A Python package provided by the LTTng-UST project to allow the
 LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
 The event loss mode in which new event records overwrite older
 event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
 A buffering scheme in which each instrumented process has its own
 sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
 A buffering scheme in which all the processes of a Unix user share the
 same sub-buffers for a given user space channel.

<<lttng-relayd,relay daemon>>::
 A process which is responsible for receiving the trace data sent by
 a distant consumer daemon.

ring buffer::
 A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
 A process which receives control commands from you and orchestrates
 the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
 A copy of the current data of all the sub-buffers of a given tracing
 session, saved as trace files.

sub-buffer::
 One part of an LTTng ring buffer which contains event records.

timestamp::
 The time information attached to an event when it is emitted.

trace (_noun_)::
 A set of files which are the concatenations of one or more
 flushed sub-buffers.

trace (_verb_)::
 The action of recording the events emitted by an application or by a
 system, or of initiating such a recording by controlling a tracer.

Trace Compass::
 The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
 An instrumentation point using the tracepoint mechanism of the Linux
 kernel or of LTTng-UST.

tracepoint definition::
 The definition of a single tracepoint.

tracepoint name::
 The name of a tracepoint.

tracepoint provider::
 A set of functions providing tracepoints to an instrumented user
 application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
 One or more tracepoint providers compiled as an object file or as
 a shared library.

tracer::
 A program which records emitted events.

<<domain,tracing domain>>::
 A namespace for event sources.

<<tracing-group,tracing group>>::
 The Unix group to which a Unix user can be added to be allowed to
 trace the Linux kernel.

<<tracing-session,tracing session>>::
 A stateful dialogue between you and a <<lttng-sessiond,session
 daemon>>.

user application::
 An application running in user space, as opposed to a Linux kernel
 module, for example.