The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
v2.10, 25 February 2021
5
6
7include::../common/copyright.txt[]
8
9
10include::../common/warning-not-maintained.txt[]
11
12
13include::../common/welcome.txt[]
14
15
16include::../common/audience.txt[]
17
18
19[[chapters]]
20=== What's in this documentation?
21
22The LTTng Documentation is divided into the following sections:
23
24* **<<nuts-and-bolts,Nuts and bolts>>** explains the
25 rudiments of software tracing and the rationale behind the
26 LTTng project.
27+
28You can skip this section if you’re familiar with software tracing and
29with the LTTng project.
30
31* **<<installing-lttng,Installation>>** describes the steps to
32 install the LTTng packages on common Linux distributions and from
33 their sources.
34+
35You can skip this section if you already properly installed LTTng on
36your target system.
37
38* **<<getting-started,Quick start>>** is a concise guide to
39 getting started quickly with LTTng kernel and user space tracing.
40+
41We recommend this section if you're new to LTTng or to software tracing
42in general.
43+
44You can skip this section if you're not new to LTTng.
45
46* **<<core-concepts,Core concepts>>** explains the concepts at
47 the heart of LTTng.
48+
49It's a good idea to become familiar with the core concepts
50before attempting to use the toolkit.
51
52* **<<plumbing,Components of LTTng>>** describes the various components
53 of the LTTng machinery, like the daemons, the libraries, and the
54 command-line interface.
55* **<<instrumenting,Instrumentation>>** shows different ways to
56 instrument user applications and the Linux kernel.
57+
58Instrumenting source code is essential to provide a meaningful
59source of events.
60+
61You can skip this section if you do not have a programming background.
62
63* **<<controlling-tracing,Tracing control>>** is divided into topics
64 which demonstrate how to use the vast array of features that
65 LTTng{nbsp}{revision} offers.
66* **<<reference,Reference>>** contains reference tables.
67* **<<glossary,Glossary>>** is a specialized dictionary of terms related
68 to LTTng or to the field of software tracing.
69
70
71include::../common/convention.txt[]
72
73
74include::../common/acknowledgements.txt[]
75
76
77[[whats-new]]
78== What's new in LTTng {revision}?
79
80LTTng{nbsp}{revision} bears the name _KeKriek_. From
81http://brasseriedunham.com/[Brasserie Dunham], the _**KeKriek**_ is a
82sour mashed golden wheat ale fermented with local sour cherries from
83Tougas orchards. Fresh sweet cherry notes with some tartness, lively
84carbonation with a dry finish.
85
86New features and changes in LTTng{nbsp}{revision}:
87
88* **Tracing control**:
89** You can put more than one wildcard special character (`*`), and not
90 only at the end, when you <<enabling-disabling-events,create an event
91 rule>>, in both the instrumentation point name and the literal
92 strings of
  link:/man/1/lttng-enable-event/v{revision}/#doc-filter-syntax[filter expressions]:
+
95--
96[role="term"]
97----
98# lttng enable-event --kernel 'x86_*_local_timer_*' \
99 --filter='name == "*a*b*c*d*e" && count >= 23'
100----
101--
102+
103--
104[role="term"]
105----
106$ lttng enable-event --userspace '*_my_org:*msg*'
107----
108--
109
110** New trigger and notification API for
111 <<liblttng-ctl-lttng,`liblttng-ctl`>>. This new subsystem allows you
112 to register triggers which emit a notification when a given
113 condition is satisfied. As of LTTng{nbsp}{revision}, only
114 <<channel,channel>> buffer usage conditions are available.
115 Documentation is available in the
116 https://github.com/lttng/lttng-tools/tree/stable-{revision}/include/lttng[`liblttng-ctl`
  header files] and in
  <<notif-trigger-api,Get notified when a channel's buffer usage is too
  high or too low>>.
120
121** You can now embed the whole textual LTTng-tools man pages into the
122 executables at build time with the `--enable-embedded-help`
123 configuration option. Thanks to this option, you don't need the
124 http://www.methods.co.nz/asciidoc/[AsciiDoc] and
125 https://directory.fsf.org/wiki/Xmlto[xmlto] tools at build time, and
126 a manual pager at run time, to get access to this documentation.
127
128* **User space tracing**:
129** New blocking mode: an LTTng-UST tracepoint can now block until
130 <<channel,sub-buffer>> space is available instead of discarding event
131 records in <<channel-overwrite-mode-vs-discard-mode,discard mode>>.
132 With this feature, you can be sure that no event records are
133 discarded during your application's execution at the expense of
134 performance.
135+
136For example, the following command lines create a user space tracing
137channel with an infinite blocking timeout and run an application
138instrumented with LTTng-UST which is explicitly allowed to block:
139+
140--
141[role="term"]
142----
143$ lttng create
$ lttng enable-channel --userspace --blocking-timeout=inf blocking-channel
145$ lttng enable-event --userspace --channel=blocking-channel --all
146$ lttng start
147$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
148----
149--
150+
151See the complete <<blocking-timeout-example,blocking timeout example>>.
152
153* **Linux kernel tracing**:
154** Linux 4.10, 4.11, and 4.12 support.
155** The thread state dump events recorded by LTTng-modules now contain
156 the task's CPU identifier. This improves the precision of the
157 scheduler model for analyses.
158** Extended man:socketpair(2) system call tracing data.
159
160
161[[nuts-and-bolts]]
162== Nuts and bolts
163
164What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
165generation_ is a modern toolkit for tracing Linux systems and
166applications. So your first question might be:
167**what is tracing?**
168
169
170[[what-is-tracing]]
171=== What is tracing?
172
173As the history of software engineering progressed and led to what
174we now take for granted--complex, numerous and
175interdependent software applications running in parallel on
176sophisticated operating systems like Linux--the authors of such
177components, software developers, began feeling a natural
178urge to have tools that would ensure the robustness and good performance
179of their masterpieces.
180
181One major achievement in this field is, inarguably, the
182https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
183an essential tool for developers to find and fix bugs. But even the best
184debugger won't help make your software run faster, and nowadays, faster
185software means either more work done by the same hardware, or cheaper
186hardware for the same work.
187
188A _profiler_ is often the tool of choice to identify performance
189bottlenecks. Profiling is suitable to identify _where_ performance is
190lost in a given software. The profiler outputs a profile, a statistical
191summary of observed events, which you may use to discover which
192functions took the most time to execute. However, a profiler won't
193report _why_ some identified functions are the bottleneck. Bottlenecks
194might only occur when specific conditions are met, conditions that are
195sometimes impossible to capture by a statistical profiler, or impossible
196to reproduce with an application altered by the overhead of an
197event-based profiler. For a thorough investigation of software
198performance issues, a history of execution is essential, with the
199recorded values of variables and context fields you choose, and
200with as little influence as possible on the instrumented software. This
201is where tracing comes in handy.
202
203_Tracing_ is a technique used to understand what goes on in a running
204software system. The software used for tracing is called a _tracer_,
205which is conceptually similar to a tape recorder. When recording,
206specific instrumentation points placed in the software source code
207generate events that are saved on a giant tape: a _trace_ file. You
208can trace user applications and the operating system at the same time,
209opening the possibility of resolving a wide range of problems that would
210otherwise be extremely challenging.
211
212Tracing is often compared to _logging_. However, tracers and loggers are
213two different tools, serving two different purposes. Tracers are
214designed to record much lower-level events that occur much more
215frequently than log messages, often in the range of thousands per
216second, with very little execution overhead. Logging is more appropriate
217for a very high-level analysis of less frequent events: user accesses,
218exceptional conditions (errors and warnings, for example), database
219transactions, instant messaging communications, and such. Simply put,
220logging is one of the many use cases that can be satisfied with tracing.
221
222The list of recorded events inside a trace file can be read manually
223like a log file for the maximum level of detail, but it is generally
224much more interesting to perform application-specific analyses to
225produce reduced statistics and graphs that are useful to resolve a
226given problem. Trace viewers and analyzers are specialized tools
227designed to do this.
228
229In the end, this is what LTTng is: a powerful, open source set of
230tools to trace the Linux kernel and user applications at the same time.
231LTTng is composed of several components actively maintained and
232developed by its link:/community/#where[community].
233
234
235[[lttng-alternatives]]
236=== Alternatives to noch:{LTTng}
237
238Excluding proprietary solutions, a few competing software tracers
239exist for Linux:
240
241* https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
242 Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
243 user scripts and is responsible for loading code into the
244 Linux kernel for further execution and collecting the outputted data.
245* https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
246 subsystem in the Linux kernel in which a virtual machine can execute
247 programs passed from the user space to the kernel. You can attach
248 such programs to tracepoints and KProbes thanks to a system call, and
249 they can output data to the user space when executed thanks to
250 different mechanisms (pipe, VM register values, and eBPF maps, to name
251 a few).
252* https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
253 is the de facto function tracer of the Linux kernel. Its user
254 interface is a set of special files in sysfs.
255* https://perf.wiki.kernel.org/[perf] is
256 a performance analyzing tool for Linux which supports hardware
257 performance counters, tracepoints, as well as other counters and
258 types of probes. perf's controlling utility is the cmd:perf command
259 line/curses tool.
260* http://linux.die.net/man/1/strace[strace]
261 is a command-line utility which records system calls made by a
262 user process, as well as signal deliveries and changes of process
263 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
264 to fulfill its function.
265* http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
266 analyze Linux kernel events. You write scripts, or _chisels_ in
267 sysdig's jargon, in Lua and sysdig executes them while the system is
268 being traced or afterwards. sysdig's interface is the cmd:sysdig
269 command-line tool as well as the curses-based cmd:csysdig tool.
270* https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
271 user space tracer which uses custom user scripts to produce plain text
272 traces. SystemTap converts the scripts to the C language, and then
273 compiles them as Linux kernel modules which are loaded to produce
274 trace data. SystemTap's primary user interface is the cmd:stap
275 command-line tool.
276
The main distinctive features of LTTng are that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead among the solutions above. It produces trace files in the
http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.
282
283LTTng is the result of more than 10 years of active open source
284development by a community of passionate developers.
285LTTng{nbsp}{revision} is currently available on major desktop and server
286Linux distributions.
287
288The main interface for tracing control is a single command-line tool
289named cmd:lttng. The latter can create several tracing sessions, enable
290and disable events on the fly, filter events efficiently with custom
291user expressions, start and stop tracing, and much more. LTTng can
292record the traces on the file system or send them over the network, and
293keep them totally or partially. You can view the traces once tracing
becomes inactive, or in real time.
295
296<<installing-lttng,Install LTTng now>> and
297<<getting-started,start tracing>>!
298
299
300[[installing-lttng]]
301== Installation
302
303include::../common/warning-no-installation.txt[]
304
305**LTTng** is a set of software <<plumbing,components>> which interact to
306<<instrumenting,instrument>> the Linux kernel and user applications, and
307to <<controlling-tracing,control tracing>> (start and stop
308tracing, enable and disable event rules, and the rest). Those
309components are bundled into the following packages:
310
311* **LTTng-tools**: Libraries and command-line interface to
312 control tracing.
313* **LTTng-modules**: Linux kernel modules to instrument and
314 trace the kernel.
315* **LTTng-UST**: Libraries and Java/Python packages to instrument and
316 trace user applications.
317
Most distributions mark the LTTng-modules and LTTng-UST packages as
optional when installing LTTng-tools (which is always required). Note
that:
321
322* You only need to install LTTng-modules if you intend to trace the
323 Linux kernel.
324* You only need to install LTTng-UST if you intend to trace user
325 applications.
326
328[[building-from-source]]
329=== Build from source
330
331To build and install LTTng{nbsp}{revision} from source:
332
333. Using your distribution's package manager, or from source, install
334 the following dependencies of LTTng-tools and LTTng-UST:
335+
336--
337* https://sourceforge.net/projects/libuuid/[libuuid]
338* http://directory.fsf.org/wiki/Popt[popt]
339* http://liburcu.org/[Userspace RCU]
340* http://www.xmlsoft.org/[libxml2]
341--
342
343. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
344+
345--
346[role="term"]
347----
348$ cd $(mktemp -d) &&
349wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
350tar -xf lttng-modules-latest-2.10.tar.bz2 &&
351cd lttng-modules-2.10.* &&
352make &&
353sudo make modules_install &&
354sudo depmod -a
355----
356--
357
358. Download, build, and install the latest LTTng-UST{nbsp}{revision}:
359+
360--
361[role="term"]
362----
363$ cd $(mktemp -d) &&
364wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
365tar -xf lttng-ust-latest-2.10.tar.bz2 &&
366cd lttng-ust-2.10.* &&
367./configure &&
368make &&
369sudo make install &&
370sudo ldconfig
371----
372--
373+
374--
375[IMPORTANT]
376.Java and Python application tracing
377====
378If you need to instrument and trace <<java-application,Java
379applications>>, pass the `--enable-java-agent-jul`,
380`--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
381`configure` script, depending on which Java logging framework you use.
382
383If you need to instrument and trace <<python-application,Python
384applications>>, pass the `--enable-python-agent` option to the
385`configure` script. You can set the `PYTHON` environment variable to the
386path to the Python interpreter for which to install the LTTng-UST Python
387agent package.
388====
389--
390+
391--
392[NOTE]
393====
394By default, LTTng-UST libraries are installed to
395dir:{/usr/local/lib}, which is the de facto directory in which to
396keep self-compiled and third-party libraries.
397
398When <<building-tracepoint-providers-and-user-application,linking an
399instrumented user application with `liblttng-ust`>>:
400
401* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
402 variable.
403* Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
404 man:gcc(1), man:g++(1), or man:clang(1).
405====
406--
407
408. Download, build, and install the latest LTTng-tools{nbsp}{revision}:
409+
410--
411[role="term"]
412----
413$ cd $(mktemp -d) &&
414wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
415tar -xf lttng-tools-latest-2.10.tar.bz2 &&
416cd lttng-tools-2.10.* &&
417./configure &&
418make &&
419sudo make install &&
420sudo ldconfig
421----
422--
423
424TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
425previous steps automatically for a given version of LTTng and confine
426the installed files in a specific directory. This can be useful to test
427LTTng without installing it on your system.
428
429
430[[getting-started]]
431== Quick start
432
433This is a short guide to get started quickly with LTTng kernel and user
434space tracing.
435
436Before you follow this guide, make sure to <<installing-lttng,install>>
437LTTng.
438
439This tutorial walks you through the steps to:
440
441. <<tracing-the-linux-kernel,Trace the Linux kernel>>.
442. <<tracing-your-own-user-application,Trace a user application>> written
443 in C.
444. <<viewing-and-analyzing-your-traces,View and analyze the
445 recorded events>>.
446
447
448[[tracing-the-linux-kernel]]
449=== Trace the Linux kernel
450
451The following command lines start with the `#` prompt because you need
452root privileges to trace the Linux kernel. You can also trace the kernel
453as a regular user if your Unix user is a member of the
454<<tracing-group,tracing group>>.
455
456. Create a <<tracing-session,tracing session>> which writes its traces
457 to dir:{/tmp/my-kernel-trace}:
458+
459--
460[role="term"]
461----
462# lttng create my-kernel-session --output=/tmp/my-kernel-trace
463----
464--
465
466. List the available kernel tracepoints and system calls:
467+
468--
469[role="term"]
470----
471# lttng list --kernel
472# lttng list --kernel --syscall
473----
474--
475
476. Create <<event,event rules>> which match the desired instrumentation
477 point names, for example the `sched_switch` and `sched_process_fork`
478 tracepoints, and the man:open(2) and man:close(2) system calls:
479+
480--
481[role="term"]
482----
483# lttng enable-event --kernel sched_switch,sched_process_fork
484# lttng enable-event --kernel --syscall open,close
485----
486--
487+
488You can also create an event rule which matches _all_ the Linux kernel
489tracepoints (this will generate a lot of data when tracing):
490+
491--
492[role="term"]
493----
494# lttng enable-event --kernel --all
495----
496--
497
498. <<basic-tracing-session-control,Start tracing>>:
499+
500--
501[role="term"]
502----
503# lttng start
504----
505--
506
. Do some operations on your system for a few seconds. For example,
508 load a website, or list the files of a directory.
. <<creating-destroying-tracing-sessions,Destroy>> the current
510 tracing session:
511+
512--
513[role="term"]
514----
515# lttng destroy
516----
517--
518+
519The man:lttng-destroy(1) command does not destroy the trace data; it
520only destroys the state of the tracing session.
521+
522The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
523implicitly (see <<basic-tracing-session-control,Start and stop a tracing
524session>>). You need to stop tracing to make LTTng flush the remaining
525trace data and make the trace readable.
526
527. For the sake of this example, make the recorded trace accessible to
528 the non-root users:
529+
530--
531[role="term"]
532----
533# chown -R $(whoami) /tmp/my-kernel-trace
534----
535--
536
537See <<viewing-and-analyzing-your-traces,View and analyze the
538recorded events>> to view the recorded events.
539
540
541[[tracing-your-own-user-application]]
542=== Trace a user application
543
544This section steps you through a simple example to trace a
545_Hello world_ program written in C.
546
547To create the traceable user application:
548
549. Create the tracepoint provider header file, which defines the
550 tracepoints and the events they can generate:
551+
552--
553[source,c]
554.path:{hello-tp.h}
555----
556#undef TRACEPOINT_PROVIDER
557#define TRACEPOINT_PROVIDER hello_world
558
559#undef TRACEPOINT_INCLUDE
560#define TRACEPOINT_INCLUDE "./hello-tp.h"
561
562#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
563#define _HELLO_TP_H
564
565#include <lttng/tracepoint.h>
566
567TRACEPOINT_EVENT(
568 hello_world,
569 my_first_tracepoint,
570 TP_ARGS(
571 int, my_integer_arg,
572 char*, my_string_arg
573 ),
574 TP_FIELDS(
575 ctf_string(my_string_field, my_string_arg)
576 ctf_integer(int, my_integer_field, my_integer_arg)
577 )
578)
579
580#endif /* _HELLO_TP_H */
581
582#include <lttng/tracepoint-event.h>
583----
584--
585
586. Create the tracepoint provider package source file:
587+
588--
589[source,c]
590.path:{hello-tp.c}
591----
592#define TRACEPOINT_CREATE_PROBES
593#define TRACEPOINT_DEFINE
594
595#include "hello-tp.h"
596----
597--
598
599. Build the tracepoint provider package:
600+
601--
602[role="term"]
603----
604$ gcc -c -I. hello-tp.c
605----
606--
607
608. Create the _Hello World_ application source file:
609+
610--
611[source,c]
612.path:{hello.c}
613----
614#include <stdio.h>
615#include "hello-tp.h"
616
617int main(int argc, char *argv[])
618{
619 int x;
620
621 puts("Hello, World!\nPress Enter to continue...");
622
623 /*
624 * The following getchar() call is only placed here for the purpose
625 * of this demonstration, to pause the application in order for
626 * you to have time to list its tracepoints. It is not
627 * needed otherwise.
628 */
629 getchar();
630
631 /*
632 * A tracepoint() call.
633 *
634 * Arguments, as defined in hello-tp.h:
635 *
636 * 1. Tracepoint provider name (required)
637 * 2. Tracepoint name (required)
638 * 3. my_integer_arg (first user-defined argument)
639 * 4. my_string_arg (second user-defined argument)
640 *
641 * Notice the tracepoint provider and tracepoint names are
642 * NOT strings: they are in fact parts of variables that the
643 * macros in hello-tp.h create.
644 */
645 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
646
647 for (x = 0; x < argc; ++x) {
648 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
649 }
650
651 puts("Quitting now!");
652 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
653
654 return 0;
655}
656----
657--
658
659. Build the application:
660+
661--
662[role="term"]
663----
664$ gcc -c hello.c
665----
666--
667
668. Link the application with the tracepoint provider package,
669 `liblttng-ust`, and `libdl`:
670+
671--
672[role="term"]
673----
674$ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
675----
676--
677
678Here's the whole build process:
679
680[role="img-100"]
681.User space tracing tutorial's build steps.
682image::ust-flow.png[]
683
684To trace the user application:
685
686. Run the application with a few arguments:
687+
688--
689[role="term"]
690----
691$ ./hello world and beyond
692----
693--
694+
695You see:
696+
697--
698----
699Hello, World!
700Press Enter to continue...
701----
702--
703
704. Start an LTTng <<lttng-sessiond,session daemon>>:
705+
706--
707[role="term"]
708----
709$ lttng-sessiond --daemonize
710----
711--
712+
713Note that a session daemon might already be running, for example as
714a service that the distribution's service manager started.
715
716. List the available user space tracepoints:
717+
718--
719[role="term"]
720----
721$ lttng list --userspace
722----
723--
724+
725You see the `hello_world:my_first_tracepoint` tracepoint listed
726under the `./hello` process.
727
728. Create a <<tracing-session,tracing session>>:
729+
730--
731[role="term"]
732----
733$ lttng create my-user-space-session
734----
735--
736
737. Create an <<event,event rule>> which matches the
738 `hello_world:my_first_tracepoint` event name:
739+
740--
741[role="term"]
742----
743$ lttng enable-event --userspace hello_world:my_first_tracepoint
744----
745--
746
747. <<basic-tracing-session-control,Start tracing>>:
748+
749--
750[role="term"]
751----
752$ lttng start
753----
754--
755
756. Go back to the running `hello` application and press Enter. The
757 program executes all `tracepoint()` instrumentation points and exits.
. <<creating-destroying-tracing-sessions,Destroy>> the current
759 tracing session:
760+
761--
762[role="term"]
763----
764$ lttng destroy
765----
766--
767+
768The man:lttng-destroy(1) command does not destroy the trace data; it
769only destroys the state of the tracing session.
770+
771The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
772implicitly (see <<basic-tracing-session-control,Start and stop a tracing
773session>>). You need to stop tracing to make LTTng flush the remaining
774trace data and make the trace readable.
775
776By default, LTTng saves the traces in
777+$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
778where +__name__+ is the tracing session name. The
779env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
780
781See <<viewing-and-analyzing-your-traces,View and analyze the
782recorded events>> to view the recorded events.
783
784
785[[viewing-and-analyzing-your-traces]]
786=== View and analyze the recorded events
787
788Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
789kernel>> and <<tracing-your-own-user-application,Trace a user
790application>> tutorials, you can inspect the recorded events.
791
792Many tools are available to read LTTng traces:
793
794* **cmd:babeltrace** is a command-line utility which converts trace
795 formats; it supports the format that LTTng produces, CTF, as well as a
796 basic text output which can be ++grep++ed. The cmd:babeltrace command
797 is part of the http://diamon.org/babeltrace[Babeltrace] project.
798* Babeltrace also includes
799 **https://www.python.org/[Python] bindings** so
800 that you can easily open and read an LTTng trace with your own script,
801 benefiting from the power of Python.
802* http://tracecompass.org/[**Trace Compass**]
803 is a graphical user interface for viewing and analyzing any type of
804 logs or traces, including LTTng's.
805* https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
806 project which includes many high-level analyses of LTTng kernel
807 traces, like scheduling statistics, interrupt frequency distribution,
808 top CPU usage, and more.
809
810NOTE: This section assumes that the traces recorded during the previous
811tutorials were saved to their default location, in the
812dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
813environment variable defaults to `$HOME` if not set.
814
815
816[[viewing-and-analyzing-your-traces-bt]]
817==== Use the cmd:babeltrace command-line tool
818
819The simplest way to list all the recorded events of a trace is to pass
820its path to cmd:babeltrace with no options:
821
822[role="term"]
823----
824$ babeltrace ~/lttng-traces/my-user-space-session*
825----
826
827cmd:babeltrace finds all traces recursively within the given path and
828prints all their events, merging them in chronological order.
829
830You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
831further filtering:
832
833[role="term"]
834----
835$ babeltrace /tmp/my-kernel-trace | grep _switch
836----
837
838You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
839count the recorded events:
840
841[role="term"]
842----
843$ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
844----
845
846
847[[viewing-and-analyzing-your-traces-bt-python]]
848==== Use the Babeltrace Python bindings
849
850The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
851is useful to isolate events by simple matching using man:grep(1) and
852similar utilities. However, more elaborate filters, such as keeping only
853event records with a field value falling within a specific range, are
854not trivial to write using a shell. Moreover, reductions and even the
855most basic computations involving multiple event records are virtually
856impossible to implement.
857
Fortunately, Babeltrace ships with Python 3 bindings which make it easy
859to read the event records of an LTTng trace sequentially and compute the
860desired information.
861
862The following script accepts an LTTng Linux kernel trace path as its
863first argument and prints the short names of the top 5 running processes
864on CPU 0 during the whole trace:
865
866[source,python]
867.path:{top5proc.py}
868----
869from collections import Counter
870import babeltrace
871import sys
872
873
874def top5proc():
875 if len(sys.argv) != 2:
876 msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
877 print(msg, file=sys.stderr)
878 return False
879
880 # A trace collection contains one or more traces
881 col = babeltrace.TraceCollection()
882
883 # Add the trace provided by the user (LTTng traces always have
884 # the 'ctf' format)
885 if col.add_trace(sys.argv[1], 'ctf') is None:
886 raise RuntimeError('Cannot add trace')
887
888 # This counter dict contains execution times:
889 #
890 # task command name -> total execution time (ns)
891 exec_times = Counter()
892
893 # This contains the last `sched_switch` timestamp
894 last_ts = None
895
896 # Iterate on events
897 for event in col.events:
898 # Keep only `sched_switch` events
899 if event.name != 'sched_switch':
900 continue
901
902 # Keep only events which happened on CPU 0
903 if event['cpu_id'] != 0:
904 continue
905
906 # Event timestamp
907 cur_ts = event.timestamp
908
909 if last_ts is None:
910 # We start here
911 last_ts = cur_ts
912
913 # Previous task command (short) name
914 prev_comm = event['prev_comm']
915
916 # Initialize entry in our dict if not yet done
917 if prev_comm not in exec_times:
918 exec_times[prev_comm] = 0
919
920 # Compute previous command execution time
921 diff = cur_ts - last_ts
922
923 # Update execution time of this command
924 exec_times[prev_comm] += diff
925
926 # Update last timestamp
927 last_ts = cur_ts
928
929 # Display top 5
930 for name, ns in exec_times.most_common(5):
931 s = ns / 1000000000
932 print('{:20}{} s'.format(name, s))
933
934 return True
935
936
937if __name__ == '__main__':
938 sys.exit(0 if top5proc() else 1)
939----
940
941Run this script:
942
943[role="term"]
944----
945$ python3 top5proc.py /tmp/my-kernel-trace/kernel
946----
947
948Output example:
949
950----
951swapper/0 48.607245889 s
952chromium 7.192738188 s
953pavucontrol 0.709894415 s
954Compositor 0.660867933 s
955Xorg.bin 0.616753786 s
956----
957
958Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
959weren't using the CPU that much when tracing, its first position in the
960list makes sense.
961
962
963[[core-concepts]]
964== [[understanding-lttng]]Core concepts
965
966From a user's perspective, the LTTng system is built on a few concepts,
967or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
968operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to mastering
970the toolkit.
971
972The core concepts are:
973
974* <<tracing-session,Tracing session>>
975* <<domain,Tracing domain>>
976* <<channel,Channel and ring buffer>>
977* <<"event","Instrumentation point, event rule, event, and event record">>
978
979
980[[tracing-session]]
981=== Tracing session
982
983A _tracing session_ is a stateful dialogue between you and
984a <<lttng-sessiond,session daemon>>. You can
985<<creating-destroying-tracing-sessions,create a new tracing
986session>> with the `lttng create` command.
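
For example, the following commands create a tracing session, list the
existing tracing sessions, and then destroy the created session. The
session name `my-session` is an arbitrary placeholder chosen for this
illustration:

[role="term"]
----
$ lttng create my-session
$ lttng list
$ lttng destroy my-session
----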
987
988Anything that you do when you control LTTng tracers happens within a
989tracing session. In particular, a tracing session:
990
991* Has its own name.
992* Has its own set of trace files.
993* Has its own state of activity (started or stopped).
994* Has its own <<tracing-session-mode,mode>> (local, network streaming,
995 snapshot, or live).
996* Has its own <<channel,channels>> which have their own
997 <<event,event rules>>.
998
999[role="img-100"]
1000.A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1001image::concepts.png[]
1002
1003Those attributes and objects are completely isolated between different
1004tracing sessions.
1005
1006A tracing session is analogous to a cash machine session:
1007the operations you do on the banking system through the cash machine do
1008not alter the data of other users of the same system. In the case of
1009the cash machine, a session lasts as long as your bank card is inside.
1010In the case of LTTng, a tracing session lasts from the `lttng create`
1011command to the `lttng destroy` command.
1012
1013[role="img-100"]
1014.Each Unix user has its own set of tracing sessions.
1015image::many-sessions.png[]
1016
1017
1018[[tracing-session-mode]]
1019==== Tracing session mode
1020
1021LTTng can send the generated trace data to different locations. The
1022_tracing session mode_ dictates where to send it. The following modes
1023are available in LTTng{nbsp}{revision}:
1024
[[local-mode]]Local mode::
1026 LTTng writes the traces to the file system of the machine being traced
1027 (target system).
1028
[[net-streaming-mode]]Network streaming mode::
1030 LTTng sends the traces over the network to a
1031 <<lttng-relayd,relay daemon>> running on a remote system.
1032
1033Snapshot mode::
1034 LTTng does not write the traces by default. Instead, you can request
1035 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1036 current tracing buffers, and to write it to the target's file system
1037 or to send it over the network to a <<lttng-relayd,relay daemon>>
1038 running on a remote system.
1039
[[live-mode]]Live mode::
1041 This mode is similar to the network streaming mode, but a live
1042 trace viewer can connect to the distant relay daemon to
1043 <<lttng-live,view event records as LTTng generates them>> by
1044 the tracers.
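
For example, here is one way to select each mode with the
`lttng create` command. The session names and the `remote-system` host
are placeholders for this illustration:

[role="term"]
----
$ lttng create my-local-session
$ lttng create my-streaming-session --set-url=net://remote-system
$ lttng create my-snapshot-session --snapshot
$ lttng create my-live-session --live
----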
1045
1046
1047[[domain]]
1048=== Tracing domain
1049
1050A _tracing domain_ is a namespace for event sources. A tracing domain
1051has its own properties and features.
1052
1053There are currently five available tracing domains:
1054
1055* Linux kernel
1056* User space
1057* `java.util.logging` (JUL)
1058* log4j
1059* Python
1060
1061You must specify a tracing domain when using some commands to avoid
1062ambiguity. For example, since all the domains support named tracepoints
1063as event sources (instrumentation points that you manually insert in the
1064source code), you need to specify a tracing domain when
1065<<enabling-disabling-events,creating an event rule>> because all the
1066tracing domains could have tracepoints with the same names.
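
For example, the following commands create event rules in two different
tracing domains; the user space and `java.util.logging` name patterns
are placeholders for this illustration:

[role="term"]
----
$ lttng enable-event --userspace 'my_app:*'
$ lttng enable-event --jul 'org.example.*'
----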
1067
1068Some features are reserved to specific tracing domains. Dynamic function
1069entry and return instrumentation points, for example, are currently only
1070supported in the Linux kernel tracing domain, but support for other
1071tracing domains could be added in the future.
1072
1073You can create <<channel,channels>> in the Linux kernel and user space
1074tracing domains. The other tracing domains have a single default
1075channel.
1076
1077
1078[[channel]]
1079=== Channel and ring buffer
1080
1081A _channel_ is an object which is responsible for a set of ring buffers.
1082Each ring buffer is divided into multiple sub-buffers. When an LTTng
1083tracer emits an event, it can record it to one or more
1084sub-buffers. The attributes of a channel determine what to do when
1085there's no space left for a new event record because all sub-buffers
1086are full, where to send a full sub-buffer, and other behaviours.
1087
1088A channel is always associated to a <<domain,tracing domain>>. The
1089`java.util.logging` (JUL), log4j, and Python tracing domains each have
1090a default channel which you cannot configure.
1091
1092A channel also owns <<event,event rules>>. When an LTTng tracer emits
1093an event, it records it to the sub-buffers of all
1094the enabled channels with a satisfied event rule, as long as those
1095channels are part of active <<tracing-session,tracing sessions>>.
1096
1097
1098[[channel-buffering-schemes]]
1099==== Per-user vs. per-process buffering schemes
1100
1101A channel has at least one ring buffer _per CPU_. LTTng always
1102records an event to the ring buffer associated to the CPU on which it
1103occurred.
1104
1105Two _buffering schemes_ are available when you
1106<<enabling-disabling-channels,create a channel>> in the
1107user space <<domain,tracing domain>>:
1108
1109Per-user buffering::
1110 Allocate one set of ring buffers--one per CPU--shared by all the
1111 instrumented processes of each Unix user.
1112+
1113--
1114[role="img-100"]
1115.Per-user buffering scheme.
1116image::per-user-buffering.png[]
1117--
1118
1119Per-process buffering::
1120 Allocate one set of ring buffers--one per CPU--for each
1121 instrumented process.
1122+
1123--
1124[role="img-100"]
1125.Per-process buffering scheme.
1126image::per-process-buffering.png[]
1127--
1128+
1129The per-process buffering scheme tends to consume more memory than the
1130per-user option because systems generally have more instrumented
1131processes than Unix users running instrumented processes. However, the
1132per-process buffering scheme ensures that one process having a high
1133event throughput won't fill all the shared sub-buffers of the same
1134user, only its own.
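
For example, the following commands create two user space channels, one
per buffering scheme. The channel names are placeholders for this
illustration:

[role="term"]
----
$ lttng enable-channel --userspace --buffers-uid my-per-user-channel
$ lttng enable-channel --userspace --buffers-pid my-per-process-channel
----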
1135
1136The Linux kernel tracing domain has only one available buffering scheme
1137which is to allocate a single set of ring buffers for the whole system.
1138This scheme is similar to the per-user option, but with a single, global
1139user "running" the kernel.
1140
1141
1142[[channel-overwrite-mode-vs-discard-mode]]
1143==== Overwrite vs. discard event loss modes
1144
1145When an event occurs, LTTng records it to a specific sub-buffer (yellow
1146arc in the following animation) of a specific channel's ring buffer.
1147When there's no space left in a sub-buffer, the tracer marks it as
1148consumable (red) and another, empty sub-buffer starts receiving the
1149following event records. A <<lttng-consumerd,consumer daemon>>
1150eventually consumes the marked sub-buffer (returns to white).
1151
1152[NOTE]
1153[role="docsvg-channel-subbuf-anim"]
1154====
1155{note-no-anim}
1156====
1157
1158In an ideal world, sub-buffers are consumed faster than they are filled,
1159as is the case in the previous animation. In the real world,
1160however, all sub-buffers can be full at some point, leaving no space to
1161record the following events.
1162
1163By default, LTTng-modules and LTTng-UST are _non-blocking_ tracers: when
1164no empty sub-buffer is available, it is acceptable to lose event records
1165when the alternative would be to cause substantial delays in the
1166instrumented application's execution. LTTng privileges performance over
1167integrity; it aims at perturbing the traced system as little as possible
1168in order to make tracing of subtle race conditions and rare interrupt
1169cascades possible.
1170
1171Starting from LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST,
1172supports a _blocking mode_. See the <<blocking-timeout-example,blocking
1173timeout example>> to learn how to use the blocking mode.
1174
1175When it comes to losing event records because no empty sub-buffer is
1176available, or because the <<opt-blocking-timeout,blocking timeout>> is
1177reached, the channel's _event loss mode_ determines what to do. The
1178available event loss modes are:
1179
Discard mode::
  Drop the newest event records until the tracer releases a
  sub-buffer.
+
This is the only available mode when you specify a
<<opt-blocking-timeout,blocking timeout>>.
1186
1187Overwrite mode::
1188 Clear the sub-buffer containing the oldest event records and start
1189 writing the newest event records there.
1190+
1191This mode is sometimes called _flight recorder mode_ because it's
1192similar to a
1193https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1194always keep a fixed amount of the latest data.
1195
1196Which mechanism you should choose depends on your context: prioritize
1197the newest or the oldest event records in the ring buffer?
1198
Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.
1203
In discard mode, LTTng increments a count of lost event records when an
event record is lost and saves this count to the trace. Since
LTTng{nbsp}2.8, in overwrite mode, LTTng writes to a given sub-buffer
its sequence number within its data stream. With a <<local-mode,local>>,
<<net-streaming-mode,network streaming>>, or <<live-mode,live>>
<<tracing-session,tracing session>>, a trace reader can use such
sequence numbers to report lost packets. In overwrite mode, LTTng
doesn't write to the trace the exact number of lost event records in
those lost sub-buffers.
1213
1214Trace analyses can use saved discarded event record and sub-buffer
1215(packet) counts of the trace to decide whether or not to perform the
1216analyses even if trace data is known to be missing.
1217
1218There are a few ways to decrease your probability of losing event
1219records.
1220<<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
1222virtually stop losing event records, though at the cost of greater
1223memory usage.
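
For example, the following commands create two Linux kernel channels,
one in discard mode (the default) and one in overwrite mode. The channel
names are placeholders for this illustration:

[role="term"]
----
# lttng enable-channel --kernel my-discard-channel
# lttng enable-channel --kernel --overwrite my-overwrite-channel
----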
1224
1225
1226[[channel-subbuf-size-vs-subbuf-count]]
1227==== Sub-buffer count and size
1228
1229When you <<enabling-disabling-channels,create a channel>>, you can
1230set its number of sub-buffers and their size.
1231
1232Note that there is noticeable CPU overhead introduced when
1233switching sub-buffers (marking a full one as consumable and switching
1234to an empty one for the following events to be recorded). Knowing this,
1235the following list presents a few practical situations along with how
1236to configure the sub-buffer count and size for them:
1237
1238* **High event throughput**: In general, prefer bigger sub-buffers to
1239 lower the risk of losing event records.
1240+
1241Having bigger sub-buffers also ensures a lower
1242<<channel-switch-timer,sub-buffer switching frequency>>.
1243+
1244The number of sub-buffers is only meaningful if you create the channel
1245in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1246other sub-buffers are left unaltered.
1247
1248* **Low event throughput**: In general, prefer smaller sub-buffers
1249 since the risk of losing event records is low.
1250+
1251Because events occur less frequently, the sub-buffer switching frequency
1252should remain low and thus the tracer's overhead should not be a
1253problem.
1254
1255* **Low memory system**: If your target system has a low memory
  limit, prefer fewer sub-buffers first, then smaller ones.
1257+
1258Even if the system is limited in memory, you want to keep the
1259sub-buffers as big as possible to avoid a high sub-buffer switching
1260frequency.
1261
1262Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1263which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
1265sub-buffer size of 1{nbsp}MiB is considered big.
1266
1267The previous situations highlight the major trade-off between a few big
1268sub-buffers and more, smaller sub-buffers: sub-buffer switching
1269frequency vs. how much data is lost in overwrite mode. Assuming a
1270constant event throughput and using the overwrite mode, the two
1271following configurations have the same ring buffer total size:
1272
1273[NOTE]
1274[role="docsvg-channel-subbuf-size-vs-count-anim"]
1275====
1276{note-no-anim}
1277====
1278
1279* **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1280 switching frequency, but if a sub-buffer overwrite happens, half of
1281 the event records so far (4{nbsp}MiB) are definitely lost.
* **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer
  overhead of the previous configuration, but if a sub-buffer
  overwrite happens, only an eighth of the event records so far are
  definitely lost.
1286
In discard mode, the sub-buffer count parameter is pointless: use two
1288sub-buffers and set their size according to the requirements of your
1289situation.
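
For example, the following command creates a user space channel with
8 sub-buffers of 1{nbsp}MiB each. The channel name is a placeholder for
this illustration:

[role="term"]
----
$ lttng enable-channel --userspace --num-subbuf=8 --subbuf-size=1M my-channel
----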
1290
1291
1292[[channel-switch-timer]]
1293==== Switch timer period
1294
1295The _switch timer period_ is an important configurable attribute of
1296a channel to ensure periodic sub-buffer flushing.
1297
1298When the _switch timer_ expires, a sub-buffer switch happens. You can
1299set the switch timer period attribute when you
1300<<enabling-disabling-channels,create a channel>> to ensure that event
1301data is consumed and committed to trace files or to a distant relay
1302daemon periodically in case of a low event throughput.
1303
1304[NOTE]
1305[role="docsvg-channel-switch-timer"]
1306====
1307{note-no-anim}
1308====
1309
1310This attribute is also convenient when you use big sub-buffers to cope
1311with a sporadic high event throughput, even if the throughput is
1312normally low.
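
For example, the following command creates a user space channel with a
switch timer period of one second, expressed in microseconds. The
channel name is a placeholder for this illustration:

[role="term"]
----
$ lttng enable-channel --userspace --switch-timer=1000000 my-channel
----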
1313
1314
1315[[channel-read-timer]]
1316==== Read timer period
1317
1318By default, the LTTng tracers use a notification mechanism to signal a
1319full sub-buffer so that a consumer daemon can consume it. When such
1320notifications must be avoided, for example in real-time applications,
1321you can use the channel's _read timer_ instead. When the read timer
1322fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1323consumable sub-buffers.
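
For example, the following command creates a user space channel with a
read timer period of 200{nbsp}ms, expressed in microseconds. The channel
name is a placeholder for this illustration:

[role="term"]
----
$ lttng enable-channel --userspace --read-timer=200000 my-channel
----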
1324
1325
1326[[tracefile-rotation]]
1327==== Trace file count and size
1328
1329By default, trace files can grow as large as needed. You can set the
1330maximum size of each trace file that a channel writes when you
1331<<enabling-disabling-channels,create a channel>>. When the size of
1332a trace file reaches the channel's fixed maximum size, LTTng creates
1333another file to contain the next event records. LTTng appends a file
1334count to each trace file name in this case.
1335
1336If you set the trace file size attribute when you create a channel, the
1337maximum number of trace files that LTTng creates is _unlimited_ by
1338default. To limit them, you can also set a maximum number of trace
1339files. When the number of trace files reaches the channel's fixed
1340maximum count, the oldest trace file is overwritten. This mechanism is
1341called _trace file rotation_.
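
For example, the following command creates a Linux kernel channel which
keeps at most 10 trace files of 1{nbsp}MiB each per stream. The channel
name is a placeholder for this illustration:

[role="term"]
----
# lttng enable-channel --kernel --tracefile-size=1M --tracefile-count=10 my-channel
----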
1342
1343
1344[[event]]
1345=== Instrumentation point, event rule, event, and event record
1346
1347An _event rule_ is a set of conditions which must be **all** satisfied
for LTTng to record an occurring event.
1349
1350You set the conditions when you <<enabling-disabling-events,create
1351an event rule>>.
1352
You always attach an event rule to a <<channel,channel>> when you create
1354it.
1355
1356When an event passes the conditions of an event rule, LTTng records it
1357in one of the attached channel's sub-buffers.
1358
1359The available conditions, as of LTTng{nbsp}{revision}, are:
1360
1361* The event rule _is enabled_.
1362* The instrumentation point's type _is{nbsp}T_.
1363* The instrumentation point's name (sometimes called _event name_)
1364 _matches{nbsp}N_, but _is not{nbsp}E_.
1365* The instrumentation point's log level _is as severe as{nbsp}L_, or
1366 _is exactly{nbsp}L_.
1367* The fields of the event's payload _satisfy_ a filter
1368 expression{nbsp}__F__.
1369
1370As you can see, all the conditions but the dynamic filter are related to
1371the event rule's status or to the instrumentation point, not to the
1372occurring events. This is why, without a filter, checking if an event
1373passes an event rule is not a dynamic task: when you create or modify an
1374event rule, all the tracers of its tracing domain enable or disable the
1375instrumentation points themselves once. This is possible because the
1376attributes of an instrumentation point (type, name, and log level) are
1377defined statically. In other words, without a dynamic filter, the tracer
1378_does not evaluate_ the arguments of an instrumentation point unless it
1379matches an enabled event rule.
1380
1381Note that, for LTTng to record an event, the <<channel,channel>> to
1382which a matching event rule is attached must also be enabled, and the
1383tracing session owning this channel must be active.
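
For example, the following command creates a user space event rule which
combines a name pattern, an exclusion, a log level condition, and a
filter expression. The instrumentation point names and the `msg_id`
payload field are placeholders for this illustration:

[role="term"]
----
$ lttng enable-event --userspace 'my_app:*' --exclude=my_app:internal \
                     --loglevel=TRACE_INFO --filter='msg_id < 100'
----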
1384
1385[role="img-100"]
1386.Logical path from an instrumentation point to an event record.
1387image::event-rule.png[]
1388
1389.Event, event record, or event rule?
1390****
1391With so many similar terms, it's easy to get confused.
1392
1393An **event** is the consequence of the execution of an _instrumentation
1394point_, like a tracepoint that you manually place in some source code,
1395or a Linux kernel KProbe. An event is said to _occur_ at a specific
1396time. Different actions can be taken upon the occurrence of an event,
1397like record the event's payload to a buffer.
1398
1399An **event record** is the representation of an event in a sub-buffer. A
1400tracer is responsible for capturing the payload of an event, current
1401context variables, the event's ID, and the event's timestamp. LTTng
1402can append this sub-buffer to a trace file.
1403
1404An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
1406satisfying event rules, but LTTng does not record them.
1407****
1408
1409
1410[[plumbing]]
1411== Components of noch:{LTTng}
1412
1413The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1414to call LTTng a simple _tool_ since it is composed of multiple
1415interacting components. This section describes those components,
1416explains their respective roles, and shows how they connect together to
1417form the LTTng ecosystem.
1418
1419The following diagram shows how the most important components of LTTng
1420interact with user applications, the Linux kernel, and you:
1421
1422[role="img-100"]
1423.Control and trace data paths between LTTng components.
1424image::plumbing.png[]
1425
1426The LTTng project incorporates:
1427
1428* **LTTng-tools**: Libraries and command-line interface to
1429 control tracing sessions.
1430** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
** <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1432** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1433** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1434** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1435* **LTTng-UST**: Libraries and Java/Python packages to trace user
1436 applications.
1437** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1438 headers to instrument and trace any native user application.
1439** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1440*** `liblttng-ust-libc-wrapper`
1441*** `liblttng-ust-pthread-wrapper`
1442*** `liblttng-ust-cyg-profile`
1443*** `liblttng-ust-cyg-profile-fast`
1444*** `liblttng-ust-dl`
1445** User space tracepoint provider source files generator command-line
1446 tool (man:lttng-gen-tp(1)).
1447** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1448 Java applications using `java.util.logging` or
1449 Apache log4j 1.2 logging.
1450** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1451 Python applications using the standard `logging` package.
1452* **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1453 the kernel.
1454** LTTng kernel tracer module.
1455** Tracing ring buffer kernel modules.
1456** Probe kernel modules.
1457** LTTng logger kernel module.
1458
1459
1460[[lttng-cli]]
1461=== Tracing control command-line interface
1462
1463[role="img-100"]
1464.The tracing control command-line interface.
1465image::plumbing-lttng-cli.png[]
1466
1467The _man:lttng(1) command-line tool_ is the standard user interface to
1468control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1469is part of LTTng-tools.
1470
1471The cmd:lttng tool is linked with
1472<<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1473one or more <<lttng-sessiond,session daemons>> behind the scenes.
1474
1475The cmd:lttng tool has a Git-like interface:
1476
1477[role="term"]
1478----
1479$ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1480----
1481
1482The <<controlling-tracing,Tracing control>> section explores the
1483available features of LTTng using the cmd:lttng tool.
1484
1485
1486[[liblttng-ctl-lttng]]
1487=== Tracing control library
1488
1489[role="img-100"]
1490.The tracing control library.
1491image::plumbing-liblttng-ctl.png[]
1492
1493The _LTTng control library_, `liblttng-ctl`, is used to communicate
1494with a <<lttng-sessiond,session daemon>> using a C API that hides the
1495underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1496
1497The <<lttng-cli,cmd:lttng command-line tool>>
1498is linked with `liblttng-ctl`.
1499
1500You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1501"master" header:
1502
1503[source,c]
1504----
1505#include <lttng/lttng.h>
1506----
1507
1508Some objects are referenced by name (C string), such as tracing
sessions, but most of them require you to create a handle first using
1510`lttng_create_handle()`.
1511
1512The best available developer documentation for `liblttng-ctl` is, as of
1513LTTng{nbsp}{revision}, its installed header files. Every function and
1514structure is thoroughly documented.
1515
1516
1517[[lttng-ust]]
1518=== User space tracing library
1519
1520[role="img-100"]
1521.The user space tracing library.
1522image::plumbing-liblttng-ust.png[]
1523
1524The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1525is the LTTng user space tracer. It receives commands from a
1526<<lttng-sessiond,session daemon>>, for example to
1527enable and disable specific instrumentation points, and writes event
1528records to ring buffers shared with a
1529<<lttng-consumerd,consumer daemon>>.
1530`liblttng-ust` is part of LTTng-UST.
1531
1532Public C header files are installed beside `liblttng-ust` to
1533instrument any <<c-application,C or $$C++$$ application>>.
1534
1535<<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1536packages, use their own library providing tracepoints which is
1537linked with `liblttng-ust`.
1538
1539An application or library does not have to initialize `liblttng-ust`
1540manually: its constructor does the necessary tasks to properly register
1541to a session daemon. The initialization phase also enables the
1542instrumentation points matching the <<event,event rules>> that you
1543already created.
1544
1545
1546[[lttng-ust-agents]]
1547=== User space tracing agents
1548
1549[role="img-100"]
1550.The user space tracing agents.
1551image::plumbing-lttng-ust-agents.png[]
1552
1553The _LTTng-UST Java and Python agents_ are regular Java and Python
1554packages which add LTTng tracing capabilities to the
1555native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1556
1557In the case of Java, the
1558https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1559core logging facilities] and
1560https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1562
1563In the case of Python, the standard
1564https://docs.python.org/3/library/logging.html[`logging`] package
1565is supported. Both Python 2 and Python 3 modules can import the
1566LTTng-UST Python agent package.
1567
1568The applications using the LTTng-UST agents are in the
1569`java.util.logging` (JUL),
1570log4j, and Python <<domain,tracing domains>>.
1571
1572Both agents use the same mechanism to trace the log statements. When an
1573agent is initialized, it creates a log handler that attaches to the root
1574logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1575When the application executes a log statement, it is passed to the
1576agent's log handler by the root logger. The agent's log handler calls a
1577native function in a tracepoint provider package shared library linked
1578with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1579other fields, like its logger name and its log level. This native
1580function contains a user space instrumentation point, hence tracing the
1581log statement.
1582
1583The log level condition of an
1584<<event,event rule>> is considered when tracing
1585a Java or a Python application, and it's compatible with the standard
1586JUL, log4j, and Python log levels.
1587
1588
1589[[lttng-modules]]
1590=== LTTng kernel modules
1591
1592[role="img-100"]
1593.The LTTng kernel modules.
1594image::plumbing-lttng-modules.png[]
1595
1596The _LTTng kernel modules_ are a set of Linux kernel modules
1597which implement the kernel tracer of the LTTng project. The LTTng
1598kernel modules are part of LTTng-modules.
1599
1600The LTTng kernel modules include:
1601
1602* A set of _probe_ modules.
1603+
1604Each module attaches to a specific subsystem
1605of the Linux kernel using its tracepoint instrumentation points. There are
1606also modules to attach to the entry and return points of the Linux
1607system call functions.
1608
1609* _Ring buffer_ modules.
1610+
1611A ring buffer implementation is provided as kernel modules. The LTTng
1612kernel tracer writes to the ring buffer; a
1613<<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1614
1615* The _LTTng kernel tracer_ module.
1616* The _LTTng logger_ module.
1617+
1618The LTTng logger module implements the special path:{/proc/lttng-logger}
1619file so that any executable can generate LTTng events by opening and
1620writing to this file.
1621+
1622See <<proc-lttng-logger-abi,LTTng logger>> and the short example below.
1623
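For example, assuming that the LTTng logger module is loaded and that
you can write to path:{/proc/lttng-logger}, the following command emits
an LTTng logger event from a shell:

[role="term"]
----
$ echo 'Hello, LTTng logger!' > /proc/lttng-logger
----
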
1624Generally, you do not have to load the LTTng kernel modules manually
1625(using man:modprobe(8), for example): a root <<lttng-sessiond,session
1626daemon>> loads the necessary modules when starting. If you have extra
1627probe modules, you can make the session daemon load them by using its
1628command-line options, as in the sketch below.
1629
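For example, the following sketch, which assumes a hypothetical extra
probe module named `lttng-probe-my-driver` installed in the module
search path, starts a root session daemon that also loads this probe
(see man:lttng-sessiond(8) for the details of this option):

[role="term"]
----
# lttng-sessiond --daemonize --extra-kmod-probes=my-driver
----
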
1630The LTTng kernel modules are installed in
1631+/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1632the kernel release (see `uname --kernel-release`).
1633
1634
1635[[lttng-sessiond]]
1636=== Session daemon
1637
1638[role="img-100"]
1639.The session daemon.
1640image::plumbing-sessiond.png[]
1641
1642The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
1643managing tracing sessions and for controlling the various components of
1644LTTng. The session daemon is part of LTTng-tools.
1645
1646The session daemon sends control requests to and receives control
1647responses from:
1648
1649* The <<lttng-ust,user space tracing library>>.
1650+
1651Any instance of the user space tracing library first registers to
1652a session daemon. Then, the session daemon can send requests to
1653this instance, such as:
1654+
1655--
1656** Get the list of tracepoints.
1657** Share an <<event,event rule>> so that the user space tracing library
1658 can enable or disable tracepoints. Amongst the possible conditions
1659   of an event rule is a filter expression which `liblttng-ust` evaluates
1660 when an event occurs.
1661** Share <<channel,channel>> attributes and ring buffer locations.
1662--
1663+
1664The session daemon and the user space tracing library use a Unix
1665domain socket for their communication.
1666
1667* The <<lttng-ust-agents,user space tracing agents>>.
1668+
1669Any instance of a user space tracing agent first registers to
1670a session daemon. Then, the session daemon can send requests to
1671this instance, such as:
1672+
1673--
1674** Get the list of loggers.
1675** Enable or disable a specific logger.
1676--
1677+
1678The session daemon and the user space tracing agent use a TCP connection
1679for their communication.
1680
1681* The <<lttng-modules,LTTng kernel tracer>>.
1682* The <<lttng-consumerd,consumer daemon>>.
1683+
1684The session daemon sends requests to the consumer daemon to instruct
1685it where to send the trace data streams, amongst other information.
1686
1687* The <<lttng-relayd,relay daemon>>.
1688
1689The session daemon receives commands from the
1690<<liblttng-ctl-lttng,tracing control library>>.
1691
1692The root session daemon loads the appropriate
1693<<lttng-modules,LTTng kernel modules>> on startup. It also spawns
1694a <<lttng-consumerd,consumer daemon>> as soon as you create
1695an <<event,event rule>>.
1696
1697The session daemon does not send and receive trace data: this is the
1698role of the <<lttng-consumerd,consumer daemon>> and
1699<<lttng-relayd,relay daemon>>. It does, however, generate the
1700http://diamon.org/ctf/[CTF] metadata stream.
1701
1702Each Unix user can have its own session daemon instance. The
1703tracing sessions managed by different session daemons are completely
1704independent.
1705
1706The root user's session daemon is the only one which is
1707allowed to control the LTTng kernel tracer, and its spawned consumer
1708daemon is the only one which is allowed to consume trace data from the
1709LTTng kernel tracer. Note, however, that any Unix user who is a member
1710of the <<tracing-group,tracing group>> is allowed
1711to create <<channel,channels>> in the
1712Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
1713kernel.
1714
1715The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
1716session daemon when you use its `create` command if none is currently
1717running. You can also start the session daemon manually.
1718
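For example, to manually start a session daemon for your Unix user as a
background daemon:

[role="term"]
----
$ lttng-sessiond --daemonize
----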
1719
1720[[lttng-consumerd]]
1721=== Consumer daemon
1722
1723[role="img-100"]
1724.The consumer daemon.
1725image::plumbing-consumerd.png[]
1726
1727The _consumer daemon_, cmd:lttng-consumerd, is a daemon which shares
1728ring buffers with user applications or with the LTTng kernel modules to
1729collect trace data and send it to some location (on disk or to a
1730<<lttng-relayd,relay daemon>> over the network). The consumer daemon
1731is part of LTTng-tools.
1732
1733You do not start a consumer daemon manually: a consumer daemon is always
1734spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
1735<<event,event rule>>, that is, before you start tracing. When you kill
1736its owner session daemon, the consumer daemon also exits because it is
1737the session daemon's child process. Some command-line options of
1738man:lttng-sessiond(8) configure the consumer daemon process it spawns.
1739
1740There are up to two running consumer daemons per Unix user, whereas only
1741one session daemon can run per user. This is because each process can be
1742either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
1743and 64-bit processes, it is more efficient to have separate
1744corresponding 32-bit and 64-bit consumer daemons. The root user is an
1745exception: it can have up to _three_ running consumer daemons: 32-bit
1746and 64-bit instances for its user applications, and one more
1747reserved for collecting kernel trace data.
1748
1749
1750[[lttng-relayd]]
1751=== Relay daemon
1752
1753[role="img-100"]
1754.The relay daemon.
1755image::plumbing-relayd.png[]
1756
1757The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
1758between remote session and consumer daemons, local trace files, and a
1759remote live trace viewer. The relay daemon is part of LTTng-tools.
1760
1761The main purpose of the relay daemon is to implement a receiver of
1762<<sending-trace-data-over-the-network,trace data over the network>>.
1763This is useful when the target system does not have much file system
1764space to record trace files locally.
1765
1766The relay daemon is also a server to which a
1767<<lttng-live,live trace viewer>> can
1768connect. The live trace viewer sends requests to the relay daemon to
1769receive trace data as the target system emits events. The
1770communication protocol is named _LTTng live_; it is used over TCP
1771connections.
1772
1773Note that you can start the relay daemon on the target system directly.
1774This is the setup of choice when the use case is to view events as
1775the target system emits them without the need for a remote system.
1776
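For example, to start a relay daemon on the target system itself as a
background daemon:

[role="term"]
----
$ lttng-relayd --daemonize
----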
1777
1778[[instrumenting]]
1779== [[using-lttng]]Instrumentation
1780
1781There are many examples of tracing and monitoring in our everyday life:
1782
1783* You have access to real-time and historical weather reports and
1784 forecasts thanks to weather stations installed around the country.
1785* You know your heart is safe thanks to an electrocardiogram.
1786* You make sure not to drive your car too fast and to have enough fuel
1787 to reach your destination thanks to gauges visible on your dashboard.
1788
1789All the previous examples have something in common: they rely on
1790**instruments**. Without the electrodes attached to the surface of your
1791body's skin, cardiac monitoring is futile.
1792
1793LTTng, as a tracer, is no different from those real life examples. If
1794you're about to trace a software system or, in other words, record its
1795history of execution, you better have **instrumentation points** in the
1796subject you're tracing, that is, the actual software.
1797
1798Various ways were developed to instrument a piece of software for LTTng
1799tracing. The most straightforward one is to manually place
1800instrumentation points, called _tracepoints_, in the software's source
1801code. It is also possible to add instrumentation points dynamically in
1802the Linux kernel <<domain,tracing domain>>.
1803
1804If you're only interested in tracing the Linux kernel, your
1805instrumentation needs are probably already covered by LTTng's built-in
1806<<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
1807user application which is already instrumented for LTTng tracing.
1808In such cases, you can skip this whole section and read the topics of
1809the <<controlling-tracing,Tracing control>> section.
1810
1811Many methods are available to instrument a piece of software for LTTng
1812tracing. They are:
1813
1814* <<c-application,User space instrumentation for C and $$C++$$
1815 applications>>.
1816* <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
1817* <<java-application,User space Java agent>>.
1818* <<python-application,User space Python agent>>.
1819* <<proc-lttng-logger-abi,LTTng logger>>.
1820* <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
1821
1822
1823[[c-application]]
1824=== [[cxx-application]]User space instrumentation for C and $$C++$$ applications
1825
1826The procedure to instrument a C or $$C++$$ user application with
1827the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
1828
1829. <<tracepoint-provider,Create the source files of a tracepoint provider
1830 package>>.
1831. <<probing-the-application-source-code,Add tracepoints to
1832 the application's source code>>.
1833. <<building-tracepoint-providers-and-user-application,Build and link
1834 a tracepoint provider package and the user application>>.
1835
1836If you need quick, man:printf(3)-like instrumentation, you can skip
1837those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
1838instead.
1839
1840IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
1841instrument a user application with `liblttng-ust`.
1842
1843
1844[[tracepoint-provider]]
1845==== Create the source files of a tracepoint provider package
1846
1847A _tracepoint provider_ is a set of compiled functions which provide
1848**tracepoints** to an application, the type of instrumentation point
1849supported by LTTng-UST. Those functions can emit events with
1850user-defined fields and serialize those events as event records to one
1851or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
1852macro, which you <<probing-the-application-source-code,insert in a user
1853application's source code>>, calls those functions.
1854
1855A _tracepoint provider package_ is an object file (`.o`) or a shared
1856library (`.so`) which contains one or more tracepoint providers.
1857Its source files are:
1858
1859* One or more <<tpp-header,tracepoint provider header>> (`.h`).
1860* A <<tpp-source,tracepoint provider package source>> (`.c`).
1861
1862A tracepoint provider package is dynamically linked with `liblttng-ust`,
1863the LTTng user space tracer, at run time.
1864
1865[role="img-100"]
1866.User application linked with `liblttng-ust` and containing a tracepoint provider.
1867image::ust-app.png[]
1868
1869NOTE: If you need quick, man:printf(3)-like instrumentation, you can
1870skip creating and using a tracepoint provider and use
1871<<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
1872
1873
1874[[tpp-header]]
1875===== Create a tracepoint provider header file template
1876
1877A _tracepoint provider header file_ contains the tracepoint
1878definitions of a tracepoint provider.
1879
1880To create a tracepoint provider header file:
1881
1882. Start from this template:
1883+
1884--
1885[source,c]
1886.Tracepoint provider header file template (`.h` file extension).
1887----
1888#undef TRACEPOINT_PROVIDER
1889#define TRACEPOINT_PROVIDER provider_name
1890
1891#undef TRACEPOINT_INCLUDE
1892#define TRACEPOINT_INCLUDE "./tp.h"
1893
1894#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1895#define _TP_H
1896
1897#include <lttng/tracepoint.h>
1898
1899/*
1900 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
1901 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
1902 */
1903
1904#endif /* _TP_H */
1905
1906#include <lttng/tracepoint-event.h>
1907----
1908--
1909
1910. Replace:
1911+
1912* `provider_name` with the name of your tracepoint provider.
1913* `"tp.h"` with the name of your tracepoint provider header file.
1914
1915. Below the `#include <lttng/tracepoint.h>` line, put your
1916 <<defining-tracepoints,tracepoint definitions>>.
1917
1918Your tracepoint provider name must be unique amongst all the possible
1919tracepoint provider names used on the same target system. We
1920suggest that you include the name of your project or company in the
1921name, for example, `org_lttng_my_project_tpp`.
1922
1923TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
1924this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
1925write are the <<defining-tracepoints,tracepoint definitions>>.
1926
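For example, assuming a template file named path:{tp.tp} which contains
only your tracepoint definitions, the following sketch generates the
path:{tp.h}, path:{tp.c}, and path:{tp.o} files (see man:lttng-gen-tp(1)
for the exact output options):

[role="term"]
----
$ lttng-gen-tp tp.tp
----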
1927
1928[[defining-tracepoints]]
1929===== Create a tracepoint definition
1930
1931A _tracepoint definition_ defines, for a given tracepoint:
1932
1933* Its **input arguments**. They are the macro parameters that the
1934 `tracepoint()` macro accepts for this particular tracepoint
1935 in the user application's source code.
1936* Its **output event fields**. They are the sources of event fields
1937 that form the payload of any event that the execution of the
1938 `tracepoint()` macro emits for this particular tracepoint.
1939
1940You can create a tracepoint definition by using the
1941`TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
1942line in the
1943<<tpp-header,tracepoint provider header file template>>.
1944
1945The syntax of the `TRACEPOINT_EVENT()` macro is:
1946
1947[source,c]
1948.`TRACEPOINT_EVENT()` macro syntax.
1949----
1950TRACEPOINT_EVENT(
1951 /* Tracepoint provider name */
1952 provider_name,
1953
1954 /* Tracepoint name */
1955 tracepoint_name,
1956
1957 /* Input arguments */
1958 TP_ARGS(
1959 arguments
1960 ),
1961
1962 /* Output event fields */
1963 TP_FIELDS(
1964 fields
1965 )
1966)
1967----
1968
1969Replace:
1970
1971* `provider_name` with your tracepoint provider name.
1972* `tracepoint_name` with your tracepoint name.
1973* `arguments` with the <<tpp-def-input-args,input arguments>>.
1974* `fields` with the <<tpp-def-output-fields,output event field>>
1975 definitions.
1976
1977This tracepoint emits events named `provider_name:tracepoint_name`.
1978
1979[IMPORTANT]
1980.Event name's length limitation
1981====
1982The concatenation of the tracepoint provider name and the
1983tracepoint name must not exceed **254 characters**. If it does, the
1984instrumented application compiles and runs, but LTTng throws multiple
1985warnings and you could experience serious issues.
1986====
1987
1988[[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
1989
1990[source,c]
1991.`TP_ARGS()` macro syntax.
1992----
1993TP_ARGS(
1994 type, arg_name
1995)
1996----
1997
1998Replace:
1999
2000* `type` with the C type of the argument.
2001* `arg_name` with the argument name.
2002
2003You can repeat `type` and `arg_name` up to 10 times to have
2004more than one argument.
2005
2006.`TP_ARGS()` usage with three arguments.
2007====
2008[source,c]
2009----
2010TP_ARGS(
2011 int, count,
2012 float, ratio,
2013 const char*, query
2014)
2015----
2016====
2017
2018The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2019tracepoint definition with no input arguments.
2020
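For example, the following sketch defines a hypothetical
`my_provider:my_app_started` tracepoint which takes no input arguments;
its single output event field uses a constant argument expression:

[source,c]
----
TRACEPOINT_EVENT(
    my_provider,
    my_app_started,
    TP_ARGS(void),
    TP_FIELDS(
        ctf_integer(int, version, 1)
    )
)
----
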
2021[[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2022`ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2023man:lttng-ust(3) for a complete description of the available `ctf_*()`
2024macros. A `ctf_*()` macro specifies the type, size, and byte order of
2025one event field.
2026
2027Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
2028C expression that the tracer evaluates at the `tracepoint()` macro site
2029in the application's source code. This expression provides a field's
2030source of data. The argument expression can include input argument names
2031listed in the `TP_ARGS()` macro.
2032
2033Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2034must be unique within a given tracepoint definition.
2035
2036Here's a complete tracepoint definition example:
2037
2038.Tracepoint definition.
2039====
2040The following tracepoint definition defines a tracepoint which takes
2041three input arguments and has four output event fields.
2042
2043[source,c]
2044----
2045#include "my-custom-structure.h"
2046
2047TRACEPOINT_EVENT(
2048 my_provider,
2049 my_tracepoint,
2050 TP_ARGS(
2051 const struct my_custom_structure*, my_custom_structure,
2052 float, ratio,
2053 const char*, query
2054 ),
2055 TP_FIELDS(
2056 ctf_string(query_field, query)
2057 ctf_float(double, ratio_field, ratio)
2058 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2059 ctf_integer(int, send_size, my_custom_structure->send_size)
2060 )
2061)
2062----
2063
2064You can refer to this tracepoint definition with the `tracepoint()`
2065macro in your application's source code like this:
2066
2067[source,c]
2068----
2069tracepoint(my_provider, my_tracepoint,
2070 my_structure, some_ratio, the_query);
2071----
2072====
2073
2074NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2075if they satisfy an enabled <<event,event rule>>.
2076
2077
2078[[using-tracepoint-classes]]
2079===== Use a tracepoint class
2080
2081A _tracepoint class_ is a class of tracepoints which share the same
2082output event field definitions. A _tracepoint instance_ is one
2083instance of such a defined tracepoint class, with its own tracepoint
2084name.
2085
2086The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2087shorthand which defines both a tracepoint class and a tracepoint
2088instance at the same time.
2089
2090When you build a tracepoint provider package, the C or $$C++$$ compiler
2091creates one serialization function for each **tracepoint class**. A
2092serialization function is responsible for serializing the event fields
2093of a tracepoint to a sub-buffer when tracing.
2094
2095For various performance reasons, when your situation requires multiple
2096tracepoint definitions with different names, but with the same event
2097fields, we recommend that you manually create a tracepoint class
2098and instantiate as many tracepoint instances as needed. One positive
2099effect of such a design, amongst other advantages, is that all
2100tracepoint instances of the same tracepoint class reuse the same
2101serialization function, thus reducing
2102https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2103
2104.Use a tracepoint class and tracepoint instances.
2105====
2106Consider the following three tracepoint definitions:
2107
2108[source,c]
2109----
2110TRACEPOINT_EVENT(
2111 my_app,
2112 get_account,
2113 TP_ARGS(
2114 int, userid,
2115 size_t, len
2116 ),
2117 TP_FIELDS(
2118 ctf_integer(int, userid, userid)
2119 ctf_integer(size_t, len, len)
2120 )
2121)
2122
2123TRACEPOINT_EVENT(
2124 my_app,
2125 get_settings,
2126 TP_ARGS(
2127 int, userid,
2128 size_t, len
2129 ),
2130 TP_FIELDS(
2131 ctf_integer(int, userid, userid)
2132 ctf_integer(size_t, len, len)
2133 )
2134)
2135
2136TRACEPOINT_EVENT(
2137 my_app,
2138 get_transaction,
2139 TP_ARGS(
2140 int, userid,
2141 size_t, len
2142 ),
2143 TP_FIELDS(
2144 ctf_integer(int, userid, userid)
2145 ctf_integer(size_t, len, len)
2146 )
2147)
2148----
2149
2150In this case, we create three tracepoint classes, with one implicit
2151tracepoint instance for each of them: `get_account`, `get_settings`, and
2152`get_transaction`. However, they all share the same event field names
2153and types. Hence three identical, yet independent serialization
2154functions are created when you build the tracepoint provider package.
2155
2156A better design choice is to define a single tracepoint class and three
2157tracepoint instances:
2158
2159[source,c]
2160----
2161/* The tracepoint class */
2162TRACEPOINT_EVENT_CLASS(
2163 /* Tracepoint provider name */
2164 my_app,
2165
2166 /* Tracepoint class name */
2167 my_class,
2168
2169 /* Input arguments */
2170 TP_ARGS(
2171 int, userid,
2172 size_t, len
2173 ),
2174
2175 /* Output event fields */
2176 TP_FIELDS(
2177 ctf_integer(int, userid, userid)
2178 ctf_integer(size_t, len, len)
2179 )
2180)
2181
2182/* The tracepoint instances */
2183TRACEPOINT_EVENT_INSTANCE(
2184 /* Tracepoint provider name */
2185 my_app,
2186
2187 /* Tracepoint class name */
2188 my_class,
2189
2190 /* Tracepoint name */
2191 get_account,
2192
2193 /* Input arguments */
2194 TP_ARGS(
2195 int, userid,
2196 size_t, len
2197 )
2198)
2199TRACEPOINT_EVENT_INSTANCE(
2200 my_app,
2201 my_class,
2202 get_settings,
2203 TP_ARGS(
2204 int, userid,
2205 size_t, len
2206 )
2207)
2208TRACEPOINT_EVENT_INSTANCE(
2209 my_app,
2210 my_class,
2211 get_transaction,
2212 TP_ARGS(
2213 int, userid,
2214 size_t, len
2215 )
2216)
2217----
2218====
2219
2220
2221[[assigning-log-levels]]
2222===== Assign a log level to a tracepoint definition
2223
2224You can assign an optional _log level_ to a
2225<<defining-tracepoints,tracepoint definition>>.
2226
2227Assigning different levels of severity to tracepoint definitions can
2228be useful: when you <<enabling-disabling-events,create an event rule>>,
2229you can target tracepoints having a log level at least as severe as a
2230specific value.
2231
2232The concept of LTTng-UST log levels is similar to the levels found
2233in typical logging frameworks:
2234
2235* In a logging framework, the log level is given by the function
2236 or method name you use at the log statement site: `debug()`,
2237 `info()`, `warn()`, `error()`, and so on.
2238* In LTTng-UST, you statically assign the log level to a tracepoint
2239 definition; any `tracepoint()` macro invocation which refers to
2240 this definition has this log level.
2241
2242You can assign a log level to a tracepoint definition with the
2243`TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2244<<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
2245<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a given
2246tracepoint.
2247
2248The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2249
2250[source,c]
2251.`TRACEPOINT_LOGLEVEL()` macro syntax.
2252----
2253TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2254----
2255
2256Replace:
2257
2258* `provider_name` with the tracepoint provider name.
2259* `tracepoint_name` with the tracepoint name.
2260* `log_level` with the log level to assign to the tracepoint
2261 definition named `tracepoint_name` in the `provider_name`
2262 tracepoint provider.
2263+
2264See man:lttng-ust(3) for a list of available log level names.
2265
2266.Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2267====
2268[source,c]
2269----
2270/* Tracepoint definition */
2271TRACEPOINT_EVENT(
2272 my_app,
2273 get_transaction,
2274 TP_ARGS(
2275 int, userid,
2276 size_t, len
2277 ),
2278 TP_FIELDS(
2279 ctf_integer(int, userid, userid)
2280 ctf_integer(size_t, len, len)
2281 )
2282)
2283
2284/* Log level assignment */
2285TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2286----
2287====
2288
2289
2290[[tpp-source]]
2291===== Create a tracepoint provider package source file
2292
2293A _tracepoint provider package source file_ is a C source file which
2294includes a <<tpp-header,tracepoint provider header file>> to expand its
2295macros into event serialization and other functions.
2296
2297You can always use the following tracepoint provider package source
2298file template:
2299
2300[source,c]
2301.Tracepoint provider package source file template.
2302----
2303#define TRACEPOINT_CREATE_PROBES
2304
2305#include "tp.h"
2306----
2307
2308Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
2309header file>>. You may also include more than one tracepoint
2310provider header file here to create a tracepoint provider package
2311holding more than one tracepoint provider.
2312
2313
2314[[probing-the-application-source-code]]
2315==== Add tracepoints to an application's source code
2316
2317Once you <<tpp-header,create a tracepoint provider header file>>, you
2318can use the `tracepoint()` macro in your application's
2319source code to insert the tracepoints that this header
2320<<defining-tracepoints,defines>>.
2321
2322The `tracepoint()` macro takes at least two parameters: the tracepoint
2323provider name and the tracepoint name. The corresponding tracepoint
2324definition defines the other parameters.
2325
2326.`tracepoint()` usage.
2327====
2328The following <<defining-tracepoints,tracepoint definition>> defines a
2329tracepoint which takes two input arguments and has two output event
2330fields.
2331
2332[source,c]
2333.Tracepoint provider header file.
2334----
2335#include "my-custom-structure.h"
2336
2337TRACEPOINT_EVENT(
2338 my_provider,
2339 my_tracepoint,
2340 TP_ARGS(
2341 int, argc,
2342 const char*, cmd_name
2343 ),
2344 TP_FIELDS(
2345 ctf_string(cmd_name, cmd_name)
2346 ctf_integer(int, number_of_args, argc)
2347 )
2348)
2349----
2350
2351You can refer to this tracepoint definition with the `tracepoint()`
2352macro in your application's source code like this:
2353
2354[source,c]
2355.Application's source file.
2356----
2357#include "tp.h"
2358
2359int main(int argc, char* argv[])
2360{
2361 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2362
2363 return 0;
2364}
2365----
2366
2367Note how the application's source code includes
2368the tracepoint provider header file containing the tracepoint
2369definitions to use, path:{tp.h}.
2370====
2371
2372.`tracepoint()` usage with a complex tracepoint definition.
2373====
2374Consider this complex tracepoint definition, where multiple event
2375fields refer to the same input arguments in their argument expression
2376parameter:
2377
2378[source,c]
2379.Tracepoint provider header file.
2380----
2381/* For `struct stat` */
2382#include <sys/types.h>
2383#include <sys/stat.h>
2384#include <unistd.h>
2385
2386TRACEPOINT_EVENT(
2387 my_provider,
2388 my_tracepoint,
2389 TP_ARGS(
2390 int, my_int_arg,
2391 char*, my_str_arg,
2392 struct stat*, st
2393 ),
2394 TP_FIELDS(
2395 ctf_integer(int, my_constant_field, 23 + 17)
2396 ctf_integer(int, my_int_arg_field, my_int_arg)
2397 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2398 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2399 my_str_arg[2] + my_str_arg[3])
2400 ctf_string(my_str_arg_field, my_str_arg)
2401 ctf_integer_hex(off_t, size_field, st->st_size)
2402 ctf_float(double, size_dbl_field, (double) st->st_size)
2403 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2404 size_t, strlen(my_str_arg) / 2)
2405 )
2406)
2407----
2408
2409You can refer to this tracepoint definition with the `tracepoint()`
2410macro in your application's source code like this:
2411
2412[source,c]
2413.Application's source file.
2414----
2415#define TRACEPOINT_DEFINE
2416#include "tp.h"
2417
2418int main(void)
2419{
2420 struct stat s;
2421
2422 stat("/etc/fstab", &s);
2423 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2424
2425 return 0;
2426}
2427----
2428
2429If you look at the event record that LTTng writes when tracing this
2430program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2431it should look like this:
2432
2433.Event record fields
2434|====
2435|Field's name |Field's value
2436|`my_constant_field` |40
2437|`my_int_arg_field` |23
2438|`my_int_arg_field2` |529
2439|`sum4_field` |389
2440|`my_str_arg_field` |`Hello, World!`
2441|`size_field` |0x12d
2442|`size_dbl_field` |301.0
2443|`half_my_str_arg_field` |`Hello,`
2444|====
2445====
2446
2447Sometimes, the arguments you pass to `tracepoint()` are expensive to
2448compute--they use the call stack, for example. To avoid this
2449computation when the tracepoint is disabled, you can use the
2450`tracepoint_enabled()` and `do_tracepoint()` macros.
2451
2452The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2453is:
2454
2455[source,c]
2456.`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2457----
2458tracepoint_enabled(provider_name, tracepoint_name)
2459do_tracepoint(provider_name, tracepoint_name, ...)
2460----
2461
2462Replace:
2463
2464* `provider_name` with the tracepoint provider name.
2465* `tracepoint_name` with the tracepoint name.
2466
2467`tracepoint_enabled()` returns a non-zero value if the tracepoint named
2468`tracepoint_name` from the provider named `provider_name` is enabled
2469**at run time**.
2470
2471`do_tracepoint()` is like `tracepoint()`, except that it doesn't check
2472whether the tracepoint is enabled. Using `tracepoint()` with
2473`tracepoint_enabled()` is dangerous since `tracepoint()` also contains
2474the `tracepoint_enabled()` check, so a race condition is
2475possible in this situation:
2476
2477[source,c]
2478.Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2479----
2480if (tracepoint_enabled(my_provider, my_tracepoint)) {
2481 stuff = prepare_stuff();
2482}
2483
2484tracepoint(my_provider, my_tracepoint, stuff);
2485----
2486
2487If the tracepoint becomes enabled after the check, then `stuff` is not
2488prepared: the emitted event either contains wrong data, or the whole
2489application could crash (segmentation fault, for example).
2490
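To avoid this race condition, perform the check and the emission within
the same conditional block, using `do_tracepoint()` for the emission:

[source,c]
.Race-free usage of `tracepoint_enabled()` and `do_tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----
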
2491NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` has an
2492`STAP_PROBEV()` call. If you need it, you must emit
2493this call yourself.
2494
2495
2496[[building-tracepoint-providers-and-user-application]]
2497==== Build and link a tracepoint provider package and an application
2498
2499Once you have one or more <<tpp-header,tracepoint provider header
2500files>> and a <<tpp-source,tracepoint provider package source file>>,
2501you can create the tracepoint provider package by compiling its source
2502file. From here, multiple build and run scenarios are possible. The
2503following table shows common application and library configurations
2504along with the required command lines to achieve them.
2505
2506In the following diagrams, we use the following file names:
2507
2508`app`::
2509 Executable application.
2510
2511`app.o`::
2512 Application's object file.
2513
2514`tpp.o`::
2515 Tracepoint provider package object file.
2516
2517`tpp.a`::
2518 Tracepoint provider package archive file.
2519
2520`libtpp.so`::
2521 Tracepoint provider package shared object file.
2522
2523`emon.o`::
2524 User library object file.
2525
2526`libemon.so`::
2527 User library shared object file.
2528
2529We use the following symbols in the diagrams of the table below:
2530
2531[role="img-100"]
2532.Symbols used in the build scenario diagrams.
2533image::ust-sit-symbols.png[]
2534
2535We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2536variable in the following instructions.
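
For example, you can add path:{.} to env:LD_LIBRARY_PATH for your
current shell like this:

[role="term"]
----
$ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
----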
2537
2538[role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2539.Common tracepoint provider package scenarios.
2540|====
2541|Scenario |Instructions
2542
2543|
2544The instrumented application is statically linked with
2545the tracepoint provider package object.
2546
2547image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2548
2549|
2550include::../common/ust-sit-step-tp-o.txt[]
2551
2552To build the instrumented application:
2553
2554. In path:{app.c}, before including path:{tpp.h}, add the following line:
2555+
2556--
2557[source,c]
2558----
2559#define TRACEPOINT_DEFINE
2560----
2561--
2562
2563. Compile the application source file:
2564+
2565--
2566[role="term"]
2567----
2568$ gcc -c app.c
2569----
2570--
2571
2572. Build the application:
2573+
2574--
2575[role="term"]
2576----
2577$ gcc -o app app.o tpp.o -llttng-ust -ldl
2578----
2579--
2580
2581To run the instrumented application:
2582
2583* Start the application:
2584+
2585--
2586[role="term"]
2587----
2588$ ./app
2589----
2590--
2591
2592|
2593The instrumented application is statically linked with the
2594tracepoint provider package archive file.
2595
2596image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2597
2598|
2599To create the tracepoint provider package archive file:
2600
2601. Compile the <<tpp-source,tracepoint provider package source file>>:
2602+
2603--
2604[role="term"]
2605----
2606$ gcc -I. -c tpp.c
2607----
2608--
2609
2610. Create the tracepoint provider package archive file:
2611+
2612--
2613[role="term"]
2614----
2615$ ar rcs tpp.a tpp.o
2616----
2617--
2618
2619To build the instrumented application:
2620
2621. In path:{app.c}, before including path:{tpp.h}, add the following line:
2622+
2623--
2624[source,c]
2625----
2626#define TRACEPOINT_DEFINE
2627----
2628--
2629
2630. Compile the application source file:
2631+
2632--
2633[role="term"]
2634----
2635$ gcc -c app.c
2636----
2637--
2638
2639. Build the application:
2640+
2641--
2642[role="term"]
2643----
2644$ gcc -o app app.o tpp.a -llttng-ust -ldl
2645----
2646--
2647
2648To run the instrumented application:
2649
2650* Start the application:
2651+
2652--
2653[role="term"]
2654----
2655$ ./app
2656----
2657--
2658
2659|
2660The instrumented application is linked with the tracepoint provider
2661package shared object.
2662
2663image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
2664
2665|
2666include::../common/ust-sit-step-tp-so.txt[]
2667
2668To build the instrumented application:
2669
2670. In path:{app.c}, before including path:{tpp.h}, add the following line:
2671+
2672--
2673[source,c]
2674----
2675#define TRACEPOINT_DEFINE
2676----
2677--
2678
2679. Compile the application source file:
2680+
2681--
2682[role="term"]
2683----
2684$ gcc -c app.c
2685----
2686--
2687
2688. Build the application:
2689+
2690--
2691[role="term"]
2692----
2693$ gcc -o app app.o -ldl -L. -ltpp
2694----
2695--
2696
2697To run the instrumented application:
2698
2699* Start the application:
2700+
2701--
2702[role="term"]
2703----
2704$ ./app
2705----
2706--
2707
2708|
2709The tracepoint provider package shared object is preloaded before the
2710instrumented application starts.
2711
2712image::ust-sit+tp-so-preloaded+app-instrumented.png[]
2713
2714|
2715include::../common/ust-sit-step-tp-so.txt[]
2716
2717To build the instrumented application:
2718
2719. In path:{app.c}, before including path:{tpp.h}, add the
2720 following lines:
2721+
2722--
2723[source,c]
2724----
2725#define TRACEPOINT_DEFINE
2726#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2727----
2728--
2729
2730. Compile the application source file:
2731+
2732--
2733[role="term"]
2734----
2735$ gcc -c app.c
2736----
2737--
2738
2739. Build the application:
2740+
2741--
2742[role="term"]
2743----
2744$ gcc -o app app.o -ldl
2745----
2746--
2747
2748To run the instrumented application with tracing support:
2749
2750* Preload the tracepoint provider package shared object and
2751 start the application:
2752+
2753--
2754[role="term"]
2755----
2756$ LD_PRELOAD=./libtpp.so ./app
2757----
2758--
2759
2760To run the instrumented application without tracing support:
2761
2762* Start the application:
2763+
2764--
2765[role="term"]
2766----
2767$ ./app
2768----
2769--
2770
2771|
2772The instrumented application dynamically loads the tracepoint provider
2773package shared object.
2774
2775See the <<dlclose-warning,warning about `dlclose()`>>.
2776
2777image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
2778
2779|
2780include::../common/ust-sit-step-tp-so.txt[]
2781
2782To build the instrumented application:
2783
2784. In path:{app.c}, before including path:{tpp.h}, add the
2785 following lines:
2786+
2787--
2788[source,c]
2789----
2790#define TRACEPOINT_DEFINE
2791#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2792----
2793--
2794
2795. Compile the application source file:
2796+
2797--
2798[role="term"]
2799----
2800$ gcc -c app.c
2801----
2802--
2803
2804. Build the application:
2805+
2806--
2807[role="term"]
2808----
2809$ gcc -o app app.o -ldl
2810----
2811--
2812
2813To run the instrumented application:
2814
2815* Start the application:
2816+
2817--
2818[role="term"]
2819----
2820$ ./app
2821----
2822--
2823
2824|
2825The application is linked with the instrumented user library.
2826
2827The instrumented user library is statically linked with the tracepoint
2828provider package object file.
2829
2830image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
2831
2832|
2833include::../common/ust-sit-step-tp-o-fpic.txt[]
2834
2835To build the instrumented user library:
2836
2837. In path:{emon.c}, before including path:{tpp.h}, add the
2838 following line:
2839+
2840--
2841[source,c]
2842----
2843#define TRACEPOINT_DEFINE
2844----
2845--
2846
2847. Compile the user library source file:
2848+
2849--
2850[role="term"]
2851----
2852$ gcc -I. -fpic -c emon.c
2853----
2854--
2855
2856. Build the user library shared object:
2857+
2858--
2859[role="term"]
2860----
2861$ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
2862----
2863--
2864
2865To build the application:
2866
2867. Compile the application source file:
2868+
2869--
2870[role="term"]
2871----
2872$ gcc -c app.c
2873----
2874--
2875
2876. Build the application:
2877+
2878--
2879[role="term"]
2880----
2881$ gcc -o app app.o -L. -lemon
2882----
2883--
2884
2885To run the application:
2886
2887* Start the application:
2888+
2889--
2890[role="term"]
2891----
2892$ ./app
2893----
2894--
2895
2896|
2897The application is linked with the instrumented user library.
2898
2899The instrumented user library is linked with the tracepoint provider
2900package shared object.
2901
2902image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
2903
2904|
2905include::../common/ust-sit-step-tp-so.txt[]
2906
2907To build the instrumented user library:
2908
2909. In path:{emon.c}, before including path:{tpp.h}, add the
2910 following line:
2911+
2912--
2913[source,c]
2914----
2915#define TRACEPOINT_DEFINE
2916----
2917--
2918
2919. Compile the user library source file:
2920+
2921--
2922[role="term"]
2923----
2924$ gcc -I. -fpic -c emon.c
2925----
2926--
2927
2928. Build the user library shared object:
2929+
2930--
2931[role="term"]
2932----
2933$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
2934----
2935--
2936
2937To build the application:
2938
2939. Compile the application source file:
2940+
2941--
2942[role="term"]
2943----
2944$ gcc -c app.c
2945----
2946--
2947
2948. Build the application:
2949+
2950--
2951[role="term"]
2952----
2953$ gcc -o app app.o -L. -lemon
2954----
2955--
2956
2957To run the application:
2958
2959* Start the application:
2960+
2961--
2962[role="term"]
2963----
2964$ ./app
2965----
2966--
2967
2968|
2969The tracepoint provider package shared object is preloaded before the
2970application starts.
2971
2972The application is linked with the instrumented user library.
2973
2974image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
2975
2976|
2977include::../common/ust-sit-step-tp-so.txt[]
2978
2979To build the instrumented user library:
2980
2981. In path:{emon.c}, before including path:{tpp.h}, add the
2982 following lines:
2983+
2984--
2985[source,c]
2986----
2987#define TRACEPOINT_DEFINE
2988#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2989----
2990--
2991
2992. Compile the user library source file:
2993+
2994--
2995[role="term"]
2996----
2997$ gcc -I. -fpic -c emon.c
2998----
2999--
3000
3001. Build the user library shared object:
3002+
3003--
3004[role="term"]
3005----
3006$ gcc -shared -o libemon.so emon.o -ldl
3007----
3008--
3009
3010To build the application:
3011
3012. Compile the application source file:
3013+
3014--
3015[role="term"]
3016----
3017$ gcc -c app.c
3018----
3019--
3020
3021. Build the application:
3022+
3023--
3024[role="term"]
3025----
3026$ gcc -o app app.o -L. -lemon
3027----
3028--
3029
3030To run the application with tracing support:
3031
3032* Preload the tracepoint provider package shared object and
3033 start the application:
3034+
3035--
3036[role="term"]
3037----
3038$ LD_PRELOAD=./libtpp.so ./app
3039----
3040--
3041
3042To run the application without tracing support:
3043
3044* Start the application:
3045+
3046--
3047[role="term"]
3048----
3049$ ./app
3050----
3051--
3052
3053|
3054The application is linked with the instrumented user library.
3055
3056The instrumented user library dynamically loads the tracepoint provider
3057package shared object.
3058
3059See the <<dlclose-warning,warning about `dlclose()`>>.
3060
3061image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3062
3063|
3064include::../common/ust-sit-step-tp-so.txt[]
3065
3066To build the instrumented user library:
3067
3068. In path:{emon.c}, before including path:{tpp.h}, add the
3069 following lines:
3070+
3071--
3072[source,c]
3073----
3074#define TRACEPOINT_DEFINE
3075#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3076----
3077--
3078
3079. Compile the user library source file:
3080+
3081--
3082[role="term"]
3083----
3084$ gcc -I. -fpic -c emon.c
3085----
3086--
3087
3088. Build the user library shared object:
3089+
3090--
3091[role="term"]
3092----
3093$ gcc -shared -o libemon.so emon.o -ldl
3094----
3095--
3096
3097To build the application:
3098
3099. Compile the application source file:
3100+
3101--
3102[role="term"]
3103----
3104$ gcc -c app.c
3105----
3106--
3107
3108. Build the application:
3109+
3110--
3111[role="term"]
3112----
3113$ gcc -o app app.o -L. -lemon
3114----
3115--
3116
3117To run the application:
3118
3119* Start the application:
3120+
3121--
3122[role="term"]
3123----
3124$ ./app
3125----
3126--
3127
3128|
3129The application dynamically loads the instrumented user library.
3130
3131The instrumented user library is linked with the tracepoint provider
3132package shared object.
3133
3134See the <<dlclose-warning,warning about `dlclose()`>>.
3135
3136image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3137
3138|
3139include::../common/ust-sit-step-tp-so.txt[]
3140
3141To build the instrumented user library:
3142
3143. In path:{emon.c}, before including path:{tpp.h}, add the
3144 following line:
3145+
3146--
3147[source,c]
3148----
3149#define TRACEPOINT_DEFINE
3150----
3151--
3152
3153. Compile the user library source file:
3154+
3155--
3156[role="term"]
3157----
3158$ gcc -I. -fpic -c emon.c
3159----
3160--
3161
3162. Build the user library shared object:
3163+
3164--
3165[role="term"]
3166----
3167$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3168----
3169--
3170
3171To build the application:
3172
3173. Compile the application source file:
3174+
3175--
3176[role="term"]
3177----
3178$ gcc -c app.c
3179----
3180--
3181
3182. Build the application:
3183+
3184--
3185[role="term"]
3186----
3187$ gcc -o app app.o -ldl -L. -lemon
3188----
3189--
3190
3191To run the application:
3192
3193* Start the application:
3194+
3195--
3196[role="term"]
3197----
3198$ ./app
3199----
3200--
3201
3202|
3203The application dynamically loads the instrumented user library.
3204
3205The instrumented user library dynamically loads the tracepoint provider
3206package shared object.
3207
3208See the <<dlclose-warning,warning about `dlclose()`>>.
3209
3210image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3211
3212|
3213include::../common/ust-sit-step-tp-so.txt[]
3214
3215To build the instrumented user library:
3216
3217. In path:{emon.c}, before including path:{tpp.h}, add the
3218 following lines:
3219+
3220--
3221[source,c]
3222----
3223#define TRACEPOINT_DEFINE
3224#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3225----
3226--
3227
3228. Compile the user library source file:
3229+
3230--
3231[role="term"]
3232----
3233$ gcc -I. -fpic -c emon.c
3234----
3235--
3236
3237. Build the user library shared object:
3238+
3239--
3240[role="term"]
3241----
3242$ gcc -shared -o libemon.so emon.o -ldl
3243----
3244--
3245
3246To build the application:
3247
3248. Compile the application source file:
3249+
3250--
3251[role="term"]
3252----
3253$ gcc -c app.c
3254----
3255--
3256
3257. Build the application:
3258+
3259--
3260[role="term"]
3261----
3262$ gcc -o app app.o -ldl -L. -lemon
3263----
3264--
3265
3266To run the application:
3267
3268* Start the application:
3269+
3270--
3271[role="term"]
3272----
3273$ ./app
3274----
3275--
3276
3277|
3278The tracepoint provider package shared object is preloaded before the
3279application starts.
3280
3281The application dynamically loads the instrumented user library.
3282
3283image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3284
3285|
3286include::../common/ust-sit-step-tp-so.txt[]
3287
3288To build the instrumented user library:
3289
3290. In path:{emon.c}, before including path:{tpp.h}, add the
3291 following lines:
3292+
3293--
3294[source,c]
3295----
3296#define TRACEPOINT_DEFINE
3297#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3298----
3299--
3300
3301. Compile the user library source file:
3302+
3303--
3304[role="term"]
3305----
3306$ gcc -I. -fpic -c emon.c
3307----
3308--
3309
3310. Build the user library shared object:
3311+
3312--
3313[role="term"]
3314----
3315$ gcc -shared -o libemon.so emon.o -ldl
3316----
3317--
3318
3319To build the application:
3320
3321. Compile the application source file:
3322+
3323--
3324[role="term"]
3325----
3326$ gcc -c app.c
3327----
3328--
3329
3330. Build the application:
3331+
3332--
3333[role="term"]
3334----
3335$ gcc -o app app.o -L. -lemon
3336----
3337--
3338
3339To run the application with tracing support:
3340
3341* Preload the tracepoint provider package shared object and
3342 start the application:
3343+
3344--
3345[role="term"]
3346----
3347$ LD_PRELOAD=./libtpp.so ./app
3348----
3349--
3350
3351To run the application without tracing support:
3352
3353* Start the application:
3354+
3355--
3356[role="term"]
3357----
3358$ ./app
3359----
3360--
3361
3362|
3363The application is statically linked with the tracepoint provider
3364package object file.
3365
3366The application is linked with the instrumented user library.
3367
3368image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3369
3370|
3371include::../common/ust-sit-step-tp-o.txt[]
3372
3373To build the instrumented user library:
3374
3375. In path:{emon.c}, before including path:{tpp.h}, add the
3376 following line:
3377+
3378--
3379[source,c]
3380----
3381#define TRACEPOINT_DEFINE
3382----
3383--
3384
3385. Compile the user library source file:
3386+
3387--
3388[role="term"]
3389----
3390$ gcc -I. -fpic -c emon.c
3391----
3392--
3393
3394. Build the user library shared object:
3395+
3396--
3397[role="term"]
3398----
3399$ gcc -shared -o libemon.so emon.o
3400----
3401--
3402
3403To build the application:
3404
3405. Compile the application source file:
3406+
3407--
3408[role="term"]
3409----
3410$ gcc -c app.c
3411----
3412--
3413
3414. Build the application:
3415+
3416--
3417[role="term"]
3418----
3419$ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3420----
3421--
3422
3423To run the instrumented application:
3424
3425* Start the application:
3426+
3427--
3428[role="term"]
3429----
3430$ ./app
3431----
3432--
3433
3434|
3435The application is statically linked with the tracepoint provider
3436package object file.
3437
3438The application dynamically loads the instrumented user library.
3439
3440image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3441
3442|
3443include::../common/ust-sit-step-tp-o.txt[]
3444
3445To build the application:
3446
3447. In path:{app.c}, before including path:{tpp.h}, add the following line:
3448+
3449--
3450[source,c]
3451----
3452#define TRACEPOINT_DEFINE
3453----
3454--
3455
3456. Compile the application source file:
3457+
3458--
3459[role="term"]
3460----
3461$ gcc -c app.c
3462----
3463--
3464
3465. Build the application:
3466+
3467--
3468[role="term"]
3469----
3470$ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3471 -llttng-ust -ldl
3472----
3473--
3474+
3475The `--export-dynamic` option passed to the linker is necessary for the
3476dynamically loaded library to ``see'' the tracepoint symbols defined in
3477the application.
3478
3479To build the instrumented user library:
3480
3481. Compile the user library source file:
3482+
3483--
3484[role="term"]
3485----
3486$ gcc -I. -fpic -c emon.c
3487----
3488--
3489
3490. Build the user library shared object:
3491+
3492--
3493[role="term"]
3494----
3495$ gcc -shared -o libemon.so emon.o
3496----
3497--
3498
3499To run the application:
3500
3501* Start the application:
3502+
3503--
3504[role="term"]
3505----
3506$ ./app
3507----
3508--
3509|====
3510
3511[[dlclose-warning]]
3512[IMPORTANT]
3513.Do not use man:dlclose(3) on a tracepoint provider package
3514====
3515Never use man:dlclose(3) on any shared object which:
3516
3517* Is linked with, statically or dynamically, a tracepoint provider
3518 package.
3519* Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3520 package shared object.
3521
3522This is currently considered **unsafe** due to a lack of reference
3523counting from LTTng-UST to the shared object.
3524
3525A known workaround (available since glibc 2.2) is to use the
3526`RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3527effect of not unloading the loaded shared object, even if man:dlclose(3)
3528is called.
3529
3530You can also preload the tracepoint provider package shared object with
3531the env:LD_PRELOAD environment variable to overcome this limitation.
3532====
3533
3534
3535[[using-lttng-ust-with-daemons]]
3536===== Use noch:{LTTng-UST} with daemons
3537
3538If your instrumented application calls man:fork(2), man:clone(2),
3539or BSD's man:rfork(2), without a following man:exec(3)-family
3540system call, you must preload the path:{liblttng-ust-fork.so} shared
3541object when you start the application.
3542
3543[role="term"]
3544----
3545$ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3546----
3547
3548If your tracepoint provider package is
3549a shared library which you also preload, you must put both
3550shared objects in env:LD_PRELOAD:
3551
3552[role="term"]
3553----
3554$ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3555----
3556
3557
3558[role="since-2.9"]
3559[[liblttng-ust-fd]]
3560===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3561
3562If your instrumented application closes one or more file descriptors
3563which it did not open itself, you must preload the
3564path:{liblttng-ust-fd.so} shared object when you start the application:
3565
3566[role="term"]
3567----
3568$ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3569----
3570
3571Typical use cases include closing all the file descriptors after
3572man:fork(2) or man:rfork(2) and buggy applications doing
3573``double closes''.
3574
3575
3576[[lttng-ust-pkg-config]]
3577===== Use noch:{pkg-config}
3578
3579On some distributions, LTTng-UST ships with a
3580https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3581metadata file. If this is your case, then you can use cmd:pkg-config to
3582build an application on the command line:
3583
3584[role="term"]
3585----
3586$ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3587----
3588
3589
3590[[instrumenting-32-bit-app-on-64-bit-system]]
3591===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3592
3593In order to trace a 32-bit application running on a 64-bit system,
3594LTTng must use a dedicated 32-bit
3595<<lttng-consumerd,consumer daemon>>.
3596
3597The following steps show how to build and install a 32-bit consumer
3598daemon, which is _not_ part of the default 64-bit LTTng build, how to
3599build and install the 32-bit LTTng-UST libraries, and how to build and
3600link an instrumented 32-bit application in that context.
3601
3602To build a 32-bit instrumented application for a 64-bit target system,
3603assuming you have a fresh target system with no installed Userspace RCU
3604or LTTng packages:
3605
3606. Download, build, and install a 32-bit version of Userspace RCU:
3607+
3608--
3609[role="term"]
3610----
3611$ cd $(mktemp -d) &&
3612wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3613tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3614cd userspace-rcu-0.9.* &&
3615./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3616make &&
3617sudo make install &&
3618sudo ldconfig
3619----
3620--
3621
3622. Using your distribution's package manager, or from source, install
3623   32-bit versions of the following dependencies of
3624 LTTng-tools and LTTng-UST:
3625+
3626--
3627* https://sourceforge.net/projects/libuuid/[libuuid]
3628* http://directory.fsf.org/wiki/Popt[popt]
3629* http://www.xmlsoft.org/[libxml2]
3630--
3631
3632. Download, build, and install a 32-bit version of the latest
3633 LTTng-UST{nbsp}{revision}:
3634+
3635--
3636[role="term"]
3637----
3638$ cd $(mktemp -d) &&
3639wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
3640tar -xf lttng-ust-latest-2.10.tar.bz2 &&
3641cd lttng-ust-2.10.* &&
3642./configure --libdir=/usr/local/lib32 \
3643 CFLAGS=-m32 CXXFLAGS=-m32 \
3644 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
3645make &&
3646sudo make install &&
3647sudo ldconfig
3648----
3649--
3650+
3651[NOTE]
3652====
3653Depending on your distribution,
365432-bit libraries could be installed in a different location than
3655`/usr/lib32`. For example, Debian is known to install
3656some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
3657
3658In this case, make sure to set `LDFLAGS` to all the
3659relevant 32-bit library paths, for example:
3660
3661[role="term"]
3662----
3663$ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
3664----
3665====
3666
3667. Download the latest LTTng-tools{nbsp}{revision}, build, and install
3668 the 32-bit consumer daemon:
3669+
3670--
3671[role="term"]
3672----
3673$ cd $(mktemp -d) &&
3674wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3675tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3676cd lttng-tools-2.10.* &&
3677./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3678 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
3679 --disable-bin-lttng --disable-bin-lttng-crash \
3680 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
3681make &&
3682cd src/bin/lttng-consumerd &&
3683sudo make install &&
3684sudo ldconfig
3685----
3686--
3687
3688. From your distribution or from source,
3689 <<installing-lttng,install>> the 64-bit versions of
3690 LTTng-UST and Userspace RCU.
3691. Download, build, and install the 64-bit version of the
3692 latest LTTng-tools{nbsp}{revision}:
3693+
3694--
3695[role="term"]
3696----
3697$ cd $(mktemp -d) &&
3698wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3699tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3700cd lttng-tools-2.10.* &&
3701./configure --with-consumerd32-libdir=/usr/local/lib32 \
3702 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
3703make &&
3704sudo make install &&
3705sudo ldconfig
3706----
3707--
3708
3709. Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
3710 when linking your 32-bit application:
3711+
3712----
3713-m32 -L/usr/lib32 -L/usr/local/lib32 \
3714-Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
3715----
3716+
3717For example, let's rebuild the quick start example in
3718<<tracing-your-own-user-application,Trace a user application>> as an
3719instrumented 32-bit application:
3720+
3721--
3722[role="term"]
3723----
3724$ gcc -m32 -c -I. hello-tp.c
3725$ gcc -m32 -c hello.c
3726$ gcc -m32 -o hello hello.o hello-tp.o \
3727 -L/usr/lib32 -L/usr/local/lib32 \
3728 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
3729 -llttng-ust -ldl
3730----
3731--
3732
3733No special action is required to execute the 32-bit application and
3734to trace it: use the command-line man:lttng(1) tool as usual.
3735
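For example, the following commands record a few events from the
rebuilt 32-bit `hello` application. This is only a sketch: it assumes
the quick start's `hello_world` tracepoint provider name, which may
differ in your own instrumented application.

[role="term"]
----
$ lttng create
$ lttng enable-event --userspace 'hello_world:*'
$ lttng start
$ ./hello
$ lttng stop
$ lttng view
----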
3736
3737[role="since-2.5"]
3738[[tracef]]
3739==== Use `tracef()`
3740
3741man:tracef(3) is a small LTTng-UST API designed for quick,
3742man:printf(3)-like instrumentation without the burden of
3743<<tracepoint-provider,creating>> and
3744<<building-tracepoint-providers-and-user-application,building>>
3745a tracepoint provider package.
3746
3747To use `tracef()` in your application:
3748
3749. In the C or C++ source files where you need to use `tracef()`,
3750 include `<lttng/tracef.h>`:
3751+
3752--
3753[source,c]
3754----
3755#include <lttng/tracef.h>
3756----
3757--
3758
3759. In the application's source code, use `tracef()` like you would use
3760 man:printf(3):
3761+
3762--
3763[source,c]
3764----
3765 /* ... */
3766
3767 tracef("my message: %d (%s)", my_integer, my_string);
3768
3769 /* ... */
3770----
3771--
3772
3773. Link your application with `liblttng-ust`:
3774+
3775--
3776[role="term"]
3777----
3778$ gcc -o app app.c -llttng-ust
3779----
3780--
3781
3782To trace the events that `tracef()` calls emit:
3783
3784* <<enabling-disabling-events,Create an event rule>> which matches the
3785 `lttng_ust_tracef:*` event name:
3786+
3787--
3788[role="term"]
3789----
3790$ lttng enable-event --userspace 'lttng_ust_tracef:*'
3791----
3792--
3793
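The following complete example puts these steps together. It is a
minimal sketch: the path:{app.c} file name and its contents are
hypothetical, but the commands only use the `tracef()` API and the
event name shown above.

.Trace an application which uses `tracef()`.
====
[source,c]
.path:{app.c}
----
#include <lttng/tracef.h>

int main(void)
{
    int i;

    for (i = 0; i < 5; i++) {
        /* Each call emits one `lttng_ust_tracef:event` event record */
        tracef("iteration %d of %d", i + 1, 5);
    }

    return 0;
}
----

Build the application, <<creating-destroying-tracing-sessions,create a
tracing session>>, create the event rule,
<<basic-tracing-session-control,start tracing>>, run the application,
then stop tracing and inspect the recorded events:

[role="term"]
----
$ gcc -o app app.c -llttng-ust
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_tracef:*'
$ lttng start
$ ./app
$ lttng stop
$ lttng view
----
====
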
3794[IMPORTANT]
3795.Limitations of `tracef()`
3796====
3797The `tracef()` utility function was developed to make user space tracing
3798super simple, albeit with notable disadvantages compared to
3799<<defining-tracepoints,user-defined tracepoints>>:
3800
3801* All the emitted events have the same tracepoint provider and
3802 tracepoint names, respectively `lttng_ust_tracef` and `event`.
3803* There is no static type checking.
3804* The only event record field you actually get, named `msg`, is a string
3805 potentially containing the values you passed to `tracef()`
3806 using your own format string. This also means that you cannot filter
3807 events with a custom expression at run time because there are no
3808 isolated fields.
3809* Since `tracef()` uses the C standard library's man:vasprintf(3)
3810 function behind the scenes to format the strings at run time, its
3811 expected performance is lower than with user-defined tracepoints,
3812 which do not require a conversion to a string.
3813
3814Taking this into consideration, `tracef()` is useful for some quick
3815prototyping and debugging, but you should not consider it for any
3816permanent and serious application instrumentation.
3817====
3818
3819
3820[role="since-2.7"]
3821[[tracelog]]
3822==== Use `tracelog()`
3823
3824The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
3825the difference that it accepts an additional log level parameter.
3826
3827The goal of `tracelog()` is to ease the migration from logging to
3828tracing.
3829
3830To use `tracelog()` in your application:
3831
3832. In the C or C++ source files where you need to use `tracelog()`,
3833 include `<lttng/tracelog.h>`:
3834+
3835--
3836[source,c]
3837----
3838#include <lttng/tracelog.h>
3839----
3840--
3841
3842. In the application's source code, use `tracelog()` like you would use
3843 man:printf(3), except for the first parameter which is the log
3844 level:
3845+
3846--
3847[source,c]
3848----
3849 /* ... */
3850
3851 tracelog(TRACE_WARNING, "my message: %d (%s)",
3852 my_integer, my_string);
3853
3854 /* ... */
3855----
3856--
3857+
3858See man:lttng-ust(3) for a list of available log level names.
3859
3860. Link your application with `liblttng-ust`:
3861+
3862--
3863[role="term"]
3864----
3865$ gcc -o app app.c -llttng-ust
3866----
3867--
3868
3869To trace the events that `tracelog()` calls emit with a log level
3870_at least as severe as_ a specific log level:
3871
3872* <<enabling-disabling-events,Create an event rule>> which matches the
3873 `lttng_ust_tracelog:*` event name and a minimum level
3874 of severity:
3875+
3876--
3877[role="term"]
3878----
3879$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
3880 --loglevel=TRACE_WARNING
3881----
3882--
3883
3884To trace the events that `tracelog()` calls emit with a
3885_specific log level_:
3886
3887* Create an event rule which matches the `lttng_ust_tracelog:*`
3888 event name and a specific log level:
3889+
3890--
3891[role="term"]
3892----
3893$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
3894 --loglevel-only=TRACE_INFO
3895----
3896--
3897
3898
3899[[prebuilt-ust-helpers]]
3900=== Prebuilt user space tracing helpers
3901
3902The LTTng-UST package provides a few helpers in the form of preloadable
3903shared objects which automatically instrument system functions and
3904calls.
3905
3906The helper shared objects are normally found in dir:{/usr/lib}. If you
3907built LTTng-UST <<building-from-source,from source>>, they are probably
3908located in dir:{/usr/local/lib}.
3909
3910The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
3911are:
3912
3913path:{liblttng-ust-libc-wrapper.so}::
3914path:{liblttng-ust-pthread-wrapper.so}::
3915 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
3916 memory and POSIX threads function tracing>>.
3917
3918path:{liblttng-ust-cyg-profile.so}::
3919path:{liblttng-ust-cyg-profile-fast.so}::
3920 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
3921
3922path:{liblttng-ust-dl.so}::
3923 <<liblttng-ust-dl,Dynamic linker tracing>>.
3924
3925To use a user space tracing helper with any user application:
3926
3927* Preload the helper shared object when you start the application:
3928+
3929--
3930[role="term"]
3931----
3932$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
3933----
3934--
3935+
3936You can preload more than one helper:
3937+
3938--
3939[role="term"]
3940----
3941$ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
3942----
3943--
3944
3945
3946[role="since-2.3"]
3947[[liblttng-ust-libc-pthread-wrapper]]
3948==== Instrument C standard library memory and POSIX threads functions
3949
3950The path:{liblttng-ust-libc-wrapper.so} and
3951path:{liblttng-ust-pthread-wrapper.so} helpers
3952add instrumentation to some C standard library and POSIX
3953threads functions.
3954
3955[role="growable"]
3956.Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
3957|====
3958|TP provider name |TP name |Instrumented function
3959
3960.6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
3961 |`calloc` |man:calloc(3)
3962 |`realloc` |man:realloc(3)
3963 |`free` |man:free(3)
3964 |`memalign` |man:memalign(3)
3965 |`posix_memalign` |man:posix_memalign(3)
3966|====
3967
3968[role="growable"]
3969.Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
3970|====
3971|TP provider name |TP name |Instrumented function
3972
3973.4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
3974 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
3975 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
3976 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
3977|====
3978
3979When you preload the shared object, it replaces the functions listed
3980in the previous tables with wrappers which contain tracepoints and
3981call the original functions.
3982
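For example, the following commands record the man:malloc(3) and
man:free(3) calls of an application. This is a sketch: as above,
`my-app` stands for any user application.

[role="term"]
----
$ lttng create
$ lttng enable-event --userspace lttng_ust_libc:malloc,lttng_ust_libc:free
$ lttng start
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
$ lttng stop
$ lttng view
----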
3983
3984[[liblttng-ust-cyg-profile]]
3985==== Instrument function entry and exit
3986
3987The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
3988to the entry and exit points of functions.
3989
3990man:gcc(1) and man:clang(1) have an option named
3991https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
3992which generates instrumentation calls for entry and exit to functions.
3993The LTTng-UST function tracing helpers,
3994path:{liblttng-ust-cyg-profile.so} and
3995path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
3996to add tracepoints to the two generated functions (which contain
3997`cyg_profile` in their names, hence the helper's name).
3998
3999To use the LTTng-UST function tracing helper, the source files to
4000instrument must be built using the `-finstrument-functions` compiler
4001flag.
4002
4003There are two versions of the LTTng-UST function tracing helper:
4004
4005* **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4006 that you should only use when it can be _guaranteed_ that the
4007 complete event stream is recorded without any lost event record.
4008 Any kind of duplicate information is left out.
4009+
4010Assuming no event record is lost, having only the function addresses on
4011entry is enough to create a call graph, since an event record always
4012contains the ID of the CPU that generated it.
4013+
4014You can use a tool like man:addr2line(1) to convert function addresses
4015back to source file names and line numbers.
4016
4017* **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4018which also works in use cases where event records might get discarded or
4019not recorded from application startup.
4020In these cases, the trace analyzer needs more information to be
4021able to reconstruct the program flow.
4022
4023See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4024points of this helper.
4025
4026All the tracepoints that this helper provides have the
4027log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4028
4029TIP: It's sometimes a good idea to limit the number of source files that
4030you compile with the `-finstrument-functions` option to prevent LTTng
4031from writing an excessive amount of trace data at run time. When using
4032man:gcc(1), you can use the
4033`-finstrument-functions-exclude-function-list` option to avoid
4034instrumenting the entries and exits of specific functions.
4035
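As a sketch of the full workflow, assuming a hypothetical path:{app.c}
source file and the `lttng_ust_cyg_profile` tracepoint provider name
(see man:lttng-ust-cyg-profile(3) for the exact tracepoint names), you
could do:

[role="term"]
----
$ gcc -finstrument-functions -o app app.c
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_cyg_profile:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-cyg-profile.so ./app
$ lttng stop
$ lttng view
----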
4036
4037[role="since-2.4"]
4038[[liblttng-ust-dl]]
4039==== Instrument the dynamic linker
4040
4041The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4042man:dlopen(3) and man:dlclose(3) function calls.
4043
4044See man:lttng-ust-dl(3) to learn more about the instrumentation points
4045of this helper.
4046
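For example, the following sketch records the dynamic loading activity
of an application, assuming the helper's tracepoint provider name is
`lttng_ust_dl` (see man:lttng-ust-dl(3)); `my-app` stands for any user
application:

[role="term"]
----
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_dl:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-dl.so my-app
$ lttng stop
$ lttng view
----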
4047
4048[role="since-2.4"]
4049[[java-application]]
4050=== User space Java agent
4051
4052You can instrument any Java application which uses one of the following
4053logging frameworks:
4054
4055* The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4056 (JUL) core logging facilities.
4057* http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4058 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4059
4060[role="img-100"]
4061.LTTng-UST Java agent imported by a Java application.
4062image::java-app.png[]
4063
4064Note that the methods described below are new in LTTng{nbsp}2.8.
4065Previous LTTng versions use another technique.
4066
4067NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4068and https://ci.lttng.org/[continuous integration], thus this version is
4069directly supported. However, the LTTng-UST Java agent is also tested
4070with OpenJDK{nbsp}7.
4071
4072
4073[role="since-2.8"]
4074[[jul]]
4075==== Use the LTTng-UST Java agent for `java.util.logging`
4076
4077To use the LTTng-UST Java agent in a Java application which uses
4078`java.util.logging` (JUL):
4079
4080. In the Java application's source code, import the LTTng-UST
4081 log handler package for `java.util.logging`:
4082+
4083--
4084[source,java]
4085----
4086import org.lttng.ust.agent.jul.LttngLogHandler;
4087----
4088--
4089
4090. Create an LTTng-UST JUL log handler:
4091+
4092--
4093[source,java]
4094----
4095Handler lttngUstLogHandler = new LttngLogHandler();
4096----
4097--
4098
4099. Add this handler to the JUL loggers which should emit LTTng events:
4100+
4101--
4102[source,java]
4103----
4104Logger myLogger = Logger.getLogger("some-logger");
4105
4106myLogger.addHandler(lttngUstLogHandler);
4107----
4108--
4109
4110. Use `java.util.logging` log statements and configuration as usual.
4111 The loggers with an attached LTTng-UST log handler can emit
4112 LTTng events.
4113
4114. Before exiting the application, remove the LTTng-UST log handler from
4115 the loggers attached to it and call its `close()` method:
4116+
4117--
4118[source,java]
4119----
4120myLogger.removeHandler(lttngUstLogHandler);
4121lttngUstLogHandler.close();
4122----
4123--
4124+
4125This is not strictly necessary, but it is recommended for a clean
4126disposal of the handler's resources.
4127
4128. Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4129 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4130 in the
4131 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4132 path] when you build the Java application.
4133+
4134The JAR files are typically located in dir:{/usr/share/java}.
4135+
4136IMPORTANT: The LTTng-UST Java agent must be
4137<<installing-lttng,installed>> for the logging framework your
4138application uses.
4139
4140.Use the LTTng-UST Java agent for `java.util.logging`.
4141====
4142[source,java]
4143.path:{Test.java}
4144----
4145import java.io.IOException;
4146import java.util.logging.Handler;
4147import java.util.logging.Logger;
4148import org.lttng.ust.agent.jul.LttngLogHandler;
4149
4150public class Test
4151{
4152 private static final int answer = 42;
4153
4154 public static void main(String[] argv) throws Exception
4155 {
4156 // Create a logger
4157 Logger logger = Logger.getLogger("jello");
4158
4159 // Create an LTTng-UST log handler
4160 Handler lttngUstLogHandler = new LttngLogHandler();
4161
4162 // Add the LTTng-UST log handler to our logger
4163 logger.addHandler(lttngUstLogHandler);
4164
4165 // Log at will!
4166 logger.info("some info");
4167 logger.warning("some warning");
4168 Thread.sleep(500);
4169 logger.finer("finer information; the answer is " + answer);
4170 Thread.sleep(123);
4171 logger.severe("error!");
4172
4173 // Not mandatory, but cleaner
4174 logger.removeHandler(lttngUstLogHandler);
4175 lttngUstLogHandler.close();
4176 }
4177}
4178----
4179
4180Build this example:
4181
4182[role="term"]
4183----
4184$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4185----
4186
4187<<creating-destroying-tracing-sessions,Create a tracing session>>,
4188<<enabling-disabling-events,create an event rule>> matching the
4189`jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4190
4191[role="term"]
4192----
4193$ lttng create
4194$ lttng enable-event --jul jello
4195$ lttng start
4196----
4197
4198Run the compiled class:
4199
4200[role="term"]
4201----
4202$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4203----
4204
4205<<basic-tracing-session-control,Stop tracing>> and inspect the
4206recorded events:
4207
4208[role="term"]
4209----
4210$ lttng stop
4211$ lttng view
4212----
4213====
4214
4215In the resulting trace, an <<event,event record>> generated by a Java
4216application using `java.util.logging` is named `lttng_jul:event` and
4217has the following fields:
4218
4219`msg`::
4220 Log record's message.
4221
4222`logger_name`::
4223 Logger name.
4224
4225`class_name`::
4226 Name of the class in which the log statement was executed.
4227
4228`method_name`::
4229 Name of the method in which the log statement was executed.
4230
4231`long_millis`::
4232 Logging time (timestamp in milliseconds).
4233
4234`int_loglevel`::
4235 Log level integer value.
4236
4237`int_threadid`::
4238 ID of the thread in which the log statement was executed.
4239
4240You can use the opt:lttng-enable-event(1):--loglevel or
4241opt:lttng-enable-event(1):--loglevel-only option of the
4242man:lttng-enable-event(1) command to target a range of JUL log levels
4243or a specific JUL log level.
4244
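For example, assuming the `JUL_WARNING` log level name, the following
command matches the log statements of the `jello` logger with a level
at least as severe as a JUL warning:

[role="term"]
----
$ lttng enable-event --jul jello --loglevel=JUL_WARNING
----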
4245
4246[role="since-2.8"]
4247[[log4j]]
4248==== Use the LTTng-UST Java agent for Apache log4j
4249
4250To use the LTTng-UST Java agent in a Java application which uses
4251Apache log4j 1.2:
4252
4253. In the Java application's source code, import the LTTng-UST
4254 log appender package for Apache log4j:
4255+
4256--
4257[source,java]
4258----
4259import org.lttng.ust.agent.log4j.LttngLogAppender;
4260----
4261--
4262
4263. Create an LTTng-UST log4j log appender:
4264+
4265--
4266[source,java]
4267----
4268Appender lttngUstLogAppender = new LttngLogAppender();
4269----
4270--
4271
4272. Add this appender to the log4j loggers which should emit LTTng events:
4273+
4274--
4275[source,java]
4276----
4277Logger myLogger = Logger.getLogger("some-logger");
4278
4279myLogger.addAppender(lttngUstLogAppender);
4280----
4281--
4282
4283. Use Apache log4j log statements and configuration as usual. The
4284 loggers with an attached LTTng-UST log appender can emit LTTng events.
4285
4286. Before exiting the application, remove the LTTng-UST log appender from
4287 the loggers attached to it and call its `close()` method:
4288+
4289--
4290[source,java]
4291----
4292myLogger.removeAppender(lttngUstLogAppender);
4293lttngUstLogAppender.close();
4294----
4295--
4296+
4297This is not strictly necessary, but it is recommended for a clean
4298disposal of the appender's resources.
4299
4300. Include the LTTng-UST Java agent's common and log4j-specific JAR
4301 files, path:{lttng-ust-agent-common.jar} and
4302 path:{lttng-ust-agent-log4j.jar}, in the
4303 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4304 path] when you build the Java application.
4305+
4306The JAR files are typically located in dir:{/usr/share/java}.
4307+
4308IMPORTANT: The LTTng-UST Java agent must be
4309<<installing-lttng,installed>> for the logging framework your
4310application uses.
4311
4312.Use the LTTng-UST Java agent for Apache log4j.
4313====
4314[source,java]
4315.path:{Test.java}
4316----
4317import org.apache.log4j.Appender;
4318import org.apache.log4j.Logger;
4319import org.lttng.ust.agent.log4j.LttngLogAppender;
4320
4321public class Test
4322{
4323 private static final int answer = 42;
4324
4325 public static void main(String[] argv) throws Exception
4326 {
4327 // Create a logger
4328 Logger logger = Logger.getLogger("jello");
4329
4330 // Create an LTTng-UST log appender
4331 Appender lttngUstLogAppender = new LttngLogAppender();
4332
4333 // Add the LTTng-UST log appender to our logger
4334 logger.addAppender(lttngUstLogAppender);
4335
4336 // Log at will!
4337 logger.info("some info");
4338 logger.warn("some warning");
4339 Thread.sleep(500);
4340 logger.debug("debug information; the answer is " + answer);
4341 Thread.sleep(123);
4342 logger.fatal("error!");
4343
4344 // Not mandatory, but cleaner
4345 logger.removeAppender(lttngUstLogAppender);
4346 lttngUstLogAppender.close();
4347 }
4348}
4349
4350----
4351
4352Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4353file):
4354
4355[role="term"]
4356----
4357$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4358----
4359
4360<<creating-destroying-tracing-sessions,Create a tracing session>>,
4361<<enabling-disabling-events,create an event rule>> matching the
4362`jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4363
4364[role="term"]
4365----
4366$ lttng create
4367$ lttng enable-event --log4j jello
4368$ lttng start
4369----
4370
4371Run the compiled class:
4372
4373[role="term"]
4374----
4375$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4376----
4377
4378<<basic-tracing-session-control,Stop tracing>> and inspect the
4379recorded events:
4380
4381[role="term"]
4382----
4383$ lttng stop
4384$ lttng view
4385----
4386====
4387
4388In the resulting trace, an <<event,event record>> generated by a Java
4389application using log4j is named `lttng_log4j:event` and
4390has the following fields:
4391
4392`msg`::
4393 Log record's message.
4394
4395`logger_name`::
4396 Logger name.
4397
4398`class_name`::
4399 Name of the class in which the log statement was executed.
4400
4401`method_name`::
4402 Name of the method in which the log statement was executed.
4403
4404`filename`::
4405 Name of the file in which the executed log statement is located.
4406
4407`line_number`::
4408 Line number at which the log statement was executed.
4409
4410`timestamp`::
4411 Logging timestamp.
4412
4413`int_loglevel`::
4414 Log level integer value.
4415
4416`thread_name`::
4417 Name of the Java thread in which the log statement was executed.
4418
4419You can use the opt:lttng-enable-event(1):--loglevel or
4420opt:lttng-enable-event(1):--loglevel-only option of the
4421man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4422or a specific log4j log level.
4423
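For example, the following command matches the log statements of the
`jello` logger with a level at least as severe as the log4j `WARN`
level:

[role="term"]
----
$ lttng enable-event --log4j jello --loglevel=LOG4J_WARN
----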
4424
4425[role="since-2.8"]
4426[[java-application-context]]
4427==== Provide application-specific context fields in a Java application
4428
4429A Java application-specific context field is a piece of state provided
4430by the application which <<adding-context,you can add>>, using the
4431man:lttng-add-context(1) command, to each <<event,event record>>
4432produced by the log statements of this application.
4433
4434For example, a given object might have a current request ID variable.
4435You can create a context information retriever for this object and
4436assign a name to this current request ID. You can then, using the
4437man:lttng-add-context(1) command, add this context field by name to
4438the JUL or log4j <<channel,channel>>.
4439
4440To provide application-specific context fields in a Java application:
4441
4442. In the Java application's source code, import the LTTng-UST
4443 Java agent context classes and interfaces:
4444+
4445--
4446[source,java]
4447----
4448import org.lttng.ust.agent.context.ContextInfoManager;
4449import org.lttng.ust.agent.context.IContextInfoRetriever;
4450----
4451--
4452
4453. Create a context information retriever class, that is, a class which
4454 implements the `IContextInfoRetriever` interface:
4455+
4456--
4457[source,java]
4458----
4459class MyContextInfoRetriever implements IContextInfoRetriever
4460{
4461 @Override
4462 public Object retrieveContextInfo(String key)
4463 {
4464 if (key.equals("intCtx")) {
4465 return (short) 17;
4466 } else if (key.equals("strContext")) {
4467 return "context value!";
4468 } else {
4469 return null;
4470 }
4471 }
4472}
4473----
4474--
4475+
4476This `retrieveContextInfo()` method is the only member of the
4477`IContextInfoRetriever` interface. Its role is to return the current
4478value of a state by name to create a context field. The names of the
4479context fields and which state variables they return depend on your
4480specific scenario.
4481+
4482All primitive types and objects are supported as context fields.
4483When `retrieveContextInfo()` returns an object, the context field
4484serializer calls its `toString()` method to add a string field to
4485event records. The method can also return `null`, which means that
4486no context field is available for the requested name.
4487
4488. Register an instance of your context information retriever class to
4489 the context information manager singleton:
4490+
4491--
4492[source,java]
4493----
4494IContextInfoRetriever cir = new MyContextInfoRetriever();
4495ContextInfoManager cim = ContextInfoManager.getInstance();
4496cim.registerContextInfoRetriever("retrieverName", cir);
4497----
4498--
4499
4500. Before exiting the application, remove your context information
4501 retriever from the context information manager singleton:
4502+
4503--
4504[source,java]
4505----
4506ContextInfoManager cim = ContextInfoManager.getInstance();
4507cim.unregisterContextInfoRetriever("retrieverName");
4508----
4509--
4510+
4511This is not strictly necessary, but it is recommended for a clean
4512disposal of the manager's resources.
4513
4514. Build your Java application with LTTng-UST Java agent support as
4515 usual, following the procedure for either the <<jul,JUL>> or
4516 <<log4j,Apache log4j>> framework.
4517
4518
4519.Provide application-specific context fields in a Java application.
4520====
4521[source,java]
4522.path:{Test.java}
4523----
4524import java.util.logging.Handler;
4525import java.util.logging.Logger;
4526import org.lttng.ust.agent.jul.LttngLogHandler;
4527import org.lttng.ust.agent.context.ContextInfoManager;
4528import org.lttng.ust.agent.context.IContextInfoRetriever;
4529
4530public class Test
4531{
4532 // Our context information retriever class
4533 private static class MyContextInfoRetriever
4534 implements IContextInfoRetriever
4535 {
4536 @Override
4537 public Object retrieveContextInfo(String key) {
4538 if (key.equals("intCtx")) {
4539 return (short) 17;
4540 } else if (key.equals("strContext")) {
4541 return "context value!";
4542 } else {
4543 return null;
4544 }
4545 }
4546 }
4547
4548 private static final int answer = 42;
4549
4550 public static void main(String args[]) throws Exception
4551 {
4552 // Get the context information manager instance
4553 ContextInfoManager cim = ContextInfoManager.getInstance();
4554
4555 // Create and register our context information retriever
4556 IContextInfoRetriever cir = new MyContextInfoRetriever();
4557 cim.registerContextInfoRetriever("myRetriever", cir);
4558
4559 // Create a logger
4560 Logger logger = Logger.getLogger("jello");
4561
4562 // Create an LTTng-UST log handler
4563 Handler lttngUstLogHandler = new LttngLogHandler();
4564
4565 // Add the LTTng-UST log handler to our logger
4566 logger.addHandler(lttngUstLogHandler);
4567
4568 // Log at will!
4569 logger.info("some info");
4570 logger.warning("some warning");
4571 Thread.sleep(500);
4572 logger.finer("finer information; the answer is " + answer);
4573 Thread.sleep(123);
4574 logger.severe("error!");
4575
4576 // Not mandatory, but cleaner
4577 logger.removeHandler(lttngUstLogHandler);
4578 lttngUstLogHandler.close();
4579 cim.unregisterContextInfoRetriever("myRetriever");
4580 }
4581}
4582----
4583
4584Build this example:
4585
4586[role="term"]
4587----
4588$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4589----
4590
4591<<creating-destroying-tracing-sessions,Create a tracing session>>
4592and <<enabling-disabling-events,create an event rule>> matching the
4593`jello` JUL logger:
4594
4595[role="term"]
4596----
4597$ lttng create
4598$ lttng enable-event --jul jello
4599----
4600
4601<<adding-context,Add the application-specific context fields>> to the
4602JUL channel:
4603
4604[role="term"]
4605----
4606$ lttng add-context --jul --type='$app.myRetriever:intCtx'
4607$ lttng add-context --jul --type='$app.myRetriever:strContext'
4608----
4609
4610<<basic-tracing-session-control,Start tracing>>:
4611
4612[role="term"]
4613----
4614$ lttng start
4615----
4616
4617Run the compiled class:
4618
4619[role="term"]
4620----
4621$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4622----
4623
4624<<basic-tracing-session-control,Stop tracing>> and inspect the
4625recorded events:
4626
4627[role="term"]
4628----
4629$ lttng stop
4630$ lttng view
4631----
4632====
4633
4634
4635[role="since-2.7"]
4636[[python-application]]
4637=== User space Python agent
4638
4639You can instrument a Python 2 or Python 3 application which uses the
4640standard https://docs.python.org/3/library/logging.html[`logging`]
4641package.
4642
4643Each log statement emits an LTTng event once the
4644application module imports the
4645<<lttng-ust-agents,LTTng-UST Python agent>> package.
4646
4647[role="img-100"]
4648.A Python application importing the LTTng-UST Python agent.
4649image::python-app.png[]
4650
4651To use the LTTng-UST Python agent:
4652
4653. In the Python application's source code, import the LTTng-UST Python
4654 agent:
4655+
4656--
4657[source,python]
4658----
4659import lttngust
4660----
4661--
4662+
4663The LTTng-UST Python agent automatically adds its logging handler to the
4664root logger at import time.
4665+
4666Any log statement that the application executes before this import does
4667not emit an LTTng event.
4668+
4669IMPORTANT: The LTTng-UST Python agent must be
4670<<installing-lttng,installed>>.
4671
4672. Use log statements and logging configuration as usual.
4673 Since the LTTng-UST Python agent adds a handler to the _root_
4674 logger, you can trace any log statement from any logger.
4675
4676.Use the LTTng-UST Python agent.
4677====
4678[source,python]
4679.path:{test.py}
4680----
4681import lttngust
4682import logging
4683import time
4684
4685
4686def example():
4687 logging.basicConfig()
4688 logger = logging.getLogger('my-logger')
4689
4690 while True:
4691 logger.debug('debug message')
4692 logger.info('info message')
4693 logger.warn('warn message')
4694 logger.error('error message')
4695 logger.critical('critical message')
4696 time.sleep(1)
4697
4698
4699if __name__ == '__main__':
4700 example()
4701----
4702
4703NOTE: `logging.basicConfig()`, which adds to the root logger a basic
4704logging handler which prints to the standard error stream, is not
4705strictly required for LTTng-UST tracing to work, but in versions of
4706Python preceding 3.2, you could see a warning message which indicates
4707that no handler exists for the logger `my-logger`.
4708
4709<<creating-destroying-tracing-sessions,Create a tracing session>>,
4710<<enabling-disabling-events,create an event rule>> matching the
4711`my-logger` Python logger, and <<basic-tracing-session-control,start
4712tracing>>:
4713
4714[role="term"]
4715----
4716$ lttng create
4717$ lttng enable-event --python my-logger
4718$ lttng start
4719----
4720
4721Run the Python script:
4722
4723[role="term"]
4724----
4725$ python test.py
4726----
4727
4728<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4729events:
4730
4731[role="term"]
4732----
4733$ lttng stop
4734$ lttng view
4735----
4736====
4737
4738In the resulting trace, an <<event,event record>> generated by a Python
4739application is named `lttng_python:event` and has the following fields:
4740
4741`asctime`::
4742 Logging time (string).
4743
4744`msg`::
4745 Log record's message.
4746
4747`logger_name`::
4748 Logger name.
4749
4750`funcName`::
4751 Name of the function in which the log statement was executed.
4752
4753`lineno`::
4754 Line number at which the log statement was executed.
4755
4756`int_loglevel`::
4757 Log level integer value.
4758
4759`thread`::
4760 ID of the Python thread in which the log statement was executed.
4761
4762`threadName`::
4763 Name of the Python thread in which the log statement was executed.
4764
4765You can use the opt:lttng-enable-event(1):--loglevel or
4766opt:lttng-enable-event(1):--loglevel-only option of the
4767man:lttng-enable-event(1) command to target a range of Python log levels
4768or a specific Python log level.
4769
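For example, assuming the `PYTHON_WARNING` log level name, the
following command matches the log statements of the `my-logger` logger
with a level at least as severe as a Python warning:

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----
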
4770When an application imports the LTTng-UST Python agent, the agent tries
4771to register to a <<lttng-sessiond,session daemon>>. Note that you must
4772<<start-sessiond,start the session daemon>> _before_ you run the Python
4773application. If a session daemon is found, the agent tries to register
4774to it for up to 5{nbsp}seconds, after which the application continues
4775without LTTng tracing support. You can override this timeout value with
4776the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
4777(milliseconds).
4778
4779If the session daemon stops while a Python application with an imported
4780LTTng-UST Python agent runs, the agent tries to reconnect and register
4781to a session daemon every 3{nbsp}seconds. You can override this
4782delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
4783variable.
4784
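For example, to let the agent wait for up to 10{nbsp}seconds (an
arbitrary value) instead of the default 5{nbsp}seconds when running the
previous example:

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=10000 python test.py
----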
4785
4786[role="since-2.5"]
4787[[proc-lttng-logger-abi]]
4788=== LTTng logger
4789
4790The `lttng-tracer` Linux kernel module, part of
4791<<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
4792path:{/proc/lttng-logger} when it's loaded. Any application can write
4793text data to this file to emit an LTTng event.
4794
4795[role="img-100"]
4796.An application writes to the LTTng logger file to emit an LTTng event.
4797image::lttng-logger.png[]
4798
4799The LTTng logger is the quickest method--not the most efficient,
4800however--to add instrumentation to an application. It is designed
4801mostly to instrument shell scripts:
4802
4803[role="term"]
4804----
4805$ echo "Some message, some $variable" > /proc/lttng-logger
4806----
4807
4808Any event that the LTTng logger emits is named `lttng_logger` and
4809belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
4810other instrumentation points in the kernel tracing domain, **any Unix
4811user** can <<enabling-disabling-events,create an event rule>> which
4812matches its event name, not only the root user or users in the
4813<<tracing-group,tracing group>>.
4814
4815To use the LTTng logger:
4816
4817* From any application, write text data to the path:{/proc/lttng-logger}
4818 file.
4819
4820The `msg` field of `lttng_logger` event records contains the
4821recorded message.
4822
4823NOTE: The maximum message length of an LTTng logger event is
48241024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
4825than one event to contain the remaining data.
4826
4827You should not use the LTTng logger to trace a user application which
4828can be instrumented in a more efficient way, namely:
4829
4830* <<c-application,C and $$C++$$ applications>>.
4831* <<java-application,Java applications>>.
4832* <<python-application,Python applications>>.
4833
4834.Use the LTTng logger.
4835====
4836[source,bash]
4837.path:{test.bash}
4838----
4839echo 'Hello, World!' > /proc/lttng-logger
4840sleep 2
4841df --human-readable --print-type / > /proc/lttng-logger
4842----
4843
4844<<creating-destroying-tracing-sessions,Create a tracing session>>,
4845<<enabling-disabling-events,create an event rule>> matching the
4846`lttng_logger` Linux kernel tracepoint, and
4847<<basic-tracing-session-control,start tracing>>:
4848
4849[role="term"]
4850----
4851$ lttng create
4852$ lttng enable-event --kernel lttng_logger
4853$ lttng start
4854----
4855
4856Run the Bash script:
4857
4858[role="term"]
4859----
4860$ bash test.bash
4861----
4862
4863<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4864events:
4865
4866[role="term"]
4867----
4868$ lttng stop
4869$ lttng view
4870----
4871====
4872
4873
4874[[instrumenting-linux-kernel]]
4875=== LTTng kernel tracepoints
4876
4877NOTE: This section shows how to _add_ instrumentation points to the
4878Linux kernel. The kernel's subsystems are already thoroughly
4879instrumented at strategic places for LTTng when you
4880<<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
4881package.
4882
4883////
4884There are two methods to instrument the Linux kernel:
4885
4886. <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
4887 tracepoint which uses the `TRACE_EVENT()` API.
4888+
4889Choose this if you want to instrument a Linux kernel tree with an
4890instrumentation point compatible with ftrace, perf, and SystemTap.
4891
4892. Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
4893 instrument an out-of-tree kernel module.
4894+
4895Choose this if you don't need ftrace, perf, or SystemTap support.
4896////
4897
4898
4899[[linux-add-lttng-layer]]
4900==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
4901
4902This section shows how to add an LTTng layer to existing ftrace
4903instrumentation using the `TRACE_EVENT()` API.
4904
4905This section does not document the `TRACE_EVENT()` macro. You can
4906read the following articles to learn more about this API:
4907
4908* http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
4909* http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
4910* http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
4911
4912The following procedure assumes that your ftrace tracepoints are
4913correctly defined in their own header and that they are created in
4914one source file using the `CREATE_TRACE_POINTS` definition.
4915
4916To add an LTTng layer over an existing ftrace tracepoint:
4917
4918. Make sure the following kernel configuration options are
4919 enabled:
4920+
4921--
4922* `CONFIG_MODULES`
4923* `CONFIG_KALLSYMS`
4924* `CONFIG_HIGH_RES_TIMERS`
4925* `CONFIG_TRACEPOINTS`
4926--
4927
4928. Build the Linux source tree with your custom ftrace tracepoints.
4929. Boot the resulting Linux image on your target system.
4930+
4931Confirm that the tracepoints exist by looking for their names in the
4932dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
4933is your subsystem's name.
4934
4935. Get a copy of the latest LTTng-modules{nbsp}{revision}:
4936+
4937--
4938[role="term"]
4939----
4940$ cd $(mktemp -d) &&
4941wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
4942tar -xf lttng-modules-latest-2.10.tar.bz2 &&
4943cd lttng-modules-2.10.*
4944----
4945--
4946
4947. In dir:{instrumentation/events/lttng-module}, relative to the root
4948 of the LTTng-modules source tree, create a header file named
4949 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
4950 LTTng-modules tracepoint definitions using the LTTng-modules
4951 macros in it.
4952+
4953Start with this template:
4954+
4955--
4956[source,c]
4957.path:{instrumentation/events/lttng-module/my_subsys.h}
4958----
4959#undef TRACE_SYSTEM
4960#define TRACE_SYSTEM my_subsys
4961
4962#if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
4963#define _LTTNG_MY_SUBSYS_H
4964
4965#include "../../../probes/lttng-tracepoint-event.h"
4966#include <linux/tracepoint.h>
4967
4968LTTNG_TRACEPOINT_EVENT(
4969 /*
4970 * Format is identical to TRACE_EVENT()'s version for the three
4971 * following macro parameters:
4972 */
4973 my_subsys_my_event,
4974 TP_PROTO(int my_int, const char *my_string),
4975 TP_ARGS(my_int, my_string),
4976
4977 /* LTTng-modules specific macros */
4978 TP_FIELDS(
4979 ctf_integer(int, my_int_field, my_int)
4980 ctf_string(my_string_field, my_string)
4981 )
4982)
4983
4984#endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
4985
4986#include "../../../probes/define_trace.h"
4987----
4988--
4989+
4990The entries in the `TP_FIELDS()` section are the list of fields for the
4991LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
4992ftrace's `TRACE_EVENT()` macro.
4993+
4994See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
4995complete description of the available `ctf_*()` macros.
4996
4997. Create the LTTng-modules probe's kernel module C source file,
4998 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
4999 subsystem name:
5000+
5001--
5002[source,c]
5003.path:{probes/lttng-probe-my-subsys.c}
5004----
5005#include <linux/module.h>
5006#include "../lttng-tracer.h"
5007
5008/*
5009 * Build-time verification of mismatch between mainline
5010 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5011 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5012 */
5013#include <trace/events/my_subsys.h>
5014
5015/* Create LTTng tracepoint probes */
5016#define LTTNG_PACKAGE_BUILD
5017#define CREATE_TRACE_POINTS
5018#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5019
5020#include "../instrumentation/events/lttng-module/my_subsys.h"
5021
5022MODULE_LICENSE("GPL and additional rights");
5023MODULE_AUTHOR("Your name <your-email>");
5024MODULE_DESCRIPTION("LTTng my_subsys probes");
5025MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5026 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5027 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5028 LTTNG_MODULES_EXTRAVERSION);
5029----
5030--
5031
5032. Edit path:{probes/KBuild} and add your new kernel module object
5033 next to the existing ones:
5034+
5035--
5036[source,make]
5037.path:{probes/KBuild}
5038----
5039# ...
5040
5041obj-m += lttng-probe-module.o
5042obj-m += lttng-probe-power.o
5043
5044obj-m += lttng-probe-my-subsys.o
5045
5046# ...
5047----
5048--
5049
5050. Build and install the LTTng kernel modules:
5051+
5052--
5053[role="term"]
5054----
5055$ make KERNELDIR=/path/to/linux
5056# make modules_install && depmod -a
5057----
5058--
5059+
5060Replace `/path/to/linux` with the path to the Linux source tree where
5061you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5062
5063Note that you can also use the
5064<<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5065instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5066C code that need to be executed before the event fields are recorded.
5067
5068The best way to learn how to use the previous LTTng-modules macros is to
5069inspect the existing LTTng-modules tracepoint definitions in the
5070dir:{instrumentation/events/lttng-module} header files. Compare them
5071with the Linux kernel mainline versions in the
5072dir:{include/trace/events} directory of the Linux source tree.
5073
5074
5075[role="since-2.7"]
5076[[lttng-tracepoint-event-code]]
5077===== Use custom C code to access the data for tracepoint fields
5078
5079Although we recommended to always use the
5080<<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5081the arguments and fields of an LTTng-modules tracepoint when possible,
5082sometimes you need a more complex process to access the data that the
5083tracer records as event record fields. In other words, you need local
5084variables and multiple C{nbsp}statements instead of simple
5085argument-based expressions that you pass to the
5086<<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5087
5088You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5089`LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5090a block of C{nbsp}code to be executed before LTTng records the fields.
5091The structure of this macro is:
5092
5093[source,c]
5094.`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5095----
5096LTTNG_TRACEPOINT_EVENT_CODE(
5097 /*
5098 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5099 * version for the following three macro parameters:
5100 */
5101 my_subsys_my_event,
5102 TP_PROTO(int my_int, const char *my_string),
5103 TP_ARGS(my_int, my_string),
5104
5105 /* Declarations of custom local variables */
5106 TP_locvar(
5107 int a = 0;
5108 unsigned long b = 0;
5109 const char *name = "(undefined)";
5110 struct my_struct *my_struct;
5111 ),
5112
5113 /*
5114 * Custom code which uses both tracepoint arguments
5115 * (in TP_ARGS()) and local variables (in TP_locvar()).
5116 *
5117 * Local variables are actually members of a structure pointed
5118 * to by the special variable tp_locvar.
5119 */
5120 TP_code(
5121 if (my_int) {
5122 tp_locvar->a = my_int + 17;
5123 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5124 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5125 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5126 put_my_struct(tp_locvar->my_struct);
5127
5128 if (tp_locvar->b) {
5129 tp_locvar->a = 1;
5130 }
5131 }
5132 ),
5133
5134 /*
5135 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5136 * version for this, except that tp_locvar members can be
5137 * used in the argument expression parameters of
5138 * the ctf_*() macros.
5139 */
5140 TP_FIELDS(
5141 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5142 ctf_integer(int, my_struct_a, tp_locvar->a)
5143 ctf_string(my_string_field, my_string)
5144 ctf_string(my_struct_name, tp_locvar->name)
5145 )
5146)
5147----
5148
5149IMPORTANT: The C code defined in `TP_code()` must not have any side
5150effects when executed. In particular, the code must not allocate
5151memory or get resources without deallocating this memory or putting
5152those resources afterwards.
5153
5154
5155[[instrumenting-linux-kernel-tracing]]
5156==== Load and unload a custom probe kernel module
5157
5158You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5159kernel module>> in the kernel before it can emit LTTng events.
5160
5161To load the default probe kernel modules and a custom probe kernel
5162module:
5163
5164* Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5165 probe modules to load when starting a root <<lttng-sessiond,session
5166 daemon>>:
5167+
5168--
5169.Load the `my_subsys`, `usb`, and the default probe modules.
5170====
5171[role="term"]
5172----
5173# lttng-sessiond --extra-kmod-probes=my_subsys,usb
5174----
5175====
5176--
5177+
5178You only need to pass the subsystem name, not the whole kernel module
5179name.
5180
5181To load _only_ a given custom probe kernel module:
5182
5183* Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5184 modules to load when starting a root session daemon:
5185+
5186--
5187.Load only the `my_subsys` and `usb` probe modules.
5188====
5189[role="term"]
5190----
5191# lttng-sessiond --kmod-probes=my_subsys,usb
5192----
5193====
5194--
5195
5196To confirm that a probe module is loaded:
5197
5198* Use man:lsmod(8):
5199+
5200--
5201[role="term"]
5202----
5203$ lsmod | grep lttng_probe_usb
5204----
5205--
5206
5207To unload the loaded probe modules:
5208
5209* Kill the session daemon with `SIGTERM`:
5210+
5211--
5212[role="term"]
5213----
5214# pkill lttng-sessiond
5215----
5216--
5217+
5218You can also use man:modprobe(8)'s `--remove` option if the session
5219daemon terminates abnormally.
5220
5221
5222[[controlling-tracing]]
5223== Tracing control
5224
5225Once an application or a Linux kernel is
5226<<instrumenting,instrumented>> for LTTng tracing,
5227you can _trace_ it.
5228
5229This section is divided in topics on how to use the various
5230<<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5231command-line tool>>, to _control_ the LTTng daemons and tracers.
5232
5233NOTE: In the following subsections, we refer to an man:lttng(1) command
5234using its man page name. For example, instead of _Run the `create`
5235command to..._, we use _Run the man:lttng-create(1) command to..._.
5236
5237
5238[[start-sessiond]]
5239=== Start a session daemon
5240
5241In some situations, you need to run a <<lttng-sessiond,session daemon>>
5242(man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5243command-line tool.
5244
5245You will see the following error when you run a command while no session
5246daemon is running:
5247
5248----
5249Error: No session daemon is available
5250----
5251
5252The only command that automatically runs a session daemon is
5253man:lttng-create(1), which you use to
5254<<creating-destroying-tracing-sessions,create a tracing session>>. While
5255this is usually the first operation that you perform, sometimes it's
5256not. Some examples are:
5257
5258* <<list-instrumentation-points,List the available instrumentation points>>.
5259* <<saving-loading-tracing-session,Load a tracing session configuration>>.
5260
5261[[tracing-group]] Each Unix user must have its own running session
5262daemon to trace user applications. The session daemon that the root user
5263starts is the only one allowed to control the LTTng kernel tracer. Users
5264that are part of the _tracing group_ can control the root session
5265daemon. The default tracing group name is `tracing`; you can set it to
5266something else with the opt:lttng-sessiond(8):--group option when you
5267start the root session daemon.
5268
5269To start a user session daemon:
5270
5271* Run man:lttng-sessiond(8):
5272+
5273--
5274[role="term"]
5275----
5276$ lttng-sessiond --daemonize
5277----
5278--
5279
5280To start the root session daemon:
5281
5282* Run man:lttng-sessiond(8) as the root user:
5283+
5284--
5285[role="term"]
5286----
5287# lttng-sessiond --daemonize
5288----
5289--
5290
5291In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5292start the session daemon in foreground.
5293
5294To stop a session daemon, use man:kill(1) on its process ID (standard
5295`TERM` signal).
5296
5297Note that some Linux distributions could manage the LTTng session daemon
5298as a service. In this case, you should use the service manager to
5299start, restart, and stop session daemons.
5300
5301
5302[[creating-destroying-tracing-sessions]]
5303=== Create and destroy a tracing session
5304
5305Almost all the LTTng control operations happen in the scope of
5306a <<tracing-session,tracing session>>, which is the dialogue between the
5307<<lttng-sessiond,session daemon>> and you.
5308
5309To create a tracing session with a generated name:
5310
5311* Use the man:lttng-create(1) command:
5312+
5313--
5314[role="term"]
5315----
5316$ lttng create
5317----
5318--
5319
5320The created tracing session's name is `auto` followed by the
5321creation date.
5322
5323To create a tracing session with a specific name:
5324
5325* Use the optional argument of the man:lttng-create(1) command:
5326+
5327--
5328[role="term"]
5329----
5330$ lttng create my-session
5331----
5332--
5333+
5334Replace `my-session` with the specific tracing session name.
5335
5336LTTng appends the creation date to the name of the tracing session's output directory.
5337
5338LTTng writes the traces of a tracing session in
5339+$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
5340name of the tracing session. Note that the env:LTTNG_HOME environment
5341variable defaults to `$HOME` if not set.
5342
5343To output LTTng traces to a non-default location:
5344
5345* Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5346+
5347--
5348[role="term"]
5349----
5350$ lttng create my-session --output=/tmp/some-directory
5351----
5352--
5353
5354You may create as many tracing sessions as you wish.
5355
5356To list all the existing tracing sessions for your Unix user:
5357
5358* Use the man:lttng-list(1) command:
5359+
5360--
5361[role="term"]
5362----
5363$ lttng list
5364----
5365--
5366
5367When you create a tracing session, it is set as the _current tracing
5368session_. The following man:lttng(1) commands operate on the current
5369tracing session when you don't specify one:
5370
5371[role="list-3-cols"]
5372* `add-context`
5373* `destroy`
5374* `disable-channel`
5375* `disable-event`
5376* `enable-channel`
5377* `enable-event`
5378* `load`
5379* `regenerate`
5380* `save`
5381* `snapshot`
5382* `start`
5383* `stop`
5384* `track`
5385* `untrack`
5386* `view`
5387
5388To change the current tracing session:
5389
5390* Use the man:lttng-set-session(1) command:
5391+
5392--
5393[role="term"]
5394----
5395$ lttng set-session new-session
5396----
5397--
5398+
5399Replace `new-session` by the name of the new current tracing session.
5400
5401When you are done tracing in a given tracing session, you can destroy
5402it. This operation frees the resources taken by the tracing session
5403being destroyed; it does not destroy the trace data that LTTng wrote
5404for this tracing session.
5405
5406To destroy the current tracing session:
5407
5408* Use the man:lttng-destroy(1) command:
5409+
5410--
5411[role="term"]
5412----
5413$ lttng destroy
5414----
5415--
5416
5417The man:lttng-destroy(1) command also runs the man:lttng-stop(1)
5418command implicitly (see <<basic-tracing-session-control,Start and stop a
5419tracing session>>). You need to stop tracing to make LTTng flush the
5420remaining trace data and make the trace readable.
5421
5422
5423[[list-instrumentation-points]]
5424=== List the available instrumentation points
5425
5426The <<lttng-sessiond,session daemon>> can query the running instrumented
5427user applications and the Linux kernel to get a list of available
5428instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5429they are tracepoints and system calls. For the user space tracing
5430domain, they are tracepoints. For the other tracing domains, they are
5431logger names.
5432
5433To list the available instrumentation points:
5434
5435* Use the man:lttng-list(1) command with the requested tracing domain's
5436 option amongst:
5437+
5438--
5439* opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5440 must be a root user, or it must be a member of the
5441 <<tracing-group,tracing group>>).
5442* opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5443 kernel system calls (your Unix user must be a root user, or it must be
5444 a member of the tracing group).
5445* opt:lttng-list(1):--userspace: user space tracepoints.
5446* opt:lttng-list(1):--jul: `java.util.logging` loggers.
5447* opt:lttng-list(1):--log4j: Apache log4j loggers.
5448* opt:lttng-list(1):--python: Python loggers.
5449--
5450
5451.List the available user space tracepoints.
5452====
5453[role="term"]
5454----
5455$ lttng list --userspace
5456----
5457====
5458
5459.List the available Linux kernel system call tracepoints.
5460====
5461[role="term"]
5462----
5463$ lttng list --kernel --syscall
5464----
5465====
5466
5467
5468[[enabling-disabling-events]]
5469=== Create and enable an event rule
5470
5471Once you <<creating-destroying-tracing-sessions,create a tracing
5472session>>, you can create <<event,event rules>> with the
5473man:lttng-enable-event(1) command.
5474
5475You specify each condition with a command-line option. The available
5476condition options are shown in the following table.
5477
5478[role="growable",cols="asciidoc,asciidoc,default"]
5479.Condition command-line options for the man:lttng-enable-event(1) command.
5480|====
5481|Option |Description |Applicable tracing domains
5482
5483|
5484One of:
5485
5486. `--syscall`
5487. +--probe=__ADDR__+
5488. +--function=__ADDR__+
5489
5490|
5491Instead of using the default _tracepoint_ instrumentation type, use:
5492
5493. A Linux system call.
5494. A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5495. The entry and return points of a Linux function (symbol or address).
5496
5497|Linux kernel.
5498
5499|First positional argument.
5500
5501|
5502Tracepoint or system call name. In the case of a Linux KProbe or
5503function, this is a custom name given to the event rule. With the
5504JUL, log4j, and Python domains, this is a logger name.
5505
5506With a tracepoint, logger, or system call name, you can use the special
5507`*` globbing character to match anything (for example, `sched_*`,
5508`my_comp*:*msg_*`).
5509
5510|All.
5511
5512|
5513One of:
5514
5515. +--loglevel=__LEVEL__+
5516. +--loglevel-only=__LEVEL__+
5517
5518|
5519. Match only tracepoints or log statements with a logging level at
5520 least as severe as +__LEVEL__+.
5521. Match only tracepoints or log statements with a logging level
5522 equal to +__LEVEL__+.
5523
5524See man:lttng-enable-event(1) for the list of available logging level
5525names.
5526
5527|User space, JUL, log4j, and Python.
5528
5529|+--exclude=__EXCLUSIONS__+
5530
5531|
5532When you use a `*` character at the end of the tracepoint or logger
5533name (first positional argument), exclude the specific names in the
5534comma-delimited list +__EXCLUSIONS__+.
5535
5536|
5537User space, JUL, log4j, and Python.
5538
5539|+--filter=__EXPR__+
5540
5541|
5542Match only events which satisfy the expression +__EXPR__+.
5543
5544See man:lttng-enable-event(1) to learn more about the syntax of a
5545filter expression.
5546
5547|All.
5548
5549|====
5550
5551You attach an event rule to a <<channel,channel>> on creation. If you do
5552not specify the channel with the opt:lttng-enable-event(1):--channel
5553option, and if the event rule to create is the first in its
5554<<domain,tracing domain>> for a given tracing session, then LTTng
5555creates a _default channel_ for you. This default channel is reused in
5556subsequent invocations of the man:lttng-enable-event(1) command for the
5557same tracing domain.
5558
5559An event rule is always enabled at creation time.
5560
5561The following examples show how you can combine the previous
5562command-line options to create simple to more complex event rules.
5563
5564.Create an event rule targeting a Linux kernel tracepoint (default channel).
5565====
5566[role="term"]
5567----
5568$ lttng enable-event --kernel sched_switch
5569----
5570====
5571
5572.Create an event rule matching four Linux kernel system calls (default channel).
5573====
5574[role="term"]
5575----
5576$ lttng enable-event --kernel --syscall open,write,read,close
5577----
5578====
5579
5580.Create event rules matching tracepoints with filter expressions (default channel).
5581====
5582[role="term"]
5583----
5584$ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5585----
5586
5587[role="term"]
5588----
5589$ lttng enable-event --kernel --all \
5590 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5591----
5592
5593[role="term"]
5594----
5595$ lttng enable-event --jul my_logger \
5596 --filter='$app.retriever:cur_msg_id > 3'
5597----
5598
5599IMPORTANT: Make sure to always quote the filter string when you
5600use man:lttng(1) from a shell.
5601====
5602
5603.Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5604====
5605[role="term"]
5606----
5607$ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5608----
5609
5610IMPORTANT: Make sure to always quote the wildcard character when you
5611use man:lttng(1) from a shell.
5612====
5613
5614.Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5615====
5616[role="term"]
5617----
5618$ lttng enable-event --python my-app.'*' \
5619 --exclude='my-app.module,my-app.hello'
5620----
5621====
5622
5623.Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5624====
5625[role="term"]
5626----
5627$ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5628----
5629====
5630
5631.Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5632====
5633[role="term"]
5634----
5635$ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5636----
5637====
5638
5639The event rules of a given channel form a whitelist: as soon as an
5640emitted event passes one of them, LTTng can record the event. For
5641example, an event named `my_app:my_tracepoint` emitted from a user space
5642tracepoint with a `TRACE_ERROR` log level passes both of the following
5643rules:
5644
5645[role="term"]
5646----
5647$ lttng enable-event --userspace my_app:my_tracepoint
5648$ lttng enable-event --userspace my_app:my_tracepoint \
5649 --loglevel=TRACE_INFO
5650----
5651
5652The second event rule is redundant: the first one includes
5653the second one.
5654
5655
5656[[disable-event-rule]]
5657=== Disable an event rule
5658
5659To disable an event rule that you <<enabling-disabling-events,created>>
5660previously, use the man:lttng-disable-event(1) command. This command
5661disables _all_ the event rules (of a given tracing domain and channel)
5662which match an instrumentation point. The other conditions are not
5663supported as of LTTng{nbsp}{revision}.
5664
5665The LTTng tracer does not record an emitted event which passes
5666a _disabled_ event rule.
5667
5668.Disable an event rule matching a Python logger (default channel).
5669====
5670[role="term"]
5671----
5672$ lttng disable-event --python my-logger
5673----
5674====
5675
5676.Disable an event rule matching all `java.util.logging` loggers (default channel).
5677====
5678[role="term"]
5679----
5680$ lttng disable-event --jul '*'
5681----
5682====
5683
5684.Disable _all_ the event rules of the default channel.
5685====
5686Unlike the opt:lttng-enable-event(1):--all option of
5687man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
5688option is not the equivalent of the event name `*` (wildcard): it
5689disables _all_ the event rules of a given channel.
5690
5691[role="term"]
5692----
5693$ lttng disable-event --jul --all-events
5694----
5695====
5696
5697NOTE: You cannot delete an event rule once you create it.
5698
5699
5700[[status]]
5701=== Get the status of a tracing session
5702
5703To get the status of the current tracing session, that is, its
5704parameters, its channels, event rules, and their attributes:
5705
5706* Use the man:lttng-status(1) command:
5707+
5708--
5709[role="term"]
5710----
5711$ lttng status
5712----
5713--
5714+
5715
5716To get the status of any tracing session:
5717
5718* Use the man:lttng-list(1) command with the tracing session's name:
5719+
5720--
5721[role="term"]
5722----
5723$ lttng list my-session
5724----
5725--
5726+
5727Replace `my-session` with the desired tracing session's name.
5728
5729
5730[[basic-tracing-session-control]]
5731=== Start and stop a tracing session
5732
5733Once you <<creating-destroying-tracing-sessions,create a tracing
5734session>> and
5735<<enabling-disabling-events,create one or more event rules>>,
5736you can start and stop the tracers for this tracing session.
5737
5738To start tracing in the current tracing session:
5739
5740* Use the man:lttng-start(1) command:
5741+
5742--
5743[role="term"]
5744----
5745$ lttng start
5746----
5747--
5748
5749LTTng is very flexible: you can launch user applications before
5750or after you start the tracers. The tracers only record the events
5751if they pass enabled event rules and if they occur while the tracers are
5752started.
5753
5754To stop tracing in the current tracing session:
5755
5756* Use the man:lttng-stop(1) command:
5757+
5758--
5759[role="term"]
5760----
5761$ lttng stop
5762----
5763--
5764+
5765If there were <<channel-overwrite-mode-vs-discard-mode,lost event
5766records>> or lost sub-buffers since the last time you ran
5767man:lttng-start(1), warnings are printed when you run the
5768man:lttng-stop(1) command.
5769
5770IMPORTANT: You need to stop tracing to make LTTng flush the remaining
5771trace data and make the trace readable. Note that the
5772man:lttng-destroy(1) command (see
5773<<creating-destroying-tracing-sessions,Create and destroy a tracing
5774session>>) also runs the man:lttng-stop(1) command implicitly.
5775
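For example, a minimal sketch, assuming a tracing session named
`my-session`: stop and destroy it in a single step, since
man:lttng-destroy(1) implies man:lttng-stop(1):

[role="term"]
----
$ lttng destroy my-session
----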
5776
5777[[enabling-disabling-channels]]
5778=== Create a channel
5779
5780Once you create a tracing session, you can create a <<channel,channel>>
5781with the man:lttng-enable-channel(1) command.
5782
5783Note that LTTng automatically creates a default channel when, for a
5784given <<domain,tracing domain>>, no channels exist and you
5785<<enabling-disabling-events,create>> the first event rule. This default
5786channel is named `channel0` and its attributes are set to reasonable
5787values. Therefore, you only need to create a channel when you need
5788non-default attributes.
5789
5790You specify each non-default channel attribute with a command-line
5791option when you use the man:lttng-enable-channel(1) command. The
5792available command-line options are:
5793
5794[role="growable",cols="asciidoc,asciidoc"]
5795.Command-line options for the man:lttng-enable-channel(1) command.
5796|====
5797|Option |Description
5798
5799|`--overwrite`
5800
5801|
5802Use the _overwrite_
5803<<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
5804the default _discard_ mode.
5805
5806|`--buffers-pid` (user space tracing domain only)
5807
5808|
5809Use the per-process <<channel-buffering-schemes,buffering scheme>>
5810instead of the default per-user buffering scheme.
5811
5812|+--subbuf-size=__SIZE__+
5813
5814|
5815Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
5816either for each Unix user (default), or for each instrumented process.
5817
5818See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5819
5820|+--num-subbuf=__COUNT__+
5821
5822|
5823Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
5824for each Unix user (default), or for each instrumented process.
5825
5826See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5827
5828|+--tracefile-size=__SIZE__+
5829
5830|
5831Set the maximum size of each trace file that this channel writes within
5832a stream to +__SIZE__+ bytes instead of no maximum.
5833
5834See <<tracefile-rotation,Trace file count and size>>.
5835
5836|+--tracefile-count=__COUNT__+
5837
5838|
5839Limit the number of trace files that this channel creates to
5840+__COUNT__+ trace files instead of no limit.
5841
5842See <<tracefile-rotation,Trace file count and size>>.
5843
5844|+--switch-timer=__PERIODUS__+
5845
5846|
5847Set the <<channel-switch-timer,switch timer period>>
5848to +__PERIODUS__+{nbsp}µs.
5849
5850|+--read-timer=__PERIODUS__+
5851
5852|
5853Set the <<channel-read-timer,read timer period>>
5854to +__PERIODUS__+{nbsp}µs.
5855
5856|[[opt-blocking-timeout]]+--blocking-timeout=__TIMEOUTUS__+
5857
5858|
5859Set the timeout of user space applications which load LTTng-UST
5860in blocking mode to +__TIMEOUTUS__+:
5861
58620 (default)::
5863 Never block (non-blocking mode).
5864
5865`inf`::
5866 Block forever until space is available in a sub-buffer to record
5867 the event.
5868
5869__n__, a positive value::
5870 Wait for at most __n__ µs when trying to write into a sub-buffer.
5871
5872Note that, for this option to have any effect on an instrumented
5873user space application, you need to run the application with the
5874env:LTTNG_UST_ALLOW_BLOCKING environment variable set.
5875
5876|+--output=__TYPE__+ (Linux kernel tracing domain only)
5877
5878|
5879Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
5880
5881|====
5882
5883You can only create a channel in the Linux kernel and user space
5884<<domain,tracing domains>>: other tracing domains have their own channel
5885created on the fly when <<enabling-disabling-events,creating event
5886rules>>.
5887
5888[IMPORTANT]
5889====
5890Because of a current LTTng limitation, you must create all channels
5891_before_ you <<basic-tracing-session-control,start tracing>> in a given
5892tracing session, that is, before the first time you run
5893man:lttng-start(1).
5894
5895Since LTTng automatically creates a default channel when you use the
5896man:lttng-enable-event(1) command with a specific tracing domain, you
5897cannot, for example, create a Linux kernel event rule, start tracing,
5898and then create a user space event rule, because no user space channel
5899exists yet and it's too late to create one.
5900
5901For this reason, make sure to configure your channels properly
5902before starting the tracers for the first time!
5903====
5904
5905The following examples show how you can combine the previous
5906command-line options to create simple to more complex channels.
5907
5908.Create a Linux kernel channel with default attributes.
5909====
5910[role="term"]
5911----
5912$ lttng enable-channel --kernel my-channel
5913----
5914====
5915
5916.Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
5917====
5918[role="term"]
5919----
5920$ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
5921 --buffers-pid my-channel
5922----
5923====
5924
5925.[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
5926====
5927<<creating-destroying-tracing-sessions,Create a tracing session>>,
5928create the channel, <<enabling-disabling-events,create an event rule>>,
5929and <<basic-tracing-session-control,start tracing>>:
5930
5931[role="term"]
5932----
5933$ lttng create
5934$ lttng enable-channel --userspace --blocking-timeout=inf blocking-channel
5935$ lttng enable-event --userspace --channel=blocking-channel --all
5936$ lttng start
5937----
5938
5939Run an application instrumented with LTTng-UST and allow it to block:
5940
5941[role="term"]
5942----
5943$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
5944----
5945====
5946
5947.Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
5948====
5949[role="term"]
5950----
5951$ lttng enable-channel --kernel --tracefile-count=8 \
5952 --tracefile-size=4194304 my-channel
5953----
5954====
5955
5956.Create a user space channel in overwrite (or _flight recorder_) mode.
5957====
5958[role="term"]
5959----
5960$ lttng enable-channel --userspace --overwrite my-channel
5961----
5962====
5963
5964You can <<enabling-disabling-events,create>> the same event rule in
5965two different channels:
5966
5967[role="term"]
5968----
5969$ lttng enable-event --userspace --channel=my-channel app:tp
5970$ lttng enable-event --userspace --channel=other-channel app:tp
5971----
5972
5973If both channels are enabled, when a tracepoint named `app:tp` is
5974reached, LTTng records two events, one for each channel.
5975
5976
5977[[disable-channel]]
5978=== Disable a channel
5979
5980To disable a specific channel that you <<enabling-disabling-channels,created>>
5981previously, use the man:lttng-disable-channel(1) command.
5982
5983.Disable a specific Linux kernel channel.
5984====
5985[role="term"]
5986----
5987$ lttng disable-channel --kernel my-channel
5988----
5989====
5990
5991The state of a channel precedes the individual states of event rules
5992attached to it: event rules which belong to a disabled channel, even if
5993they are enabled, are also considered disabled.
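
For example, a short sketch, assuming a user space channel named
`my-channel` already exists (the channel and tracepoint names here are
arbitrary): after the following commands, the `my_app:my_tracepoint`
event rule is still enabled, but LTTng does not record matching events
because its channel is disabled:

[role="term"]
----
$ lttng enable-event --userspace --channel=my-channel my_app:my_tracepoint
$ lttng disable-channel --userspace my-channel
----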
5994
5995
5996[[adding-context]]
5997=== Add context fields to a channel
5998
5999Event record fields in trace files provide important information about
6000events that occurred previously, but sometimes some external context may
6001help you solve a problem faster. Examples of context fields are:
6002
6003* The **process ID**, **thread ID**, **process name**, and
6004 **process priority** of the thread in which the event occurs.
6005* The **hostname** of the system on which the event occurs.
6006* The current values of many possible **performance counters** using
6007 perf, for example:
6008** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6009** Cache misses.
6010** Branch instructions, misses, and loads.
6011** CPU faults.
6012* Any context defined at the application level (supported for the
6013 JUL and log4j <<domain,tracing domains>>).
6014
6015To get the full list of available context fields, see
6016`lttng add-context --list`. Some context fields are reserved for a
6017specific <<domain,tracing domain>> (Linux kernel or user space).
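
For example, to print the available context field types on your system:

[role="term"]
----
$ lttng add-context --list
----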
6018
6019You add context fields to <<channel,channels>>. All the events
6020that a channel with added context fields records contain those fields.
6021
6022To add context fields to one or all the channels of a given tracing
6023session:
6024
6025* Use the man:lttng-add-context(1) command.
6026
6027.Add context fields to all the channels of the current tracing session.
6028====
6029The following command line adds the virtual process identifier and
6030the per-thread CPU cycles count fields to all the user space channels
6031of the current tracing session.
6032
6033[role="term"]
6034----
6035$ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6036----
6037====
6038
6039.Add performance counter context fields by raw ID
6040====
6041See man:lttng-add-context(1) for the exact format of the context field
6042type, which is partly compatible with the format used in
6043man:perf-record(1).
6044
6045[role="term"]
6046----
6047$ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6048$ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6049----
6050====
6051
6052.Add a context field to a specific channel.
6053====
6054The following command line adds the thread identifier context field
6055to the Linux kernel channel named `my-channel` in the current
6056tracing session.
6057
6058[role="term"]
6059----
6060$ lttng add-context --kernel --channel=my-channel --type=tid
6061----
6062====
6063
6064.Add an application-specific context field to a specific channel.
6065====
6066The following command line adds the `cur_msg_id` context field of the
6067`retriever` context retriever for all the instrumented
6068<<java-application,Java applications>> recording <<event,event records>>
6069in the channel named `my-channel`:
6070
6071[role="term"]
6072----
6073$ lttng add-context --kernel --channel=my-channel \
6074 --type='$app:retriever:cur_msg_id'
6075----
6076
6077IMPORTANT: Make sure to always quote the `$` character when you
6078use man:lttng-add-context(1) from a shell.
6079====
6080
6081NOTE: You cannot remove context fields from a channel once you add them.
6082
6083
6084[role="since-2.7"]
6085[[pid-tracking]]
6086=== Track process IDs
6087
6088It's often useful to allow only specific process IDs (PIDs) to emit
6089events. For example, you may wish to record all the system calls made by
6090a given process (Ă  la http://linux.die.net/man/1/strace[strace]).
6091
6092The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6093purpose. Both commands operate on a whitelist of process IDs. You _add_
6094entries to this whitelist with the man:lttng-track(1) command and remove
6095entries with the man:lttng-untrack(1) command. Any process which has one
6096of the PIDs in the whitelist is allowed to emit LTTng events which pass
6097an enabled <<event,event rule>>.
6098
6099NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6100process with a given tracked ID exit and another process be given this
6101ID, then the latter would also be allowed to emit events.
6102
6103.Track and untrack process IDs.
6104====
6105For the sake of the following example, assume the target system has 16
6106possible PIDs.
6107
6108When you
6109<<creating-destroying-tracing-sessions,create a tracing session>>,
6110the whitelist contains all the possible PIDs:
6111
6112[role="img-100"]
6113.All PIDs are tracked.
6114image::track-all.png[]
6115
6116When the whitelist is full and you use the man:lttng-track(1) command to
6117specify some PIDs to track, LTTng first clears the whitelist, then it
6118tracks the specific PIDs. After:
6119
6120[role="term"]
6121----
6122$ lttng track --pid=3,4,7,10,13
6123----
6124
6125the whitelist is:
6126
6127[role="img-100"]
6128.PIDs 3, 4, 7, 10, and 13 are tracked.
6129image::track-3-4-7-10-13.png[]
6130
6131You can add more PIDs to the whitelist afterwards:
6132
6133[role="term"]
6134----
6135$ lttng track --pid=1,15,16
6136----
6137
6138The result is:
6139
6140[role="img-100"]
6141.PIDs 1, 15, and 16 are added to the whitelist.
6142image::track-1-3-4-7-10-13-15-16.png[]
6143
6144The man:lttng-untrack(1) command removes entries from the PID tracker's
6145whitelist. Given the previous example, the following command:
6146
6147[role="term"]
6148----
6149$ lttng untrack --pid=3,7,10,13
6150----
6151
6152leads to this whitelist:
6153
6154[role="img-100"]
6155.PIDs 3, 7, 10, and 13 are removed from the whitelist.
6156image::track-1-4-15-16.png[]
6157
6158LTTng can track all possible PIDs again using the
6159opt:lttng-track(1):--all option:
6160
6161[role="term"]
6162----
6163$ lttng track --pid --all
6164----
6165
6166The result is, again:
6167
6168[role="img-100"]
6169.All PIDs are tracked.
6170image::track-all.png[]
6171====
6172
6173.Track only specific PIDs
6174====
6175A very typical use case with PID tracking is to start with an empty
6176whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6177then add PIDs manually while tracers are active. You can accomplish this
6178by using the opt:lttng-untrack(1):--all option of the
6179man:lttng-untrack(1) command to clear the whitelist after you
6180<<creating-destroying-tracing-sessions,create a tracing session>>:
6181
6182[role="term"]
6183----
6184$ lttng untrack --pid --all
6185----
6186
6187gives:
6188
6189[role="img-100"]
6190.No PIDs are tracked.
6191image::untrack-all.png[]
6192
6193If you trace with this whitelist configuration, the tracer records no
6194events for this <<domain,tracing domain>> because no processes are
6195tracked. You can use the man:lttng-track(1) command as usual to track
6196specific PIDs, for example:
6197
6198[role="term"]
6199----
6200$ lttng track --pid=6,11
6201----
6202
6203Result:
6204
6205[role="img-100"]
6206.PIDs 6 and 11 are tracked.
6207image::track-6-11.png[]
6208====
6209
6210
6211[role="since-2.5"]
6212[[saving-loading-tracing-session]]
6213=== Save and load tracing session configurations
6214
6215Configuring a <<tracing-session,tracing session>> can be long. Some of
6216the tasks involved are:
6217
6218* <<enabling-disabling-channels,Create channels>> with
6219 specific attributes.
6220* <<adding-context,Add context fields>> to specific channels.
6221* <<enabling-disabling-events,Create event rules>> with specific log
6222 level and filter conditions.
6223
6224If you use LTTng to solve real world problems, chances are you have to
6225record events using the same tracing session setup over and over,
6226modifying a few variables each time in your instrumented program
6227or environment. To avoid constant tracing session reconfiguration,
6228the man:lttng(1) command-line tool can save and load tracing session
6229configurations to/from XML files.
6230
6231To save a given tracing session configuration:
6232
6233* Use the man:lttng-save(1) command:
6234+
6235--
6236[role="term"]
6237----
6238$ lttng save my-session
6239----
6240--
6241+
6242Replace `my-session` with the name of the tracing session to save.
6243
6244LTTng saves tracing session configurations to
6245dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6246env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6247the opt:lttng-save(1):--output-path option to change this destination
6248directory.
6249
6250LTTng saves all configuration parameters, for example:
6251
6252* The tracing session name.
6253* The trace data output path.
6254* The channels with their state and all their attributes.
6255* The context fields you added to channels.
6256* The event rules with their state, log level and filter conditions.
6257
6258To load a tracing session:
6259
6260* Use the man:lttng-load(1) command:
6261+
6262--
6263[role="term"]
6264----
6265$ lttng load my-session
6266----
6267--
6268+
6269Replace `my-session` with the name of the tracing session to load.
6270
6271When LTTng loads a configuration, it restores your saved tracing session
6272as if you just configured it manually.
6273
6274See man:lttng(1) for the complete list of command-line options. You
6275can also save and load many sessions at a time, and decide in which
6276directory to output the XML files.
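
For example, a minimal sketch, assuming an arbitrary destination
directory named dir:{/tmp/sessions}: save the configuration of the
tracing session `my-session` to this directory, then load it back from
there later with the opt:lttng-load(1):--input-path option:

[role="term"]
----
$ lttng save --output-path=/tmp/sessions my-session
$ lttng load --input-path=/tmp/sessions my-session
----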
6277
6278
6279[[sending-trace-data-over-the-network]]
6280=== Send trace data over the network
6281
6282LTTng can send the recorded trace data to a remote system over the
6283network instead of writing it to the local file system.
6284
6285To send the trace data over the network:
6286
6287. On the _remote_ system (which can also be the target system),
6288 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6289+
6290--
6291[role="term"]
6292----
6293$ lttng-relayd
6294----
6295--
6296
6297. On the _target_ system, create a tracing session configured to
6298 send trace data over the network:
6299+
6300--
6301[role="term"]
6302----
6303$ lttng create my-session --set-url=net://remote-system
6304----
6305--
6306+
6307Replace `remote-system` by the host name or IP address of the
6308remote system. See man:lttng-create(1) for the exact URL format.
6309
6310. On the target system, use the man:lttng(1) command-line tool as usual.
6311 When tracing is active, the target's consumer daemon sends sub-buffers
6312 to the relay daemon running on the remote system instead of flushing
6313 them to the local file system. The relay daemon writes the received
6314 packets to the local file system.
6315
6316The relay daemon writes trace files to
6317+$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6318+__hostname__+ is the host name of the target system and +__session__+
6319is the tracing session name. Note that the env:LTTNG_HOME environment
6320variable defaults to `$HOME` if not set. Use the
6321opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6322trace files to another base directory.
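
For example, a minimal sketch, assuming an arbitrary base directory
named dir:{/data/traces} on the remote system:

[role="term"]
----
$ lttng-relayd --output=/data/traces
----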
6323
6324
6325[role="since-2.4"]
6326[[lttng-live]]
6327=== View events as LTTng emits them (noch:{LTTng} live)
6328
6329LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6330daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6331display events as LTTng emits them on the target system while tracing is
6332active.
6333
6334The relay daemon creates a _tee_: it forwards the trace data to both
6335the local file system and the connected live viewers:
6336
6337[role="img-90"]
6338.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6339image::live.png[]
6340
6341To use LTTng live:
6342
6343. On the _target system_, create a <<tracing-session,tracing session>>
6344 in _live mode_:
6345+
6346--
6347[role="term"]
6348----
6349$ lttng create my-session --live
6350----
6351--
6352+
6353This spawns a local relay daemon.
6354
6355. Start the live viewer and configure it to connect to the relay
6356 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6357+
6358--
6359[role="term"]
6360----
6361$ babeltrace --input-format=lttng-live \
6362 net://localhost/host/hostname/my-session
6363----
6364--
6365+
6366Replace:
6367+
6368--
6369* `hostname` with the host name of the target system.
6370* `my-session` with the name of the tracing session to view.
6371--
6372
6373. Configure the tracing session as usual with the man:lttng(1)
6374 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6375
6376You can list the available live tracing sessions with Babeltrace:
6377
6378[role="term"]
6379----
6380$ babeltrace --input-format=lttng-live net://localhost
6381----
6382
6383You can start the relay daemon on another system. In this case, you need
6384to specify the relay daemon's URL when you create the tracing session
6385with the opt:lttng-create(1):--set-url option. You also need to replace
6386`localhost` in the procedure above with the host name of the system on
6387which the relay daemon is running.
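
For example, a minimal sketch of creating a live tracing session which
sends its trace data to a relay daemon running on a remote system
(replace `remote-system` with its host name or IP address):

[role="term"]
----
$ lttng create my-session --live --set-url=net://remote-system
----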
6388
6389See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6390command-line options.
6391
6392
6393[role="since-2.3"]
6394[[taking-a-snapshot]]
6395=== Take a snapshot of the current sub-buffers of a tracing session
6396
6397The normal behavior of LTTng is to append full sub-buffers to growing
6398trace data files. This is ideal to keep a full history of the events
6399that occurred on the target system, but it can
6400represent too much data in some situations. For example, you may wish
6401to trace your application continuously until some critical situation
6402happens, in which case you only need the latest few recorded
6403events to perform the desired analysis, not multi-gigabyte trace files.
6404
6405With the man:lttng-snapshot(1) command, you can take a snapshot of the
6406current sub-buffers of a given <<tracing-session,tracing session>>.
6407LTTng can write the snapshot to the local file system or send it over
6408the network.
6409
6410To take a snapshot:
6411
6412. Create a tracing session in _snapshot mode_:
6413+
6414--
6415[role="term"]
6416----
6417$ lttng create my-session --snapshot
6418----
6419--
6420+
6421The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6422<<channel,channels>> created in this mode is automatically set to
6423_overwrite_ (flight recorder mode).
6424
6425. Configure the tracing session as usual with the man:lttng(1)
6426 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6427
6428. **Optional**: When you need to take a snapshot,
6429 <<basic-tracing-session-control,stop tracing>>.
6430+
6431You can take a snapshot when the tracers are active, but if you stop
6432them first, you are sure that the data in the sub-buffers does not
6433change before you actually take the snapshot.
6434
6435. Take a snapshot:
6436+
6437--
6438[role="term"]
6439----
6440$ lttng snapshot record --name=my-first-snapshot
6441----
6442--
6443+
6444LTTng writes the current sub-buffers of all the current tracing
6445session's channels to trace files on the local file system. Those trace
6446files have `my-first-snapshot` in their name.
6447
6448There is no difference between the format of a normal trace file and the
6449format of a snapshot: viewers of LTTng traces also support LTTng
6450snapshots.
6451
6452By default, LTTng writes snapshot files to the path shown by
6453`lttng snapshot list-output`. You can change this path or decide to send
6454snapshots over the network using either:
6455
6456. An output path or URL that you specify when you create the
6457 tracing session.
6458. A snapshot output path or URL that you add using
6459  `lttng snapshot add-output`.
6460. An output path or URL that you provide directly to the
6461 `lttng snapshot record` command.
6462
6463Method 3 overrides method 2, which overrides method 1. When you
6464specify a URL, a relay daemon must listen on a remote system (see
6465<<sending-trace-data-over-the-network,Send trace data over the network>>).
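
For example, a sketch which adds a network snapshot output to the
current tracing session (replace `remote-system` with the host name of
a system on which a relay daemon is listening), then takes a snapshot
which LTTng sends to this output:

[role="term"]
----
$ lttng snapshot add-output net://remote-system
$ lttng snapshot record
----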
6466
6467
6468[role="since-2.6"]
6469[[mi]]
6470=== Use the machine interface
6471
6472With any command of the man:lttng(1) command-line tool, you can set the
6473opt:lttng(1):--mi option to `xml` (before the command name) to get an
6474XML machine interface output, for example:
6475
6476[role="term"]
6477----
6478$ lttng --mi=xml enable-event --kernel --syscall open
6479----
6480
6481A schema definition (XSD) is
6482https://github.com/lttng/lttng-tools/blob/stable-2.10/src/common/mi-lttng-3.0.xsd[available]
6483to ease the integration with external tools as much as possible.
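
For example, a sketch which captures the machine interface output of
man:lttng-list(1) to an arbitrary file for later processing by an
external tool:

[role="term"]
----
$ lttng --mi=xml list my-session > my-session-status.xml
----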
6484
6485
6486[role="since-2.8"]
6487[[metadata-regenerate]]
6488=== Regenerate the metadata of an LTTng trace
6489
6490An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6491data stream files and a metadata file. This metadata file contains,
6492amongst other things, information about the offset of the clock sources
6493used to timestamp <<event,event records>> when tracing.
6494
6495If, once a <<tracing-session,tracing session>> is
6496<<basic-tracing-session-control,started>>, a major
6497https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6498happens, the trace's clock offset also needs to be updated. You
6499can use the `metadata` item of the man:lttng-regenerate(1) command
6500to do so.
6501
6502The main use case of this command is to allow a system to boot with
6503an incorrect wall time and trace it with LTTng before its wall time
6504is corrected. Once the system is known to be in a state where its
6505wall time is correct, it can run `lttng regenerate metadata`.
6506
6507To regenerate the metadata of an LTTng trace:
6508
6509* Use the `metadata` item of the man:lttng-regenerate(1) command:
6510+
6511--
6512[role="term"]
6513----
6514$ lttng regenerate metadata
6515----
6516--
6517
6518[IMPORTANT]
6519====
6520`lttng regenerate metadata` has the following limitations:
6521
6522* The tracing session must have been
6523  <<creating-destroying-tracing-sessions,created>> in non-live mode.
6524* User space <<channel,channels>>, if any, must use
6525  <<channel-buffering-schemes,per-user buffering>>.
6526====
6527
6528
6529[role="since-2.9"]
6530[[regenerate-statedump]]
6531=== Regenerate the state dump of a tracing session
6532
6533The LTTng kernel and user space tracers generate state dump
6534<<event,event records>> when the application starts or when you
6535<<basic-tracing-session-control,start a tracing session>>. An analysis
6536can use the state dump event records to set an initial state before it
6537builds the rest of the state from the following event records.
6538http://tracecompass.org/[Trace Compass] is a notable example of an
6539application which uses the state dump of an LTTng trace.
6540
6541When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6542state dump event records are not included in the snapshot because they
6543were recorded to a sub-buffer that has been consumed or overwritten
6544already.
6545
6546You can use the `lttng regenerate statedump` command to emit the state
6547dump event records again.
6548
6549To regenerate the state dump of the current tracing session, provided
6550you created it in snapshot mode, before you take a snapshot:
6551
6552. Use the `statedump` item of the man:lttng-regenerate(1) command:
6553+
6554--
6555[role="term"]
6556----
6557$ lttng regenerate statedump
6558----
6559--
6560
6561. <<basic-tracing-session-control,Stop the tracing session>>:
6562+
6563--
6564[role="term"]
6565----
6566$ lttng stop
6567----
6568--
6569
6570. <<taking-a-snapshot,Take a snapshot>>:
6571+
6572--
6573[role="term"]
6574----
6575$ lttng snapshot record --name=my-snapshot
6576----
6577--
6578
6579Depending on the event throughput, you should run steps 1 and 2
6580as closely together as possible.
6581
6582NOTE: To record the state dump events, you need to
6583<<enabling-disabling-events,create event rules>> which enable them.
6584LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6585LTTng-modules state dump tracepoints start with `lttng_statedump_`.
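
For example, a sketch which creates event rules matching the state dump
events in both the Linux kernel and user space tracing domains:

[role="term"]
----
$ lttng enable-event --kernel 'lttng_statedump_*'
$ lttng enable-event --userspace 'lttng_ust_statedump:*'
----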
6586
6587
6588[role="since-2.7"]
6589[[persistent-memory-file-systems]]
6590=== Record trace data on persistent memory file systems
6591
6592https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6593(NVRAM) is random-access memory that retains its information when power
6594is turned off (non-volatile). Systems with such memory can store data
6595structures in RAM and retrieve them after a reboot, without flushing
6596to typical _storage_.
6597
6598Linux supports NVRAM file systems thanks to either
6599http://pramfs.sourceforge.net/[PRAMFS] or
6600https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6601(requires Linux 4.1+).
6602
6603This section does not describe how to operate such file systems;
6604we assume that you have a working persistent memory file system.
6605
6606When you create a <<tracing-session,tracing session>>, you can specify
6607the path of the shared memory holding the sub-buffers. If you specify a
6608location on an NVRAM file system, then you can retrieve the latest
6609recorded trace data when the system reboots after a crash.
6610
6611To record trace data on a persistent memory file system and retrieve the
6612trace data after a system crash:
6613
6614. Create a tracing session with a sub-buffer shared memory path located
6615 on an NVRAM file system:
6616+
6617--
6618[role="term"]
6619----
6620$ lttng create my-session --shm-path=/path/to/shm
6621----
6622--
6623
6624. Configure the tracing session as usual with the man:lttng(1)
6625 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6626
6627. After a system crash, use the man:lttng-crash(1) command-line tool to
6628 view the trace data recorded on the NVRAM file system:
6629+
6630--
6631[role="term"]
6632----
6633$ lttng-crash /path/to/shm
6634----
6635--
6636
6637The binary layout of the ring buffer files is not exactly the same as
6638the trace files layout. This is why you need to use man:lttng-crash(1)
6639instead of your preferred trace viewer directly.
6640
6641To convert the ring buffer files to LTTng trace files:
6642
6643* Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6644+
6645--
6646[role="term"]
6647----
6648$ lttng-crash --extract=/path/to/trace /path/to/shm
6649----
6650--
6651
6652
6653[role="since-2.10"]
6654[[notif-trigger-api]]
6655=== Get notified when a channel's buffer usage is too high or too low
6656
6657With LTTng's $$C/C++$$ notification and trigger API, your user
6658application can get notified when the buffer usage of one or more
6659<<channel,channels>> becomes too low or too high. You can use this API
6660and enable or disable <<event,event rules>> during tracing to avoid
6661<<channel-overwrite-mode-vs-discard-mode,discarded event records>>.
6662
6663.Have a user application get notified when an LTTng channel's buffer usage is too high.
6664====
6665In this example, we create and build an application which gets notified
6666when the buffer usage of a specific LTTng channel is higher than
666775{nbsp}%. The example only prints a message when this is the case, but
6668we could also use the API of <<liblttng-ctl-lttng,`liblttng-ctl`>> to
6669disable event rules when this happens.
6670
6671. Create the application's C source file:
6672+
6673--
6674[source,c]
6675.path:{notif-app.c}
6676----
6677#include <stdio.h>
6678#include <assert.h>
6679#include <lttng/domain.h>
6680#include <lttng/action/action.h>
6681#include <lttng/action/notify.h>
6682#include <lttng/condition/condition.h>
6683#include <lttng/condition/buffer-usage.h>
6684#include <lttng/condition/evaluation.h>
6685#include <lttng/notification/channel.h>
6686#include <lttng/notification/notification.h>
6687#include <lttng/trigger/trigger.h>
6688#include <lttng/endpoint.h>
6689
6690int main(int argc, char *argv[])
6691{
6692 int exit_status = 0;
6693 struct lttng_notification_channel *notification_channel;
6694 struct lttng_condition *condition;
6695 struct lttng_action *action;
6696 struct lttng_trigger *trigger;
6697 const char *tracing_session_name;
6698 const char *channel_name;
6699
6700 assert(argc >= 3);
6701 tracing_session_name = argv[1];
6702 channel_name = argv[2];
6703
6704 /*
6705 * Create a notification channel. A notification channel
6706 * connects the user application to the LTTng session daemon.
6707	 * This notification channel can be used to listen to various
6708 * types of notifications.
6709 */
6710 notification_channel = lttng_notification_channel_create(
6711 lttng_session_daemon_notification_endpoint);
6712
6713 /*
6714 * Create a "high buffer usage" condition. In this case, the
6715 * condition is reached when the buffer usage is greater than or
6716 * equal to 75 %. We create the condition for a specific tracing
6717 * session name, channel name, and for the user space tracing
6718 * domain.
6719	 *
6720 * The "low buffer usage" condition type also exists.
6721 */
6722 condition = lttng_condition_buffer_usage_high_create();
6723 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
6724 lttng_condition_buffer_usage_set_session_name(
6725 condition, tracing_session_name);
6726 lttng_condition_buffer_usage_set_channel_name(condition,
6727 channel_name);
6728 lttng_condition_buffer_usage_set_domain_type(condition,
6729 LTTNG_DOMAIN_UST);
6730
6731 /*
6732 * Create an action (get a notification) to take when the
6733 * condition created above is reached.
6734 */
6735 action = lttng_action_notify_create();
6736
6737 /*
6738 * Create a trigger. A trigger associates a condition to an
6739 * action: the action is executed when the condition is reached.
6740	 */
6741	trigger = lttng_trigger_create(condition, action);
6742
6743 /* Register the trigger to LTTng. */
6744 lttng_register_trigger(trigger);
6745
6746 /*
6747 * Now that we have registered a trigger, a notification will be
6748	 * emitted every time its condition is met. To receive this
6749 * notification, we must subscribe to notifications that match
6750 * the same condition.
6751	 */
6752 lttng_notification_channel_subscribe(notification_channel,
6753 condition);
6754
6755 /*
6756 * Notification loop. You can put this in a dedicated thread to
6757 * avoid blocking the main thread.
6758	 */
6759 for (;;) {
6760 struct lttng_notification *notification;
6761 enum lttng_notification_channel_status status;
6762 const struct lttng_evaluation *notification_evaluation;
6763 const struct lttng_condition *notification_condition;
6764 double buffer_usage;
6765
6766 /* Receive the next notification. */
6767 status = lttng_notification_channel_get_next_notification(
6768			notification_channel, &notification);
6769
6770 switch (status) {
6771 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
6772 break;
6773 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
6774 /*
6775 * The session daemon can drop notifications if
6776 * a monitoring application is not consuming the
6777 * notifications fast enough.
6778 */
6779 continue;
6780 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
6781 /*
6782 * The notification channel has been closed by the
6783 * session daemon. This is typically caused by a session
6784 * daemon shutting down.
6785 */
6786 goto end;
6787 default:
6788 /* Unhandled conditions or errors. */
6789 exit_status = 1;
6790 goto end;
6791 }
6792
6793 /*
6794 * A notification provides, amongst other things:
6795 *
6796 * * The condition that caused this notification to be
6797 * emitted.
6798 * * The condition evaluation, which provides more
6799 * specific information on the evaluation of the
6800 * condition.
6801 *
6802 * The condition evaluation provides the buffer usage
6803		 * value at the moment the condition was reached.
6804 */
6805 notification_condition = lttng_notification_get_condition(
6806 notification);
6807 notification_evaluation = lttng_notification_get_evaluation(
6808 notification);
6809
6810 /* We're subscribed to only one condition. */
6811 assert(lttng_condition_get_type(notification_condition) ==
6812 LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);
6813
6814 /*
6815 * Get the exact sampled buffer usage from the
6816 * condition evaluation.
6817 */
6818 lttng_evaluation_buffer_usage_get_usage_ratio(
6819 notification_evaluation, &buffer_usage);
6820
6821 /*
6822 * At this point, instead of printing a message, we
6823 * could do something to reduce the channel's buffer
6824 * usage, like disable specific events.
6825 */
6826 printf("Buffer usage is %f %% in tracing session \"%s\", "
6827 "user space channel \"%s\".\n", buffer_usage * 100,
6828 tracing_session_name, channel_name);
6829 lttng_notification_destroy(notification);
6830 }
6831
6832end:
6833 lttng_action_destroy(action);
6834 lttng_condition_destroy(condition);
6835 lttng_trigger_destroy(trigger);
6836 lttng_notification_channel_destroy(notification_channel);
6837 return exit_status;
6838}
6839----
6840--
6841
6842. Build the `notif-app` application, linking it to `liblttng-ctl`:
6843+
6844--
6845[role="term"]
6846----
6847$ gcc -o notif-app notif-app.c -llttng-ctl
6848----
6849--
6850
6851. <<creating-destroying-tracing-sessions,Create a tracing session>>,
6852 <<enabling-disabling-events,create an event rule>> matching all the
6853 user space tracepoints, and
6854 <<basic-tracing-session-control,start tracing>>:
6855+
6856--
6857[role="term"]
6858----
6859$ lttng create my-session
6860$ lttng enable-event --userspace --all
6861$ lttng start
6862----
6863--
6864+
6865If you create the channel manually with the man:lttng-enable-channel(1)
6866command, you can control how frequently the channel's properties are
6867sampled to evaluate user conditions with the
6868opt:lttng-enable-channel(1):--monitor-timer option (see the sketch after this example).
6869
6870. Run the `notif-app` application. This program accepts the
6871 <<tracing-session,tracing session>> name and the user space channel
6872   name as its first two arguments. The channel which LTTng automatically
6873 creates with the man:lttng-enable-event(1) command above is named
6874 `channel0`:
6875+
6876--
6877[role="term"]
6878----
6879$ ./notif-app my-session channel0
6880----
6881--
6882
6883. In another terminal, run an application with a very high event
6884 throughput so that the 75{nbsp}% buffer usage condition is reached.
6885+
6886In the first terminal, the application should print lines like this:
6887+
6888----
6889Buffer usage is 81.45197 % in tracing session "my-session", user space
6890channel "channel0".
6891----
6892+
6893If you don't see anything, try modifying the condition in
6894path:{notif-app.c} to a lower value (0.1, for example), rebuilding it
6895(step 2) and running it again (step 4).
6896====
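
Regarding the opt:lttng-enable-channel(1):--monitor-timer option
mentioned in step 3 above: a minimal sketch, assuming you create the
user space channel manually and want LTTng to sample its properties
every 50000{nbsp}µs to evaluate the registered conditions, could be:

[role="term"]
----
$ lttng enable-channel --userspace --monitor-timer=50000 my-channel
----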
6897
6898
6899[[reference]]
6900== Reference
6901
6902[[lttng-modules-ref]]
6903=== noch:{LTTng-modules}
6904
6905
6906[role="since-2.9"]
6907[[lttng-tracepoint-enum]]
6908==== `LTTNG_TRACEPOINT_ENUM()` usage
6909
6910Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6911
6912[source,c]
6913----
6914LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
6915----
6916
6917Replace:
6918
6919* `name` with the name of the enumeration (C identifier, unique
6920 amongst all the defined enumerations).
6921* `entries` with a list of enumeration entries.
6922
6923The available enumeration entry macros are:
6924
6925+ctf_enum_value(__name__, __value__)+::
6926 Entry named +__name__+ mapped to the integral value +__value__+.
6927
6928+ctf_enum_range(__name__, __begin__, __end__)+::
6929 Entry named +__name__+ mapped to the range of integral values between
6930 +__begin__+ (included) and +__end__+ (included).
6931
6932+ctf_enum_auto(__name__)+::
6933 Entry named +__name__+ mapped to the integral value following the
6934 last mapping's value.
6935+
6936The last value of a `ctf_enum_value()` entry is its +__value__+
6937parameter.
6938+
6939The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
6940+
6941If `ctf_enum_auto()` is the first entry in the list, its integral
6942value is 0.
6943
6944Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
6945to use a defined enumeration as a tracepoint field.
6946
6947.Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
6948====
6949[source,c]
6950----
6951LTTNG_TRACEPOINT_ENUM(
6952 my_enum,
6953 TP_ENUM_VALUES(
6954 ctf_enum_auto("AUTO: EXPECT 0")
6955 ctf_enum_value("VALUE: 23", 23)
6956 ctf_enum_value("VALUE: 27", 27)
6957 ctf_enum_auto("AUTO: EXPECT 28")
6958 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
6959 ctf_enum_auto("AUTO: EXPECT 304")
6960 )
6961)
6962----
6963====
6964
6965
6966[role="since-2.7"]
6967[[lttng-modules-tp-fields]]
6968==== Tracepoint fields macros (for `TP_FIELDS()`)
6969
6970[[tp-fast-assign]][[tp-struct-entry]]The available macros to define
6971tracepoint fields, which must be listed within `TP_FIELDS()` in
6972`LTTNG_TRACEPOINT_EVENT()`, are:
6973
6974[role="func-desc growable",cols="asciidoc,asciidoc"]
6975.Available macros to define LTTng-modules tracepoint fields
6976|====
6977|Macro |Description and parameters
6978
6979|
6980+ctf_integer(__t__, __n__, __e__)+
6981
6982+ctf_integer_nowrite(__t__, __n__, __e__)+
6983
6984+ctf_user_integer(__t__, __n__, __e__)+
6985
6986+ctf_user_integer_nowrite(__t__, __n__, __e__)+
6987|
6988Standard integer, displayed in base 10.
6989
6990+__t__+::
6991 Integer C type (`int`, `long`, `size_t`, ...).
6992
6993+__n__+::
6994 Field name.
6995
6996+__e__+::
6997 Argument expression.
6998
6999|
7000+ctf_integer_hex(__t__, __n__, __e__)+
7001
7002+ctf_user_integer_hex(__t__, __n__, __e__)+
7003|
7004Standard integer, displayed in base 16.
7005
7006+__t__+::
7007 Integer C type.
7008
7009+__n__+::
7010 Field name.
7011
7012+__e__+::
7013 Argument expression.
7014
7015|+ctf_integer_oct(__t__, __n__, __e__)+
7016|
7017Standard integer, displayed in base 8.
7018
7019+__t__+::
7020 Integer C type.
7021
7022+__n__+::
7023 Field name.
7024
7025+__e__+::
7026 Argument expression.
7027
7028|
7029+ctf_integer_network(__t__, __n__, __e__)+
7030
7031+ctf_user_integer_network(__t__, __n__, __e__)+
7032|
7033Integer in network byte order (big-endian), displayed in base 10.
7034
7035+__t__+::
7036 Integer C type.
7037
7038+__n__+::
7039 Field name.
7040
7041+__e__+::
7042 Argument expression.
7043
7044|
7045+ctf_integer_network_hex(__t__, __n__, __e__)+
7046
7047+ctf_user_integer_network_hex(__t__, __n__, __e__)+
7048|
7049Integer in network byte order, displayed in base 16.
7050
7051+__t__+::
7052 Integer C type.
7053
7054+__n__+::
7055 Field name.
7056
7057+__e__+::
7058 Argument expression.
7059
7060|
7061+ctf_enum(__N__, __t__, __n__, __e__)+
7062
7063+ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7064
7065+ctf_user_enum(__N__, __t__, __n__, __e__)+
7066
7067+ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7068|
7069Enumeration.
7070
7071+__N__+::
7072 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7073
7074+__t__+::
7075 Integer C type (`int`, `long`, `size_t`, ...).
7076
7077+__n__+::
7078 Field name.
7079
7080+__e__+::
7081 Argument expression.
7082
7083|
7084+ctf_string(__n__, __e__)+
7085
7086+ctf_string_nowrite(__n__, __e__)+
7087
7088+ctf_user_string(__n__, __e__)+
7089
7090+ctf_user_string_nowrite(__n__, __e__)+
7091|
7092Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7093
7094+__n__+::
7095 Field name.
7096
7097+__e__+::
7098 Argument expression.
7099
7100|
7101+ctf_array(__t__, __n__, __e__, __s__)+
7102
7103+ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7104
7105+ctf_user_array(__t__, __n__, __e__, __s__)+
7106
7107+ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7108|
7109Statically-sized array of integers.
7110
7111+__t__+::
7112 Array element C type.
7113
7114+__n__+::
7115 Field name.
7116
7117+__e__+::
7118 Argument expression.
7119
7120+__s__+::
7121 Number of elements.
7122
7123|
7124+ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7125
7126+ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7127
7128+ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7129
7130+ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7131|
7132Statically-sized array of bits.
7133
7134The type of +__e__+ must be an integer type. +__s__+ is the number
7135of elements of such type in +__e__+, not the number of bits.
7136
7137+__t__+::
7138 Array element C type.
7139
7140+__n__+::
7141 Field name.
7142
7143+__e__+::
7144 Argument expression.
7145
7146+__s__+::
7147 Number of elements.
7148
7149|
7150+ctf_array_text(__t__, __n__, __e__, __s__)+
7151
7152+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7153
7154+ctf_user_array_text(__t__, __n__, __e__, __s__)+
7155
7156+ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7157|
7158Statically-sized array, printed as text.
7159
7160The string does not need to be null-terminated.
7161
7162+__t__+::
7163 Array element C type (always `char`).
7164
7165+__n__+::
7166 Field name.
7167
7168+__e__+::
7169 Argument expression.
7170
7171+__s__+::
7172 Number of elements.
7173
7174|
7175+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7176
7177+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7178
7179+ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7180
7181+ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7182|
7183Dynamically-sized array of integers.
7184
7185The type of +__E__+ must be unsigned.
7186
7187+__t__+::
7188 Array element C type.
7189
7190+__n__+::
7191 Field name.
7192
7193+__e__+::
7194 Argument expression.
7195
7196+__T__+::
7197 Length expression C type.
7198
7199+__E__+::
7200 Length expression.
7201
7202|
7203+ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7204
7205+ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7206|
7207Dynamically-sized array of integers, displayed in base 16.
7208
7209The type of +__E__+ must be unsigned.
7210
7211+__t__+::
7212 Array element C type.
7213
7214+__n__+::
7215 Field name.
7216
7217+__e__+::
7218 Argument expression.
7219
7220+__T__+::
7221 Length expression C type.
7222
7223+__E__+::
7224 Length expression.
7225
7226|+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7227|
7228Dynamically-sized array of integers in network byte order (big-endian),
7229displayed in base 10.
7230
7231The type of +__E__+ must be unsigned.
7232
7233+__t__+::
7234 Array element C type.
7235
7236+__n__+::
7237 Field name.
7238
7239+__e__+::
7240 Argument expression.
7241
7242+__T__+::
7243 Length expression C type.
7244
7245+__E__+::
7246 Length expression.
7247
7248|
7249+ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7250
7251+ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7252
7253+ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7254
7255+ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7256|
7257Dynamically-sized array of bits.
7258
7259The type of +__e__+ must be an integer type. +__E__+ is the number
7260of elements of such type in +__e__+, not the number of bits.
7261
7262The type of +__E__+ must be unsigned.
7263
7264+__t__+::
7265 Array element C type.
7266
7267+__n__+::
7268 Field name.
7269
7270+__e__+::
7271 Argument expression.
7272
7273+__T__+::
7274 Length expression C type.
7275
7276+__E__+::
7277 Length expression.
7278
7279|
7280+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7281
7282+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7283
7284+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7285
7286+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7287|
7288Dynamically-sized array, displayed as text.
7289
7290The string does not need to be null-terminated.
7291
7292The type of +__E__+ must be unsigned.
7293
7294The behaviour is undefined if +__e__+ is `NULL`.
7295
7296+__t__+::
7297 Sequence element C type (always `char`).
7298
7299+__n__+::
7300 Field name.
7301
7302+__e__+::
7303 Argument expression.
7304
7305+__T__+::
7306 Length expression C type.
7307
7308+__E__+::
7309 Length expression.
7310|====
7311
7312Use the `_user` versions when the argument expression, `e`, is
7313a user space address. In the cases of `ctf_user_integer*()` and
7314`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
7315be addressable.
7316
7317The `_nowrite` versions omit themselves from the session trace, but are
7318otherwise identical. This means the `_nowrite` fields won't be written
7319in the recorded trace. Their primary purpose is to make some
7320of the event context available to the
7321<<enabling-disabling-events,event filters>> without having to
7322commit the data to sub-buffers.
7323
7324
7325[[glossary]]
7326== Glossary
7327
7328Terms related to LTTng and to tracing in general:
7329
7330Babeltrace::
7331 The http://diamon.org/babeltrace[Babeltrace] project, which includes
7332 the cmd:babeltrace command, some libraries, and Python bindings.
7333
7334<<channel-buffering-schemes,buffering scheme>>::
7335 A layout of sub-buffers applied to a given channel.
7336
7337<<channel,channel>>::
7338 An entity which is responsible for a set of ring buffers.
7339+
7340<<event,Event rules>> are always attached to a specific channel.
7341
7342clock::
7343 A reference of time for a tracer.
7344
7345<<lttng-consumerd,consumer daemon>>::
7346 A process which is responsible for consuming the full sub-buffers
7347 and writing them to a file system or sending them over the network.
7348
7349<<channel-overwrite-mode-vs-discard-mode,discard mode>>:: The event loss
7350 mode in which the tracer _discards_ new event records when there's no
7351 sub-buffer space left to store them.
7352
7353event::
7354 The consequence of the execution of an instrumentation
7355 point, like a tracepoint that you manually place in some source code,
7356 or a Linux kernel KProbe.
7357+
7358An event is said to _occur_ at a specific time. Different actions can
7359be taken upon the occurrence of an event, like recording the event's payload
7360to a sub-buffer.
7361
7362<<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
7363 The mechanism by which event records of a given channel are lost
7364 (not recorded) when there is no sub-buffer space left to store them.
7365
7366[[def-event-name]]event name::
7367 The name of an event, which is also the name of the event record.
7368 This is also called the _instrumentation point name_.
7369
7370event record::
7371 A record, in a trace, of the payload of an event which occurred.
7372
7373<<event,event rule>>::
7374 Set of conditions which must be satisfied for one or more occurring
7375 events to be recorded.
7376
`java.util.logging`::
    Java platform's
    https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
    The use of LTTng probes to make a piece of software traceable.

instrumentation point::
    A point in the execution path of a piece of software that, when
    reached by this execution, can emit an event.

instrumentation point name::
    See _<<def-event-name,event name>>_.

log4j::
    A http://logging.apache.org/log4j/1.2/[logging library] for Java
    developed by the Apache Software Foundation.

log level::
    Level of severity of a log statement or user space
    instrumentation point.

LTTng::
    The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
    A command-line tool provided by the LTTng-tools project which you
    can use to send and receive control messages to and from a
    session daemon.

LTTng analyses::
    The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
    which is a set of analysis programs used to obtain a higher-level
    view of an LTTng trace.

cmd:lttng-consumerd::
    The name of the consumer daemon program.

cmd:lttng-crash::
    A utility provided by the LTTng-tools project which can convert
    ring buffer files (usually
    <<persistent-memory-file-systems,saved on a persistent memory file system>>)
    to trace files.

LTTng Documentation::
    This document.

<<lttng-live,LTTng live>>::
    A communication protocol between the relay daemon and live viewers
    which makes it possible to see events "live", as they are received by
    the relay daemon.

<<lttng-modules,LTTng-modules>>::
    The https://github.com/lttng/lttng-modules[LTTng-modules] project,
    which contains the Linux kernel modules to make the Linux kernel
    instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
    The name of the relay daemon program.

cmd:lttng-sessiond::
    The name of the session daemon program.

LTTng-tools::
    The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
    contains the various programs and libraries used to
    <<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
    The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
    contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
    A Java package provided by the LTTng-UST project to allow the
    LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
    logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
    A Python package provided by the LTTng-UST project to allow the
    LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
    The event loss mode in which new event records overwrite older
    event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
    A buffering scheme in which each instrumented process has its own
    sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
    A buffering scheme in which all the processes of a Unix user share
    the same sub-buffers for a given user space channel.

<<lttng-relayd,relay daemon>>::
    A process which is responsible for receiving the trace data sent by
    a distant consumer daemon.

ring buffer::
    A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
    A process which receives control commands from you and orchestrates
    the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
    A copy of the current data of all the sub-buffers of a given tracing
    session, saved as trace files.

sub-buffer::
    One part of an LTTng ring buffer which contains event records.

timestamp::
    The time information attached to an event when it is emitted.

trace (_noun_)::
    A set of files which are the concatenations of one or more
    flushed sub-buffers.

trace (_verb_)::
    The action of recording the events emitted by an application
    or by a system, or to initiate such recording by controlling
    a tracer.

Trace Compass::
    The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
    An instrumentation point using the tracepoint mechanism of the Linux
    kernel or of LTTng-UST.

tracepoint definition::
    The definition of a single tracepoint.

tracepoint name::
    The name of a tracepoint.

tracepoint provider::
    A set of functions providing tracepoints to an instrumented user
    application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
    One or more tracepoint providers compiled as an object file or as
    a shared library.

tracer::
    Software which records emitted events.

<<domain,tracing domain>>::
    A namespace for event sources.

<<tracing-group,tracing group>>::
    The Unix group to which a Unix user can belong to be allowed to
    trace the Linux kernel.

<<tracing-session,tracing session>>::
    A stateful dialogue between you and a <<lttng-sessiond,session
    daemon>>.

user application::
    An application running in user space, as opposed to a Linux kernel
    module, for example.