The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
v2.10, 25 February 2021


include::../common/copyright.txt[]


include::../common/warning-not-maintained.txt[]


include::../common/welcome.txt[]


include::../common/audience.txt[]


[[chapters]]
=== What's in this documentation?

The LTTng Documentation is divided into the following sections:

* **<<nuts-and-bolts,Nuts and bolts>>** explains the
rudiments of software tracing and the rationale behind the
LTTng project.
+
You can skip this section if you're familiar with software tracing and
with the LTTng project.

* **<<installing-lttng,Installation>>** describes the steps to
install the LTTng packages on common Linux distributions and from
their sources.
+
You can skip this section if you already properly installed LTTng on
your target system.

* **<<getting-started,Quick start>>** is a concise guide to
getting started quickly with LTTng kernel and user space tracing.
+
We recommend this section if you're new to LTTng or to software tracing
in general.
+
You can skip this section if you're not new to LTTng.

* **<<core-concepts,Core concepts>>** explains the concepts at
the heart of LTTng.
+
It's a good idea to become familiar with the core concepts
before attempting to use the toolkit.

* **<<plumbing,Components of LTTng>>** describes the various components
of the LTTng machinery, like the daemons, the libraries, and the
command-line interface.
* **<<instrumenting,Instrumentation>>** shows different ways to
instrument user applications and the Linux kernel.
+
Instrumenting source code is essential to provide a meaningful
source of events.
+
You can skip this section if you do not have a programming background.

* **<<controlling-tracing,Tracing control>>** is divided into topics
which demonstrate how to use the vast array of features that
LTTng{nbsp}{revision} offers.
* **<<reference,Reference>>** contains reference tables.
* **<<glossary,Glossary>>** is a specialized dictionary of terms related
to LTTng or to the field of software tracing.


include::../common/convention.txt[]


include::../common/acknowledgements.txt[]


[[whats-new]]
== What's new in LTTng {revision}?

LTTng{nbsp}{revision} bears the name _KeKriek_. From
http://brasseriedunham.com/[Brasserie Dunham], the _**KeKriek**_ is a
sour mashed golden wheat ale fermented with local sour cherries from
Tougas orchards. Fresh sweet cherry notes with some tartness, lively
carbonation with a dry finish.

New features and changes in LTTng{nbsp}{revision}:

* **Tracing control**:
** You can put more than one wildcard special character (`*`), and not
only at the end, when you <<enabling-disabling-events,create an event
rule>>, in both the instrumentation point name and the literal
strings of
link:/man/1/lttng-enable-event/v{revision}/#doc-filter-syntax[filter expressions]:
+
--
[role="term"]
----
# lttng enable-event --kernel 'x86_*_local_timer_*' \
                     --filter='name == "*a*b*c*d*e" && count >= 23'
----
--
+
--
[role="term"]
----
$ lttng enable-event --userspace '*_my_org:*msg*'
----
--

** New trigger and notification API for
<<liblttng-ctl-lttng,`liblttng-ctl`>>. This new subsystem allows you
to register triggers which emit a notification when a given
condition is satisfied. As of LTTng{nbsp}{revision}, only
<<channel,channel>> buffer usage conditions are available.
Documentation is available in the
https://github.com/lttng/lttng-tools/tree/stable-{revision}/include/lttng[`liblttng-ctl`
header files] and in
<<notif-trigger-api,Get notified when a channel's buffer usage is too
high or too low>>.

** You can now embed the whole textual LTTng-tools man pages into the
executables at build time with the `--enable-embedded-help`
configuration option. Thanks to this option, you don't need the
http://www.methods.co.nz/asciidoc/[AsciiDoc] and
https://directory.fsf.org/wiki/Xmlto[xmlto] tools at build time, and
a manual pager at run time, to get access to this documentation.

* **User space tracing**:
** New blocking mode: an LTTng-UST tracepoint can now block until
<<channel,sub-buffer>> space is available instead of discarding event
records in <<channel-overwrite-mode-vs-discard-mode,discard mode>>.
With this feature, you can be sure that no event records are
discarded during your application's execution, at the expense of
performance.
+
For example, the following command lines create a user space tracing
channel with an infinite blocking timeout and run an application
instrumented with LTTng-UST which is explicitly allowed to block:
+
--
[role="term"]
----
$ lttng create
$ lttng enable-channel --userspace --blocking-timeout=inf blocking-channel
$ lttng enable-event --userspace --channel=blocking-channel --all
$ lttng start
$ LTTNG_UST_ALLOW_BLOCKING=1 my-app
----
--
+
See the complete <<blocking-timeout-example,blocking timeout example>>.

* **Linux kernel tracing**:
** Linux 4.10, 4.11, and 4.12 support.
** The thread state dump events recorded by LTTng-modules now contain
the task's CPU identifier. This improves the precision of the
scheduler model for analyses.
** Extended man:socketpair(2) system call tracing data.


[[nuts-and-bolts]]
== Nuts and bolts

What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
generation_ is a modern toolkit for tracing Linux systems and
applications. So your first question might be:
**what is tracing?**


[[what-is-tracing]]
=== What is tracing?

As the history of software engineering progressed and led to what
we now take for granted--complex, numerous and
interdependent software applications running in parallel on
sophisticated operating systems like Linux--the authors of such
components, software developers, began feeling a natural
urge to have tools that would ensure the robustness and good performance
of their masterpieces.

One major achievement in this field is, inarguably, the
https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
an essential tool for developers to find and fix bugs. But even the best
debugger won't help make your software run faster, and nowadays, faster
software means either more work done by the same hardware, or cheaper
hardware for the same work.

A _profiler_ is often the tool of choice to identify performance
bottlenecks. Profiling is suitable to identify _where_ performance is
lost in a given software. The profiler outputs a profile, a statistical
summary of observed events, which you may use to discover which
functions took the most time to execute. However, a profiler won't
report _why_ some identified functions are the bottleneck. Bottlenecks
might only occur when specific conditions are met, conditions that are
sometimes impossible to capture by a statistical profiler, or impossible
to reproduce with an application altered by the overhead of an
event-based profiler. For a thorough investigation of software
performance issues, a history of execution is essential, with the
recorded values of variables and context fields you choose, and
with as little influence as possible on the instrumented software. This
is where tracing comes in handy.

_Tracing_ is a technique used to understand what goes on in a running
software system. The software used for tracing is called a _tracer_,
which is conceptually similar to a tape recorder. When recording,
specific instrumentation points placed in the software source code
generate events that are saved on a giant tape: a _trace_ file. You
can trace user applications and the operating system at the same time,
opening the possibility of resolving a wide range of problems that would
otherwise be extremely challenging.

Tracing is often compared to _logging_. However, tracers and loggers are
two different tools, serving two different purposes. Tracers are
designed to record much lower-level events that occur much more
frequently than log messages, often in the range of thousands per
second, with very little execution overhead. Logging is more appropriate
for a very high-level analysis of less frequent events: user accesses,
exceptional conditions (errors and warnings, for example), database
transactions, instant messaging communications, and such. Simply put,
logging is one of the many use cases that can be satisfied with tracing.

The list of recorded events inside a trace file can be read manually
like a log file for the maximum level of detail, but it is generally
much more interesting to perform application-specific analyses to
produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do this.

In the end, this is what LTTng is: a powerful, open source set of
tools to trace the Linux kernel and user applications at the same time.
LTTng is composed of several components actively maintained and
developed by its link:/community/#where[community].


[[lttng-alternatives]]
=== Alternatives to noch:{LTTng}

Excluding proprietary solutions, a few competing software tracers
exist for Linux:

* https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
user scripts and is responsible for loading code into the
Linux kernel for further execution and collecting the output data.
* https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
subsystem in the Linux kernel in which a virtual machine can execute
programs passed from the user space to the kernel. You can attach
such programs to tracepoints and KProbes thanks to a system call, and
they can output data to the user space when executed thanks to
different mechanisms (pipe, VM register values, and eBPF maps, to name
a few).
* https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
is the de facto function tracer of the Linux kernel. Its user
interface is a set of special files in sysfs.
* https://perf.wiki.kernel.org/[perf] is
a performance analyzing tool for Linux which supports hardware
performance counters, tracepoints, as well as other counters and
types of probes. perf's controlling utility is the cmd:perf command
line/curses tool.
* http://linux.die.net/man/1/strace[strace]
is a command-line utility which records system calls made by a
user process, as well as signal deliveries and changes of process
state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
to fulfill its function.
* http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
analyze Linux kernel events. You write scripts, or _chisels_ in
sysdig's jargon, in Lua and sysdig executes them while the system is
being traced or afterwards. sysdig's interface is the cmd:sysdig
command-line tool as well as the curses-based cmd:csysdig tool.
* https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
user space tracer which uses custom user scripts to produce plain text
traces. SystemTap converts the scripts to the C language, and then
compiles them as Linux kernel modules which are loaded to produce
trace data. SystemTap's primary user interface is the cmd:stap
command-line tool.

The main distinctive feature of LTTng is that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead among the solutions above. It produces trace files in the
http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.

LTTng is the result of more than 10 years of active open source
development by a community of passionate developers.
LTTng{nbsp}{revision} is currently available on major desktop and server
Linux distributions.

The main interface for tracing control is a single command-line tool
named cmd:lttng. The latter can create several tracing sessions, enable
and disable events on the fly, filter events efficiently with custom
user expressions, start and stop tracing, and much more. LTTng can
record the traces on the file system or send them over the network, and
keep them totally or partially. You can view the traces once tracing
becomes inactive, or in real time.

<<installing-lttng,Install LTTng now>> and
<<getting-started,start tracing>>!


[[installing-lttng]]
== Installation

include::../common/warning-no-installation.txt[]

**LTTng** is a set of software <<plumbing,components>> which interact to
<<instrumenting,instrument>> the Linux kernel and user applications, and
to <<controlling-tracing,control tracing>> (start and stop
tracing, enable and disable event rules, and the rest). Those
components are bundled into the following packages:

* **LTTng-tools**: Libraries and command-line interface to
control tracing.
* **LTTng-modules**: Linux kernel modules to instrument and
trace the kernel.
* **LTTng-UST**: Libraries and Java/Python packages to instrument and
trace user applications.

Most distributions mark the LTTng-modules and LTTng-UST packages as
optional when installing LTTng-tools (which is always required). Note
that:

* You only need to install LTTng-modules if you intend to trace the
Linux kernel.
* You only need to install LTTng-UST if you intend to trace user
applications.


[[building-from-source]]
=== Build from source

To build and install LTTng{nbsp}{revision} from source:

. Using your distribution's package manager, or from source, install
the following dependencies of LTTng-tools and LTTng-UST:
+
--
* https://sourceforge.net/projects/libuuid/[libuuid]
* http://directory.fsf.org/wiki/Popt[popt]
* http://liburcu.org/[Userspace RCU]
* http://www.xmlsoft.org/[libxml2]
--

. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
tar -xf lttng-modules-latest-2.10.tar.bz2 &&
cd lttng-modules-2.10.* &&
make &&
sudo make modules_install &&
sudo depmod -a
----
--

. Download, build, and install the latest LTTng-UST{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
tar -xf lttng-ust-latest-2.10.tar.bz2 &&
cd lttng-ust-2.10.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
--
+
--
[IMPORTANT]
.Java and Python application tracing
====
If you need to instrument and trace <<java-application,Java
applications>>, pass the `--enable-java-agent-jul`,
`--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
`configure` script, depending on which Java logging framework you use.

If you need to instrument and trace <<python-application,Python
applications>>, pass the `--enable-python-agent` option to the
`configure` script. You can set the `PYTHON` environment variable to the
path to the Python interpreter for which to install the LTTng-UST Python
agent package.
====
--
+
--
[NOTE]
====
By default, LTTng-UST libraries are installed to
dir:{/usr/local/lib}, which is the de facto directory in which to
keep self-compiled and third-party libraries.

When <<building-tracepoint-providers-and-user-application,linking an
instrumented user application with `liblttng-ust`>>:

* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
variable.
* Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
man:gcc(1), man:g++(1), or man:clang(1).
====
--

. Download, build, and install the latest LTTng-tools{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
tar -xf lttng-tools-latest-2.10.tar.bz2 &&
cd lttng-tools-2.10.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
--

TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
previous steps automatically for a given version of LTTng and confine
the installed files in a specific directory. This can be useful to test
LTTng without installing it on your system.


[[getting-started]]
== Quick start

This is a short guide to get started quickly with LTTng kernel and user
space tracing.

Before you follow this guide, make sure to <<installing-lttng,install>>
LTTng.

This tutorial walks you through the steps to:

. <<tracing-the-linux-kernel,Trace the Linux kernel>>.
. <<tracing-your-own-user-application,Trace a user application>> written
in C.
. <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>>.


[[tracing-the-linux-kernel]]
=== Trace the Linux kernel

The following command lines start with the `#` prompt because you need
root privileges to trace the Linux kernel. You can also trace the kernel
as a regular user if your Unix user is a member of the
<<tracing-group,tracing group>>.

. Create a <<tracing-session,tracing session>> which writes its traces
to dir:{/tmp/my-kernel-trace}:
+
--
[role="term"]
----
# lttng create my-kernel-session --output=/tmp/my-kernel-trace
----
--

. List the available kernel tracepoints and system calls:
+
--
[role="term"]
----
# lttng list --kernel
# lttng list --kernel --syscall
----
--

. Create <<event,event rules>> which match the desired instrumentation
point names, for example the `sched_switch` and `sched_process_fork`
tracepoints, and the man:open(2) and man:close(2) system calls:
+
--
[role="term"]
----
# lttng enable-event --kernel sched_switch,sched_process_fork
# lttng enable-event --kernel --syscall open,close
----
--
+
You can also create an event rule which matches _all_ the Linux kernel
tracepoints (this will generate a lot of data when tracing):
+
--
[role="term"]
----
# lttng enable-event --kernel --all
----
--

. <<basic-tracing-session-control,Start tracing>>:
+
--
[role="term"]
----
# lttng start
----
--

. Perform some operations on your system for a few seconds. For example,
load a website, or list the files of a directory.
. <<creating-destroying-tracing-sessions,Destroy>> the current
tracing session:
+
--
[role="term"]
----
# lttng destroy
----
--
+
The man:lttng-destroy(1) command does not destroy the trace data; it
only destroys the state of the tracing session.
+
The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
implicitly (see <<basic-tracing-session-control,Start and stop a tracing
session>>). You need to stop tracing to make LTTng flush the remaining
trace data and make the trace readable.

. For the sake of this example, make the recorded trace accessible to
non-root users:
+
--
[role="term"]
----
# chown -R $(whoami) /tmp/my-kernel-trace
----
--

See <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>> to view the recorded events.


[[tracing-your-own-user-application]]
=== Trace a user application

This section steps you through a simple example to trace a
_Hello world_ program written in C.

To create the traceable user application:

. Create the tracepoint provider header file, which defines the
tracepoints and the events they can generate:
+
--
[source,c]
.path:{hello-tp.h}
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
--

. Create the tracepoint provider package source file:
+
--
[source,c]
.path:{hello-tp.c}
----
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "hello-tp.h"
----
--

. Build the tracepoint provider package:
+
--
[role="term"]
----
$ gcc -c -I. hello-tp.c
----
--

. Create the _Hello World_ application source file:
+
--
[source,c]
.path:{hello.c}
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    int x;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call is only placed here for the purpose
     * of this demonstration, to pause the application in order for
     * you to have time to list its tracepoints. It is not
     * needed otherwise.
     */
    getchar();

    /*
     * A tracepoint() call.
     *
     * Arguments, as defined in hello-tp.h:
     *
     * 1. Tracepoint provider name (required)
     * 2. Tracepoint name (required)
     * 3. my_integer_arg (first user-defined argument)
     * 4. my_string_arg (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * NOT strings: they are in fact parts of variables that the
     * macros in hello-tp.h create.
     */
    tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (x = 0; x < argc; ++x) {
        tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
    }

    puts("Quitting now!");
    tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

    return 0;
}
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -c hello.c
----
--

. Link the application with the tracepoint provider package,
`liblttng-ust`, and `libdl`:
+
--
[role="term"]
----
$ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
----
--

Here's the whole build process:

[role="img-100"]
.User space tracing tutorial's build steps.
image::ust-flow.png[]

To trace the user application:

. Run the application with a few arguments:
+
--
[role="term"]
----
$ ./hello world and beyond
----
--
+
You see:
+
--
----
Hello, World!
Press Enter to continue...
----
--

. Start an LTTng <<lttng-sessiond,session daemon>>:
+
--
[role="term"]
----
$ lttng-sessiond --daemonize
----
--
+
Note that a session daemon might already be running, for example as
a service that the distribution's service manager started.

. List the available user space tracepoints:
+
--
[role="term"]
----
$ lttng list --userspace
----
--
+
You see the `hello_world:my_first_tracepoint` tracepoint listed
under the `./hello` process.

. Create a <<tracing-session,tracing session>>:
+
--
[role="term"]
----
$ lttng create my-user-space-session
----
--

. Create an <<event,event rule>> which matches the
`hello_world:my_first_tracepoint` event name:
+
--
[role="term"]
----
$ lttng enable-event --userspace hello_world:my_first_tracepoint
----
--

. <<basic-tracing-session-control,Start tracing>>:
+
--
[role="term"]
----
$ lttng start
----
--

. Go back to the running `hello` application and press Enter. The
program executes all `tracepoint()` instrumentation points and exits.
. <<creating-destroying-tracing-sessions,Destroy>> the current
tracing session:
+
--
[role="term"]
----
$ lttng destroy
----
--
+
The man:lttng-destroy(1) command does not destroy the trace data; it
only destroys the state of the tracing session.
+
The man:lttng-destroy(1) command also runs the man:lttng-stop(1) command
implicitly (see <<basic-tracing-session-control,Start and stop a tracing
session>>). You need to stop tracing to make LTTng flush the remaining
trace data and make the trace readable.

By default, LTTng saves the traces in
+$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
where +__name__+ is the tracing session name. The
env:LTTNG_HOME environment variable defaults to `$HOME` if not set.

See <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>> to view the recorded events.


[[viewing-and-analyzing-your-traces]]
=== View and analyze the recorded events

Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
kernel>> and <<tracing-your-own-user-application,Trace a user
application>> tutorials, you can inspect the recorded events.

Many tools are available to read LTTng traces:

* **cmd:babeltrace** is a command-line utility which converts trace
formats; it supports the format that LTTng produces, CTF, as well as a
basic text output which can be ++grep++ed. The cmd:babeltrace command
is part of the http://diamon.org/babeltrace[Babeltrace] project.
* Babeltrace also includes
**https://www.python.org/[Python] bindings** so
that you can easily open and read an LTTng trace with your own script,
benefiting from the power of Python.
* http://tracecompass.org/[**Trace Compass**]
is a graphical user interface for viewing and analyzing any type of
logs or traces, including LTTng's.
* https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
project which includes many high-level analyses of LTTng kernel
traces, like scheduling statistics, interrupt frequency distribution,
top CPU usage, and more.

NOTE: This section assumes that the traces recorded during the previous
tutorials were saved to their default location, in the
dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
environment variable defaults to `$HOME` if not set.


[[viewing-and-analyzing-your-traces-bt]]
==== Use the cmd:babeltrace command-line tool

The simplest way to list all the recorded events of a trace is to pass
its path to cmd:babeltrace with no options:

[role="term"]
----
$ babeltrace ~/lttng-traces/my-user-space-session*
----

cmd:babeltrace finds all traces recursively within the given path and
prints all their events, merging them in chronological order.

You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
further filtering:

[role="term"]
----
$ babeltrace /tmp/my-kernel-trace | grep _switch
----

You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
count the recorded events:

[role="term"]
----
$ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
----


[[viewing-and-analyzing-your-traces-bt-python]]
==== Use the Babeltrace Python bindings

The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
is useful to isolate events by simple matching using man:grep(1) and
similar utilities. However, more elaborate filters, such as keeping only
event records with a field value falling within a specific range, are
not trivial to write using a shell. Moreover, reductions and even the
most basic computations involving multiple event records are virtually
impossible to implement.

Fortunately, Babeltrace ships with Python 3 bindings which make it easy
to read the event records of an LTTng trace sequentially and compute the
desired information.

The following script accepts an LTTng Linux kernel trace path as its
first argument and prints the short names of the top 5 running processes
on CPU 0 during the whole trace:

[source,python]
.path:{top5proc.py}
----
from collections import Counter
import babeltrace
import sys


def top5proc():
    if len(sys.argv) != 2:
        msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
        print(msg, file=sys.stderr)
        return False

    # A trace collection contains one or more traces
    col = babeltrace.TraceCollection()

    # Add the trace provided by the user (LTTng traces always have
    # the 'ctf' format)
    if col.add_trace(sys.argv[1], 'ctf') is None:
        raise RuntimeError('Cannot add trace')

    # This counter dict contains execution times:
    #
    #   task command name -> total execution time (ns)
    exec_times = Counter()

    # This contains the last `sched_switch` timestamp
    last_ts = None

    # Iterate on events
    for event in col.events:
        # Keep only `sched_switch` events
        if event.name != 'sched_switch':
            continue

        # Keep only events which happened on CPU 0
        if event['cpu_id'] != 0:
            continue

        # Event timestamp
        cur_ts = event.timestamp

        if last_ts is None:
            # We start here
            last_ts = cur_ts

        # Previous task command (short) name
        prev_comm = event['prev_comm']

        # Initialize entry in our dict if not yet done
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Display top 5
    for name, ns in exec_times.most_common(5):
        s = ns / 1000000000
        print('{:20}{} s'.format(name, s))

    return True


if __name__ == '__main__':
    sys.exit(0 if top5proc() else 1)
----

Run this script:

[role="term"]
----
$ python3 top5proc.py /tmp/my-kernel-trace/kernel
----

Output example:

----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
----

Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
weren't using the CPU that much when tracing, its first position in the
list makes sense.


[[core-concepts]]
== [[understanding-lttng]]Core concepts

From a user's perspective, the LTTng system is built on a few concepts,
or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to mastering
the toolkit.

The core concepts are:

* <<tracing-session,Tracing session>>
* <<domain,Tracing domain>>
* <<channel,Channel and ring buffer>>
* <<"event","Instrumentation point, event rule, event, and event record">>


[[tracing-session]]
=== Tracing session

A _tracing session_ is a stateful dialogue between you and
a <<lttng-sessiond,session daemon>>. You can
<<creating-destroying-tracing-sessions,create a new tracing
session>> with the `lttng create` command.

Anything that you do when you control LTTng tracers happens within a
tracing session. In particular, a tracing session:

* Has its own name.
* Has its own set of trace files.
* Has its own state of activity (started or stopped).
* Has its own <<tracing-session-mode,mode>> (local, network streaming,
snapshot, or live).
* Has its own <<channel,channels>> which have their own
<<event,event rules>>.

[role="img-100"]
.A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
image::concepts.png[]

Those attributes and objects are completely isolated between different
tracing sessions.

A tracing session is analogous to a cash machine session:
the operations you do on the banking system through the cash machine do
not alter the data of other users of the same system. In the case of
the cash machine, a session lasts as long as your bank card is inside.
In the case of LTTng, a tracing session lasts from the `lttng create`
command to the `lttng destroy` command.

[role="img-100"]
.Each Unix user has its own set of tracing sessions.
image::many-sessions.png[]


[[tracing-session-mode]]
==== Tracing session mode

LTTng can send the generated trace data to different locations. The
_tracing session mode_ dictates where to send it. The following modes
are available in LTTng{nbsp}{revision}:

[[local-mode]]Local mode::
LTTng writes the traces to the file system of the machine being traced
(target system).

[[net-streaming-mode]]Network streaming mode::
LTTng sends the traces over the network to a
<<lttng-relayd,relay daemon>> running on a remote system.

Snapshot mode::
LTTng does not write the traces by default. Instead, you can request
LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
current tracing buffers, and to write it to the target's file system
or to send it over the network to a <<lttng-relayd,relay daemon>>
running on a remote system.

[[live-mode]]Live mode::
This mode is similar to the network streaming mode, but a live
trace viewer can connect to the distant relay daemon to
<<lttng-live,view event records as the tracers generate them>>.

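For instance, the following commands show one way to select each mode
when you create a tracing session. The session names are arbitrary, and
`remote-system` is a placeholder for the host name of a remote system
running a <<lttng-relayd,relay daemon>>:

[role="term"]
----
$ lttng create my-local-session --output=/tmp/my-local-trace
$ lttng create my-net-session --set-url=net://remote-system
$ lttng create my-snapshot-session --snapshot
$ lttng create my-live-session --live
----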

[[domain]]
=== Tracing domain

A _tracing domain_ is a namespace for event sources. A tracing domain
has its own properties and features.

There are currently five available tracing domains:

* Linux kernel
* User space
* `java.util.logging` (JUL)
* log4j
* Python

You must specify a tracing domain when using some commands to avoid
ambiguity. For example, since all the domains support named tracepoints
as event sources (instrumentation points that you manually insert in the
source code), you need to specify a tracing domain when
<<enabling-disabling-events,creating an event rule>> because all the
tracing domains could have tracepoints with the same names.

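For example, each of the following commands creates an event rule which
matches instrumentation points named `hello` in a different tracing
domain (`hello` is a placeholder name for this illustration):

[role="term"]
----
# lttng enable-event --kernel hello
$ lttng enable-event --userspace hello
$ lttng enable-event --jul hello
$ lttng enable-event --log4j hello
$ lttng enable-event --python hello
----
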
Some features are reserved to specific tracing domains. Dynamic function
entry and return instrumentation points, for example, are currently only
supported in the Linux kernel tracing domain, but support for other
tracing domains could be added in the future.

You can create <<channel,channels>> in the Linux kernel and user space
tracing domains. The other tracing domains have a single default
channel.


[[channel]]
=== Channel and ring buffer

A _channel_ is an object which is responsible for a set of ring buffers.
Each ring buffer is divided into multiple sub-buffers. When an LTTng
tracer emits an event, it can record it to one or more
sub-buffers. The attributes of a channel determine what to do when
there's no space left for a new event record because all sub-buffers
are full, where to send a full sub-buffer, and other behaviours.

A channel is always associated to a <<domain,tracing domain>>. The
`java.util.logging` (JUL), log4j, and Python tracing domains each have
a default channel which you cannot configure.

A channel also owns <<event,event rules>>. When an LTTng tracer emits
an event, it records it to the sub-buffers of all
the enabled channels with a satisfied event rule, as long as those
channels are part of active <<tracing-session,tracing sessions>>.


[[channel-buffering-schemes]]
==== Per-user vs. per-process buffering schemes

A channel has at least one ring buffer _per CPU_. LTTng always
records an event to the ring buffer associated to the CPU on which it
occurred.

Two _buffering schemes_ are available when you
<<enabling-disabling-channels,create a channel>> in the
user space <<domain,tracing domain>>:

Per-user buffering::
Allocate one set of ring buffers--one per CPU--shared by all the
instrumented processes of each Unix user.
+
--
[role="img-100"]
.Per-user buffering scheme.
image::per-user-buffering.png[]
--

Per-process buffering::
Allocate one set of ring buffers--one per CPU--for each
instrumented process.
+
--
[role="img-100"]
.Per-process buffering scheme.
image::per-process-buffering.png[]
--
+
The per-process buffering scheme tends to consume more memory than the
per-user option because systems generally have more instrumented
processes than Unix users running instrumented processes. However, the
per-process buffering scheme ensures that one process having a high
event throughput won't fill all the shared sub-buffers of the same
user, only its own.

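For example, the following commands show how you could select each
scheme explicitly when creating a user space channel (the channel name
is a placeholder):

[role="term"]
----
$ lttng enable-channel --userspace --buffers-uid my-channel
----

[role="term"]
----
$ lttng enable-channel --userspace --buffers-pid my-channel
----
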
The Linux kernel tracing domain has only one available buffering scheme
which is to allocate a single set of ring buffers for the whole system.
This scheme is similar to the per-user option, but with a single, global
user "running" the kernel.


[[channel-overwrite-mode-vs-discard-mode]]
==== Overwrite vs. discard event loss modes

When an event occurs, LTTng records it to a specific sub-buffer (yellow
arc in the following animation) of a specific channel's ring buffer.
When there's no space left in a sub-buffer, the tracer marks it as
consumable (red) and another, empty sub-buffer starts receiving the
following event records. A <<lttng-consumerd,consumer daemon>>
eventually consumes the marked sub-buffer (returns to white).

[NOTE]
[role="docsvg-channel-subbuf-anim"]
====
{note-no-anim}
====

In an ideal world, sub-buffers are consumed faster than they are filled,
as is the case in the previous animation. In the real world,
however, all sub-buffers can be full at some point, leaving no space to
record the following events.

By default, LTTng-modules and LTTng-UST are _non-blocking_ tracers: when
no empty sub-buffer is available, it is acceptable to lose event records
when the alternative would be to cause substantial delays in the
instrumented application's execution. LTTng privileges performance over
integrity; it aims at perturbing the traced system as little as possible
in order to make tracing of subtle race conditions and rare interrupt
cascades possible.

Starting from LTTng{nbsp}2.10, the LTTng user space tracer, LTTng-UST,
supports a _blocking mode_. See the <<blocking-timeout-example,blocking
timeout example>> to learn how to use the blocking mode.

When it comes to losing event records because no empty sub-buffer is
available, or because the <<opt-blocking-timeout,blocking timeout>> is
reached, the channel's _event loss mode_ determines what to do. The
available event loss modes are:

Discard mode::
Drop the newest event records until the tracer releases a
sub-buffer.
+
This is the only available mode when you specify a
<<opt-blocking-timeout,blocking timeout>>.

Overwrite mode::
Clear the sub-buffer containing the oldest event records and start
writing the newest event records there.
+
This mode is sometimes called _flight recorder mode_ because it's
similar to a
https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
always keep a fixed amount of the latest data.

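For example, the following commands create a kernel channel (with a
placeholder name) in each event loss mode; discard mode is the default
if you specify neither option:

[role="term"]
----
# lttng enable-channel --kernel --discard my-discard-channel
# lttng enable-channel --kernel --overwrite my-overwrite-channel
----
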
Which mechanism you should choose depends on your context: prioritize
the newest or the oldest event records in the ring buffer?

Beware that, in overwrite mode, the tracer abandons a _whole sub-buffer_
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.

In discard mode, LTTng increments a count of lost event records when an
event record is lost and saves this count to the trace. Since
LTTng{nbsp}2.8, in overwrite mode, LTTng writes to a given sub-buffer
its sequence number within its data stream. With a <<local-mode,local>>,
<<net-streaming-mode,network streaming>>, or <<live-mode,live>>
<<tracing-session,tracing session>>, a trace reader can use such
sequence numbers to report lost packets. In overwrite mode, LTTng
doesn't write to the trace the exact number of lost event records in
those lost sub-buffers.

Trace analyses can use the discarded event record and sub-buffer
(packet) counts saved in the trace to decide whether or not to perform
an analysis, even when trace data is known to be missing.

There are a few ways to decrease your probability of losing event
records.
<<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
virtually stop losing event records, though at the cost of greater
memory usage.


[[channel-subbuf-size-vs-subbuf-count]]
==== Sub-buffer count and size

When you <<enabling-disabling-channels,create a channel>>, you can
set its number of sub-buffers and their size.

Note that there is noticeable CPU overhead introduced when
switching sub-buffers (marking a full one as consumable and switching
to an empty one for the following events to be recorded). Knowing this,
the following list presents a few practical situations along with how
to configure the sub-buffer count and size for them:

* **High event throughput**: In general, prefer bigger sub-buffers to
lower the risk of losing event records.
+
Having bigger sub-buffers also ensures a lower
<<channel-switch-timer,sub-buffer switching frequency>>.
+
The number of sub-buffers is only meaningful if you create the channel
in overwrite mode: in this case, if a sub-buffer overwrite happens, the
other sub-buffers are left unaltered.

* **Low event throughput**: In general, prefer smaller sub-buffers
since the risk of losing event records is low.
+
Because events occur less frequently, the sub-buffer switching frequency
should remain low and thus the tracer's overhead should not be a
problem.

* **Low memory system**: If your target system has a low memory
limit, prefer fewer first, then smaller sub-buffers.
+
Even if the system is limited in memory, you want to keep the
sub-buffers as big as possible to avoid a high sub-buffer switching
frequency.

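For example, the following command creates a kernel channel (with a
placeholder name) tuned for a high event throughput, with
8{nbsp}sub-buffers of 2{nbsp}MiB each:

[role="term"]
----
# lttng enable-channel --kernel --num-subbuf=8 --subbuf-size=2M big-channel
----
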
Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
sub-buffer size of 1{nbsp}MiB is considered big.

The previous situations highlight the major trade-off between a few big
sub-buffers and more, smaller sub-buffers: sub-buffer switching
frequency vs. how much data is lost in overwrite mode. Assuming a
constant event throughput and using the overwrite mode, the two
following configurations have the same ring buffer total size:

[NOTE]
[role="docsvg-channel-subbuf-size-vs-count-anim"]
====
{note-no-anim}
====

* **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
switching frequency, but if a sub-buffer overwrite happens, half of
the event records so far (4{nbsp}MiB) are definitely lost.
* **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
overhead as the previous configuration, but if a sub-buffer
overwrite happens, only one eighth of the event records so far are
definitely lost.

In discard mode, the sub-buffer count parameter is pointless: use two
sub-buffers and set their size according to the requirements of your
situation.


[[channel-switch-timer]]
==== Switch timer period

The _switch timer period_ is an important configurable attribute of
a channel to ensure periodic sub-buffer flushing.

When the _switch timer_ expires, a sub-buffer switch happens. You can
set the switch timer period attribute when you
<<enabling-disabling-channels,create a channel>> to ensure that event
data is consumed and committed to trace files or to a distant relay
daemon periodically in case of a low event throughput.

[NOTE]
[role="docsvg-channel-switch-timer"]
====
{note-no-anim}
====

This attribute is also convenient when you use big sub-buffers to cope
with a sporadic high event throughput, even if the throughput is
normally low.

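For example, the following command creates a kernel channel (with a
placeholder name) with big, 4{nbsp}MiB sub-buffers and a switch timer
period of one second (the period is in microseconds):

[role="term"]
----
# lttng enable-channel --kernel --subbuf-size=4M --switch-timer=1000000 my-channel
----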

[[channel-read-timer]]
==== Read timer period

By default, the LTTng tracers use a notification mechanism to signal a
full sub-buffer so that a consumer daemon can consume it. When such
notifications must be avoided, for example in real-time applications,
you can use the channel's _read timer_ instead. When the read timer
fires, the <<lttng-consumerd,consumer daemon>> checks for full,
consumable sub-buffers.

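For example, the following command creates a user space channel (with a
placeholder name) with a read timer period of 200{nbsp}ms (the period is
in microseconds):

[role="term"]
----
$ lttng enable-channel --userspace --read-timer=200000 my-channel
----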

[[tracefile-rotation]]
==== Trace file count and size

By default, trace files can grow as large as needed. You can set the
maximum size of each trace file that a channel writes when you
<<enabling-disabling-channels,create a channel>>. When the size of
a trace file reaches the channel's fixed maximum size, LTTng creates
another file to contain the next event records. LTTng appends a file
count to each trace file name in this case.

If you set the trace file size attribute when you create a channel, the
maximum number of trace files that LTTng creates is _unlimited_ by
default. To limit them, you can also set a maximum number of trace
files. When the number of trace files reaches the channel's fixed
maximum count, the oldest trace file is overwritten. This mechanism is
called _trace file rotation_.

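For example, the following command creates a kernel channel (with a
placeholder name) which keeps at most 10 trace files of 1{nbsp}MiB each
per stream, overwriting the oldest one when needed:

[role="term"]
----
# lttng enable-channel --kernel --tracefile-size=1M --tracefile-count=10 my-channel
----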

[[event]]
=== Instrumentation point, event rule, event, and event record

An _event rule_ is a set of conditions which must be **all** satisfied
for LTTng to record an occurring event.

You set the conditions when you <<enabling-disabling-events,create
an event rule>>.

You always attach an event rule to a <<channel,channel>> when you create
it.

When an event passes the conditions of an event rule, LTTng records it
in one of the attached channel's sub-buffers.

The available conditions, as of LTTng{nbsp}{revision}, are:

* The event rule _is enabled_.
* The instrumentation point's type _is{nbsp}T_.
* The instrumentation point's name (sometimes called _event name_)
_matches{nbsp}N_, but _is not{nbsp}E_.
* The instrumentation point's log level _is as severe as{nbsp}L_, or
_is exactly{nbsp}L_.
* The fields of the event's payload _satisfy_ a filter
expression{nbsp}__F__.

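For example, the following command creates an event rule combining
several of those conditions; the tracepoint and field names are
placeholders for this illustration:

[role="term"]
----
$ lttng enable-event --userspace my_provider:my_tracepoint \
                     --loglevel=TRACE_INFO \
                     --filter='my_field < 23'
----
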
As you can see, all the conditions but the dynamic filter are related to
the event rule's status or to the instrumentation point, not to the
occurring events. This is why, without a filter, checking if an event
passes an event rule is not a dynamic task: when you create or modify an
event rule, all the tracers of its tracing domain enable or disable the
instrumentation points themselves once. This is possible because the
attributes of an instrumentation point (type, name, and log level) are
defined statically. In other words, without a dynamic filter, the tracer
_does not evaluate_ the arguments of an instrumentation point unless it
matches an enabled event rule.

Note that, for LTTng to record an event, the <<channel,channel>> to
which a matching event rule is attached must also be enabled, and the
tracing session owning this channel must be active.

[role="img-100"]
.Logical path from an instrumentation point to an event record.
image::event-rule.png[]

.Event, event record, or event rule?
****
With so many similar terms, it's easy to get confused.

An **event** is the consequence of the execution of an _instrumentation
point_, like a tracepoint that you manually place in some source code,
or a Linux kernel KProbe. An event is said to _occur_ at a specific
time. Different actions can be taken upon the occurrence of an event,
like record the event's payload to a buffer.

An **event record** is the representation of an event in a sub-buffer. A
tracer is responsible for capturing the payload of an event, current
context variables, the event's ID, and the event's timestamp. LTTng
can append this sub-buffer to a trace file.

An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
satisfying event rules, but LTTng does not record them.
****
1408
1409
1410 [[plumbing]]
1411 == Components of noch:{LTTng}
1412
1413 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1414 to call LTTng a simple _tool_ since it is composed of multiple
1415 interacting components. This section describes those components,
1416 explains their respective roles, and shows how they connect together to
1417 form the LTTng ecosystem.
1418
1419 The following diagram shows how the most important components of LTTng
1420 interact with user applications, the Linux kernel, and you:
1421
1422 [role="img-100"]
1423 .Control and trace data paths between LTTng components.
1424 image::plumbing.png[]
1425
1426 The LTTng project incorporates:
1427
1428 * **LTTng-tools**: Libraries and command-line interface to
1429 control tracing sessions.
1430 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1431 ** <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1432 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1433 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1434 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1435 * **LTTng-UST**: Libraries and Java/Python packages to trace user
1436 applications.
1437 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1438 headers to instrument and trace any native user application.
1439 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1440 *** `liblttng-ust-libc-wrapper`
1441 *** `liblttng-ust-pthread-wrapper`
1442 *** `liblttng-ust-cyg-profile`
1443 *** `liblttng-ust-cyg-profile-fast`
1444 *** `liblttng-ust-dl`
1445 ** User space tracepoint provider source files generator command-line
1446 tool (man:lttng-gen-tp(1)).
1447 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1448 Java applications using `java.util.logging` or
1449 Apache log4j 1.2 logging.
1450 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1451 Python applications using the standard `logging` package.
1452 * **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1453 the kernel.
1454 ** LTTng kernel tracer module.
1455 ** Tracing ring buffer kernel modules.
1456 ** Probe kernel modules.
1457 ** LTTng logger kernel module.
1458
1459
1460 [[lttng-cli]]
1461 === Tracing control command-line interface
1462
1463 [role="img-100"]
1464 .The tracing control command-line interface.
1465 image::plumbing-lttng-cli.png[]
1466
1467 The _man:lttng(1) command-line tool_ is the standard user interface to
1468 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1469 is part of LTTng-tools.
1470
1471 The cmd:lttng tool is linked with
1472 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1473 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1474
1475 The cmd:lttng tool has a Git-like interface:
1476
1477 [role="term"]
1478 ----
1479 $ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1480 ----
1481
1482 The <<controlling-tracing,Tracing control>> section explores the
1483 available features of LTTng using the cmd:lttng tool.
1484
1485
1486 [[liblttng-ctl-lttng]]
1487 === Tracing control library
1488
1489 [role="img-100"]
1490 .The tracing control library.
1491 image::plumbing-liblttng-ctl.png[]
1492
1493 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1494 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1495 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1496
1497 The <<lttng-cli,cmd:lttng command-line tool>>
1498 is linked with `liblttng-ctl`.
1499
1500 You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1501 "master" header:
1502
1503 [source,c]
1504 ----
1505 #include <lttng/lttng.h>
1506 ----
1507
1508 Some objects are referenced by name (C string), such as tracing
1509 sessions, but most of them require to create a handle first using
1510 `lttng_create_handle()`.
1511
1512 The best available developer documentation for `liblttng-ctl` is, as of
1513 LTTng{nbsp}{revision}, its installed header files. Every function and
1514 structure is thoroughly documented.
1515
1516
1517 [[lttng-ust]]
1518 === User space tracing library
1519
1520 [role="img-100"]
1521 .The user space tracing library.
1522 image::plumbing-liblttng-ust.png[]
1523
1524 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1525 is the LTTng user space tracer. It receives commands from a
1526 <<lttng-sessiond,session daemon>>, for example to
1527 enable and disable specific instrumentation points, and writes event
1528 records to ring buffers shared with a
1529 <<lttng-consumerd,consumer daemon>>.
1530 `liblttng-ust` is part of LTTng-UST.
1531
1532 Public C header files are installed beside `liblttng-ust` to
1533 instrument any <<c-application,C or $$C++$$ application>>.
1534
1535 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1536 packages, use their own library providing tracepoints which is
1537 linked with `liblttng-ust`.
1538
1539 An application or library does not have to initialize `liblttng-ust`
1540 manually: its constructor does the necessary tasks to properly register
1541 to a session daemon. The initialization phase also enables the
1542 instrumentation points matching the <<event,event rules>> that you
1543 already created.
1544
1545
1546 [[lttng-ust-agents]]
1547 === User space tracing agents
1548
1549 [role="img-100"]
1550 .The user space tracing agents.
1551 image::plumbing-lttng-ust-agents.png[]
1552
1553 The _LTTng-UST Java and Python agents_ are regular Java and Python
1554 packages which add LTTng tracing capabilities to the
1555 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1556
1557 In the case of Java, the
1558 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1559 core logging facilities] and
1560 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1562
1563 In the case of Python, the standard
1564 https://docs.python.org/3/library/logging.html[`logging`] package
1565 is supported. Both Python 2 and Python 3 modules can import the
1566 LTTng-UST Python agent package.
1567
1568 The applications using the LTTng-UST agents are in the
1569 `java.util.logging` (JUL),
1570 log4j, and Python <<domain,tracing domains>>.
1571
1572 Both agents use the same mechanism to trace the log statements. When an
1573 agent is initialized, it creates a log handler that attaches to the root
1574 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1575 When the application executes a log statement, it is passed to the
1576 agent's log handler by the root logger. The agent's log handler calls a
1577 native function in a tracepoint provider package shared library linked
1578 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1579 other fields, like its logger name and its log level. This native
1580 function contains a user space instrumentation point, hence tracing the
1581 log statement.
1582
1583 The log level condition of an
1584 <<event,event rule>> is considered when tracing
1585 a Java or a Python application, and it's compatible with the standard
1586 JUL, log4j, and Python log levels.
1587
1588
1589 [[lttng-modules]]
1590 === LTTng kernel modules
1591
1592 [role="img-100"]
1593 .The LTTng kernel modules.
1594 image::plumbing-lttng-modules.png[]
1595
1596 The _LTTng kernel modules_ are a set of Linux kernel modules
1597 which implement the kernel tracer of the LTTng project. The LTTng
1598 kernel modules are part of LTTng-modules.
1599
1600 The LTTng kernel modules include:
1601
1602 * A set of _probe_ modules.
1603 +
Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points. There
are also modules which attach to the entry and return points of the
Linux system call functions.
1608
1609 * _Ring buffer_ modules.
1610 +
1611 A ring buffer implementation is provided as kernel modules. The LTTng
1612 kernel tracer writes to the ring buffer; a
1613 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1614
1615 * The _LTTng kernel tracer_ module.
1616 * The _LTTng logger_ module.
1617 +
1618 The LTTng logger module implements the special path:{/proc/lttng-logger}
1619 file so that any executable can generate LTTng events by opening and
1620 writing to this file.
1621 +
1622 See <<proc-lttng-logger-abi,LTTng logger>>.
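
For example, assuming the LTTng logger module is loaded, the following
shell command should make the kernel tracer emit an LTTng event
containing the message (provided a matching event rule is enabled):

[role="term"]
----
$ echo -n 'Hello, World!' > /proc/lttng-logger
----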
1623
1624 Generally, you do not have to load the LTTng kernel modules manually
1625 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
daemon>> loads the necessary modules when starting. If you have extra
probe modules, you can specify them on the session daemon's command
line so that it loads them.
1629
1630 The LTTng kernel modules are installed in
1631 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1632 the kernel release (see `uname --kernel-release`).
1633
1634
1635 [[lttng-sessiond]]
1636 === Session daemon
1637
1638 [role="img-100"]
1639 .The session daemon.
1640 image::plumbing-sessiond.png[]
1641
1642 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
1643 managing tracing sessions and for controlling the various components of
1644 LTTng. The session daemon is part of LTTng-tools.
1645
1646 The session daemon sends control requests to and receives control
1647 responses from:
1648
1649 * The <<lttng-ust,user space tracing library>>.
1650 +
1651 Any instance of the user space tracing library first registers to
1652 a session daemon. Then, the session daemon can send requests to
1653 this instance, such as:
1654 +
1655 --
1656 ** Get the list of tracepoints.
1657 ** Share an <<event,event rule>> so that the user space tracing library
1658 can enable or disable tracepoints. Amongst the possible conditions
of an event rule is a filter expression which `liblttng-ust` evaluates
1660 when an event occurs.
1661 ** Share <<channel,channel>> attributes and ring buffer locations.
1662 --
1663 +
1664 The session daemon and the user space tracing library use a Unix
1665 domain socket for their communication.
1666
1667 * The <<lttng-ust-agents,user space tracing agents>>.
1668 +
1669 Any instance of a user space tracing agent first registers to
1670 a session daemon. Then, the session daemon can send requests to
1671 this instance, such as:
1672 +
1673 --
1674 ** Get the list of loggers.
1675 ** Enable or disable a specific logger.
1676 --
1677 +
1678 The session daemon and the user space tracing agent use a TCP connection
1679 for their communication.
1680
1681 * The <<lttng-modules,LTTng kernel tracer>>.
1682 * The <<lttng-consumerd,consumer daemon>>.
1683 +
1684 The session daemon sends requests to the consumer daemon to instruct
1685 it where to send the trace data streams, amongst other information.
1686
1687 * The <<lttng-relayd,relay daemon>>.
1688
1689 The session daemon receives commands from the
1690 <<liblttng-ctl-lttng,tracing control library>>.
1691
1692 The root session daemon loads the appropriate
1693 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
1694 a <<lttng-consumerd,consumer daemon>> as soon as you create
1695 an <<event,event rule>>.
1696
1697 The session daemon does not send and receive trace data: this is the
1698 role of the <<lttng-consumerd,consumer daemon>> and
1699 <<lttng-relayd,relay daemon>>. It does, however, generate the
1700 http://diamon.org/ctf/[CTF] metadata stream.
1701
1702 Each Unix user can have its own session daemon instance. The
1703 tracing sessions managed by different session daemons are completely
1704 independent.
1705
1706 The root user's session daemon is the only one which is
1707 allowed to control the LTTng kernel tracer, and its spawned consumer
1708 daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user who is a member
1710 of the <<tracing-group,tracing group>> is allowed
1711 to create <<channel,channels>> in the
1712 Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
1713 kernel.
1714
1715 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
1716 session daemon when using its `create` command if none is currently
1717 running. You can also start the session daemon manually.
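
For example, to start a session daemon manually as your Unix user,
detached from your terminal:

[role="term"]
----
$ lttng-sessiond --daemonize
----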
1718
1719
1720 [[lttng-consumerd]]
1721 === Consumer daemon
1722
1723 [role="img-100"]
1724 .The consumer daemon.
1725 image::plumbing-consumerd.png[]
1726
1727 The _consumer daemon_, cmd:lttng-consumerd, is a daemon which shares
1728 ring buffers with user applications or with the LTTng kernel modules to
1729 collect trace data and send it to some location (on disk or to a
1730 <<lttng-relayd,relay daemon>> over the network). The consumer daemon
1731 is part of LTTng-tools.
1732
1733 You do not start a consumer daemon manually: a consumer daemon is always
1734 spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
1735 <<event,event rule>>, that is, before you start tracing. When you kill
1736 its owner session daemon, the consumer daemon also exits because it is
1737 the session daemon's child process. Command-line options of
1738 man:lttng-sessiond(8) target the consumer daemon process.
1739
1740 There are up to two running consumer daemons per Unix user, whereas only
1741 one session daemon can run per user. This is because each process can be
1742 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
1743 and 64-bit processes, it is more efficient to have separate
1744 corresponding 32-bit and 64-bit consumer daemons. The root user is an
1745 exception: it can have up to _three_ running consumer daemons: 32-bit
1746 and 64-bit instances for its user applications, and one more
1747 reserved for collecting kernel trace data.
1748
1749
1750 [[lttng-relayd]]
1751 === Relay daemon
1752
1753 [role="img-100"]
1754 .The relay daemon.
1755 image::plumbing-relayd.png[]
1756
1757 The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
1758 between remote session and consumer daemons, local trace files, and a
1759 remote live trace viewer. The relay daemon is part of LTTng-tools.
1760
1761 The main purpose of the relay daemon is to implement a receiver of
1762 <<sending-trace-data-over-the-network,trace data over the network>>.
1763 This is useful when the target system does not have much file system
1764 space to record trace files locally.
1765
1766 The relay daemon is also a server to which a
1767 <<lttng-live,live trace viewer>> can
1768 connect. The live trace viewer sends requests to the relay daemon to
1769 receive trace data as the target system emits events. The
1770 communication protocol is named _LTTng live_; it is used over TCP
1771 connections.
1772
Note that you can start the relay daemon on the target system directly.
This is the setup of choice when the use case is to view events as
the target system emits them, without the need for a remote system.
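
For example, a minimal sketch which starts a relay daemon on the target
system, using its default TCP ports and detached from the terminal:

[role="term"]
----
$ lttng-relayd --daemonize
----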
1776
1777
1778 [[instrumenting]]
1779 == [[using-lttng]]Instrumentation
1780
1781 There are many examples of tracing and monitoring in our everyday life:
1782
1783 * You have access to real-time and historical weather reports and
1784 forecasts thanks to weather stations installed around the country.
1785 * You know your heart is safe thanks to an electrocardiogram.
1786 * You make sure not to drive your car too fast and to have enough fuel
1787 to reach your destination thanks to gauges visible on your dashboard.
1788
1789 All the previous examples have something in common: they rely on
1790 **instruments**. Without the electrodes attached to the surface of your
1791 body's skin, cardiac monitoring is futile.
1792
LTTng, as a tracer, is no different from those real-life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you'd better have **instrumentation points** in the
1796 subject you're tracing, that is, the actual software.
1797
1798 Various ways were developed to instrument a piece of software for LTTng
1799 tracing. The most straightforward one is to manually place
1800 instrumentation points, called _tracepoints_, in the software's source
1801 code. It is also possible to add instrumentation points dynamically in
1802 the Linux kernel <<domain,tracing domain>>.
1803
1804 If you're only interested in tracing the Linux kernel, your
1805 instrumentation needs are probably already covered by LTTng's built-in
1806 <<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
1807 user application which is already instrumented for LTTng tracing.
1808 In such cases, you can skip this whole section and read the topics of
1809 the <<controlling-tracing,Tracing control>> section.
1810
1811 Many methods are available to instrument a piece of software for LTTng
1812 tracing. They are:
1813
1814 * <<c-application,User space instrumentation for C and $$C++$$
1815 applications>>.
1816 * <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
1817 * <<java-application,User space Java agent>>.
1818 * <<python-application,User space Python agent>>.
1819 * <<proc-lttng-logger-abi,LTTng logger>>.
1820 * <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
1821
1822
1823 [[c-application]]
1824 === [[cxx-application]]User space instrumentation for C and $$C++$$ applications
1825
1826 The procedure to instrument a C or $$C++$$ user application with
1827 the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
1828
1829 . <<tracepoint-provider,Create the source files of a tracepoint provider
1830 package>>.
1831 . <<probing-the-application-source-code,Add tracepoints to
1832 the application's source code>>.
1833 . <<building-tracepoint-providers-and-user-application,Build and link
1834 a tracepoint provider package and the user application>>.
1835
1836 If you need quick, man:printf(3)-like instrumentation, you can skip
1837 those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
1838 instead.
1839
1840 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
1841 instrument a user application with `liblttng-ust`.
1842
1843
1844 [[tracepoint-provider]]
1845 ==== Create the source files of a tracepoint provider package
1846
1847 A _tracepoint provider_ is a set of compiled functions which provide
1848 **tracepoints** to an application, the type of instrumentation point
1849 supported by LTTng-UST. Those functions can emit events with
1850 user-defined fields and serialize those events as event records to one
1851 or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
1852 macro, which you <<probing-the-application-source-code,insert in a user
1853 application's source code>>, calls those functions.
1854
1855 A _tracepoint provider package_ is an object file (`.o`) or a shared
1856 library (`.so`) which contains one or more tracepoint providers.
1857 Its source files are:
1858
1859 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
1860 * A <<tpp-source,tracepoint provider package source>> (`.c`).
1861
1862 A tracepoint provider package is dynamically linked with `liblttng-ust`,
1863 the LTTng user space tracer, at run time.
1864
1865 [role="img-100"]
1866 .User application linked with `liblttng-ust` and containing a tracepoint provider.
1867 image::ust-app.png[]
1868
1869 NOTE: If you need quick, man:printf(3)-like instrumentation, you can
1870 skip creating and using a tracepoint provider and use
1871 <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
1872
1873
1874 [[tpp-header]]
1875 ===== Create a tracepoint provider header file template
1876
1877 A _tracepoint provider header file_ contains the tracepoint
1878 definitions of a tracepoint provider.
1879
1880 To create a tracepoint provider header file:
1881
1882 . Start from this template:
1883 +
1884 --
1885 [source,c]
1886 .Tracepoint provider header file template (`.h` file extension).
1887 ----
1888 #undef TRACEPOINT_PROVIDER
1889 #define TRACEPOINT_PROVIDER provider_name
1890
1891 #undef TRACEPOINT_INCLUDE
1892 #define TRACEPOINT_INCLUDE "./tp.h"
1893
1894 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1895 #define _TP_H
1896
1897 #include <lttng/tracepoint.h>
1898
1899 /*
1900 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
1901 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
1902 */
1903
1904 #endif /* _TP_H */
1905
1906 #include <lttng/tracepoint-event.h>
1907 ----
1908 --
1909
1910 . Replace:
1911 +
1912 * `provider_name` with the name of your tracepoint provider.
1913 * `"tp.h"` with the name of your tracepoint provider header file.
1914
1915 . Below the `#include <lttng/tracepoint.h>` line, put your
1916 <<defining-tracepoints,tracepoint definitions>>.
1917
1918 Your tracepoint provider name must be unique amongst all the possible
1919 tracepoint provider names used on the same target system. We
suggest including the name of your project or company in the name,
1921 for example, `org_lttng_my_project_tpp`.
1922
1923 TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
1924 this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
1925 write are the <<defining-tracepoints,tracepoint definitions>>.
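
For example, assuming a template file named path:{my-provider.tp} which
contains only your tracepoint definitions, the following command should
generate the corresponding path:{my-provider.h}, path:{my-provider.c},
and path:{my-provider.o} files:

[role="term"]
----
$ lttng-gen-tp my-provider.tp
----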
1926
1927
1928 [[defining-tracepoints]]
1929 ===== Create a tracepoint definition
1930
1931 A _tracepoint definition_ defines, for a given tracepoint:
1932
1933 * Its **input arguments**. They are the macro parameters that the
1934 `tracepoint()` macro accepts for this particular tracepoint
1935 in the user application's source code.
1936 * Its **output event fields**. They are the sources of event fields
1937 that form the payload of any event that the execution of the
1938 `tracepoint()` macro emits for this particular tracepoint.
1939
1940 You can create a tracepoint definition by using the
1941 `TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
1942 line in the
1943 <<tpp-header,tracepoint provider header file template>>.
1944
1945 The syntax of the `TRACEPOINT_EVENT()` macro is:
1946
1947 [source,c]
1948 .`TRACEPOINT_EVENT()` macro syntax.
1949 ----
1950 TRACEPOINT_EVENT(
1951 /* Tracepoint provider name */
1952 provider_name,
1953
1954 /* Tracepoint name */
1955 tracepoint_name,
1956
1957 /* Input arguments */
1958 TP_ARGS(
1959 arguments
1960 ),
1961
1962 /* Output event fields */
1963 TP_FIELDS(
1964 fields
1965 )
1966 )
1967 ----
1968
1969 Replace:
1970
1971 * `provider_name` with your tracepoint provider name.
1972 * `tracepoint_name` with your tracepoint name.
1973 * `arguments` with the <<tpp-def-input-args,input arguments>>.
1974 * `fields` with the <<tpp-def-output-fields,output event field>>
1975 definitions.
1976
1977 This tracepoint emits events named `provider_name:tracepoint_name`.
1978
1979 [IMPORTANT]
1980 .Event name's length limitation
1981 ====
1982 The concatenation of the tracepoint provider name and the
1983 tracepoint name must not exceed **254 characters**. If it does, the
1984 instrumented application compiles and runs, but LTTng throws multiple
1985 warnings and you could experience serious issues.
1986 ====
1987
1988 [[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
1989
1990 [source,c]
1991 .`TP_ARGS()` macro syntax.
1992 ----
1993 TP_ARGS(
1994 type, arg_name
1995 )
1996 ----
1997
1998 Replace:
1999
2000 * `type` with the C type of the argument.
2001 * `arg_name` with the argument name.
2002
2003 You can repeat `type` and `arg_name` up to 10 times to have
2004 more than one argument.
2005
2006 .`TP_ARGS()` usage with three arguments.
2007 ====
2008 [source,c]
2009 ----
2010 TP_ARGS(
2011 int, count,
2012 float, ratio,
2013 const char*, query
2014 )
2015 ----
2016 ====
2017
2018 The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2019 tracepoint definition with no input arguments.
2020
2021 [[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2022 `ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2023 man:lttng-ust(3) for a complete description of the available `ctf_*()`
2024 macros. A `ctf_*()` macro specifies the type, size, and byte order of
2025 one event field.
2026
2027 Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
C expression that the tracer evaluates at the `tracepoint()` macro site
2029 in the application's source code. This expression provides a field's
2030 source of data. The argument expression can include input argument names
2031 listed in the `TP_ARGS()` macro.
2032
2033 Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2034 must be unique within a given tracepoint definition.
2035
2036 Here's a complete tracepoint definition example:
2037
2038 .Tracepoint definition.
2039 ====
2040 The following tracepoint definition defines a tracepoint which takes
2041 three input arguments and has four output event fields.
2042
2043 [source,c]
2044 ----
2045 #include "my-custom-structure.h"
2046
2047 TRACEPOINT_EVENT(
2048 my_provider,
2049 my_tracepoint,
2050 TP_ARGS(
2051 const struct my_custom_structure*, my_custom_structure,
2052 float, ratio,
2053 const char*, query
2054 ),
2055 TP_FIELDS(
2056 ctf_string(query_field, query)
2057 ctf_float(double, ratio_field, ratio)
2058 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2059 ctf_integer(int, send_size, my_custom_structure->send_size)
2060 )
2061 )
2062 ----
2063
2064 You can refer to this tracepoint definition with the `tracepoint()`
2065 macro in your application's source code like this:
2066
2067 [source,c]
2068 ----
2069 tracepoint(my_provider, my_tracepoint,
2070 my_structure, some_ratio, the_query);
2071 ----
2072 ====
2073
2074 NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2075 if they satisfy an enabled <<event,event rule>>.
2076
2077
2078 [[using-tracepoint-classes]]
2079 ===== Use a tracepoint class
2080
2081 A _tracepoint class_ is a class of tracepoints which share the same
2082 output event field definitions. A _tracepoint instance_ is one
2083 instance of such a defined tracepoint class, with its own tracepoint
2084 name.
2085
2086 The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2087 shorthand which defines both a tracepoint class and a tracepoint
2088 instance at the same time.
2089
2090 When you build a tracepoint provider package, the C or $$C++$$ compiler
2091 creates one serialization function for each **tracepoint class**. A
2092 serialization function is responsible for serializing the event fields
2093 of a tracepoint to a sub-buffer when tracing.
2094
2095 For various performance reasons, when your situation requires multiple
2096 tracepoint definitions with different names, but with the same event
2097 fields, we recommend that you manually create a tracepoint class
2098 and instantiate as many tracepoint instances as needed. One positive
2099 effect of such a design, amongst other advantages, is that all
2100 tracepoint instances of the same tracepoint class reuse the same
2101 serialization function, thus reducing
2102 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2103
2104 .Use a tracepoint class and tracepoint instances.
2105 ====
2106 Consider the following three tracepoint definitions:
2107
2108 [source,c]
2109 ----
2110 TRACEPOINT_EVENT(
2111 my_app,
2112 get_account,
2113 TP_ARGS(
2114 int, userid,
2115 size_t, len
2116 ),
2117 TP_FIELDS(
2118 ctf_integer(int, userid, userid)
2119 ctf_integer(size_t, len, len)
2120 )
2121 )
2122
2123 TRACEPOINT_EVENT(
2124 my_app,
2125 get_settings,
2126 TP_ARGS(
2127 int, userid,
2128 size_t, len
2129 ),
2130 TP_FIELDS(
2131 ctf_integer(int, userid, userid)
2132 ctf_integer(size_t, len, len)
2133 )
2134 )
2135
2136 TRACEPOINT_EVENT(
2137 my_app,
2138 get_transaction,
2139 TP_ARGS(
2140 int, userid,
2141 size_t, len
2142 ),
2143 TP_FIELDS(
2144 ctf_integer(int, userid, userid)
2145 ctf_integer(size_t, len, len)
2146 )
2147 )
2148 ----
2149
2150 In this case, we create three tracepoint classes, with one implicit
2151 tracepoint instance for each of them: `get_account`, `get_settings`, and
2152 `get_transaction`. However, they all share the same event field names
2153 and types. Hence three identical, yet independent serialization
2154 functions are created when you build the tracepoint provider package.
2155
2156 A better design choice is to define a single tracepoint class and three
2157 tracepoint instances:
2158
2159 [source,c]
2160 ----
2161 /* The tracepoint class */
2162 TRACEPOINT_EVENT_CLASS(
2163 /* Tracepoint provider name */
2164 my_app,
2165
2166 /* Tracepoint class name */
2167 my_class,
2168
2169 /* Input arguments */
2170 TP_ARGS(
2171 int, userid,
2172 size_t, len
2173 ),
2174
2175 /* Output event fields */
2176 TP_FIELDS(
2177 ctf_integer(int, userid, userid)
2178 ctf_integer(size_t, len, len)
2179 )
2180 )
2181
2182 /* The tracepoint instances */
2183 TRACEPOINT_EVENT_INSTANCE(
2184 /* Tracepoint provider name */
2185 my_app,
2186
2187 /* Tracepoint class name */
2188 my_class,
2189
2190 /* Tracepoint name */
2191 get_account,
2192
2193 /* Input arguments */
2194 TP_ARGS(
2195 int, userid,
2196 size_t, len
2197 )
2198 )
2199 TRACEPOINT_EVENT_INSTANCE(
2200 my_app,
2201 my_class,
2202 get_settings,
2203 TP_ARGS(
2204 int, userid,
2205 size_t, len
2206 )
2207 )
2208 TRACEPOINT_EVENT_INSTANCE(
2209 my_app,
2210 my_class,
2211 get_transaction,
2212 TP_ARGS(
2213 int, userid,
2214 size_t, len
2215 )
2216 )
2217 ----
2218 ====
2219
2220
2221 [[assigning-log-levels]]
2222 ===== Assign a log level to a tracepoint definition
2223
2224 You can assign an optional _log level_ to a
2225 <<defining-tracepoints,tracepoint definition>>.
2226
2227 Assigning different levels of severity to tracepoint definitions can
2228 be useful: when you <<enabling-disabling-events,create an event rule>>,
2229 you can target tracepoints having a log level as severe as a specific
2230 value.
2231
2232 The concept of LTTng-UST log levels is similar to the levels found
2233 in typical logging frameworks:
2234
2235 * In a logging framework, the log level is given by the function
2236 or method name you use at the log statement site: `debug()`,
2237 `info()`, `warn()`, `error()`, and so on.
2238 * In LTTng-UST, you statically assign the log level to a tracepoint
2239 definition; any `tracepoint()` macro invocation which refers to
2240 this definition has this log level.
2241
2242 You can assign a log level to a tracepoint definition with the
2243 `TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2244 <<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a given
2246 tracepoint.
2247
2248 The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2249
2250 [source,c]
2251 .`TRACEPOINT_LOGLEVEL()` macro syntax.
2252 ----
2253 TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2254 ----
2255
2256 Replace:
2257
2258 * `provider_name` with the tracepoint provider name.
2259 * `tracepoint_name` with the tracepoint name.
2260 * `log_level` with the log level to assign to the tracepoint
2261 definition named `tracepoint_name` in the `provider_name`
2262 tracepoint provider.
2263 +
2264 See man:lttng-ust(3) for a list of available log level names.
2265
2266 .Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2267 ====
2268 [source,c]
2269 ----
2270 /* Tracepoint definition */
2271 TRACEPOINT_EVENT(
2272 my_app,
2273 get_transaction,
2274 TP_ARGS(
2275 int, userid,
2276 size_t, len
2277 ),
2278 TP_FIELDS(
2279 ctf_integer(int, userid, userid)
2280 ctf_integer(size_t, len, len)
2281 )
2282 )
2283
2284 /* Log level assignment */
2285 TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2286 ----
2287 ====
2288
2289
2290 [[tpp-source]]
2291 ===== Create a tracepoint provider package source file
2292
2293 A _tracepoint provider package source file_ is a C source file which
2294 includes a <<tpp-header,tracepoint provider header file>> to expand its
2295 macros into event serialization and other functions.
2296
2297 You can always use the following tracepoint provider package source
2298 file template:
2299
2300 [source,c]
2301 .Tracepoint provider package source file template.
2302 ----
2303 #define TRACEPOINT_CREATE_PROBES
2304
2305 #include "tp.h"
2306 ----
2307
Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You may also include more than one tracepoint
provider header file here to create a tracepoint provider package
holding more than one tracepoint provider.
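
For example, here's a sketch of a package source file which includes
two hypothetical tracepoint provider header files, path:{tp-net.h} and
path:{tp-disk.h}, to bundle their providers into a single package:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES

#include "tp-net.h"
#include "tp-disk.h"
----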
2312
2313
2314 [[probing-the-application-source-code]]
2315 ==== Add tracepoints to an application's source code
2316
2317 Once you <<tpp-header,create a tracepoint provider header file>>, you
2318 can use the `tracepoint()` macro in your application's
2319 source code to insert the tracepoints that this header
2320 <<defining-tracepoints,defines>>.
2321
2322 The `tracepoint()` macro takes at least two parameters: the tracepoint
2323 provider name and the tracepoint name. The corresponding tracepoint
2324 definition defines the other parameters.
2325
2326 .`tracepoint()` usage.
2327 ====
2328 The following <<defining-tracepoints,tracepoint definition>> defines a
2329 tracepoint which takes two input arguments and has two output event
2330 fields.
2331
2332 [source,c]
2333 .Tracepoint provider header file.
2334 ----
2335 #include "my-custom-structure.h"
2336
2337 TRACEPOINT_EVENT(
2338 my_provider,
2339 my_tracepoint,
2340 TP_ARGS(
2341 int, argc,
2342 const char*, cmd_name
2343 ),
2344 TP_FIELDS(
2345 ctf_string(cmd_name, cmd_name)
2346 ctf_integer(int, number_of_args, argc)
2347 )
2348 )
2349 ----
2350
2351 You can refer to this tracepoint definition with the `tracepoint()`
2352 macro in your application's source code like this:
2353
2354 [source,c]
2355 .Application's source file.
2356 ----
2357 #include "tp.h"
2358
2359 int main(int argc, char* argv[])
2360 {
2361 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2362
2363 return 0;
2364 }
2365 ----
2366
2367 Note how the application's source code includes
2368 the tracepoint provider header file containing the tracepoint
2369 definitions to use, path:{tp.h}.
2370 ====
2371
2372 .`tracepoint()` usage with a complex tracepoint definition.
2373 ====
2374 Consider this complex tracepoint definition, where multiple event
2375 fields refer to the same input arguments in their argument expression
2376 parameter:
2377
2378 [source,c]
2379 .Tracepoint provider header file.
2380 ----
2381 /* For `struct stat` */
2382 #include <sys/types.h>
2383 #include <sys/stat.h>
2384 #include <unistd.h>
2385
2386 TRACEPOINT_EVENT(
2387 my_provider,
2388 my_tracepoint,
2389 TP_ARGS(
2390 int, my_int_arg,
2391 char*, my_str_arg,
2392 struct stat*, st
2393 ),
2394 TP_FIELDS(
2395 ctf_integer(int, my_constant_field, 23 + 17)
2396 ctf_integer(int, my_int_arg_field, my_int_arg)
2397 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2398 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2399 my_str_arg[2] + my_str_arg[3])
2400 ctf_string(my_str_arg_field, my_str_arg)
2401 ctf_integer_hex(off_t, size_field, st->st_size)
2402 ctf_float(double, size_dbl_field, (double) st->st_size)
2403 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2404 size_t, strlen(my_str_arg) / 2)
2405 )
2406 )
2407 ----
2408
2409 You can refer to this tracepoint definition with the `tracepoint()`
2410 macro in your application's source code like this:
2411
2412 [source,c]
2413 .Application's source file.
2414 ----
2415 #define TRACEPOINT_DEFINE
2416 #include "tp.h"
2417
2418 int main(void)
2419 {
2420 struct stat s;
2421
2422 stat("/etc/fstab", &s);
2423 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2424
2425 return 0;
2426 }
2427 ----
2428
2429 If you look at the event record that LTTng writes when tracing this
2430 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2431 it should look like this:
2432
2433 .Event record fields
2434 |====
2435 |Field's name |Field's value
2436 |`my_constant_field` |40
2437 |`my_int_arg_field` |23
2438 |`my_int_arg_field2` |529
2439 |`sum4_field` |389
2440 |`my_str_arg_field` |`Hello, World!`
2441 |`size_field` |0x12d
2442 |`size_dbl_field` |301.0
2443 |`half_my_str_arg_field` |`Hello,`
2444 |====
2445 ====
2446
2447 Sometimes, the arguments you pass to `tracepoint()` are expensive to
2448 compute--they use the call stack, for example. To avoid this
2449 computation when the tracepoint is disabled, you can use the
2450 `tracepoint_enabled()` and `do_tracepoint()` macros.
2451
2452 The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2453 is:
2454
2455 [source,c]
2456 .`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2457 ----
2458 tracepoint_enabled(provider_name, tracepoint_name)
2459 do_tracepoint(provider_name, tracepoint_name, ...)
2460 ----
2461
2462 Replace:
2463
2464 * `provider_name` with the tracepoint provider name.
2465 * `tracepoint_name` with the tracepoint name.
2466
2467 `tracepoint_enabled()` returns a non-zero value if the tracepoint named
2468 `tracepoint_name` from the provider named `provider_name` is enabled
2469 **at run time**.
2470
2471 `do_tracepoint()` is like `tracepoint()`, except that it doesn't check
2472 if the tracepoint is enabled. Using `tracepoint()` with
2473 `tracepoint_enabled()` is dangerous since `tracepoint()` also contains
2474 the `tracepoint_enabled()` check, thus a race condition is
2475 possible in this situation:
2476
2477 [source,c]
2478 .Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2479 ----
2480 if (tracepoint_enabled(my_provider, my_tracepoint)) {
2481 stuff = prepare_stuff();
2482 }
2483
2484 tracepoint(my_provider, my_tracepoint, stuff);
2485 ----
2486
If the tracepoint becomes enabled after the condition is checked, then
`stuff` is not prepared: the emitted event could contain wrong data, or
the whole application could crash (with a segmentation fault, for
example).
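
The correct pattern is to prepare the expensive arguments and trace the
event inside the `tracepoint_enabled()` block with `do_tracepoint()`:

[source,c]
.Correct use of `tracepoint_enabled()` with `do_tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----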
2490
2491 NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` have an
2492 `STAP_PROBEV()` call. If you need it, you must emit
2493 this call yourself.
2494
2495
2496 [[building-tracepoint-providers-and-user-application]]
2497 ==== Build and link a tracepoint provider package and an application
2498
2499 Once you have one or more <<tpp-header,tracepoint provider header
2500 files>> and a <<tpp-source,tracepoint provider package source file>>,
2501 you can create the tracepoint provider package by compiling its source
2502 file. From here, multiple build and run scenarios are possible. The
2503 following table shows common application and library configurations
2504 along with the required command lines to achieve them.
2505
In the following diagrams, we use these file names:
2507
2508 `app`::
2509 Executable application.
2510
2511 `app.o`::
2512 Application's object file.
2513
2514 `tpp.o`::
2515 Tracepoint provider package object file.
2516
2517 `tpp.a`::
2518 Tracepoint provider package archive file.
2519
2520 `libtpp.so`::
2521 Tracepoint provider package shared object file.
2522
2523 `emon.o`::
2524 User library object file.
2525
2526 `libemon.so`::
2527 User library shared object file.
2528
We use the following symbols in the diagrams of the table below:
2530
2531 [role="img-100"]
2532 .Symbols used in the build scenario diagrams.
2533 image::ust-sit-symbols.png[]
2534
2535 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2536 variable in the following instructions.
2537
2538 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2539 .Common tracepoint provider package scenarios.
2540 |====
2541 |Scenario |Instructions
2542
2543 |
2544 The instrumented application is statically linked with
2545 the tracepoint provider package object.
2546
2547 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2548
2549 |
2550 include::../common/ust-sit-step-tp-o.txt[]
2551
2552 To build the instrumented application:
2553
2554 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2555 +
2556 --
2557 [source,c]
2558 ----
2559 #define TRACEPOINT_DEFINE
2560 ----
2561 --
2562
2563 . Compile the application source file:
2564 +
2565 --
2566 [role="term"]
2567 ----
2568 $ gcc -c app.c
2569 ----
2570 --
2571
2572 . Build the application:
2573 +
2574 --
2575 [role="term"]
2576 ----
2577 $ gcc -o app app.o tpp.o -llttng-ust -ldl
2578 ----
2579 --
2580
2581 To run the instrumented application:
2582
2583 * Start the application:
2584 +
2585 --
2586 [role="term"]
2587 ----
2588 $ ./app
2589 ----
2590 --
2591
2592 |
2593 The instrumented application is statically linked with the
2594 tracepoint provider package archive file.
2595
2596 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2597
2598 |
2599 To create the tracepoint provider package archive file:
2600
2601 . Compile the <<tpp-source,tracepoint provider package source file>>:
2602 +
2603 --
2604 [role="term"]
2605 ----
2606 $ gcc -I. -c tpp.c
2607 ----
2608 --
2609
2610 . Create the tracepoint provider package archive file:
2611 +
2612 --
2613 [role="term"]
2614 ----
2615 $ ar rcs tpp.a tpp.o
2616 ----
2617 --
2618
2619 To build the instrumented application:
2620
2621 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2622 +
2623 --
2624 [source,c]
2625 ----
2626 #define TRACEPOINT_DEFINE
2627 ----
2628 --
2629
2630 . Compile the application source file:
2631 +
2632 --
2633 [role="term"]
2634 ----
2635 $ gcc -c app.c
2636 ----
2637 --
2638
2639 . Build the application:
2640 +
2641 --
2642 [role="term"]
2643 ----
2644 $ gcc -o app app.o tpp.a -llttng-ust -ldl
2645 ----
2646 --
2647
2648 To run the instrumented application:
2649
2650 * Start the application:
2651 +
2652 --
2653 [role="term"]
2654 ----
2655 $ ./app
2656 ----
2657 --
2658
2659 |
2660 The instrumented application is linked with the tracepoint provider
2661 package shared object.
2662
2663 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
2664
2665 |
2666 include::../common/ust-sit-step-tp-so.txt[]
2667
2668 To build the instrumented application:
2669
2670 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2671 +
2672 --
2673 [source,c]
2674 ----
2675 #define TRACEPOINT_DEFINE
2676 ----
2677 --
2678
2679 . Compile the application source file:
2680 +
2681 --
2682 [role="term"]
2683 ----
2684 $ gcc -c app.c
2685 ----
2686 --
2687
2688 . Build the application:
2689 +
2690 --
2691 [role="term"]
2692 ----
2693 $ gcc -o app app.o -ldl -L. -ltpp
2694 ----
2695 --
2696
2697 To run the instrumented application:
2698
2699 * Start the application:
2700 +
2701 --
2702 [role="term"]
2703 ----
2704 $ ./app
2705 ----
2706 --
2707
2708 |
2709 The tracepoint provider package shared object is preloaded before the
2710 instrumented application starts.
2711
2712 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
2713
2714 |
2715 include::../common/ust-sit-step-tp-so.txt[]
2716
2717 To build the instrumented application:
2718
2719 . In path:{app.c}, before including path:{tpp.h}, add the
2720 following lines:
2721 +
2722 --
2723 [source,c]
2724 ----
2725 #define TRACEPOINT_DEFINE
2726 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2727 ----
2728 --
2729
2730 . Compile the application source file:
2731 +
2732 --
2733 [role="term"]
2734 ----
2735 $ gcc -c app.c
2736 ----
2737 --
2738
2739 . Build the application:
2740 +
2741 --
2742 [role="term"]
2743 ----
2744 $ gcc -o app app.o -ldl
2745 ----
2746 --
2747
2748 To run the instrumented application with tracing support:
2749
2750 * Preload the tracepoint provider package shared object and
2751 start the application:
2752 +
2753 --
2754 [role="term"]
2755 ----
2756 $ LD_PRELOAD=./libtpp.so ./app
2757 ----
2758 --
2759
2760 To run the instrumented application without tracing support:
2761
2762 * Start the application:
2763 +
2764 --
2765 [role="term"]
2766 ----
2767 $ ./app
2768 ----
2769 --
2770
2771 |
2772 The instrumented application dynamically loads the tracepoint provider
2773 package shared object.
2774
2775 See the <<dlclose-warning,warning about `dlclose()`>>.
2776
2777 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
2778
2779 |
2780 include::../common/ust-sit-step-tp-so.txt[]
2781
2782 To build the instrumented application:
2783
2784 . In path:{app.c}, before including path:{tpp.h}, add the
2785 following lines:
2786 +
2787 --
2788 [source,c]
2789 ----
2790 #define TRACEPOINT_DEFINE
2791 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2792 ----
2793 --
2794
2795 . Compile the application source file:
2796 +
2797 --
2798 [role="term"]
2799 ----
2800 $ gcc -c app.c
2801 ----
2802 --
2803
2804 . Build the application:
2805 +
2806 --
2807 [role="term"]
2808 ----
2809 $ gcc -o app app.o -ldl
2810 ----
2811 --
2812
2813 To run the instrumented application:
2814
2815 * Start the application:
2816 +
2817 --
2818 [role="term"]
2819 ----
2820 $ ./app
2821 ----
2822 --
2823
2824 |
2825 The application is linked with the instrumented user library.
2826
2827 The instrumented user library is statically linked with the tracepoint
2828 provider package object file.
2829
2830 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
2831
2832 |
2833 include::../common/ust-sit-step-tp-o-fpic.txt[]
2834
2835 To build the instrumented user library:
2836
2837 . In path:{emon.c}, before including path:{tpp.h}, add the
2838 following line:
2839 +
2840 --
2841 [source,c]
2842 ----
2843 #define TRACEPOINT_DEFINE
2844 ----
2845 --
2846
2847 . Compile the user library source file:
2848 +
2849 --
2850 [role="term"]
2851 ----
2852 $ gcc -I. -fpic -c emon.c
2853 ----
2854 --
2855
2856 . Build the user library shared object:
2857 +
2858 --
2859 [role="term"]
2860 ----
2861 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
2862 ----
2863 --
2864
2865 To build the application:
2866
2867 . Compile the application source file:
2868 +
2869 --
2870 [role="term"]
2871 ----
2872 $ gcc -c app.c
2873 ----
2874 --
2875
2876 . Build the application:
2877 +
2878 --
2879 [role="term"]
2880 ----
2881 $ gcc -o app app.o -L. -lemon
2882 ----
2883 --
2884
2885 To run the application:
2886
2887 * Start the application:
2888 +
2889 --
2890 [role="term"]
2891 ----
2892 $ ./app
2893 ----
2894 --
2895
2896 |
2897 The application is linked with the instrumented user library.
2898
2899 The instrumented user library is linked with the tracepoint provider
2900 package shared object.
2901
2902 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
2903
2904 |
2905 include::../common/ust-sit-step-tp-so.txt[]
2906
2907 To build the instrumented user library:
2908
2909 . In path:{emon.c}, before including path:{tpp.h}, add the
2910 following line:
2911 +
2912 --
2913 [source,c]
2914 ----
2915 #define TRACEPOINT_DEFINE
2916 ----
2917 --
2918
2919 . Compile the user library source file:
2920 +
2921 --
2922 [role="term"]
2923 ----
2924 $ gcc -I. -fpic -c emon.c
2925 ----
2926 --
2927
2928 . Build the user library shared object:
2929 +
2930 --
2931 [role="term"]
2932 ----
2933 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
2934 ----
2935 --
2936
2937 To build the application:
2938
2939 . Compile the application source file:
2940 +
2941 --
2942 [role="term"]
2943 ----
2944 $ gcc -c app.c
2945 ----
2946 --
2947
2948 . Build the application:
2949 +
2950 --
2951 [role="term"]
2952 ----
2953 $ gcc -o app app.o -L. -lemon
2954 ----
2955 --
2956
2957 To run the application:
2958
2959 * Start the application:
2960 +
2961 --
2962 [role="term"]
2963 ----
2964 $ ./app
2965 ----
2966 --
2967
2968 |
2969 The tracepoint provider package shared object is preloaded before the
2970 application starts.
2971
2972 The application is linked with the instrumented user library.
2973
2974 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
2975
2976 |
2977 include::../common/ust-sit-step-tp-so.txt[]
2978
2979 To build the instrumented user library:
2980
2981 . In path:{emon.c}, before including path:{tpp.h}, add the
2982 following lines:
2983 +
2984 --
2985 [source,c]
2986 ----
2987 #define TRACEPOINT_DEFINE
2988 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2989 ----
2990 --
2991
2992 . Compile the user library source file:
2993 +
2994 --
2995 [role="term"]
2996 ----
2997 $ gcc -I. -fpic -c emon.c
2998 ----
2999 --
3000
3001 . Build the user library shared object:
3002 +
3003 --
3004 [role="term"]
3005 ----
3006 $ gcc -shared -o libemon.so emon.o -ldl
3007 ----
3008 --
3009
3010 To build the application:
3011
3012 . Compile the application source file:
3013 +
3014 --
3015 [role="term"]
3016 ----
3017 $ gcc -c app.c
3018 ----
3019 --
3020
3021 . Build the application:
3022 +
3023 --
3024 [role="term"]
3025 ----
3026 $ gcc -o app app.o -L. -lemon
3027 ----
3028 --
3029
3030 To run the application with tracing support:
3031
3032 * Preload the tracepoint provider package shared object and
3033 start the application:
3034 +
3035 --
3036 [role="term"]
3037 ----
3038 $ LD_PRELOAD=./libtpp.so ./app
3039 ----
3040 --
3041
3042 To run the application without tracing support:
3043
3044 * Start the application:
3045 +
3046 --
3047 [role="term"]
3048 ----
3049 $ ./app
3050 ----
3051 --
3052
3053 |
3054 The application is linked with the instrumented user library.
3055
3056 The instrumented user library dynamically loads the tracepoint provider
3057 package shared object.
3058
3059 See the <<dlclose-warning,warning about `dlclose()`>>.
3060
3061 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3062
3063 |
3064 include::../common/ust-sit-step-tp-so.txt[]
3065
3066 To build the instrumented user library:
3067
3068 . In path:{emon.c}, before including path:{tpp.h}, add the
3069 following lines:
3070 +
3071 --
3072 [source,c]
3073 ----
3074 #define TRACEPOINT_DEFINE
3075 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3076 ----
3077 --
3078
3079 . Compile the user library source file:
3080 +
3081 --
3082 [role="term"]
3083 ----
3084 $ gcc -I. -fpic -c emon.c
3085 ----
3086 --
3087
3088 . Build the user library shared object:
3089 +
3090 --
3091 [role="term"]
3092 ----
3093 $ gcc -shared -o libemon.so emon.o -ldl
3094 ----
3095 --
3096
3097 To build the application:
3098
3099 . Compile the application source file:
3100 +
3101 --
3102 [role="term"]
3103 ----
3104 $ gcc -c app.c
3105 ----
3106 --
3107
3108 . Build the application:
3109 +
3110 --
3111 [role="term"]
3112 ----
3113 $ gcc -o app app.o -L. -lemon
3114 ----
3115 --
3116
3117 To run the application:
3118
3119 * Start the application:
3120 +
3121 --
3122 [role="term"]
3123 ----
3124 $ ./app
3125 ----
3126 --
3127
3128 |
3129 The application dynamically loads the instrumented user library.
3130
3131 The instrumented user library is linked with the tracepoint provider
3132 package shared object.
3133
3134 See the <<dlclose-warning,warning about `dlclose()`>>.
3135
3136 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3137
3138 |
3139 include::../common/ust-sit-step-tp-so.txt[]
3140
3141 To build the instrumented user library:
3142
3143 . In path:{emon.c}, before including path:{tpp.h}, add the
3144 following line:
3145 +
3146 --
3147 [source,c]
3148 ----
3149 #define TRACEPOINT_DEFINE
3150 ----
3151 --
3152
3153 . Compile the user library source file:
3154 +
3155 --
3156 [role="term"]
3157 ----
3158 $ gcc -I. -fpic -c emon.c
3159 ----
3160 --
3161
3162 . Build the user library shared object:
3163 +
3164 --
3165 [role="term"]
3166 ----
3167 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3168 ----
3169 --
3170
3171 To build the application:
3172
3173 . Compile the application source file:
3174 +
3175 --
3176 [role="term"]
3177 ----
3178 $ gcc -c app.c
3179 ----
3180 --
3181
3182 . Build the application:
3183 +
3184 --
3185 [role="term"]
3186 ----
3187 $ gcc -o app app.o -ldl -L. -lemon
3188 ----
3189 --
3190
3191 To run the application:
3192
3193 * Start the application:
3194 +
3195 --
3196 [role="term"]
3197 ----
3198 $ ./app
3199 ----
3200 --
3201
3202 |
3203 The application dynamically loads the instrumented user library.
3204
3205 The instrumented user library dynamically loads the tracepoint provider
3206 package shared object.
3207
3208 See the <<dlclose-warning,warning about `dlclose()`>>.
3209
3210 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3211
3212 |
3213 include::../common/ust-sit-step-tp-so.txt[]
3214
3215 To build the instrumented user library:
3216
3217 . In path:{emon.c}, before including path:{tpp.h}, add the
3218 following lines:
3219 +
3220 --
3221 [source,c]
3222 ----
3223 #define TRACEPOINT_DEFINE
3224 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3225 ----
3226 --
3227
3228 . Compile the user library source file:
3229 +
3230 --
3231 [role="term"]
3232 ----
3233 $ gcc -I. -fpic -c emon.c
3234 ----
3235 --
3236
3237 . Build the user library shared object:
3238 +
3239 --
3240 [role="term"]
3241 ----
3242 $ gcc -shared -o libemon.so emon.o -ldl
3243 ----
3244 --
3245
3246 To build the application:
3247
3248 . Compile the application source file:
3249 +
3250 --
3251 [role="term"]
3252 ----
3253 $ gcc -c app.c
3254 ----
3255 --
3256
3257 . Build the application:
3258 +
3259 --
3260 [role="term"]
3261 ----
3262 $ gcc -o app app.o -ldl -L. -lemon
3263 ----
3264 --
3265
3266 To run the application:
3267
3268 * Start the application:
3269 +
3270 --
3271 [role="term"]
3272 ----
3273 $ ./app
3274 ----
3275 --
3276
3277 |
3278 The tracepoint provider package shared object is preloaded before the
3279 application starts.
3280
3281 The application dynamically loads the instrumented user library.
3282
3283 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3284
3285 |
3286 include::../common/ust-sit-step-tp-so.txt[]
3287
3288 To build the instrumented user library:
3289
3290 . In path:{emon.c}, before including path:{tpp.h}, add the
3291 following lines:
3292 +
3293 --
3294 [source,c]
3295 ----
3296 #define TRACEPOINT_DEFINE
3297 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3298 ----
3299 --
3300
3301 . Compile the user library source file:
3302 +
3303 --
3304 [role="term"]
3305 ----
3306 $ gcc -I. -fpic -c emon.c
3307 ----
3308 --
3309
3310 . Build the user library shared object:
3311 +
3312 --
3313 [role="term"]
3314 ----
3315 $ gcc -shared -o libemon.so emon.o -ldl
3316 ----
3317 --
3318
3319 To build the application:
3320
3321 . Compile the application source file:
3322 +
3323 --
3324 [role="term"]
3325 ----
3326 $ gcc -c app.c
3327 ----
3328 --
3329
3330 . Build the application:
3331 +
3332 --
3333 [role="term"]
3334 ----
3335 $ gcc -o app app.o -L. -lemon
3336 ----
3337 --
3338
3339 To run the application with tracing support:
3340
3341 * Preload the tracepoint provider package shared object and
3342 start the application:
3343 +
3344 --
3345 [role="term"]
3346 ----
3347 $ LD_PRELOAD=./libtpp.so ./app
3348 ----
3349 --
3350
3351 To run the application without tracing support:
3352
3353 * Start the application:
3354 +
3355 --
3356 [role="term"]
3357 ----
3358 $ ./app
3359 ----
3360 --
3361
3362 |
3363 The application is statically linked with the tracepoint provider
3364 package object file.
3365
3366 The application is linked with the instrumented user library.
3367
3368 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3369
3370 |
3371 include::../common/ust-sit-step-tp-o.txt[]
3372
3373 To build the instrumented user library:
3374
3375 . In path:{emon.c}, before including path:{tpp.h}, add the
3376 following line:
3377 +
3378 --
3379 [source,c]
3380 ----
3381 #define TRACEPOINT_DEFINE
3382 ----
3383 --
3384
3385 . Compile the user library source file:
3386 +
3387 --
3388 [role="term"]
3389 ----
3390 $ gcc -I. -fpic -c emon.c
3391 ----
3392 --
3393
3394 . Build the user library shared object:
3395 +
3396 --
3397 [role="term"]
3398 ----
3399 $ gcc -shared -o libemon.so emon.o
3400 ----
3401 --
3402
3403 To build the application:
3404
3405 . Compile the application source file:
3406 +
3407 --
3408 [role="term"]
3409 ----
3410 $ gcc -c app.c
3411 ----
3412 --
3413
3414 . Build the application:
3415 +
3416 --
3417 [role="term"]
3418 ----
3419 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3420 ----
3421 --
3422
3423 To run the instrumented application:
3424
3425 * Start the application:
3426 +
3427 --
3428 [role="term"]
3429 ----
3430 $ ./app
3431 ----
3432 --
3433
3434 |
3435 The application is statically linked with the tracepoint provider
3436 package object file.
3437
3438 The application dynamically loads the instrumented user library.
3439
3440 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3441
3442 |
3443 include::../common/ust-sit-step-tp-o.txt[]
3444
3445 To build the application:
3446
3447 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3448 +
3449 --
3450 [source,c]
3451 ----
3452 #define TRACEPOINT_DEFINE
3453 ----
3454 --
3455
3456 . Compile the application source file:
3457 +
3458 --
3459 [role="term"]
3460 ----
3461 $ gcc -c app.c
3462 ----
3463 --
3464
3465 . Build the application:
3466 +
3467 --
3468 [role="term"]
3469 ----
3470 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3471 -llttng-ust -ldl
3472 ----
3473 --
3474 +
3475 The `--export-dynamic` option passed to the linker is necessary for the
3476 dynamically loaded library to ``see'' the tracepoint symbols defined in
3477 the application.
3478
3479 To build the instrumented user library:
3480
3481 . Compile the user library source file:
3482 +
3483 --
3484 [role="term"]
3485 ----
3486 $ gcc -I. -fpic -c emon.c
3487 ----
3488 --
3489
3490 . Build the user library shared object:
3491 +
3492 --
3493 [role="term"]
3494 ----
3495 $ gcc -shared -o libemon.so emon.o
3496 ----
3497 --
3498
3499 To run the application:
3500
3501 * Start the application:
3502 +
3503 --
3504 [role="term"]
3505 ----
3506 $ ./app
3507 ----
3508 --
3509 |====
3510
3511 [[dlclose-warning]]
3512 [IMPORTANT]
3513 .Do not use man:dlclose(3) on a tracepoint provider package
3514 ====
3515 Never use man:dlclose(3) on any shared object which:
3516
3517 * Is linked with, statically or dynamically, a tracepoint provider
3518 package.
3519 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3520 package shared object.
3521
3522 This is currently considered **unsafe** due to a lack of reference
3523 counting from LTTng-UST to the shared object.
3524
3525 A known workaround (available since glibc 2.2) is to use the
3526 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3527 effect of not unloading the loaded shared object, even if man:dlclose(3)
3528 is called.
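
For example, a sketch of loading a hypothetical path:{libtpp.so}
tracepoint provider package shared object with this flag:

[source,c]
----
#include <dlfcn.h>

/*
 * RTLD_NODELETE keeps libtpp.so loaded even if dlclose()
 * is eventually called on the returned handle.
 */
void *handle = dlopen("./libtpp.so", RTLD_NOW | RTLD_NODELETE);
----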
3529
3530 You can also preload the tracepoint provider package shared object with
3531 the env:LD_PRELOAD environment variable to overcome this limitation.
3532 ====
3533
3534
3535 [[using-lttng-ust-with-daemons]]
3536 ===== Use noch:{LTTng-UST} with daemons
3537
3538 If your instrumented application calls man:fork(2), man:clone(2),
3539 or BSD's man:rfork(2), without a following man:exec(3)-family
3540 system call, you must preload the path:{liblttng-ust-fork.so} shared
3541 object when you start the application.
3542
3543 [role="term"]
3544 ----
3545 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3546 ----
3547
3548 If your tracepoint provider package is
3549 a shared library which you also preload, you must put both
3550 shared objects in env:LD_PRELOAD:
3551
3552 [role="term"]
3553 ----
3554 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3555 ----
3556
3557
3558 [role="since-2.9"]
3559 [[liblttng-ust-fd]]
3560 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3561
3562 If your instrumented application closes one or more file descriptors
3563 which it did not open itself, you must preload the
3564 path:{liblttng-ust-fd.so} shared object when you start the application:
3565
3566 [role="term"]
3567 ----
3568 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3569 ----
3570
3571 Typical use cases include closing all the file descriptors after
3572 man:fork(2) or man:rfork(2) and buggy applications doing
3573 ``double closes''.
3574
3575
3576 [[lttng-ust-pkg-config]]
3577 ===== Use noch:{pkg-config}
3578
3579 On some distributions, LTTng-UST ships with a
3580 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3581 metadata file. If this is your case, then you can use cmd:pkg-config to
3582 build an application on the command line:
3583
3584 [role="term"]
3585 ----
3586 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3587 ----
3588
3589
3590 [[instrumenting-32-bit-app-on-64-bit-system]]
3591 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3592
3593 In order to trace a 32-bit application running on a 64-bit system,
3594 LTTng must use a dedicated 32-bit
3595 <<lttng-consumerd,consumer daemon>>.
3596
3597 The following steps show how to build and install a 32-bit consumer
3598 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3599 build and install the 32-bit LTTng-UST libraries, and how to build and
3600 link an instrumented 32-bit application in that context.
3601
3602 To build a 32-bit instrumented application for a 64-bit target system,
3603 assuming you have a fresh target system with no installed Userspace RCU
3604 or LTTng packages:
3605
3606 . Download, build, and install a 32-bit version of Userspace RCU:
3607 +
3608 --
3609 [role="term"]
3610 ----
3611 $ cd $(mktemp -d) &&
3612 wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3613 tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3614 cd userspace-rcu-0.9.* &&
3615 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3616 make &&
3617 sudo make install &&
3618 sudo ldconfig
3619 ----
3620 --
3621
. Using your distribution's package manager, or from source, install
the 32-bit versions of the following dependencies of LTTng-tools
and LTTng-UST:
3625 +
3626 --
3627 * https://sourceforge.net/projects/libuuid/[libuuid]
3628 * http://directory.fsf.org/wiki/Popt[popt]
3629 * http://www.xmlsoft.org/[libxml2]
3630 --
3631
3632 . Download, build, and install a 32-bit version of the latest
3633 LTTng-UST{nbsp}{revision}:
3634 +
3635 --
3636 [role="term"]
3637 ----
3638 $ cd $(mktemp -d) &&
3639 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.10.tar.bz2 &&
3640 tar -xf lttng-ust-latest-2.10.tar.bz2 &&
3641 cd lttng-ust-2.10.* &&
3642 ./configure --libdir=/usr/local/lib32 \
3643 CFLAGS=-m32 CXXFLAGS=-m32 \
3644 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
3645 make &&
3646 sudo make install &&
3647 sudo ldconfig
3648 ----
3649 --
3650 +
3651 [NOTE]
3652 ====
3653 Depending on your distribution,
3654 32-bit libraries could be installed at a different location than
3655 `/usr/lib32`. For example, Debian is known to install
3656 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
3657
3658 In this case, make sure to set `LDFLAGS` to all the
3659 relevant 32-bit library paths, for example:
3660
3661 [role="term"]
3662 ----
3663 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
3664 ----
3665 ====
3666
. Download the latest LTTng-tools{nbsp}{revision}, then build and
  install its 32-bit consumer daemon:
3669 +
3670 --
3671 [role="term"]
3672 ----
3673 $ cd $(mktemp -d) &&
3674 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3675 tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3676 cd lttng-tools-2.10.* &&
3677 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3678 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
3679 --disable-bin-lttng --disable-bin-lttng-crash \
3680 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
3681 make &&
3682 cd src/bin/lttng-consumerd &&
3683 sudo make install &&
3684 sudo ldconfig
3685 ----
3686 --
3687
3688 . From your distribution or from source,
3689 <<installing-lttng,install>> the 64-bit versions of
3690 LTTng-UST and Userspace RCU.
3691 . Download, build, and install the 64-bit version of the
3692 latest LTTng-tools{nbsp}{revision}:
3693 +
3694 --
3695 [role="term"]
3696 ----
3697 $ cd $(mktemp -d) &&
3698 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.10.tar.bz2 &&
3699 tar -xf lttng-tools-latest-2.10.tar.bz2 &&
3700 cd lttng-tools-2.10.* &&
3701 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
3702 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
3703 make &&
3704 sudo make install &&
3705 sudo ldconfig
3706 ----
3707 --
3708
3709 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
3710 when linking your 32-bit application:
3711 +
3712 ----
3713 -m32 -L/usr/lib32 -L/usr/local/lib32 \
3714 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
3715 ----
3716 +
3717 For example, let's rebuild the quick start example in
3718 <<tracing-your-own-user-application,Trace a user application>> as an
3719 instrumented 32-bit application:
3720 +
3721 --
3722 [role="term"]
3723 ----
3724 $ gcc -m32 -c -I. hello-tp.c
3725 $ gcc -m32 -c hello.c
3726 $ gcc -m32 -o hello hello.o hello-tp.o \
3727 -L/usr/lib32 -L/usr/local/lib32 \
3728 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
3729 -llttng-ust -ldl
3730 ----
3731 --
3732
3733 No special action is required to execute the 32-bit application and
3734 to trace it: use the command-line man:lttng(1) tool as usual.
3735
3736
3737 [role="since-2.5"]
3738 [[tracef]]
3739 ==== Use `tracef()`
3740
3741 man:tracef(3) is a small LTTng-UST API designed for quick,
3742 man:printf(3)-like instrumentation without the burden of
3743 <<tracepoint-provider,creating>> and
3744 <<building-tracepoint-providers-and-user-application,building>>
3745 a tracepoint provider package.
3746
3747 To use `tracef()` in your application:
3748
3749 . In the C or C++ source files where you need to use `tracef()`,
3750 include `<lttng/tracef.h>`:
3751 +
3752 --
3753 [source,c]
3754 ----
3755 #include <lttng/tracef.h>
3756 ----
3757 --
3758
3759 . In the application's source code, use `tracef()` like you would use
3760 man:printf(3):
3761 +
3762 --
3763 [source,c]
3764 ----
3765 /* ... */
3766
3767 tracef("my message: %d (%s)", my_integer, my_string);
3768
3769 /* ... */
3770 ----
3771 --
3772
3773 . Link your application with `liblttng-ust`:
3774 +
3775 --
3776 [role="term"]
3777 ----
3778 $ gcc -o app app.c -llttng-ust
3779 ----
3780 --
3781
3782 To trace the events that `tracef()` calls emit:
3783
3784 * <<enabling-disabling-events,Create an event rule>> which matches the
3785 `lttng_ust_tracef:*` event name:
3786 +
3787 --
3788 [role="term"]
3789 ----
3790 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
3791 ----
3792 --
3793
3794 [IMPORTANT]
3795 .Limitations of `tracef()`
3796 ====
3797 The `tracef()` utility function was developed to make user space tracing
3798 super simple, albeit with notable disadvantages compared to
3799 <<defining-tracepoints,user-defined tracepoints>>:
3800
3801 * All the emitted events have the same tracepoint provider and
3802 tracepoint names, respectively `lttng_ust_tracef` and `event`.
3803 * There is no static type checking.
3804 * The only event record field you actually get, named `msg`, is a string
3805 potentially containing the values you passed to `tracef()`
3806 using your own format string. This also means that you cannot filter
3807 events with a custom expression at run time because there are no
3808 isolated fields.
3809 * Since `tracef()` uses the C standard library's man:vasprintf(3)
3810 function behind the scenes to format the strings at run time, its
3811 expected performance is lower than with user-defined tracepoints,
3812 which do not require a conversion to a string.
3813
Taking this into consideration, `tracef()` is useful for some quick
prototyping and debugging, but you should not consider it for any
permanent and serious application instrumentation.
3817 ====
3818
3819
3820 [role="since-2.7"]
3821 [[tracelog]]
3822 ==== Use `tracelog()`
3823
3824 The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
3825 the difference that it accepts an additional log level parameter.
3826
3827 The goal of `tracelog()` is to ease the migration from logging to
3828 tracing.
3829
3830 To use `tracelog()` in your application:
3831
3832 . In the C or C++ source files where you need to use `tracelog()`,
3833 include `<lttng/tracelog.h>`:
3834 +
3835 --
3836 [source,c]
3837 ----
3838 #include <lttng/tracelog.h>
3839 ----
3840 --
3841
3842 . In the application's source code, use `tracelog()` like you would use
3843 man:printf(3), except for the first parameter which is the log
3844 level:
3845 +
3846 --
3847 [source,c]
3848 ----
3849 /* ... */
3850
3851 tracelog(TRACE_WARNING, "my message: %d (%s)",
3852 my_integer, my_string);
3853
3854 /* ... */
3855 ----
3856 --
3857 +
3858 See man:lttng-ust(3) for a list of available log level names.
3859
3860 . Link your application with `liblttng-ust`:
3861 +
3862 --
3863 [role="term"]
3864 ----
3865 $ gcc -o app app.c -llttng-ust
3866 ----
3867 --
3868
To trace the events that `tracelog()` calls emit with a log level
_at least as severe as_ a specific log level:
3871
3872 * <<enabling-disabling-events,Create an event rule>> which matches the
3873 `lttng_ust_tracelog:*` event name and a minimum level
3874 of severity:
3875 +
3876 --
3877 [role="term"]
3878 ----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
      --loglevel=TRACE_WARNING
3881 ----
3882 --
3883
3884 To trace the events that `tracelog()` calls emit with a
3885 _specific log level_:
3886
3887 * Create an event rule which matches the `lttng_ust_tracelog:*`
3888 event name and a specific log level:
3889 +
3890 --
3891 [role="term"]
3892 ----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
      --loglevel-only=TRACE_INFO
3895 ----
3896 --
3897
3898
3899 [[prebuilt-ust-helpers]]
3900 === Prebuilt user space tracing helpers
3901
3902 The LTTng-UST package provides a few helpers in the form of preloadable
3903 shared objects which automatically instrument system functions and
3904 calls.
3905
3906 The helper shared objects are normally found in dir:{/usr/lib}. If you
3907 built LTTng-UST <<building-from-source,from source>>, they are probably
3908 located in dir:{/usr/local/lib}.
3909
3910 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
3911 are:
3912
3913 path:{liblttng-ust-libc-wrapper.so}::
3914 path:{liblttng-ust-pthread-wrapper.so}::
3915 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
3916 memory and POSIX threads function tracing>>.
3917
3918 path:{liblttng-ust-cyg-profile.so}::
3919 path:{liblttng-ust-cyg-profile-fast.so}::
3920 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
3921
3922 path:{liblttng-ust-dl.so}::
3923 <<liblttng-ust-dl,Dynamic linker tracing>>.
3924
3925 To use a user space tracing helper with any user application:
3926
3927 * Preload the helper shared object when you start the application:
3928 +
3929 --
3930 [role="term"]
3931 ----
3932 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
3933 ----
3934 --
3935 +
3936 You can preload more than one helper:
3937 +
3938 --
3939 [role="term"]
3940 ----
3941 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
3942 ----
3943 --
3944
3945
3946 [role="since-2.3"]
3947 [[liblttng-ust-libc-pthread-wrapper]]
3948 ==== Instrument C standard library memory and POSIX threads functions
3949
3950 The path:{liblttng-ust-libc-wrapper.so} and
3951 path:{liblttng-ust-pthread-wrapper.so} helpers
3952 add instrumentation to some C standard library and POSIX
3953 threads functions.
3954
3955 [role="growable"]
3956 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
3957 |====
3958 |TP provider name |TP name |Instrumented function
3959
3960 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
3961 |`calloc` |man:calloc(3)
3962 |`realloc` |man:realloc(3)
3963 |`free` |man:free(3)
3964 |`memalign` |man:memalign(3)
3965 |`posix_memalign` |man:posix_memalign(3)
3966 |====
3967
3968 [role="growable"]
3969 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
3970 |====
3971 |TP provider name |TP name |Instrumented function
3972
3973 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
3974 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
3975 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
3976 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
3977 |====
3978
3979 When you preload the shared object, it replaces the functions listed
3980 in the previous tables by wrappers which contain tracepoints and call
3981 the replaced functions.
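
For example, to record only the man:malloc(3) and man:free(3) events
of the previous tables, you could create the following
<<enabling-disabling-events,event rules>> (a minimal sketch):

[role="term"]
----
$ lttng enable-event --userspace lttng_ust_libc:malloc
$ lttng enable-event --userspace lttng_ust_libc:free
----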
3982
3983
3984 [[liblttng-ust-cyg-profile]]
3985 ==== Instrument function entry and exit
3986
3987 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
3988 to the entry and exit points of functions.
3989
3990 man:gcc(1) and man:clang(1) have an option named
3991 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
3992 which generates instrumentation calls for entry and exit to functions.
3993 The LTTng-UST function tracing helpers,
3994 path:{liblttng-ust-cyg-profile.so} and
3995 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
3996 to add tracepoints to the two generated functions (which contain
3997 `cyg_profile` in their names, hence the helper's name).
3998
3999 To use the LTTng-UST function tracing helper, the source files to
4000 instrument must be built using the `-finstrument-functions` compiler
4001 flag.
4002
4003 There are two versions of the LTTng-UST function tracing helper:
4004
4005 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4006 that you should only use when it can be _guaranteed_ that the
4007 complete event stream is recorded without any lost event record.
4008 Any kind of duplicate information is left out.
4009 +
4010 Assuming no event record is lost, having only the function addresses on
4011 entry is enough to create a call graph, since an event record always
4012 contains the ID of the CPU that generated it.
4013 +
You can use a tool like man:addr2line(1) to convert function addresses
back to source file names and line numbers (see the example after this
list).
4016
4017 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4018 which also works in use cases where event records might get discarded or
4019 not recorded from application startup.
4020 In these cases, the trace analyzer needs more information to be
4021 able to reconstruct the program flow.
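
For example, assuming path:{my-app} is your instrumented executable and
`0x4006e1` is a hypothetical function address taken from a recorded
event, the following command prints the corresponding function name and
source location:

[role="term"]
----
$ addr2line --exe=my-app --functions 0x4006e1
----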
4022
4023 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4024 points of this helper.
4025
4026 All the tracepoints that this helper provides have the
4027 log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
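
For example, the following <<enabling-disabling-events,event rule>>
matches the tracepoints of both variants of the helper through their
shared `lttng_ust_cyg_profile` name prefix (a minimal sketch; see
man:lttng-ust-cyg-profile(3) for the exact tracepoint names):

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_cyg_profile*'
----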
4028
TIP: It's sometimes a good idea to limit the number of source files that
you compile with the `-finstrument-functions` option to prevent LTTng
from writing an excessive amount of trace data at run time. When using
man:gcc(1), you can use the
`-finstrument-functions-exclude-function-list` option to avoid
instrumenting the entries and exits of specific functions.
4035
4036
4037 [role="since-2.4"]
4038 [[liblttng-ust-dl]]
4039 ==== Instrument the dynamic linker
4040
4041 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4042 man:dlopen(3) and man:dlclose(3) function calls.
4043
4044 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4045 of this helper.
4046
4047
4048 [role="since-2.4"]
4049 [[java-application]]
4050 === User space Java agent
4051
4052 You can instrument any Java application which uses one of the following
4053 logging frameworks:
4054
4055 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4056 (JUL) core logging facilities.
4057 * http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4058 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4059
4060 [role="img-100"]
4061 .LTTng-UST Java agent imported by a Java application.
4062 image::java-app.png[]
4063
4064 Note that the methods described below are new in LTTng{nbsp}2.8.
4065 Previous LTTng versions use another technique.
4066
4067 NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4068 and https://ci.lttng.org/[continuous integration], thus this version is
4069 directly supported. However, the LTTng-UST Java agent is also tested
4070 with OpenJDK{nbsp}7.
4071
4072
4073 [role="since-2.8"]
4074 [[jul]]
4075 ==== Use the LTTng-UST Java agent for `java.util.logging`
4076
4077 To use the LTTng-UST Java agent in a Java application which uses
4078 `java.util.logging` (JUL):
4079
4080 . In the Java application's source code, import the LTTng-UST
4081 log handler package for `java.util.logging`:
4082 +
4083 --
4084 [source,java]
4085 ----
4086 import org.lttng.ust.agent.jul.LttngLogHandler;
4087 ----
4088 --
4089
4090 . Create an LTTng-UST JUL log handler:
4091 +
4092 --
4093 [source,java]
4094 ----
4095 Handler lttngUstLogHandler = new LttngLogHandler();
4096 ----
4097 --
4098
4099 . Add this handler to the JUL loggers which should emit LTTng events:
4100 +
4101 --
4102 [source,java]
4103 ----
4104 Logger myLogger = Logger.getLogger("some-logger");
4105
4106 myLogger.addHandler(lttngUstLogHandler);
4107 ----
4108 --
4109
4110 . Use `java.util.logging` log statements and configuration as usual.
4111 The loggers with an attached LTTng-UST log handler can emit
4112 LTTng events.
4113
4114 . Before exiting the application, remove the LTTng-UST log handler from
4115 the loggers attached to it and call its `close()` method:
4116 +
4117 --
4118 [source,java]
4119 ----
4120 myLogger.removeHandler(lttngUstLogHandler);
4121 lttngUstLogHandler.close();
4122 ----
4123 --
4124 +
4125 This is not strictly necessary, but it is recommended for a clean
4126 disposal of the handler's resources.
4127
4128 . Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4129 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4130 in the
4131 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4132 path] when you build the Java application.
4133 +
4134 The JAR files are typically located in dir:{/usr/share/java}.
4135 +
4136 IMPORTANT: The LTTng-UST Java agent must be
4137 <<installing-lttng,installed>> for the logging framework your
4138 application uses.
4139
4140 .Use the LTTng-UST Java agent for `java.util.logging`.
4141 ====
4142 [source,java]
4143 .path:{Test.java}
4144 ----
4145 import java.io.IOException;
4146 import java.util.logging.Handler;
4147 import java.util.logging.Logger;
4148 import org.lttng.ust.agent.jul.LttngLogHandler;
4149
4150 public class Test
4151 {
4152 private static final int answer = 42;
4153
4154 public static void main(String[] argv) throws Exception
4155 {
4156 // Create a logger
4157 Logger logger = Logger.getLogger("jello");
4158
4159 // Create an LTTng-UST log handler
4160 Handler lttngUstLogHandler = new LttngLogHandler();
4161
4162 // Add the LTTng-UST log handler to our logger
4163 logger.addHandler(lttngUstLogHandler);
4164
4165 // Log at will!
4166 logger.info("some info");
4167 logger.warning("some warning");
4168 Thread.sleep(500);
4169 logger.finer("finer information; the answer is " + answer);
4170 Thread.sleep(123);
4171 logger.severe("error!");
4172
4173 // Not mandatory, but cleaner
4174 logger.removeHandler(lttngUstLogHandler);
4175 lttngUstLogHandler.close();
4176 }
4177 }
4178 ----
4179
4180 Build this example:
4181
4182 [role="term"]
4183 ----
4184 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4185 ----
4186
4187 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4188 <<enabling-disabling-events,create an event rule>> matching the
4189 `jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4190
4191 [role="term"]
4192 ----
4193 $ lttng create
4194 $ lttng enable-event --jul jello
4195 $ lttng start
4196 ----
4197
4198 Run the compiled class:
4199
4200 [role="term"]
4201 ----
4202 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4203 ----
4204
4205 <<basic-tracing-session-control,Stop tracing>> and inspect the
4206 recorded events:
4207
4208 [role="term"]
4209 ----
4210 $ lttng stop
4211 $ lttng view
4212 ----
4213 ====
4214
4215 In the resulting trace, an <<event,event record>> generated by a Java
4216 application using `java.util.logging` is named `lttng_jul:event` and
4217 has the following fields:
4218
4219 `msg`::
4220 Log record's message.
4221
4222 `logger_name`::
4223 Logger name.
4224
4225 `class_name`::
4226 Name of the class in which the log statement was executed.
4227
4228 `method_name`::
4229 Name of the method in which the log statement was executed.
4230
4231 `long_millis`::
4232 Logging time (timestamp in milliseconds).
4233
4234 `int_loglevel`::
4235 Log level integer value.
4236
4237 `int_threadid`::
4238 ID of the thread in which the log statement was executed.
4239
4240 You can use the opt:lttng-enable-event(1):--loglevel or
4241 opt:lttng-enable-event(1):--loglevel-only option of the
4242 man:lttng-enable-event(1) command to target a range of JUL log levels
4243 or a specific JUL log level.
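
For example, assuming the `jello` logger of the previous example, the
following event rule matches only the log statements with a level at
least as severe as `JUL_WARNING` (one of the JUL log level names which
man:lttng-enable-event(1) lists):

[role="term"]
----
$ lttng enable-event --jul jello --loglevel=JUL_WARNING
----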
4244
4245
4246 [role="since-2.8"]
4247 [[log4j]]
4248 ==== Use the LTTng-UST Java agent for Apache log4j
4249
4250 To use the LTTng-UST Java agent in a Java application which uses
4251 Apache log4j 1.2:
4252
4253 . In the Java application's source code, import the LTTng-UST
4254 log appender package for Apache log4j:
4255 +
4256 --
4257 [source,java]
4258 ----
4259 import org.lttng.ust.agent.log4j.LttngLogAppender;
4260 ----
4261 --
4262
4263 . Create an LTTng-UST log4j log appender:
4264 +
4265 --
4266 [source,java]
4267 ----
4268 Appender lttngUstLogAppender = new LttngLogAppender();
4269 ----
4270 --
4271
4272 . Add this appender to the log4j loggers which should emit LTTng events:
4273 +
4274 --
4275 [source,java]
4276 ----
4277 Logger myLogger = Logger.getLogger("some-logger");
4278
4279 myLogger.addAppender(lttngUstLogAppender);
4280 ----
4281 --
4282
4283 . Use Apache log4j log statements and configuration as usual. The
4284 loggers with an attached LTTng-UST log appender can emit LTTng events.
4285
4286 . Before exiting the application, remove the LTTng-UST log appender from
4287 the loggers attached to it and call its `close()` method:
4288 +
4289 --
4290 [source,java]
4291 ----
4292 myLogger.removeAppender(lttngUstLogAppender);
4293 lttngUstLogAppender.close();
4294 ----
4295 --
4296 +
4297 This is not strictly necessary, but it is recommended for a clean
4298 disposal of the appender's resources.
4299
4300 . Include the LTTng-UST Java agent's common and log4j-specific JAR
4301 files, path:{lttng-ust-agent-common.jar} and
4302 path:{lttng-ust-agent-log4j.jar}, in the
4303 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4304 path] when you build the Java application.
4305 +
4306 The JAR files are typically located in dir:{/usr/share/java}.
4307 +
4308 IMPORTANT: The LTTng-UST Java agent must be
4309 <<installing-lttng,installed>> for the logging framework your
4310 application uses.
4311
4312 .Use the LTTng-UST Java agent for Apache log4j.
4313 ====
4314 [source,java]
4315 .path:{Test.java}
4316 ----
4317 import org.apache.log4j.Appender;
4318 import org.apache.log4j.Logger;
4319 import org.lttng.ust.agent.log4j.LttngLogAppender;
4320
4321 public class Test
4322 {
4323 private static final int answer = 42;
4324
4325 public static void main(String[] argv) throws Exception
4326 {
4327 // Create a logger
4328 Logger logger = Logger.getLogger("jello");
4329
4330 // Create an LTTng-UST log appender
4331 Appender lttngUstLogAppender = new LttngLogAppender();
4332
4333 // Add the LTTng-UST log appender to our logger
4334 logger.addAppender(lttngUstLogAppender);
4335
4336 // Log at will!
4337 logger.info("some info");
4338 logger.warn("some warning");
4339 Thread.sleep(500);
4340 logger.debug("debug information; the answer is " + answer);
4341 Thread.sleep(123);
4342 logger.fatal("error!");
4343
4344 // Not mandatory, but cleaner
4345 logger.removeAppender(lttngUstLogAppender);
4346 lttngUstLogAppender.close();
4347 }
4348 }
4350 ----
4351
4352 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4353 file):
4354
4355 [role="term"]
4356 ----
4357 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4358 ----
4359
4360 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4361 <<enabling-disabling-events,create an event rule>> matching the
4362 `jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4363
4364 [role="term"]
4365 ----
4366 $ lttng create
4367 $ lttng enable-event --log4j jello
4368 $ lttng start
4369 ----
4370
4371 Run the compiled class:
4372
4373 [role="term"]
4374 ----
4375 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4376 ----
4377
4378 <<basic-tracing-session-control,Stop tracing>> and inspect the
4379 recorded events:
4380
4381 [role="term"]
4382 ----
4383 $ lttng stop
4384 $ lttng view
4385 ----
4386 ====
4387
4388 In the resulting trace, an <<event,event record>> generated by a Java
4389 application using log4j is named `lttng_log4j:event` and
4390 has the following fields:
4391
4392 `msg`::
4393 Log record's message.
4394
4395 `logger_name`::
4396 Logger name.
4397
4398 `class_name`::
4399 Name of the class in which the log statement was executed.
4400
4401 `method_name`::
4402 Name of the method in which the log statement was executed.
4403
4404 `filename`::
4405 Name of the file in which the executed log statement is located.
4406
4407 `line_number`::
4408 Line number at which the log statement was executed.
4409
4410 `timestamp`::
4411 Logging timestamp.
4412
4413 `int_loglevel`::
4414 Log level integer value.
4415
4416 `thread_name`::
4417 Name of the Java thread in which the log statement was executed.
4418
4419 You can use the opt:lttng-enable-event(1):--loglevel or
4420 opt:lttng-enable-event(1):--loglevel-only option of the
4421 man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4422 or a specific log4j log level.
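
For example, assuming the `jello` logger of the previous example, the
following event rule matches only the log statements with a level at
least as severe as `LOG4J_WARN` (one of the log4j log level names which
man:lttng-enable-event(1) lists):

[role="term"]
----
$ lttng enable-event --log4j jello --loglevel=LOG4J_WARN
----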
4423
4424
4425 [role="since-2.8"]
4426 [[java-application-context]]
4427 ==== Provide application-specific context fields in a Java application
4428
4429 A Java application-specific context field is a piece of state provided
4430 by the application which <<adding-context,you can add>>, using the
4431 man:lttng-add-context(1) command, to each <<event,event record>>
4432 produced by the log statements of this application.
4433
4434 For example, a given object might have a current request ID variable.
4435 You can create a context information retriever for this object and
4436 assign a name to this current request ID. You can then, using the
4437 man:lttng-add-context(1) command, add this context field by name to
4438 the JUL or log4j <<channel,channel>>.
4439
4440 To provide application-specific context fields in a Java application:
4441
4442 . In the Java application's source code, import the LTTng-UST
4443 Java agent context classes and interfaces:
4444 +
4445 --
4446 [source,java]
4447 ----
4448 import org.lttng.ust.agent.context.ContextInfoManager;
4449 import org.lttng.ust.agent.context.IContextInfoRetriever;
4450 ----
4451 --
4452
4453 . Create a context information retriever class, that is, a class which
4454 implements the `IContextInfoRetriever` interface:
4455 +
4456 --
4457 [source,java]
4458 ----
4459 class MyContextInfoRetriever implements IContextInfoRetriever
4460 {
4461 @Override
4462 public Object retrieveContextInfo(String key)
4463 {
4464 if (key.equals("intCtx")) {
4465 return (short) 17;
4466 } else if (key.equals("strContext")) {
4467 return "context value!";
4468 } else {
4469 return null;
4470 }
4471 }
4472 }
4473 ----
4474 --
4475 +
This `retrieveContextInfo()` method is the only member of the
`IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields, and which state variables they return, depend on your
specific scenario.
4481 +
4482 All primitive types and objects are supported as context fields.
4483 When `retrieveContextInfo()` returns an object, the context field
4484 serializer calls its `toString()` method to add a string field to
4485 event records. The method can also return `null`, which means that
4486 no context field is available for the required name.
4487
4488 . Register an instance of your context information retriever class to
4489 the context information manager singleton:
4490 +
4491 --
4492 [source,java]
4493 ----
4494 IContextInfoRetriever cir = new MyContextInfoRetriever();
4495 ContextInfoManager cim = ContextInfoManager.getInstance();
4496 cim.registerContextInfoRetriever("retrieverName", cir);
4497 ----
4498 --
4499
4500 . Before exiting the application, remove your context information
4501 retriever from the context information manager singleton:
4502 +
4503 --
4504 [source,java]
4505 ----
4506 ContextInfoManager cim = ContextInfoManager.getInstance();
4507 cim.unregisterContextInfoRetriever("retrieverName");
4508 ----
4509 --
4510 +
4511 This is not strictly necessary, but it is recommended for a clean
disposal of the manager's resources.
4513
4514 . Build your Java application with LTTng-UST Java agent support as
4515 usual, following the procedure for either the <<jul,JUL>> or
4516 <<log4j,Apache log4j>> framework.
4517
4518
4519 .Provide application-specific context fields in a Java application.
4520 ====
4521 [source,java]
4522 .path:{Test.java}
4523 ----
4524 import java.util.logging.Handler;
4525 import java.util.logging.Logger;
4526 import org.lttng.ust.agent.jul.LttngLogHandler;
4527 import org.lttng.ust.agent.context.ContextInfoManager;
4528 import org.lttng.ust.agent.context.IContextInfoRetriever;
4529
4530 public class Test
4531 {
4532 // Our context information retriever class
4533 private static class MyContextInfoRetriever
4534 implements IContextInfoRetriever
4535 {
4536 @Override
4537 public Object retrieveContextInfo(String key) {
4538 if (key.equals("intCtx")) {
4539 return (short) 17;
4540 } else if (key.equals("strContext")) {
4541 return "context value!";
4542 } else {
4543 return null;
4544 }
4545 }
4546 }
4547
4548 private static final int answer = 42;
4549
4550 public static void main(String args[]) throws Exception
4551 {
4552 // Get the context information manager instance
4553 ContextInfoManager cim = ContextInfoManager.getInstance();
4554
4555 // Create and register our context information retriever
4556 IContextInfoRetriever cir = new MyContextInfoRetriever();
4557 cim.registerContextInfoRetriever("myRetriever", cir);
4558
4559 // Create a logger
4560 Logger logger = Logger.getLogger("jello");
4561
4562 // Create an LTTng-UST log handler
4563 Handler lttngUstLogHandler = new LttngLogHandler();
4564
4565 // Add the LTTng-UST log handler to our logger
4566 logger.addHandler(lttngUstLogHandler);
4567
4568 // Log at will!
4569 logger.info("some info");
4570 logger.warning("some warning");
4571 Thread.sleep(500);
4572 logger.finer("finer information; the answer is " + answer);
4573 Thread.sleep(123);
4574 logger.severe("error!");
4575
4576 // Not mandatory, but cleaner
4577 logger.removeHandler(lttngUstLogHandler);
4578 lttngUstLogHandler.close();
4579 cim.unregisterContextInfoRetriever("myRetriever");
4580 }
4581 }
4582 ----
4583
4584 Build this example:
4585
4586 [role="term"]
4587 ----
4588 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4589 ----
4590
4591 <<creating-destroying-tracing-sessions,Create a tracing session>>
4592 and <<enabling-disabling-events,create an event rule>> matching the
4593 `jello` JUL logger:
4594
4595 [role="term"]
4596 ----
4597 $ lttng create
4598 $ lttng enable-event --jul jello
4599 ----
4600
4601 <<adding-context,Add the application-specific context fields>> to the
4602 JUL channel:
4603
4604 [role="term"]
4605 ----
4606 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
4607 $ lttng add-context --jul --type='$app.myRetriever:strContext'
4608 ----
4609
4610 <<basic-tracing-session-control,Start tracing>>:
4611
4612 [role="term"]
4613 ----
4614 $ lttng start
4615 ----
4616
4617 Run the compiled class:
4618
4619 [role="term"]
4620 ----
4621 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4622 ----
4623
4624 <<basic-tracing-session-control,Stop tracing>> and inspect the
4625 recorded events:
4626
4627 [role="term"]
4628 ----
4629 $ lttng stop
4630 $ lttng view
4631 ----
4632 ====
4633
4634
4635 [role="since-2.7"]
4636 [[python-application]]
4637 === User space Python agent
4638
4639 You can instrument a Python 2 or Python 3 application which uses the
4640 standard https://docs.python.org/3/library/logging.html[`logging`]
4641 package.
4642
4643 Each log statement emits an LTTng event once the
4644 application module imports the
4645 <<lttng-ust-agents,LTTng-UST Python agent>> package.
4646
4647 [role="img-100"]
4648 .A Python application importing the LTTng-UST Python agent.
4649 image::python-app.png[]
4650
4651 To use the LTTng-UST Python agent:
4652
4653 . In the Python application's source code, import the LTTng-UST Python
4654 agent:
4655 +
4656 --
4657 [source,python]
4658 ----
4659 import lttngust
4660 ----
4661 --
4662 +
4663 The LTTng-UST Python agent automatically adds its logging handler to the
4664 root logger at import time.
4665 +
4666 Any log statement that the application executes before this import does
4667 not emit an LTTng event.
4668 +
4669 IMPORTANT: The LTTng-UST Python agent must be
4670 <<installing-lttng,installed>>.
4671
4672 . Use log statements and logging configuration as usual.
4673 Since the LTTng-UST Python agent adds a handler to the _root_
4674 logger, you can trace any log statement from any logger.
4675
4676 .Use the LTTng-UST Python agent.
4677 ====
4678 [source,python]
4679 .path:{test.py}
4680 ----
4681 import lttngust
4682 import logging
4683 import time
4684
4685
4686 def example():
4687 logging.basicConfig()
4688 logger = logging.getLogger('my-logger')
4689
4690 while True:
4691 logger.debug('debug message')
4692 logger.info('info message')
        logger.warning('warn message')
4694 logger.error('error message')
4695 logger.critical('critical message')
4696 time.sleep(1)
4697
4698
4699 if __name__ == '__main__':
4700 example()
4701 ----
4702
4703 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
4704 logging handler which prints to the standard error stream, is not
4705 strictly required for LTTng-UST tracing to work, but in versions of
4706 Python preceding 3.2, you could see a warning message which indicates
4707 that no handler exists for the logger `my-logger`.
4708
4709 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4710 <<enabling-disabling-events,create an event rule>> matching the
4711 `my-logger` Python logger, and <<basic-tracing-session-control,start
4712 tracing>>:
4713
4714 [role="term"]
4715 ----
4716 $ lttng create
4717 $ lttng enable-event --python my-logger
4718 $ lttng start
4719 ----
4720
4721 Run the Python script:
4722
4723 [role="term"]
4724 ----
4725 $ python test.py
4726 ----
4727
4728 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4729 events:
4730
4731 [role="term"]
4732 ----
4733 $ lttng stop
4734 $ lttng view
4735 ----
4736 ====
4737
4738 In the resulting trace, an <<event,event record>> generated by a Python
4739 application is named `lttng_python:event` and has the following fields:
4740
4741 `asctime`::
4742 Logging time (string).
4743
4744 `msg`::
4745 Log record's message.
4746
4747 `logger_name`::
4748 Logger name.
4749
4750 `funcName`::
4751 Name of the function in which the log statement was executed.
4752
4753 `lineno`::
4754 Line number at which the log statement was executed.
4755
4756 `int_loglevel`::
4757 Log level integer value.
4758
4759 `thread`::
4760 ID of the Python thread in which the log statement was executed.
4761
4762 `threadName`::
4763 Name of the Python thread in which the log statement was executed.
4764
4765 You can use the opt:lttng-enable-event(1):--loglevel or
4766 opt:lttng-enable-event(1):--loglevel-only option of the
4767 man:lttng-enable-event(1) command to target a range of Python log levels
4768 or a specific Python log level.
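
For example, assuming the `my-logger` logger of the previous example,
the following event rule matches only the log statements with a level
at least as severe as `PYTHON_WARNING` (one of the Python log level
names which man:lttng-enable-event(1) lists):

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----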
4769
4770 When an application imports the LTTng-UST Python agent, the agent tries
4771 to register to a <<lttng-sessiond,session daemon>>. Note that you must
4772 <<start-sessiond,start the session daemon>> _before_ you run the Python
4773 application. If a session daemon is found, the agent tries to register
to it for 5{nbsp}seconds, after which the application continues
4775 without LTTng tracing support. You can override this timeout value with
4776 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
4777 (milliseconds).
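
For example, to make the agent wait at most one second
(1000{nbsp}milliseconds) for a session daemon when you run the
path:{test.py} script of the previous example:

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=1000 python test.py
----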
4778
4779 If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent tries to reconnect and
register to a session daemon every 3{nbsp}seconds. You can override this
4782 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
4783 variable.
4784
4785
4786 [role="since-2.5"]
4787 [[proc-lttng-logger-abi]]
4788 === LTTng logger
4789
4790 The `lttng-tracer` Linux kernel module, part of
4791 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
4792 path:{/proc/lttng-logger} when it's loaded. Any application can write
4793 text data to this file to emit an LTTng event.
4794
4795 [role="img-100"]
4796 .An application writes to the LTTng logger file to emit an LTTng event.
4797 image::lttng-logger.png[]
4798
4799 The LTTng logger is the quickest method--not the most efficient,
4800 however--to add instrumentation to an application. It is designed
4801 mostly to instrument shell scripts:
4802
4803 [role="term"]
4804 ----
4805 $ echo "Some message, some $variable" > /proc/lttng-logger
4806 ----
4807
4808 Any event that the LTTng logger emits is named `lttng_logger` and
4809 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
4810 other instrumentation points in the kernel tracing domain, **any Unix
4811 user** can <<enabling-disabling-events,create an event rule>> which
4812 matches its event name, not only the root user or users in the
4813 <<tracing-group,tracing group>>.
4814
4815 To use the LTTng logger:
4816
4817 * From any application, write text data to the path:{/proc/lttng-logger}
4818 file.
4819
4820 The `msg` field of `lttng_logger` event records contains the
4821 recorded message.
4822
4823 NOTE: The maximum message length of an LTTng logger event is
4824 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
4825 than one event to contain the remaining data.
4826
4827 You should not use the LTTng logger to trace a user application which
4828 can be instrumented in a more efficient way, namely:
4829
4830 * <<c-application,C and $$C++$$ applications>>.
4831 * <<java-application,Java applications>>.
4832 * <<python-application,Python applications>>.
4833
4834 .Use the LTTng logger.
4835 ====
4836 [source,bash]
4837 .path:{test.bash}
4838 ----
4839 echo 'Hello, World!' > /proc/lttng-logger
4840 sleep 2
4841 df --human-readable --print-type / > /proc/lttng-logger
4842 ----
4843
4844 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4845 <<enabling-disabling-events,create an event rule>> matching the
4846 `lttng_logger` Linux kernel tracepoint, and
4847 <<basic-tracing-session-control,start tracing>>:
4848
4849 [role="term"]
4850 ----
4851 $ lttng create
4852 $ lttng enable-event --kernel lttng_logger
4853 $ lttng start
4854 ----
4855
4856 Run the Bash script:
4857
4858 [role="term"]
4859 ----
4860 $ bash test.bash
4861 ----
4862
4863 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
4864 events:
4865
4866 [role="term"]
4867 ----
4868 $ lttng stop
4869 $ lttng view
4870 ----
4871 ====
4872
4873
4874 [[instrumenting-linux-kernel]]
4875 === LTTng kernel tracepoints
4876
4877 NOTE: This section shows how to _add_ instrumentation points to the
4878 Linux kernel. The kernel's subsystems are already thoroughly
4879 instrumented at strategic places for LTTng when you
4880 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
4881 package.
4882
4883 ////
4884 There are two methods to instrument the Linux kernel:
4885
4886 . <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
4887 tracepoint which uses the `TRACE_EVENT()` API.
4888 +
Choose this if you want to instrument a Linux kernel tree with an
4890 instrumentation point compatible with ftrace, perf, and SystemTap.
4891
4892 . Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
4893 instrument an out-of-tree kernel module.
4894 +
4895 Choose this if you don't need ftrace, perf, or SystemTap support.
4896 ////
4897
4898
4899 [[linux-add-lttng-layer]]
4900 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
4901
4902 This section shows how to add an LTTng layer to existing ftrace
4903 instrumentation using the `TRACE_EVENT()` API.
4904
4905 This section does not document the `TRACE_EVENT()` macro. You can
4906 read the following articles to learn more about this API:
4907
4908 * http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
4909 * http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
4910 * http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
4911
4912 The following procedure assumes that your ftrace tracepoints are
4913 correctly defined in their own header and that they are created in
4914 one source file using the `CREATE_TRACE_POINTS` definition.
4915
4916 To add an LTTng layer over an existing ftrace tracepoint:
4917
4918 . Make sure the following kernel configuration options are
4919 enabled:
4920 +
4921 --
4922 * `CONFIG_MODULES`
4923 * `CONFIG_KALLSYMS`
4924 * `CONFIG_HIGH_RES_TIMERS`
4925 * `CONFIG_TRACEPOINTS`
4926 --
4927
4928 . Build the Linux source tree with your custom ftrace tracepoints.
4929 . Boot the resulting Linux image on your target system.
4930 +
4931 Confirm that the tracepoints exist by looking for their names in the
4932 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
4933 is your subsystem's name.
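+
For example, if `subsys` is `my_subsys` (the hypothetical subsystem
name used in the following steps):
+
--
[role="term"]
----
# ls /sys/kernel/debug/tracing/events/my_subsys
----
--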
4934
4935 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
4936 +
4937 --
4938 [role="term"]
4939 ----
4940 $ cd $(mktemp -d) &&
4941 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.10.tar.bz2 &&
4942 tar -xf lttng-modules-latest-2.10.tar.bz2 &&
4943 cd lttng-modules-2.10.*
4944 ----
4945 --
4946
4947 . In dir:{instrumentation/events/lttng-module}, relative to the root
4948 of the LTTng-modules source tree, create a header file named
4949 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
4950 LTTng-modules tracepoint definitions using the LTTng-modules
4951 macros in it.
4952 +
4953 Start with this template:
4954 +
4955 --
4956 [source,c]
4957 .path:{instrumentation/events/lttng-module/my_subsys.h}
4958 ----
4959 #undef TRACE_SYSTEM
4960 #define TRACE_SYSTEM my_subsys
4961
4962 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
4963 #define _LTTNG_MY_SUBSYS_H
4964
4965 #include "../../../probes/lttng-tracepoint-event.h"
4966 #include <linux/tracepoint.h>
4967
4968 LTTNG_TRACEPOINT_EVENT(
4969 /*
4970 * Format is identical to TRACE_EVENT()'s version for the three
4971 * following macro parameters:
4972 */
4973 my_subsys_my_event,
4974 TP_PROTO(int my_int, const char *my_string),
4975 TP_ARGS(my_int, my_string),
4976
4977 /* LTTng-modules specific macros */
4978 TP_FIELDS(
4979 ctf_integer(int, my_int_field, my_int)
        ctf_string(my_string_field, my_string)
4981 )
4982 )
4983
4984 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
4985
4986 #include "../../../probes/define_trace.h"
4987 ----
4988 --
4989 +
4990 The entries in the `TP_FIELDS()` section are the list of fields for the
4991 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
4992 ftrace's `TRACE_EVENT()` macro.
4993 +
4994 See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
4995 complete description of the available `ctf_*()` macros.
4996
4997 . Create the LTTng-modules probe's kernel module C source file,
4998 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
4999 subsystem name:
5000 +
5001 --
5002 [source,c]
5003 .path:{probes/lttng-probe-my-subsys.c}
5004 ----
5005 #include <linux/module.h>
5006 #include "../lttng-tracer.h"
5007
5008 /*
5009 * Build-time verification of mismatch between mainline
5010 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5011 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5012 */
5013 #include <trace/events/my_subsys.h>
5014
5015 /* Create LTTng tracepoint probes */
5016 #define LTTNG_PACKAGE_BUILD
5017 #define CREATE_TRACE_POINTS
5018 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5019
5020 #include "../instrumentation/events/lttng-module/my_subsys.h"
5021
5022 MODULE_LICENSE("GPL and additional rights");
5023 MODULE_AUTHOR("Your name <your-email>");
5024 MODULE_DESCRIPTION("LTTng my_subsys probes");
5025 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5026 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5027 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5028 LTTNG_MODULES_EXTRAVERSION);
5029 ----
5030 --
5031
5032 . Edit path:{probes/KBuild} and add your new kernel module object
5033 next to the existing ones:
5034 +
5035 --
5036 [source,make]
5037 .path:{probes/KBuild}
5038 ----
5039 # ...
5040
5041 obj-m += lttng-probe-module.o
5042 obj-m += lttng-probe-power.o
5043
5044 obj-m += lttng-probe-my-subsys.o
5045
5046 # ...
5047 ----
5048 --
5049
5050 . Build and install the LTTng kernel modules:
5051 +
5052 --
5053 [role="term"]
5054 ----
5055 $ make KERNELDIR=/path/to/linux
5056 # make modules_install && depmod -a
5057 ----
5058 --
5059 +
5060 Replace `/path/to/linux` with the path to the Linux source tree where
5061 you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5062
5063 Note that you can also use the
5064 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5065 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5066 C code that need to be executed before the event fields are recorded.
5067
5068 The best way to learn how to use the previous LTTng-modules macros is to
5069 inspect the existing LTTng-modules tracepoint definitions in the
5070 dir:{instrumentation/events/lttng-module} header files. Compare them
5071 with the Linux kernel mainline versions in the
5072 dir:{include/trace/events} directory of the Linux source tree.
5073
5074
5075 [role="since-2.7"]
5076 [[lttng-tracepoint-event-code]]
5077 ===== Use custom C code to access the data for tracepoint fields
5078
Although we recommend that you always use the
5080 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5081 the arguments and fields of an LTTng-modules tracepoint when possible,
5082 sometimes you need a more complex process to access the data that the
5083 tracer records as event record fields. In other words, you need local
5084 variables and multiple C{nbsp}statements instead of simple
5085 argument-based expressions that you pass to the
5086 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5087
5088 You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5089 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5090 a block of C{nbsp}code to be executed before LTTng records the fields.
5091 The structure of this macro is:
5092
5093 [source,c]
5094 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5095 ----
5096 LTTNG_TRACEPOINT_EVENT_CODE(
5097 /*
5098 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5099 * version for the following three macro parameters:
5100 */
5101 my_subsys_my_event,
5102 TP_PROTO(int my_int, const char *my_string),
5103 TP_ARGS(my_int, my_string),
5104
5105 /* Declarations of custom local variables */
5106 TP_locvar(
5107 int a = 0;
5108 unsigned long b = 0;
5109 const char *name = "(undefined)";
5110 struct my_struct *my_struct;
5111 ),
5112
5113 /*
5114 * Custom code which uses both tracepoint arguments
5115 * (in TP_ARGS()) and local variables (in TP_locvar()).
5116 *
5117 * Local variables are actually members of a structure pointed
5118 * to by the special variable tp_locvar.
5119 */
5120 TP_code(
5121 if (my_int) {
5122 tp_locvar->a = my_int + 17;
5123 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5124 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5125 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5126 put_my_struct(tp_locvar->my_struct);
5127
5128 if (tp_locvar->b) {
5129 tp_locvar->a = 1;
5130 }
5131 }
5132 ),
5133
5134 /*
5135 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5136 * version for this, except that tp_locvar members can be
5137 * used in the argument expression parameters of
5138 * the ctf_*() macros.
5139 */
5140 TP_FIELDS(
5141 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5142 ctf_integer(int, my_struct_a, tp_locvar->a)
5143 ctf_string(my_string_field, my_string)
5144 ctf_string(my_struct_name, tp_locvar->name)
5145 )
5146 )
5147 ----
5148
5149 IMPORTANT: The C code defined in `TP_code()` must not have any side
5150 effects when executed. In particular, the code must not allocate
5151 memory or get resources without deallocating this memory or putting
5152 those resources afterwards.
5153
5154
5155 [[instrumenting-linux-kernel-tracing]]
5156 ==== Load and unload a custom probe kernel module
5157
5158 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5159 kernel module>> in the kernel before it can emit LTTng events.
5160
5161 To load the default probe kernel modules and a custom probe kernel
5162 module:
5163
5164 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5165 probe modules to load when starting a root <<lttng-sessiond,session
5166 daemon>>:
5167 +
5168 --
5169 .Load the `my_subsys`, `usb`, and the default probe modules.
5170 ====
5171 [role="term"]
5172 ----
5173 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
5174 ----
5175 ====
5176 --
5177 +
5178 You only need to pass the subsystem name, not the whole kernel module
5179 name.
5180
5181 To load _only_ a given custom probe kernel module:
5182
5183 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5184 modules to load when starting a root session daemon:
5185 +
5186 --
5187 .Load only the `my_subsys` and `usb` probe modules.
5188 ====
5189 [role="term"]
5190 ----
5191 # lttng-sessiond --kmod-probes=my_subsys,usb
5192 ----
5193 ====
5194 --
5195
5196 To confirm that a probe module is loaded:
5197
5198 * Use man:lsmod(8):
5199 +
5200 --
5201 [role="term"]
5202 ----
5203 $ lsmod | grep lttng_probe_usb
5204 ----
5205 --
5206
5207 To unload the loaded probe modules:
5208
5209 * Kill the session daemon with `SIGTERM`:
5210 +
5211 --
5212 [role="term"]
5213 ----
5214 # pkill lttng-sessiond
5215 ----
5216 --
5217 +
5218 You can also use man:modprobe(8)'s `--remove` option if the session
5219 daemon terminates abnormally.
5220
5221
5222 [[controlling-tracing]]
5223 == Tracing control
5224
5225 Once an application or a Linux kernel is
5226 <<instrumenting,instrumented>> for LTTng tracing,
5227 you can _trace_ it.
5228
This section is divided into topics on how to use the various
5230 <<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5231 command-line tool>>, to _control_ the LTTng daemons and tracers.
5232
5233 NOTE: In the following subsections, we refer to an man:lttng(1) command
5234 using its man page name. For example, instead of _Run the `create`
5235 command to..._, we use _Run the man:lttng-create(1) command to..._.
5236
5237
5238 [[start-sessiond]]
5239 === Start a session daemon
5240
5241 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5242 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5243 command-line tool.
5244
5245 You will see the following error when you run a command while no session
5246 daemon is running:
5247
5248 ----
5249 Error: No session daemon is available
5250 ----
5251
5252 The only command that automatically runs a session daemon is
5253 man:lttng-create(1), which you use to
5254 <<creating-destroying-tracing-sessions,create a tracing session>>. While
5255 this is most of the time the first operation that you do, sometimes it's
5256 not. Some examples are:
5257
5258 * <<list-instrumentation-points,List the available instrumentation points>>.
5259 * <<saving-loading-tracing-session,Load a tracing session configuration>>.
5260
5261 [[tracing-group]] Each Unix user must have its own running session
5262 daemon to trace user applications. The session daemon that the root user
5263 starts is the only one allowed to control the LTTng kernel tracer. Users
5264 that are part of the _tracing group_ can control the root session
5265 daemon. The default tracing group name is `tracing`; you can set it to
5266 something else with the opt:lttng-sessiond(8):--group option when you
5267 start the root session daemon.
5268
5269 To start a user session daemon:
5270
5271 * Run man:lttng-sessiond(8):
5272 +
5273 --
5274 [role="term"]
5275 ----
5276 $ lttng-sessiond --daemonize
5277 ----
5278 --
5279
5280 To start the root session daemon:
5281
5282 * Run man:lttng-sessiond(8) as the root user:
5283 +
5284 --
5285 [role="term"]
5286 ----
5287 # lttng-sessiond --daemonize
5288 ----
5289 --
5290
5291 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
start the session daemon in the foreground.
5293
5294 To stop a session daemon, use man:kill(1) on its process ID (standard
5295 `TERM` signal).
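
For example, the following command sends the standard `TERM` signal to
your Unix user's session daemon (a sketch assuming man:pgrep(1) is
available):

[role="term"]
----
$ kill $(pgrep --euid $(id --user) lttng-sessiond)
----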
5296
5297 Note that some Linux distributions could manage the LTTng session daemon
5298 as a service. In this case, you should use the service manager to
5299 start, restart, and stop session daemons.
5300
5301
5302 [[creating-destroying-tracing-sessions]]
5303 === Create and destroy a tracing session
5304
5305 Almost all the LTTng control operations happen in the scope of
5306 a <<tracing-session,tracing session>>, which is the dialogue between the
5307 <<lttng-sessiond,session daemon>> and you.
5308
5309 To create a tracing session with a generated name:
5310
5311 * Use the man:lttng-create(1) command:
5312 +
5313 --
5314 [role="term"]
5315 ----
5316 $ lttng create
5317 ----
5318 --
5319
5320 The created tracing session's name is `auto` followed by the
5321 creation date.
5322
5323 To create a tracing session with a specific name:
5324
5325 * Use the optional argument of the man:lttng-create(1) command:
5326 +
5327 --
5328 [role="term"]
5329 ----
5330 $ lttng create my-session
5331 ----
5332 --
5333 +
5334 Replace `my-session` with the specific tracing session name.
5335
LTTng appends the creation date and time to the name of the created
tracing session's output directory.
5337
5338 LTTng writes the traces of a tracing session in
+$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
5340 name of the tracing session. Note that the env:LTTNG_HOME environment
5341 variable defaults to `$HOME` if not set.
5342
5343 To output LTTng traces to a non-default location:
5344
5345 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5346 +
5347 --
5348 [role="term"]
5349 ----
5350 $ lttng create my-session --output=/tmp/some-directory
5351 ----
5352 --
5353
5354 You may create as many tracing sessions as you wish.
5355
5356 To list all the existing tracing sessions for your Unix user:
5357
5358 * Use the man:lttng-list(1) command:
5359 +
5360 --
5361 [role="term"]
5362 ----
5363 $ lttng list
5364 ----
5365 --
5366
5367 When you create a tracing session, it is set as the _current tracing
5368 session_. The following man:lttng(1) commands operate on the current
5369 tracing session when you don't specify one:
5370
5371 [role="list-3-cols"]
5372 * `add-context`
5373 * `destroy`
5374 * `disable-channel`
5375 * `disable-event`
5376 * `enable-channel`
5377 * `enable-event`
5378 * `load`
5379 * `regenerate`
5380 * `save`
5381 * `snapshot`
5382 * `start`
5383 * `stop`
5384 * `track`
5385 * `untrack`
5386 * `view`
5387
5388 To change the current tracing session:
5389
5390 * Use the man:lttng-set-session(1) command:
5391 +
5392 --
5393 [role="term"]
5394 ----
5395 $ lttng set-session new-session
5396 ----
5397 --
5398 +
Replace `new-session` with the name of the new current tracing session.
5400
When you are done tracing in a given tracing session, you can destroy
it. This operation frees the resources which the tracing session takes;
it does not destroy the trace data that LTTng wrote for this tracing
session.
5405
5406 To destroy the current tracing session:
5407
5408 * Use the man:lttng-destroy(1) command:
5409 +
5410 --
5411 [role="term"]
5412 ----
5413 $ lttng destroy
5414 ----
5415 --
5416
5417 The man:lttng-destroy(1) command also runs the man:lttng-stop(1)
5418 command implicitly (see <<basic-tracing-session-control,Start and stop a
5419 tracing session>>). You need to stop tracing to make LTTng flush the
5420 remaining trace data and make the trace readable.
5421
5422
5423 [[list-instrumentation-points]]
5424 === List the available instrumentation points
5425
5426 The <<lttng-sessiond,session daemon>> can query the running instrumented
5427 user applications and the Linux kernel to get a list of available
5428 instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5429 they are tracepoints and system calls. For the user space tracing
5430 domain, they are tracepoints. For the other tracing domains, they are
5431 logger names.
5432
5433 To list the available instrumentation points:
5434
5435 * Use the man:lttng-list(1) command with the requested tracing domain's
5436 option amongst:
5437 +
5438 --
5439 * opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5440 must be a root user, or it must be a member of the
5441 <<tracing-group,tracing group>>).
5442 * opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5443 kernel system calls (your Unix user must be a root user, or it must be
5444 a member of the tracing group).
5445 * opt:lttng-list(1):--userspace: user space tracepoints.
5446 * opt:lttng-list(1):--jul: `java.util.logging` loggers.
5447 * opt:lttng-list(1):--log4j: Apache log4j loggers.
5448 * opt:lttng-list(1):--python: Python loggers.
5449 --
5450
5451 .List the available user space tracepoints.
5452 ====
5453 [role="term"]
5454 ----
5455 $ lttng list --userspace
5456 ----
5457 ====
5458
5459 .List the available Linux kernel system call tracepoints.
5460 ====
5461 [role="term"]
5462 ----
5463 $ lttng list --kernel --syscall
5464 ----
5465 ====
5466
5467
5468 [[enabling-disabling-events]]
5469 === Create and enable an event rule
5470
5471 Once you <<creating-destroying-tracing-sessions,create a tracing
5472 session>>, you can create <<event,event rules>> with the
5473 man:lttng-enable-event(1) command.
5474
5475 You specify each condition with a command-line option. The available
5476 condition options are shown in the following table.
5477
5478 [role="growable",cols="asciidoc,asciidoc,default"]
5479 .Condition command-line options for the man:lttng-enable-event(1) command.
5480 |====
5481 |Option |Description |Applicable tracing domains
5482
5483 |
5484 One of:
5485
5486 . `--syscall`
5487 . +--probe=__ADDR__+
5488 . +--function=__ADDR__+
5489
5490 |
5491 Instead of using the default _tracepoint_ instrumentation type, use:
5492
5493 . A Linux system call.
5494 . A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5495 . The entry and return points of a Linux function (symbol or address).
5496
5497 |Linux kernel.
5498
5499 |First positional argument.
5500
5501 |
5502 Tracepoint or system call name. In the case of a Linux KProbe or
5503 function, this is a custom name given to the event rule. With the
5504 JUL, log4j, and Python domains, this is a logger name.
5505
5506 With a tracepoint, logger, or system call name, you can use the special
5507 `*` globbing character to match anything (for example, `sched_*`,
5508 `my_comp*:*msg_*`).
5509
5510 |All.
5511
5512 |
5513 One of:
5514
5515 . +--loglevel=__LEVEL__+
5516 . +--loglevel-only=__LEVEL__+
5517
5518 |
5519 . Match only tracepoints or log statements with a logging level at
5520 least as severe as +__LEVEL__+.
5521 . Match only tracepoints or log statements with a logging level
5522 equal to +__LEVEL__+.
5523
5524 See man:lttng-enable-event(1) for the list of available logging level
5525 names.
5526
5527 |User space, JUL, log4j, and Python.
5528
5529 |+--exclude=__EXCLUSIONS__+
5530
5531 |
5532 When you use a `*` character at the end of the tracepoint or logger
5533 name (first positional argument), exclude the specific names in the
5534 comma-delimited list +__EXCLUSIONS__+.
5535
5536 |
5537 User space, JUL, log4j, and Python.
5538
5539 |+--filter=__EXPR__+
5540
5541 |
5542 Match only events which satisfy the expression +__EXPR__+.
5543
5544 See man:lttng-enable-event(1) to learn more about the syntax of a
5545 filter expression.
5546
5547 |All.
5548
5549 |====
5550
5551 You attach an event rule to a <<channel,channel>> on creation. If you do
5552 not specify the channel with the opt:lttng-enable-event(1):--channel
5553 option, and if the event rule to create is the first in its
5554 <<domain,tracing domain>> for a given tracing session, then LTTng
5555 creates a _default channel_ for you. This default channel is reused in
5556 subsequent invocations of the man:lttng-enable-event(1) command for the
5557 same tracing domain.
5558
5559 An event rule is always enabled at creation time.
5560
5561 The following examples show how you can combine the previous
5562 command-line options to create simple to more complex event rules.
5563
.Create an event rule targeting a Linux kernel tracepoint (default channel).
5565 ====
5566 [role="term"]
5567 ----
5568 $ lttng enable-event --kernel sched_switch
5569 ----
5570 ====
5571
5572 .Create an event rule matching four Linux kernel system calls (default channel).
5573 ====
5574 [role="term"]
5575 ----
5576 $ lttng enable-event --kernel --syscall open,write,read,close
5577 ----
5578 ====
5579
5580 .Create event rules matching tracepoints with filter expressions (default channel).
5581 ====
5582 [role="term"]
5583 ----
5584 $ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5585 ----
5586
5587 [role="term"]
5588 ----
5589 $ lttng enable-event --kernel --all \
5590 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5591 ----
5592
5593 [role="term"]
5594 ----
5595 $ lttng enable-event --jul my_logger \
5596 --filter='$app.retriever:cur_msg_id > 3'
5597 ----
5598
5599 IMPORTANT: Make sure to always quote the filter string when you
5600 use man:lttng(1) from a shell.
5601 ====
5602
5603 .Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5604 ====
5605 [role="term"]
5606 ----
5607 $ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5608 ----
5609
5610 IMPORTANT: Make sure to always quote the wildcard character when you
5611 use man:lttng(1) from a shell.
5612 ====
5613
5614 .Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5615 ====
5616 [role="term"]
5617 ----
5618 $ lttng enable-event --python my-app.'*' \
5619 --exclude='my-app.module,my-app.hello'
5620 ----
5621 ====
5622
5623 .Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5624 ====
5625 [role="term"]
5626 ----
5627 $ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5628 ----
5629 ====
5630
5631 .Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5632 ====
5633 [role="term"]
5634 ----
5635 $ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5636 ----
5637 ====
5638
5639 The event rules of a given channel form a whitelist: as soon as an
5640 emitted event passes one of them, LTTng can record the event. For
5641 example, an event named `my_app:my_tracepoint` emitted from a user space
5642 tracepoint with a `TRACE_ERROR` log level passes both of the following
5643 rules:
5644
5645 [role="term"]
5646 ----
5647 $ lttng enable-event --userspace my_app:my_tracepoint
5648 $ lttng enable-event --userspace my_app:my_tracepoint \
5649 --loglevel=TRACE_INFO
5650 ----
5651
5652 The second event rule is redundant: the first one includes
5653 the second one.
5654
5655
5656 [[disable-event-rule]]
5657 === Disable an event rule
5658
5659 To disable an event rule that you <<enabling-disabling-events,created>>
5660 previously, use the man:lttng-disable-event(1) command. This command
5661 disables _all_ the event rules (of a given tracing domain and channel)
5662 which match an instrumentation point. The other conditions are not
5663 supported as of LTTng{nbsp}{revision}.
5664
5665 The LTTng tracer does not record an emitted event which passes
5666 a _disabled_ event rule.
5667
5668 .Disable an event rule matching a Python logger (default channel).
5669 ====
5670 [role="term"]
5671 ----
5672 $ lttng disable-event --python my-logger
5673 ----
5674 ====
5675
5676 .Disable an event rule matching all `java.util.logging` loggers (default channel).
5677 ====
5678 [role="term"]
5679 ----
5680 $ lttng disable-event --jul '*'
5681 ----
5682 ====
5683
5684 .Disable _all_ the event rules of the default channel.
5685 ====
Unlike the opt:lttng-enable-event(1):--all option of
man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
option is not equivalent to the event name `*` (wildcard): it disables
_all_ the event rules of a given channel.
5690
5691 [role="term"]
5692 ----
5693 $ lttng disable-event --jul --all-events
5694 ----
5695 ====
5696
5697 NOTE: You cannot delete an event rule once you create it.
5698
5699
5700 [[status]]
5701 === Get the status of a tracing session
5702
5703 To get the status of the current tracing session, that is, its
5704 parameters, its channels, event rules, and their attributes:
5705
5706 * Use the man:lttng-status(1) command:
5707 +
5708 --
5709 [role="term"]
5710 ----
5711 $ lttng status
5712 ----
5713 --
5715
5716 To get the status of any tracing session:
5717
5718 * Use the man:lttng-list(1) command with the tracing session's name:
5719 +
5720 --
5721 [role="term"]
5722 ----
5723 $ lttng list my-session
5724 ----
5725 --
5726 +
5727 Replace `my-session` with the desired tracing session's name.
5728
5729
5730 [[basic-tracing-session-control]]
5731 === Start and stop a tracing session
5732
5733 Once you <<creating-destroying-tracing-sessions,create a tracing
5734 session>> and
5735 <<enabling-disabling-events,create one or more event rules>>,
5736 you can start and stop the tracers for this tracing session.
5737
5738 To start tracing in the current tracing session:
5739
5740 * Use the man:lttng-start(1) command:
5741 +
5742 --
5743 [role="term"]
5744 ----
5745 $ lttng start
5746 ----
5747 --
5748
5749 LTTng is very flexible: you can launch user applications before
or after you start the tracers. The tracers only record the events
5751 if they pass enabled event rules and if they occur while the tracers are
5752 started.
5753
5754 To stop tracing in the current tracing session:
5755
5756 * Use the man:lttng-stop(1) command:
5757 +
5758 --
5759 [role="term"]
5760 ----
5761 $ lttng stop
5762 ----
5763 --
5764 +
5765 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
5766 records>> or lost sub-buffers since the last time you ran
5767 man:lttng-start(1), warnings are printed when you run the
5768 man:lttng-stop(1) command.
5769
5770 IMPORTANT: You need to stop tracing to make LTTng flush the remaining
5771 trace data and make the trace readable. Note that the
5772 man:lttng-destroy(1) command (see
5773 <<creating-destroying-tracing-sessions,Create and destroy a tracing
5774 session>>) also runs the man:lttng-stop(1) command implicitly.
5775
5776
5777 [[enabling-disabling-channels]]
5778 === Create a channel
5779
5780 Once you create a tracing session, you can create a <<channel,channel>>
5781 with the man:lttng-enable-channel(1) command.
5782
5783 Note that LTTng automatically creates a default channel when, for a
5784 given <<domain,tracing domain>>, no channels exist and you
5785 <<enabling-disabling-events,create>> the first event rule. This default
5786 channel is named `channel0` and its attributes are set to reasonable
5787 values. Therefore, you only need to create a channel when you need
5788 non-default attributes.
5789
5790 You specify each non-default channel attribute with a command-line
5791 option when you use the man:lttng-enable-channel(1) command. The
5792 available command-line options are:
5793
5794 [role="growable",cols="asciidoc,asciidoc"]
5795 .Command-line options for the man:lttng-enable-channel(1) command.
5796 |====
5797 |Option |Description
5798
5799 |`--overwrite`
5800
5801 |
5802 Use the _overwrite_
5803 <<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
5804 the default _discard_ mode.
5805
5806 |`--buffers-pid` (user space tracing domain only)
5807
5808 |
5809 Use the per-process <<channel-buffering-schemes,buffering scheme>>
5810 instead of the default per-user buffering scheme.
5811
5812 |+--subbuf-size=__SIZE__+
5813
5814 |
5815 Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
5816 either for each Unix user (default), or for each instrumented process.
5817
5818 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5819
5820 |+--num-subbuf=__COUNT__+
5821
5822 |
5823 Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
5824 for each Unix user (default), or for each instrumented process.
5825
5826 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
5827
5828 |+--tracefile-size=__SIZE__+
5829
5830 |
5831 Set the maximum size of each trace file that this channel writes within
5832 a stream to +__SIZE__+ bytes instead of no maximum.
5833
5834 See <<tracefile-rotation,Trace file count and size>>.
5835
5836 |+--tracefile-count=__COUNT__+
5837
5838 |
5839 Limit the number of trace files that this channel creates to
+__COUNT__+ files instead of no limit.
5841
5842 See <<tracefile-rotation,Trace file count and size>>.
5843
5844 |+--switch-timer=__PERIODUS__+
5845
5846 |
5847 Set the <<channel-switch-timer,switch timer period>>
5848 to +__PERIODUS__+{nbsp}µs.
5849
5850 |+--read-timer=__PERIODUS__+
5851
5852 |
5853 Set the <<channel-read-timer,read timer period>>
5854 to +__PERIODUS__+{nbsp}µs.
5855
5856 |[[opt-blocking-timeout]]+--blocking-timeout=__TIMEOUTUS__+
5857
5858 |
5859 Set the timeout of user space applications which load LTTng-UST
5860 in blocking mode to +__TIMEOUTUS__+:
5861
5862 0 (default)::
5863 Never block (non-blocking mode).
5864
5865 `inf`::
5866 Block forever until space is available in a sub-buffer to record
5867 the event.
5868
5869 __n__, a positive value::
5870 Wait for at most __n__ µs when trying to write into a sub-buffer.
5871
5872 Note that, for this option to have any effect on an instrumented
5873 user space application, you need to run the application with a set
5874 env:LTTNG_UST_ALLOW_BLOCKING environment variable.
5875
5876 |+--output=__TYPE__+ (Linux kernel tracing domain only)
5877
5878 |
5879 Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
5880
5881 |====
5882
5883 You can only create a channel in the Linux kernel and user space
5884 <<domain,tracing domains>>: other tracing domains have their own channel
5885 created on the fly when <<enabling-disabling-events,creating event
5886 rules>>.
5887
5888 [IMPORTANT]
5889 ====
5890 Because of a current LTTng limitation, you must create all channels
5891 _before_ you <<basic-tracing-session-control,start tracing>> in a given
5892 tracing session, that is, before the first time you run
5893 man:lttng-start(1).
5894
5895 Since LTTng automatically creates a default channel when you use the
5896 man:lttng-enable-event(1) command with a specific tracing domain, you
5897 cannot, for example, create a Linux kernel event rule, start tracing,
5898 and then create a user space event rule, because no user space channel
5899 exists yet and it's too late to create one.
5900
5901 For this reason, make sure to configure your channels properly
5902 before starting the tracers for the first time!
5903 ====
5904
5905 The following examples show how you can combine the previous
5906 command-line options to create simple to more complex channels.
5907
5908 .Create a Linux kernel channel with default attributes.
5909 ====
5910 [role="term"]
5911 ----
5912 $ lttng enable-channel --kernel my-channel
5913 ----
5914 ====
5915
.Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
5917 ====
5918 [role="term"]
5919 ----
5920 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
5921 --buffers-pid my-channel
5922 ----
5923 ====
5924
5925 .[[blocking-timeout-example]]Create a default user space channel with an infinite blocking timeout.
5926 ====
<<creating-destroying-tracing-sessions,Create a tracing session>>,
5928 create the channel, <<enabling-disabling-events,create an event rule>>,
5929 and <<basic-tracing-session-control,start tracing>>:
5930
5931 [role="term"]
5932 ----
5933 $ lttng create
5934 $ lttng enable-channel --userspace --blocking-timeout=inf blocking-channel
5935 $ lttng enable-event --userspace --channel=blocking-channel --all
5936 $ lttng start
5937 ----
5938
5939 Run an application instrumented with LTTng-UST and allow it to block:
5940
5941 [role="term"]
5942 ----
5943 $ LTTNG_UST_ALLOW_BLOCKING=1 my-app
5944 ----
5945 ====
5946
.Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
5948 ====
5949 [role="term"]
5950 ----
5951 $ lttng enable-channel --kernel --tracefile-count=8 \
5952 --tracefile-size=4194304 my-channel
5953 ----
5954 ====
5955
5956 .Create a user space channel in overwrite (or _flight recorder_) mode.
5957 ====
5958 [role="term"]
5959 ----
5960 $ lttng enable-channel --userspace --overwrite my-channel
5961 ----
5962 ====
5963
5964 You can <<enabling-disabling-events,create>> the same event rule in
5965 two different channels:
5966
5967 [role="term"]
5968 ----
5969 $ lttng enable-event --userspace --channel=my-channel app:tp
5970 $ lttng enable-event --userspace --channel=other-channel app:tp
5971 ----
5972
5973 If both channels are enabled, when a tracepoint named `app:tp` is
5974 reached, LTTng records two events, one for each channel.
5975
5976
5977 [[disable-channel]]
5978 === Disable a channel
5979
5980 To disable a specific channel that you <<enabling-disabling-channels,created>>
5981 previously, use the man:lttng-disable-channel(1) command.
5982
5983 .Disable a specific Linux kernel channel.
5984 ====
5985 [role="term"]
5986 ----
5987 $ lttng disable-channel --kernel my-channel
5988 ----
5989 ====
5990
The state of a channel takes precedence over the individual states of
the event rules attached to it: event rules which belong to a disabled
channel, even if they are enabled, are also considered disabled.
5994
5995
5996 [[adding-context]]
5997 === Add context fields to a channel
5998
5999 Event record fields in trace files provide important information about
events that occurred previously, but sometimes some external context may
6001 help you solve a problem faster. Examples of context fields are:
6002
6003 * The **process ID**, **thread ID**, **process name**, and
6004 **process priority** of the thread in which the event occurs.
6005 * The **hostname** of the system on which the event occurs.
6006 * The current values of many possible **performance counters** using
6007 perf, for example:
6008 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6009 ** Cache misses.
6010 ** Branch instructions, misses, and loads.
6011 ** CPU faults.
6012 * Any context defined at the application level (supported for the
6013 JUL and log4j <<domain,tracing domains>>).
6014
To get the full list of available context fields, run
`lttng add-context --list`. Some context fields are reserved for a
6017 specific <<domain,tracing domain>> (Linux kernel or user space).
6018
6019 You add context fields to <<channel,channels>>. All the events
6020 that a channel with added context fields records contain those fields.
6021
6022 To add context fields to one or all the channels of a given tracing
6023 session:
6024
6025 * Use the man:lttng-add-context(1) command.
6026
6027 .Add context fields to all the channels of the current tracing session.
6028 ====
6029 The following command line adds the virtual process identifier and
6030 the per-thread CPU cycles count fields to all the user space channels
6031 of the current tracing session.
6032
6033 [role="term"]
6034 ----
6035 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6036 ----
6037 ====
6038
.Add performance counter context fields by raw ID.
6040 ====
6041 See man:lttng-add-context(1) for the exact format of the context field
6042 type, which is partly compatible with the format used in
6043 man:perf-record(1).
6044
6045 [role="term"]
6046 ----
6047 $ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6048 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6049 ----
6050 ====
6051
6052 .Add a context field to a specific channel.
6053 ====
6054 The following command line adds the thread identifier context field
6055 to the Linux kernel channel named `my-channel` in the current
6056 tracing session.
6057
6058 [role="term"]
6059 ----
6060 $ lttng add-context --kernel --channel=my-channel --type=tid
6061 ----
6062 ====
6063
6064 .Add an application-specific context field to a specific channel.
6065 ====
6066 The following command line adds the `cur_msg_id` context field of the
6067 `retriever` context retriever for all the instrumented
6068 <<java-application,Java applications>> recording <<event,event records>>
6069 in the channel named `my-channel`:
6070
6071 [role="term"]
6072 ----
6073 $ lttng add-context --kernel --channel=my-channel \
6074 --type='$app:retriever:cur_msg_id'
6075 ----
6076
6077 IMPORTANT: Make sure to always quote the `$` character when you
6078 use man:lttng-add-context(1) from a shell.
6079 ====
6080
NOTE: You cannot remove context fields from a channel once you add them.
6082
6083
6084 [role="since-2.7"]
6085 [[pid-tracking]]
6086 === Track process IDs
6087
6088 It's often useful to allow only specific process IDs (PIDs) to emit
6089 events. For example, you may wish to record all the system calls made by
6090 a given process (Ă  la http://linux.die.net/man/1/strace[strace]).
6091
6092 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6093 purpose. Both commands operate on a whitelist of process IDs. You _add_
6094 entries to this whitelist with the man:lttng-track(1) command and remove
6095 entries with the man:lttng-untrack(1) command. Any process which has one
6096 of the PIDs in the whitelist is allowed to emit LTTng events which pass
6097 an enabled <<event,event rule>>.
6098
6099 NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6100 process with a given tracked ID exit and another process be given this
6101 ID, then the latter would also be allowed to emit events.
6102
6103 .Track and untrack process IDs.
6104 ====
6105 For the sake of the following example, assume the target system has 16
6106 possible PIDs.
6107
6108 When you
6109 <<creating-destroying-tracing-sessions,create a tracing session>>,
6110 the whitelist contains all the possible PIDs:
6111
6112 [role="img-100"]
6113 .All PIDs are tracked.
6114 image::track-all.png[]
6115
6116 When the whitelist is full and you use the man:lttng-track(1) command to
6117 specify some PIDs to track, LTTng first clears the whitelist, then it
6118 tracks the specific PIDs. After:
6119
6120 [role="term"]
6121 ----
6122 $ lttng track --pid=3,4,7,10,13
6123 ----
6124
6125 the whitelist is:
6126
6127 [role="img-100"]
6128 .PIDs 3, 4, 7, 10, and 13 are tracked.
6129 image::track-3-4-7-10-13.png[]
6130
6131 You can add more PIDs to the whitelist afterwards:
6132
6133 [role="term"]
6134 ----
6135 $ lttng track --pid=1,15,16
6136 ----
6137
6138 The result is:
6139
6140 [role="img-100"]
6141 .PIDs 1, 15, and 16 are added to the whitelist.
6142 image::track-1-3-4-7-10-13-15-16.png[]
6143
6144 The man:lttng-untrack(1) command removes entries from the PID tracker's
6145 whitelist. Given the previous example, the following command:
6146
6147 [role="term"]
6148 ----
6149 $ lttng untrack --pid=3,7,10,13
6150 ----
6151
6152 leads to this whitelist:
6153
6154 [role="img-100"]
6155 .PIDs 3, 7, 10, and 13 are removed from the whitelist.
6156 image::track-1-4-15-16.png[]
6157
6158 LTTng can track all possible PIDs again using the
6159 opt:lttng-track(1):--all option:
6160
6161 [role="term"]
6162 ----
6163 $ lttng track --pid --all
6164 ----
6165
6166 The result is, again:
6167
6168 [role="img-100"]
6169 .All PIDs are tracked.
6170 image::track-all.png[]
6171 ====
6172
6173 .Track only specific PIDs
6174 ====
6175 A very typical use case with PID tracking is to start with an empty
6176 whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6177 then add PIDs manually while tracers are active. You can accomplish this
6178 by using the opt:lttng-untrack(1):--all option of the
6179 man:lttng-untrack(1) command to clear the whitelist after you
6180 <<creating-destroying-tracing-sessions,create a tracing session>>:
6181
6182 [role="term"]
6183 ----
6184 $ lttng untrack --pid --all
6185 ----
6186
6187 gives:
6188
6189 [role="img-100"]
6190 .No PIDs are tracked.
6191 image::untrack-all.png[]
6192
6193 If you trace with this whitelist configuration, the tracer records no
6194 events for this <<domain,tracing domain>> because no processes are
6195 tracked. You can use the man:lttng-track(1) command as usual to track
6196 specific PIDs, for example:
6197
6198 [role="term"]
6199 ----
6200 $ lttng track --pid=6,11
6201 ----
6202
6203 Result:
6204
6205 [role="img-100"]
6206 .PIDs 6 and 11 are tracked.
6207 image::track-6-11.png[]
6208 ====
6209
6210
6211 [role="since-2.5"]
6212 [[saving-loading-tracing-session]]
6213 === Save and load tracing session configurations
6214
Configuring a <<tracing-session,tracing session>> can be lengthy. Some of
6216 the tasks involved are:
6217
6218 * <<enabling-disabling-channels,Create channels>> with
6219 specific attributes.
6220 * <<adding-context,Add context fields>> to specific channels.
6221 * <<enabling-disabling-events,Create event rules>> with specific log
6222 level and filter conditions.
6223
6224 If you use LTTng to solve real world problems, chances are you have to
6225 record events using the same tracing session setup over and over,
6226 modifying a few variables each time in your instrumented program
6227 or environment. To avoid constant tracing session reconfiguration,
6228 the man:lttng(1) command-line tool can save and load tracing session
6229 configurations to/from XML files.
6230
6231 To save a given tracing session configuration:
6232
6233 * Use the man:lttng-save(1) command:
6234 +
6235 --
6236 [role="term"]
6237 ----
6238 $ lttng save my-session
6239 ----
6240 --
6241 +
6242 Replace `my-session` with the name of the tracing session to save.
6243
6244 LTTng saves tracing session configurations to
6245 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6246 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6247 the opt:lttng-save(1):--output-path option to change this destination
6248 directory.
6249
6250 LTTng saves all configuration parameters, for example:
6251
6252 * The tracing session name.
6253 * The trace data output path.
6254 * The channels with their state and all their attributes.
6255 * The context fields you added to channels.
6256 * The event rules with their state, log level and filter conditions.
6257
6258 To load a tracing session:
6259
6260 * Use the man:lttng-load(1) command:
6261 +
6262 --
6263 [role="term"]
6264 ----
6265 $ lttng load my-session
6266 ----
6267 --
6268 +
6269 Replace `my-session` with the name of the tracing session to load.
6270
6271 When LTTng loads a configuration, it restores your saved tracing session
6272 as if you just configured it manually.
6273
6274 See man:lttng(1) for the complete list of command-line options. You
can also save and load many sessions at a time, and decide in which
6276 directory to output the XML files.
6277
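For example, assuming the behavior documented in man:lttng-save(1) and
man:lttng-load(1) when you omit the tracing session name (all the
existing tracing sessions are saved or loaded), a sketch of a save/load
round trip to and from a custom directory could look like this (the
path is a placeholder):

[role="term"]
----
$ lttng save --output-path=/path/to/sessions
$ lttng load --input-path=/path/to/sessions
----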
6278
6279 [[sending-trace-data-over-the-network]]
6280 === Send trace data over the network
6281
6282 LTTng can send the recorded trace data to a remote system over the
6283 network instead of writing it to the local file system.
6284
6285 To send the trace data over the network:
6286
6287 . On the _remote_ system (which can also be the target system),
6288 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6289 +
6290 --
6291 [role="term"]
6292 ----
6293 $ lttng-relayd
6294 ----
6295 --
6296
6297 . On the _target_ system, create a tracing session configured to
6298 send trace data over the network:
6299 +
6300 --
6301 [role="term"]
6302 ----
6303 $ lttng create my-session --set-url=net://remote-system
6304 ----
6305 --
6306 +
Replace `remote-system` with the host name or IP address of the
6308 remote system. See man:lttng-create(1) for the exact URL format.
6309
6310 . On the target system, use the man:lttng(1) command-line tool as usual.
6311 When tracing is active, the target's consumer daemon sends sub-buffers
6312 to the relay daemon running on the remote system instead of flushing
6313 them to the local file system. The relay daemon writes the received
6314 packets to the local file system.
6315
6316 The relay daemon writes trace files to
6317 +$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6318 +__hostname__+ is the host name of the target system and +__session__+
6319 is the tracing session name. Note that the env:LTTNG_HOME environment
6320 variable defaults to `$HOME` if not set. Use the
6321 opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6322 trace files to another base directory.
6323
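For example, a relay daemon which writes the received trace data under
a custom base directory (the path is a placeholder):

[role="term"]
----
$ lttng-relayd --output=/path/to/traces
----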
6324
6325 [role="since-2.4"]
6326 [[lttng-live]]
6327 === View events as LTTng emits them (noch:{LTTng} live)
6328
6329 LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6330 daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6331 display events as LTTng emits them on the target system while tracing is
6332 active.
6333
6334 The relay daemon creates a _tee_: it forwards the trace data to both
6335 the local file system and to connected live viewers:
6336
6337 [role="img-90"]
6338 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6339 image::live.png[]
6340
6341 To use LTTng live:
6342
6343 . On the _target system_, create a <<tracing-session,tracing session>>
6344 in _live mode_:
6345 +
6346 --
6347 [role="term"]
6348 ----
6349 $ lttng create my-session --live
6350 ----
6351 --
6352 +
6353 This spawns a local relay daemon.
6354
6355 . Start the live viewer and configure it to connect to the relay
6356 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6357 +
6358 --
6359 [role="term"]
6360 ----
6361 $ babeltrace --input-format=lttng-live \
6362 net://localhost/host/hostname/my-session
6363 ----
6364 --
6365 +
6366 Replace:
6367 +
6368 --
6369 * `hostname` with the host name of the target system.
6370 * `my-session` with the name of the tracing session to view.
6371 --
6372
6373 . Configure the tracing session as usual with the man:lttng(1)
6374 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6375
6376 You can list the available live tracing sessions with Babeltrace:
6377
6378 [role="term"]
6379 ----
6380 $ babeltrace --input-format=lttng-live net://localhost
6381 ----
6382
6383 You can start the relay daemon on another system. In this case, you need
6384 to specify the relay daemon's URL when you create the tracing session
6385 with the opt:lttng-create(1):--set-url option. You also need to replace
6386 `localhost` in the procedure above with the host name of the system on
6387 which the relay daemon is running.
6388
6389 See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6390 command-line options.
6391
6392
6393 [role="since-2.3"]
6394 [[taking-a-snapshot]]
6395 === Take a snapshot of the current sub-buffers of a tracing session
6396
6397 The normal behavior of LTTng is to append full sub-buffers to growing
6398 trace data files. This is ideal to keep a full history of the events
6399 that occurred on the target system, but it can
6400 represent too much data in some situations. For example, you may wish
6401 to trace your application continuously until some critical situation
6402 happens, in which case you only need the latest few recorded
6403 events to perform the desired analysis, not multi-gigabyte trace files.
6404
6405 With the man:lttng-snapshot(1) command, you can take a snapshot of the
6406 current sub-buffers of a given <<tracing-session,tracing session>>.
6407 LTTng can write the snapshot to the local file system or send it over
6408 the network.
6409
6410 To take a snapshot:
6411
6412 . Create a tracing session in _snapshot mode_:
6413 +
6414 --
6415 [role="term"]
6416 ----
6417 $ lttng create my-session --snapshot
6418 ----
6419 --
6420 +
6421 The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6422 <<channel,channels>> created in this mode is automatically set to
6423 _overwrite_ (flight recorder mode).
6424
6425 . Configure the tracing session as usual with the man:lttng(1)
6426 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6427
6428 . **Optional**: When you need to take a snapshot,
6429 <<basic-tracing-session-control,stop tracing>>.
6430 +
6431 You can take a snapshot when the tracers are active, but if you stop
6432 them first, you are sure that the data in the sub-buffers does not
6433 change before you actually take the snapshot.
6434
6435 . Take a snapshot:
6436 +
6437 --
6438 [role="term"]
6439 ----
6440 $ lttng snapshot record --name=my-first-snapshot
6441 ----
6442 --
6443 +
6444 LTTng writes the current sub-buffers of all the current tracing
6445 session's channels to trace files on the local file system. Those trace
6446 files have `my-first-snapshot` in their name.
6447
6448 There is no difference between the format of a normal trace file and the
6449 format of a snapshot: viewers of LTTng traces also support LTTng
6450 snapshots.
6451
6452 By default, LTTng writes snapshot files to the path shown by
`lttng snapshot list-output`. You can change this path or decide to send
snapshots over the network using one of the following methods:
6455
6456 . An output path or URL that you specify when you create the
6457 tracing session.
. A snapshot output path or URL that you add using the
`lttng snapshot add-output` command.
6460 . An output path or URL that you provide directly to the
6461 `lttng snapshot record` command.
6462
6463 Method 3 overrides method 2, which overrides method 1. When you
6464 specify a URL, a relay daemon must listen on a remote system (see
6465 <<sending-trace-data-over-the-network,Send trace data over the network>>).
6466
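For example, a sketch of methods 2 and 3 above, assuming the output
destination is passed as the last argument in both cases (the
destinations are placeholders):

[role="term"]
----
$ lttng snapshot add-output net://remote-system
$ lttng snapshot record /path/to/snapshots
----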
6467
6468 [role="since-2.6"]
6469 [[mi]]
6470 === Use the machine interface
6471
6472 With any command of the man:lttng(1) command-line tool, you can set the
6473 opt:lttng(1):--mi option to `xml` (before the command name) to get an
6474 XML machine interface output, for example:
6475
6476 [role="term"]
6477 ----
6478 $ lttng --mi=xml enable-event --kernel --syscall open
6479 ----
6480
6481 A schema definition (XSD) is
6482 https://github.com/lttng/lttng-tools/blob/stable-2.10/src/common/mi-lttng-3.0.xsd[available]
6483 to ease the integration with external tools as much as possible.
6484
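Since the machine interface output is XML, you can process it with any
XML tool. For example, to pretty-print the list of tracing sessions,
assuming the cmd:xmllint tool is installed:

[role="term"]
----
$ lttng --mi=xml list | xmllint --format -
----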
6485
6486 [role="since-2.8"]
6487 [[metadata-regenerate]]
6488 === Regenerate the metadata of an LTTng trace
6489
6490 An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6491 data stream files and a metadata file. This metadata file contains,
6492 amongst other things, information about the offset of the clock sources
6493 used to timestamp <<event,event records>> when tracing.
6494
6495 If, once a <<tracing-session,tracing session>> is
6496 <<basic-tracing-session-control,started>>, a major
6497 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6498 happens, the trace's clock offset also needs to be updated. You
6499 can use the `metadata` item of the man:lttng-regenerate(1) command
6500 to do so.
6501
6502 The main use case of this command is to allow a system to boot with
6503 an incorrect wall time and trace it with LTTng before its wall time
6504 is corrected. Once the system is known to be in a state where its
6505 wall time is correct, it can run `lttng regenerate metadata`.
6506
6507 To regenerate the metadata of an LTTng trace:
6508
6509 * Use the `metadata` item of the man:lttng-regenerate(1) command:
6510 +
6511 --
6512 [role="term"]
6513 ----
6514 $ lttng regenerate metadata
6515 ----
6516 --
6517
6518 [IMPORTANT]
6519 ====
6520 `lttng regenerate metadata` has the following limitations:
6521
* The tracing session must have been
  <<creating-destroying-tracing-sessions,created>> in non-live mode.
* User space <<channel,channels>>, if any, must use
  <<channel-buffering-schemes,per-user buffering>>.
6526 ====
6527
6528
6529 [role="since-2.9"]
6530 [[regenerate-statedump]]
6531 === Regenerate the state dump of a tracing session
6532
6533 The LTTng kernel and user space tracers generate state dump
6534 <<event,event records>> when the application starts or when you
6535 <<basic-tracing-session-control,start a tracing session>>. An analysis
6536 can use the state dump event records to set an initial state before it
6537 builds the rest of the state from the following event records.
6538 http://tracecompass.org/[Trace Compass] is a notable example of an
6539 application which uses the state dump of an LTTng trace.
6540
6541 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6542 state dump event records are not included in the snapshot because they
6543 were recorded to a sub-buffer that has been consumed or overwritten
6544 already.
6545
6546 You can use the `lttng regenerate statedump` command to emit the state
6547 dump event records again.
6548
To regenerate the state dump of the current tracing session, provided
you created it in snapshot mode, before you take a snapshot:
6551
6552 . Use the `statedump` item of the man:lttng-regenerate(1) command:
6553 +
6554 --
6555 [role="term"]
6556 ----
6557 $ lttng regenerate statedump
6558 ----
6559 --
6560
6561 . <<basic-tracing-session-control,Stop the tracing session>>:
6562 +
6563 --
6564 [role="term"]
6565 ----
6566 $ lttng stop
6567 ----
6568 --
6569
6570 . <<taking-a-snapshot,Take a snapshot>>:
6571 +
6572 --
6573 [role="term"]
6574 ----
6575 $ lttng snapshot record --name=my-snapshot
6576 ----
6577 --
6578
Depending on the event throughput, you should run steps 1 and 2 as
close together in time as possible.
6581
6582 NOTE: To record the state dump events, you need to
6583 <<enabling-disabling-events,create event rules>> which enable them.
6584 LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6585 LTTng-modules state dump tracepoints start with `lttng_statedump_`.
6586
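For example, event rules which match all the state dump tracepoints
named in the previous note:

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_statedump:*'
$ lttng enable-event --kernel 'lttng_statedump_*'
----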
6587
6588 [role="since-2.7"]
6589 [[persistent-memory-file-systems]]
6590 === Record trace data on persistent memory file systems
6591
6592 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6593 (NVRAM) is random-access memory that retains its information when power
6594 is turned off (non-volatile). Systems with such memory can store data
6595 structures in RAM and retrieve them after a reboot, without flushing
6596 to typical _storage_.
6597
6598 Linux supports NVRAM file systems thanks to either
6599 http://pramfs.sourceforge.net/[PRAMFS] or
6600 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6601 (requires Linux 4.1+).
6602
6603 This section does not describe how to operate such file systems;
6604 we assume that you have a working persistent memory file system.
6605
6606 When you create a <<tracing-session,tracing session>>, you can specify
6607 the path of the shared memory holding the sub-buffers. If you specify a
6608 location on an NVRAM file system, then you can retrieve the latest
6609 recorded trace data when the system reboots after a crash.
6610
6611 To record trace data on a persistent memory file system and retrieve the
6612 trace data after a system crash:
6613
6614 . Create a tracing session with a sub-buffer shared memory path located
6615 on an NVRAM file system:
6616 +
6617 --
6618 [role="term"]
6619 ----
6620 $ lttng create my-session --shm-path=/path/to/shm
6621 ----
6622 --
6623
6624 . Configure the tracing session as usual with the man:lttng(1)
6625 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6626
6627 . After a system crash, use the man:lttng-crash(1) command-line tool to
6628 view the trace data recorded on the NVRAM file system:
6629 +
6630 --
6631 [role="term"]
6632 ----
6633 $ lttng-crash /path/to/shm
6634 ----
6635 --
6636
The binary layout of the ring buffer files is not exactly the same as
the layout of trace files. This is why you need to use man:lttng-crash(1)
6639 instead of your preferred trace viewer directly.
6640
6641 To convert the ring buffer files to LTTng trace files:
6642
6643 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6644 +
6645 --
6646 [role="term"]
6647 ----
6648 $ lttng-crash --extract=/path/to/trace /path/to/shm
6649 ----
6650 --
6651
6652
6653 [role="since-2.10"]
6654 [[notif-trigger-api]]
6655 === Get notified when a channel's buffer usage is too high or too low
6656
6657 With LTTng's $$C/C++$$ notification and trigger API, your user
6658 application can get notified when the buffer usage of one or more
6659 <<channel,channels>> becomes too low or too high. You can use this API
6660 and enable or disable <<event,event rules>> during tracing to avoid
6661 <<channel-overwrite-mode-vs-discard-mode,discarded event records>>.
6662
6663 .Have a user application get notified when an LTTng channel's buffer usage is too high.
6664 ====
6665 In this example, we create and build an application which gets notified
6666 when the buffer usage of a specific LTTng channel is higher than
75{nbsp}%. The example only prints a message when this condition is
reached, but it could as well use the API of
<<liblttng-ctl-lttng,`liblttng-ctl`>> to disable event rules when this
happens.
6670
6671 . Create the application's C source file:
6672 +
6673 --
6674 [source,c]
6675 .path:{notif-app.c}
6676 ----
6677 #include <stdio.h>
6678 #include <assert.h>
6679 #include <lttng/domain.h>
6680 #include <lttng/action/action.h>
6681 #include <lttng/action/notify.h>
6682 #include <lttng/condition/condition.h>
6683 #include <lttng/condition/buffer-usage.h>
6684 #include <lttng/condition/evaluation.h>
6685 #include <lttng/notification/channel.h>
6686 #include <lttng/notification/notification.h>
6687 #include <lttng/trigger/trigger.h>
6688 #include <lttng/endpoint.h>
6689
6690 int main(int argc, char *argv[])
6691 {
6692 int exit_status = 0;
6693 struct lttng_notification_channel *notification_channel;
6694 struct lttng_condition *condition;
6695 struct lttng_action *action;
6696 struct lttng_trigger *trigger;
6697 const char *tracing_session_name;
6698 const char *channel_name;
6699
6700 assert(argc >= 3);
6701 tracing_session_name = argv[1];
6702 channel_name = argv[2];
6703
6704 /*
6705 * Create a notification channel. A notification channel
6706 * connects the user application to the LTTng session daemon.
6707 * This notification channel can be used to listen to various
6708 * types of notifications.
6709 */
6710 notification_channel = lttng_notification_channel_create(
6711 lttng_session_daemon_notification_endpoint);
6712
6713 /*
6714 * Create a "high buffer usage" condition. In this case, the
6715 * condition is reached when the buffer usage is greater than or
6716 * equal to 75 %. We create the condition for a specific tracing
6717 * session name, channel name, and for the user space tracing
6718 * domain.
6719 *
6720 * The "low buffer usage" condition type also exists.
6721 */
6722 condition = lttng_condition_buffer_usage_high_create();
6723 lttng_condition_buffer_usage_set_threshold_ratio(condition, .75);
6724 lttng_condition_buffer_usage_set_session_name(
6725 condition, tracing_session_name);
6726 lttng_condition_buffer_usage_set_channel_name(condition,
6727 channel_name);
6728 lttng_condition_buffer_usage_set_domain_type(condition,
6729 LTTNG_DOMAIN_UST);
6730
6731 /*
6732 * Create an action (get a notification) to take when the
6733 * condition created above is reached.
6734 */
6735 action = lttng_action_notify_create();
6736
6737 /*
6738 * Create a trigger. A trigger associates a condition to an
6739 * action: the action is executed when the condition is reached.
6740 */
6741 trigger = lttng_trigger_create(condition, action);
6742
6743 /* Register the trigger to LTTng. */
6744 lttng_register_trigger(trigger);
6745
6746 /*
6747 * Now that we have registered a trigger, a notification will be
 * emitted every time its condition is met. To receive this
6749 * notification, we must subscribe to notifications that match
6750 * the same condition.
6751 */
6752 lttng_notification_channel_subscribe(notification_channel,
6753 condition);
6754
6755 /*
6756 * Notification loop. You can put this in a dedicated thread to
6757 * avoid blocking the main thread.
6758 */
6759 for (;;) {
6760 struct lttng_notification *notification;
6761 enum lttng_notification_channel_status status;
6762 const struct lttng_evaluation *notification_evaluation;
6763 const struct lttng_condition *notification_condition;
6764 double buffer_usage;
6765
6766 /* Receive the next notification. */
6767 status = lttng_notification_channel_get_next_notification(
6768 notification_channel, &notification);
6769
6770 switch (status) {
6771 case LTTNG_NOTIFICATION_CHANNEL_STATUS_OK:
6772 break;
6773 case LTTNG_NOTIFICATION_CHANNEL_STATUS_NOTIFICATIONS_DROPPED:
6774 /*
6775 * The session daemon can drop notifications if
6776 * a monitoring application is not consuming the
6777 * notifications fast enough.
6778 */
6779 continue;
6780 case LTTNG_NOTIFICATION_CHANNEL_STATUS_CLOSED:
6781 /*
6782 * The notification channel has been closed by the
6783 * session daemon. This is typically caused by a session
6784 * daemon shutting down.
6785 */
6786 goto end;
6787 default:
6788 /* Unhandled conditions or errors. */
6789 exit_status = 1;
6790 goto end;
6791 }
6792
6793 /*
6794 * A notification provides, amongst other things:
6795 *
6796 * * The condition that caused this notification to be
6797 * emitted.
6798 * * The condition evaluation, which provides more
6799 * specific information on the evaluation of the
6800 * condition.
6801 *
6802 * The condition evaluation provides the buffer usage
6803 * value at the moment the condition was reached.
6804 */
6805 notification_condition = lttng_notification_get_condition(
6806 notification);
6807 notification_evaluation = lttng_notification_get_evaluation(
6808 notification);
6809
6810 /* We're subscribed to only one condition. */
6811 assert(lttng_condition_get_type(notification_condition) ==
6812 LTTNG_CONDITION_TYPE_BUFFER_USAGE_HIGH);
6813
6814 /*
6815 * Get the exact sampled buffer usage from the
6816 * condition evaluation.
6817 */
6818 lttng_evaluation_buffer_usage_get_usage_ratio(
6819 notification_evaluation, &buffer_usage);
6820
6821 /*
6822 * At this point, instead of printing a message, we
6823 * could do something to reduce the channel's buffer
6824 * usage, like disable specific events.
6825 */
6826 printf("Buffer usage is %f %% in tracing session \"%s\", "
6827 "user space channel \"%s\".\n", buffer_usage * 100,
6828 tracing_session_name, channel_name);
6829 lttng_notification_destroy(notification);
6830 }
6831
6832 end:
6833 lttng_action_destroy(action);
6834 lttng_condition_destroy(condition);
6835 lttng_trigger_destroy(trigger);
6836 lttng_notification_channel_destroy(notification_channel);
6837 return exit_status;
6838 }
6839 ----
6840 --
6841
6842 . Build the `notif-app` application, linking it to `liblttng-ctl`:
6843 +
6844 --
6845 [role="term"]
6846 ----
6847 $ gcc -o notif-app notif-app.c -llttng-ctl
6848 ----
6849 --
6850
6851 . <<creating-destroying-tracing-sessions,Create a tracing session>>,
6852 <<enabling-disabling-events,create an event rule>> matching all the
6853 user space tracepoints, and
6854 <<basic-tracing-session-control,start tracing>>:
6855 +
6856 --
6857 [role="term"]
6858 ----
6859 $ lttng create my-session
6860 $ lttng enable-event --userspace --all
6861 $ lttng start
6862 ----
6863 --
6864 +
6865 If you create the channel manually with the man:lttng-enable-channel(1)
command, you can use the opt:lttng-enable-channel(1):--monitor-timer
option to control how frequently the current values of the channel's
properties are sampled to evaluate user conditions.
6869
6870 . Run the `notif-app` application. This program accepts the
6871 <<tracing-session,tracing session>> name and the user space channel
name as its first two arguments. The channel which LTTng automatically
6873 creates with the man:lttng-enable-event(1) command above is named
6874 `channel0`:
6875 +
6876 --
6877 [role="term"]
6878 ----
6879 $ ./notif-app my-session channel0
6880 ----
6881 --
6882
6883 . In another terminal, run an application with a very high event
6884 throughput so that the 75{nbsp}% buffer usage condition is reached.
6885 +
6886 In the first terminal, the application should print lines like this:
6887 +
6888 ----
6889 Buffer usage is 81.45197 % in tracing session "my-session", user space
6890 channel "channel0".
6891 ----
6892 +
6893 If you don't see anything, try modifying the condition in
6894 path:{notif-app.c} to a lower value (0.1, for example), rebuilding it
6895 (step 2) and running it again (step 4).
6896 ====
6897
6898
6899 [[reference]]
6900 == Reference
6901
6902 [[lttng-modules-ref]]
6903 === noch:{LTTng-modules}
6904
6905
6906 [role="since-2.9"]
6907 [[lttng-tracepoint-enum]]
6908 ==== `LTTNG_TRACEPOINT_ENUM()` usage
6909
6910 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6911
6912 [source,c]
6913 ----
6914 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
6915 ----
6916
6917 Replace:
6918
6919 * `name` with the name of the enumeration (C identifier, unique
6920 amongst all the defined enumerations).
6921 * `entries` with a list of enumeration entries.
6922
6923 The available enumeration entry macros are:
6924
6925 +ctf_enum_value(__name__, __value__)+::
6926 Entry named +__name__+ mapped to the integral value +__value__+.
6927
6928 +ctf_enum_range(__name__, __begin__, __end__)+::
6929 Entry named +__name__+ mapped to the range of integral values between
6930 +__begin__+ (included) and +__end__+ (included).
6931
6932 +ctf_enum_auto(__name__)+::
6933 Entry named +__name__+ mapped to the integral value following the
6934 last mapping's value.
6935 +
6936 The last value of a `ctf_enum_value()` entry is its +__value__+
6937 parameter.
6938 +
6939 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
6940 +
6941 If `ctf_enum_auto()` is the first entry in the list, its integral
6942 value is 0.
6943
6944 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
6945 to use a defined enumeration as a tracepoint field.
6946
6947 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
6948 ====
6949 [source,c]
6950 ----
6951 LTTNG_TRACEPOINT_ENUM(
6952 my_enum,
6953 TP_ENUM_VALUES(
6954 ctf_enum_auto("AUTO: EXPECT 0")
6955 ctf_enum_value("VALUE: 23", 23)
6956 ctf_enum_value("VALUE: 27", 27)
6957 ctf_enum_auto("AUTO: EXPECT 28")
6958 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
6959 ctf_enum_auto("AUTO: EXPECT 304")
6960 )
6961 )
6962 ----
6963 ====
6964
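The following example is a minimal sketch of this usage: the tracepoint
name, prototype, and field name are hypothetical.

.Use a defined enumeration as a tracepoint field with `ctf_enum()`.
====
[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    /* Hypothetical tracepoint name */
    my_subsys_my_event,

    /* Tracepoint prototype and argument names */
    TP_PROTO(int status),
    TP_ARGS(status),

    TP_FIELDS(
        /* Record status using the my_enum enumeration defined above */
        ctf_enum(my_enum, int, status_field, status)
    )
)
----
====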
6965
6966 [role="since-2.7"]
6967 [[lttng-modules-tp-fields]]
6968 ==== Tracepoint fields macros (for `TP_FIELDS()`)
6969
6970 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
6971 tracepoint fields, which must be listed within `TP_FIELDS()` in
6972 `LTTNG_TRACEPOINT_EVENT()`, are:
6973
6974 [role="func-desc growable",cols="asciidoc,asciidoc"]
6975 .Available macros to define LTTng-modules tracepoint fields
6976 |====
6977 |Macro |Description and parameters
6978
6979 |
6980 +ctf_integer(__t__, __n__, __e__)+
6981
6982 +ctf_integer_nowrite(__t__, __n__, __e__)+
6983
6984 +ctf_user_integer(__t__, __n__, __e__)+
6985
6986 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
6987 |
6988 Standard integer, displayed in base 10.
6989
6990 +__t__+::
6991 Integer C type (`int`, `long`, `size_t`, ...).
6992
6993 +__n__+::
6994 Field name.
6995
6996 +__e__+::
6997 Argument expression.
6998
6999 |
7000 +ctf_integer_hex(__t__, __n__, __e__)+
7001
7002 +ctf_user_integer_hex(__t__, __n__, __e__)+
7003 |
7004 Standard integer, displayed in base 16.
7005
7006 +__t__+::
7007 Integer C type.
7008
7009 +__n__+::
7010 Field name.
7011
7012 +__e__+::
7013 Argument expression.
7014
7015 |+ctf_integer_oct(__t__, __n__, __e__)+
7016 |
7017 Standard integer, displayed in base 8.
7018
7019 +__t__+::
7020 Integer C type.
7021
7022 +__n__+::
7023 Field name.
7024
7025 +__e__+::
7026 Argument expression.
7027
7028 |
7029 +ctf_integer_network(__t__, __n__, __e__)+
7030
7031 +ctf_user_integer_network(__t__, __n__, __e__)+
7032 |
7033 Integer in network byte order (big-endian), displayed in base 10.
7034
7035 +__t__+::
7036 Integer C type.
7037
7038 +__n__+::
7039 Field name.
7040
7041 +__e__+::
7042 Argument expression.
7043
7044 |
7045 +ctf_integer_network_hex(__t__, __n__, __e__)+
7046
7047 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
7048 |
7049 Integer in network byte order, displayed in base 16.
7050
7051 +__t__+::
7052 Integer C type.
7053
7054 +__n__+::
7055 Field name.
7056
7057 +__e__+::
7058 Argument expression.
7059
7060 |
7061 +ctf_enum(__N__, __t__, __n__, __e__)+
7062
7063 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7064
7065 +ctf_user_enum(__N__, __t__, __n__, __e__)+
7066
7067 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7068 |
7069 Enumeration.
7070
7071 +__N__+::
7072 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7073
7074 +__t__+::
7075 Integer C type (`int`, `long`, `size_t`, ...).
7076
7077 +__n__+::
7078 Field name.
7079
7080 +__e__+::
7081 Argument expression.
7082
7083 |
7084 +ctf_string(__n__, __e__)+
7085
7086 +ctf_string_nowrite(__n__, __e__)+
7087
7088 +ctf_user_string(__n__, __e__)+
7089
7090 +ctf_user_string_nowrite(__n__, __e__)+
7091 |
7092 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7093
7094 +__n__+::
7095 Field name.
7096
7097 +__e__+::
7098 Argument expression.
7099
7100 |
7101 +ctf_array(__t__, __n__, __e__, __s__)+
7102
7103 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7104
7105 +ctf_user_array(__t__, __n__, __e__, __s__)+
7106
7107 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7108 |
7109 Statically-sized array of integers.
7110
7111 +__t__+::
7112 Array element C type.
7113
7114 +__n__+::
7115 Field name.
7116
7117 +__e__+::
7118 Argument expression.
7119
7120 +__s__+::
7121 Number of elements.
7122
7123 |
7124 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7125
7126 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7127
7128 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7129
7130 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7131 |
7132 Statically-sized array of bits.
7133
7134 The type of +__e__+ must be an integer type. +__s__+ is the number
7135 of elements of such type in +__e__+, not the number of bits.
7136
7137 +__t__+::
7138 Array element C type.
7139
7140 +__n__+::
7141 Field name.
7142
7143 +__e__+::
7144 Argument expression.
7145
7146 +__s__+::
7147 Number of elements.
7148
7149 |
7150 +ctf_array_text(__t__, __n__, __e__, __s__)+
7151
7152 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7153
7154 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
7155
7156 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7157 |
7158 Statically-sized array, printed as text.
7159
7160 The string does not need to be null-terminated.
7161
7162 +__t__+::
7163 Array element C type (always `char`).
7164
7165 +__n__+::
7166 Field name.
7167
7168 +__e__+::
7169 Argument expression.
7170
7171 +__s__+::
7172 Number of elements.
7173
7174 |
7175 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7176
7177 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7178
7179 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7180
7181 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7182 |
7183 Dynamically-sized array of integers.
7184
7185 The type of +__E__+ must be unsigned.
7186
7187 +__t__+::
7188 Array element C type.
7189
7190 +__n__+::
7191 Field name.
7192
7193 +__e__+::
7194 Argument expression.
7195
7196 +__T__+::
7197 Length expression C type.
7198
7199 +__E__+::
7200 Length expression.
7201
7202 |
7203 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7204
7205 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7206 |
7207 Dynamically-sized array of integers, displayed in base 16.
7208
7209 The type of +__E__+ must be unsigned.
7210
7211 +__t__+::
7212 Array element C type.
7213
7214 +__n__+::
7215 Field name.
7216
7217 +__e__+::
7218 Argument expression.
7219
7220 +__T__+::
7221 Length expression C type.
7222
7223 +__E__+::
7224 Length expression.
7225
7226 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7227 |
7228 Dynamically-sized array of integers in network byte order (big-endian),
7229 displayed in base 10.
7230
7231 The type of +__E__+ must be unsigned.
7232
7233 +__t__+::
7234 Array element C type.
7235
7236 +__n__+::
7237 Field name.
7238
7239 +__e__+::
7240 Argument expression.
7241
7242 +__T__+::
7243 Length expression C type.
7244
7245 +__E__+::
7246 Length expression.
7247
7248 |
7249 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7250
7251 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7252
7253 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7254
7255 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7256 |
7257 Dynamically-sized array of bits.
7258
The type of +__e__+ must be an integer type. +__E__+ is the number
of elements of such type in +__e__+, not the number of bits.

The type of +__E__+ must be unsigned.

+__t__+::
    Array element C type.

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.

|
+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array, displayed as text.

The string does not need to be null-terminated.

The type of +__E__+ must be unsigned.

The behaviour is undefined if +__e__+ is `NULL`.

+__t__+::
    Sequence element C type (always `char`).

+__n__+::
    Field name.

+__e__+::
    Argument expression.

+__T__+::
    Length expression C type.

+__E__+::
    Length expression.
|====
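
For instance, a single tracepoint definition can combine several of the
macros above. In the following sketch, the provider name (`my_app`),
the tracepoint name, and the fields are hypothetical:

[source,c]
----
TRACEPOINT_EVENT(
    /* Tracepoint provider name (hypothetical) */
    my_app,

    /* Tracepoint name (hypothetical) */
    my_event,

    /* Input arguments */
    TP_ARGS(
        const char *, id,
        int *, values,
        size_t, count
    ),

    /* Output event fields */
    TP_FIELDS(
        /* Statically-sized text array: eight characters of `id` */
        ctf_array_text(char, id, id, 8)

        /*
         * Dynamically-sized integer array: `count` elements of
         * `values`; the length expression's C type, `size_t`,
         * is unsigned, as required.
         */
        ctf_sequence(int, values, values, size_t, count)
    )
)
----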

Use the `_user` versions when the argument expression, `e`, is
a user space address. In the cases of `ctf_user_integer*()` and
`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
be addressable.

The `_nowrite` versions are otherwise identical, but their fields are
not written to the recorded trace. Their primary purpose is to make
some of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.
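
For instance, in the following hypothetical tracepoint definition, the
`size` field exists only so that a filter expression can refer to it;
it is never committed to a sub-buffer:

[source,c]
----
TRACEPOINT_EVENT(
    my_app,
    allocation,
    TP_ARGS(
        void *, ptr,
        size_t, size
    ),
    TP_FIELDS(
        /* Recorded in the trace */
        ctf_integer_hex(unsigned long, addr, (unsigned long) ptr)

        /*
         * Available to event filters, but not written to the
         * recorded trace.
         */
        ctf_integer_nowrite(size_t, size, size)
    )
)
----

You can then create an event rule with a filter expression which refers
to the `size` field, for example:

[role="term"]
----
$ lttng enable-event --userspace 'my_app:allocation' --filter='size > 4096'
----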


[[glossary]]
== Glossary

Terms related to LTTng and to tracing in general:

Babeltrace::
    The http://diamon.org/babeltrace[Babeltrace] project, which includes
    the cmd:babeltrace command, some libraries, and Python bindings.

<<channel-buffering-schemes,buffering scheme>>::
    A layout of sub-buffers applied to a given channel.

<<channel,channel>>::
    An entity which is responsible for a set of ring buffers.
+
<<event,Event rules>> are always attached to a specific channel.

clock::
    A time reference for a tracer.

<<lttng-consumerd,consumer daemon>>::
    A process which is responsible for consuming the full sub-buffers
    and writing them to a file system or sending them over the network.

<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
    The event loss mode in which the tracer _discards_ new event records
    when there's no sub-buffer space left to store them.

event::
    The consequence of the execution of an instrumentation
    point, like a tracepoint that you manually place in some source code,
    or a Linux kernel KProbe.
+
An event is said to _occur_ at a specific time. Different actions can
be taken upon the occurrence of an event, like recording the event's
payload to a sub-buffer.

<<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
    The mechanism by which event records of a given channel are lost
    (not recorded) when there is no sub-buffer space left to store them.

[[def-event-name]]event name::
    The name of an event, which is also the name of the event record.
    This is also called the _instrumentation point name_.

event record::
    A record, in a trace, of the payload of an event which occurred.

<<event,event rule>>::
    Set of conditions which must be satisfied for one or more occurring
    events to be recorded.

`java.util.logging`::
    Java platform's
    https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
    The use of LTTng probes to make a piece of software traceable.

instrumentation point::
    A point in the execution path of a piece of software that, when
    reached by this execution, can emit an event.

instrumentation point name::
    See _<<def-event-name,event name>>_.

log4j::
    A http://logging.apache.org/log4j/1.2/[logging library] for Java
    developed by the Apache Software Foundation.

log level::
    Level of severity of a log statement or user space
    instrumentation point.

LTTng::
    The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
    A command-line tool provided by the LTTng-tools project which you
    can use to send and receive control messages to and from a
    session daemon.
LTTng analyses::
    The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
    a set of analysis programs used to obtain a higher-level view of an
    LTTng trace.

cmd:lttng-consumerd::
    The name of the consumer daemon program.

cmd:lttng-crash::
    A utility provided by the LTTng-tools project which can convert
    ring buffer files (usually
    <<persistent-memory-file-systems,saved on a persistent memory file system>>)
    to trace files.

LTTng Documentation::
    This document.

<<lttng-live,LTTng live>>::
    A communication protocol between the relay daemon and live viewers
    which makes it possible to see events "live", as they are received by
    the relay daemon.

<<lttng-modules,LTTng-modules>>::
    The https://github.com/lttng/lttng-modules[LTTng-modules] project,
    which contains the Linux kernel modules to make the Linux kernel
    instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
    The name of the relay daemon program.

cmd:lttng-sessiond::
    The name of the session daemon program.

LTTng-tools::
    The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
    contains the various programs and libraries used to
    <<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
    The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
    contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
    A Java package provided by the LTTng-UST project to allow the
    LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
    logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
    A Python package provided by the LTTng-UST project to allow the
    LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
    The event loss mode in which new event records overwrite older
    event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
    A buffering scheme in which each instrumented process has its own
    sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
    A buffering scheme in which all the processes of a Unix user share
    the same sub-buffers for a given user space channel.

<<lttng-relayd,relay daemon>>::
    A process which is responsible for receiving the trace data sent by
    a distant consumer daemon.

ring buffer::
    A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
    A process which receives control commands from you and orchestrates
    the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
    A copy of the current data of all the sub-buffers of a given tracing
    session, saved as trace files.

sub-buffer::
    One part of an LTTng ring buffer which contains event records.

timestamp::
    The time information attached to an event when it is emitted.

trace (_noun_)::
    A set of files which are the concatenations of one or more
    flushed sub-buffers.

trace (_verb_)::
    The action of recording the events emitted by an application or by
    a system, or of initiating such a recording by controlling a tracer.

Trace Compass::
    The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
    An instrumentation point using the tracepoint mechanism of the Linux
    kernel or of LTTng-UST.

tracepoint definition::
    The definition of a single tracepoint.

tracepoint name::
    The name of a tracepoint.

tracepoint provider::
    A set of functions providing tracepoints to an instrumented user
    application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
    One or more tracepoint providers compiled as an object file or as
    a shared library.

tracer::
    Software which records emitted events.

<<domain,tracing domain>>::
    A namespace for event sources.

<<tracing-group,tracing group>>::
    The Unix group which a Unix user must be part of to be allowed to
    trace the Linux kernel.

<<tracing-session,tracing session>>::
    A stateful dialogue between you and a <<lttng-sessiond,session
    daemon>>.

user application::
    An application running in user space, as opposed to a Linux kernel
    module, for example.