2.5, 2.6: installation steps are outdated
[lttng-docs.git] / 2.6 / lttng-docs-2.6.txt
1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.6, May 26, 2016
5
6
7 include::../common/copyright.txt[]
8
9
10 include::../common/warning-not-maintained.txt[]
11
12
13 include::../common/welcome.txt[]
14
15
16 include::../common/audience.txt[]
17
18
19 [[chapters]]
20 === Chapter descriptions
21
22 What follows is a list of brief descriptions of this documentation's
23 chapters. The latter are ordered in such a way as to make the reading
24 as linear as possible.
25
26 . <<nuts-and-bolts,Nuts and bolts>> explains the
27 rudiments of software tracing and the rationale behind the
28 LTTng project.
29 . <<installing-lttng,Installing LTTng>> is divided into
30 sections describing the steps needed to get a working installation
31 of LTTng packages for common Linux distributions and from its
32 source.
33 . <<getting-started,Getting started>> is a very concise guide to
34 get started quickly with LTTng kernel and user space tracing. This
35 chapter is recommended if you're new to LTTng or software tracing
36 in general.
37 . <<understanding-lttng,Understanding LTTng>> deals with some
38 core concepts and components of the LTTng suite. Understanding
39 those is important since the next chapter assumes you're familiar
40 with them.
41 . <<using-lttng,Using LTTng>> is a complete user guide of the
42 LTTng project. It shows in great details how to instrument user
43 applications and the Linux kernel, how to control tracing sessions
44 using the `lttng` command line tool and miscellaneous practical use
45 cases.
46 . <<reference,Reference>> contains references of LTTng components,
47 like links to online manpages and various APIs.
48
49 We recommend that you read the above chapters in this order, although
50 some of them may be skipped depending on your situation. You may skip
51 <<nuts-and-bolts,Nuts and bolts>> if you're familiar with tracing
52 and LTTng. Also, you may jump over <<installing-lttng,Installing LTTng>>
53 if LTTng is already properly installed on your target system.
54
55
56 include::../common/convention.txt[]
57
58
59 include::../common/acknowledgements.txt[]
60
61
62 [[whats-new]]
63 == What's new in LTTng {revision}?
64
65 Most of the changes of LTTng {revision} are bug fixes, making the toolchain
66 more stable than ever before. Still, LTTng {revision} adds some interesting
67 features to the project.
68
69 LTTng 2.5 already supported the instrumentation and tracing of
70 <<java-application,Java applications>> through `java.util.logging`
71 (JUL). LTTng {revision} goes one step further by supporting
72 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
73 The new log4j domain is selected using the `--log4j` option in various
74 commands of the `lttng` tool.
75
76 LTTng-modules has supported system call tracing for a long time,
77 but until now, it was only possible to record either all of them,
78 or none of them. LTTng {revision} allows the user to record specific
79 system call events, for example:
80
81 [role="term"]
82 ----
83 lttng enable-event --kernel --syscall open,fork,chdir,pipe
84 ----
85
86 Finally, the `lttng` command line tool is not only able to communicate
87 with humans as it used to do, but also with machines thanks to its new
88 <<mi,machine interface>> feature.
89
90 To learn more about the new features of LTTng {revision}, see the
91 http://lttng.org/blog/2015/02/27/lttng-2.6-released/[release announcement].
92
93
94 [[nuts-and-bolts]]
95 == Nuts and bolts
96
97 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
98 generation_ is a modern toolkit for tracing Linux systems and
99 applications. So your first question might rather be: **what is
100 tracing?**
101
102
103 [[what-is-tracing]]
104 === What is tracing?
105
106 As the history of software engineering progressed and led to what
107 we now take for granted--complex, numerous and
108 interdependent software applications running in parallel on
109 sophisticated operating systems like Linux--the authors of such
110 components, or software developers, began feeling a natural
111 urge of having tools to ensure the robustness and good performance
112 of their masterpieces.
113
114 One major achievement in this field is, inarguably, the
115 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
116 which is an essential tool for developers to find and fix
117 bugs. But even the best debugger won't help make your software run
118 faster, and nowadays, faster software means either more work done by
119 the same hardware, or cheaper hardware for the same work.
120
121 A _profiler_ is often the tool of choice to identify performance
122 bottlenecks. Profiling is suitable to identify _where_ performance is
123 lost in a given software; the profiler outputs a profile, a
124 statistical summary of observed events, which you may use to discover
125 which functions took the most time to execute. However, a profiler
126 won't report _why_ some identified functions are the bottleneck.
127 Bottlenecks might only occur when specific conditions are met, sometimes
128 almost impossible to capture by a statistical profiler, or impossible to
129 reproduce with an application altered by the overhead of an event-based
130 profiler. For a thorough investigation of software performance issues,
131 a history of execution, with the recorded values of chosen variables
132 and context, is essential. This is where tracing comes in handy.
133
134 _Tracing_ is a technique used to understand what goes on in a running
135 software system. The software used for tracing is called a _tracer_,
136 which is conceptually similar to a tape recorder. When recording,
137 specific probes placed in the software source code generate events
138 that are saved on a giant tape: a _trace_ file. Both user applications
139 and the operating system may be traced at the same time, opening the
140 possibility of resolving a wide range of problems that are otherwise
141 extremely challenging.
142
143 Tracing is often compared to _logging_. However, tracers and loggers
144 are two different tools, serving two different purposes. Tracers are
145 designed to record much lower-level events that occur much more
146 frequently than log messages, often in the thousands per second range,
147 with very little execution overhead. Logging is more appropriate for
148 very high-level analysis of less frequent events: user accesses,
149 exceptional conditions (errors and warnings, for example), database
150 transactions, instant messaging communications, and such. More formally,
151 logging is one of several use cases that can be accomplished with
152 tracing.
153
154 The list of recorded events inside a trace file may be read manually
155 like a log file for the maximum level of detail, but it is generally
156 much more interesting to perform application-specific analyses to
157 produce reduced statistics and graphs that are useful to resolve a
158 given problem. Trace viewers and analysers are specialized tools
159 designed to do this.
160
161 So, in the end, this is what LTTng is: a powerful, open source set of
162 tools to trace the Linux kernel and user applications at the same time.
163 LTTng is composed of several components actively maintained and
164 developed by its link:/community/#where[community].
165
166
167 [[lttng-alternatives]]
168 === Alternatives to LTTng
169
170 Excluding proprietary solutions, a few competing software tracers
171 exist for Linux:
172
173 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
174 is the de facto function tracer of the Linux kernel. Its user
175 interface is a set of special files in sysfs.
176 * https://perf.wiki.kernel.org/[perf] is
177 a performance analyzing tool for Linux which supports hardware
178 performance counters, tracepoints, as well as other counters and
179 types of probes. perf's controlling utility is the `perf` command
180 line/curses tool.
181 * http://linux.die.net/man/1/strace[strace]
182 is a command line utility which records system calls made by a
183 user process, as well as signal deliveries and changes of process
184 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
185 to fulfill its function.
186 * https://sourceware.org/systemtap/[SystemTap]
187 is a Linux kernel and user space tracer which uses custom user scripts
188 to produce plain text traces. Scripts are converted to the C language,
189 then compiled as Linux kernel modules which are loaded to produce
190 trace data. SystemTap's primary user interface is the `stap`
191 command line tool.
192 * http://www.sysdig.org/[sysdig], like
193 SystemTap, uses scripts to analyze Linux kernel events. Scripts,
194 or _chisels_ in sysdig's jargon, are written in Lua and executed
195 while the system is being traced, or afterwards. sysdig's interface
196 is the `sysdig` command line tool as well as the curses-based
197 `csysdig` tool.
198
199 The main distinctive features of LTTng is that it produces correlated
200 kernel and user space traces, as well as doing so with the lowest
201 overhead amongst other solutions. It produces trace files in the
202 http://diamon.org/ctf[CTF] format, an optimized file format
203 for production and analyses of multi-gigabyte data. LTTng is the
204 result of close to 10 years of
205 active development by a community of passionate developers. LTTng {revision}
206 is currently available on some major desktop, server, and embedded Linux
207 distributions.
208
209 The main interface for tracing control is a single command line tool
210 named `lttng`. The latter can create several tracing sessions,
211 enable/disable events on the fly, filter them efficiently with custom
212 user expressions, start/stop tracing, and do much more. Traces can be
213 recorded on disk or sent over the network, kept totally or partially,
214 and viewed once tracing becomes inactive or in real-time.
215
216 <<installing-lttng,Install LTTng now>> and start tracing!
217
218
219 [[installing-lttng]]
220 == Installing LTTng
221
222 include::../common/warning-installation-outdated.txt[]
223
224 **LTTng** is a set of software components which interact to allow
225 instrumenting the Linux kernel and user applications as well as
226 controlling tracing sessions (starting/stopping tracing,
227 enabling/disabling events, and more). Those components are bundled into
228 the following packages:
229
230 LTTng-tools::
231 Libraries and command line interface to control tracing sessions.
232
233 LTTng-modules::
234 Linux kernel modules for tracing the kernel.
235
236 LTTng-UST::
237 User space tracing library.
238
239 Most distributions mark the LTTng-modules and LTTng-UST packages as
240 optional. In the following sections, the steps to install all three are
241 always provided, but note that LTTng-modules is only required if
242 you intend to trace the Linux kernel and LTTng-UST is only required if
243 you intend to trace user space applications.
244
245 This chapter shows how to install the above packages on a Linux system.
246 The easiest way is to use the package manager of the system's
247 distribution (<<desktop-distributions,desktop>> or
248 <<embedded-distributions,embedded>>). Support is also available for
249 <<enterprise-distributions,enterprise distributions>>, such as Red Hat
250 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
251 Otherwise, you can
252 <<building-from-source,build the LTTng packages from source>>.
253
254
255 [[desktop-distributions]]
256 === Desktop distributions
257
258 Official LTTng {revision} packages are available for
259 <<ubuntu,Ubuntu>>, <<fedora,Fedora>>, and
260 <<opensuse,openSUSE>> (and other RPM-based distributions).
261
262 More recent versions of LTTng are available for Debian and Arch Linux.
263
264 Should any issue arise when
265 following the procedures below, please inform the
266 link:/community[community] about it.
267
268
269 [[ubuntu]]
270 ==== Ubuntu
271
272 LTTng {revision} is packaged in Ubuntu 15.10 _Wily Werewolf_. For other
273 releases of Ubuntu, you need to build and install LTTng {revision}
274 <<building-from-source,from source>>. Ubuntu 15.04 _Vivid Vervet_
275 ships with link:/docs/v2.5/[LTTng 2.5], whilst
276 Ubuntu 16.04 _Xenial Xerus_ ships with
277 link:/docs/v2.7/[LTTng 2.7].
278
279 To install LTTng {revision} from the official Ubuntu repositories,
280 simply use `apt-get`:
281
282 [role="term"]
283 ----
284 sudo apt-get install lttng-tools
285 sudo apt-get install lttng-modules-dkms
286 sudo apt-get install liblttng-ust-dev
287 ----
288
289 If you need to trace
290 <<java-application,Java applications>>,
291 you need to install the LTTng-UST Java agent also:
292
293 [role="term"]
294 ----
295 sudo apt-get install liblttng-ust-agent-java
296 ----
297
298
299 [[fedora]]
300 ==== Fedora
301
302 Fedora 22 and Fedora 23 ship with official LTTng-tools {revision} and
303 LTTng-UST {revision} packages. Simply use `yum`:
304
305 [role="term"]
306 ----
307 sudo yum install lttng-tools
308 sudo yum install lttng-ust
309 sudo yum install lttng-ust-devel
310 ----
311
312 LTTng-modules {revision} still needs to be built and installed from
313 source. For that, make sure that the `kernel-devel` package is
314 already installed beforehand:
315
316 [role="term"]
317 ----
318 sudo yum install kernel-devel
319 ----
320
321 Proceed on to fetch
322 <<building-from-source,LTTng-modules {revision}'s source>>. Build and
323 install it as follows:
324
325 [role="term"]
326 ----
327 KERNELDIR=/usr/src/kernels/$(uname -r) make
328 sudo make modules_install
329 ----
330
331 NOTE: If you need to trace <<java-application,Java applications>> on
332 Fedora, you need to build and install LTTng-UST {revision}
333 <<building-from-source,from source>> and use the
334 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
335 `--enable-java-agent-all` options.
336
337
338 [[opensuse]]
339 ==== openSUSE/RPM
340
341 openSUSE 13.1 and openSUSE 13.2 have LTTng {revision} packages. To install
342 LTTng {revision}, you first need to add an entry to your repository
343 configuration. All LTTng repositories are available
344 http://download.opensuse.org/repositories/devel:/tools:/lttng/[here].
345 For example, the following commands adds the LTTng repository for
346 openSUSE{nbsp}13.1:
347
348 [role="term"]
349 ----
350 sudo zypper addrepo http://download.opensuse.org/repositories/devel:/tools:/lttng/openSUSE_13.1/devel:tools:lttng.repo
351 ----
352
353 Then, refresh the package database:
354
355 [role="term"]
356 ----
357 sudo zypper refresh
358 ----
359
360 and install `lttng-tools`, `lttng-modules` and `lttng-ust-devel`:
361
362 [role="term"]
363 ----
364 sudo zypper install lttng-tools
365 sudo zypper install lttng-modules
366 sudo zypper install lttng-ust-devel
367 ----
368
369 NOTE: If you need to trace <<java-application,Java applications>> on
370 openSUSE, you need to build and install LTTng-UST {revision}
371 <<building-from-source,from source>> and use the
372 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
373 `--enable-java-agent-all` options.
374
375
376 [[embedded-distributions]]
377 === Embedded distributions
378
379 LTTng is packaged by two popular
380 embedded Linux distributions: <<buildroot,Buildroot>> and
381 <<oe-yocto,OpenEmbedded/Yocto>>.
382
383
384 [[buildroot]]
385 ==== Buildroot
386
387 LTTng {revision} is available in Buildroot since Buildroot 2015.05. The
388 LTTng packages are named `lttng-tools`, `lttng-modules`, and `lttng-libust`.
389
390 To enable them, start the Buildroot configuration menu as usual:
391
392 [role="term"]
393 ----
394 make menuconfig
395 ----
396
397 In:
398
399 * _Kernel_: make sure _Linux kernel_ is enabled
400 * _Toolchain_: make sure the following options are enabled:
401 ** _Enable large file (files > 2GB) support_
402 ** _Enable WCHAR support_
403
404 In _Target packages_/_Debugging, profiling and benchmark_, enable
405 _lttng-modules_ and _lttng-tools_. In
406 _Target packages_/_Libraries_/_Other_, enable _lttng-libust_.
407
408 NOTE: If you need to trace <<java-application,Java applications>> on
409 Buildroot, you need to build and install LTTng-UST {revision}
410 <<building-from-source,from source>> and use the
411 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
412 `--enable-java-agent-all` options.
413
414
415 [[oe-yocto]]
416 ==== OpenEmbedded/Yocto
417
418 LTTng {revision} recipes are available in the
419 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
420 layer of OpenEmbedded since February 8th, 2015 under the following names:
421
422 * `lttng-tools`
423 * `lttng-modules`
424 * `lttng-ust`
425
426 Using BitBake, the simplest way to include LTTng recipes in your
427 target image is to add them to `IMAGE_INSTALL_append` in
428 path:{conf/local.conf}:
429
430 ----
431 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
432 ----
433
434 If you're using Hob, click _Edit image recipe_ once you have selected
435 a machine and an image recipe. Then, under the _All recipes_ tab, search
436 for `lttng` and include the three LTTng recipes.
437
438 NOTE: If you need to trace <<java-application,Java applications>> on
439 OpenEmbedded/Yocto, you need to build and install LTTng-UST {revision}
440 <<building-from-source,from source>> and use the
441 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
442 `--enable-java-agent-all` options.
443
444
445 [[enterprise-distributions]]
446 === Enterprise distributions (RHEL, SLES)
447
448 To install LTTng on enterprise Linux distributions
449 (such as RHEL and SLES), please see
450 http://packages.efficios.com/[EfficiOS Enterprise Packages].
451
452
453 [[building-from-source]]
454 === Building from source
455
456 As <<installing-lttng,previously stated>>, LTTng is shipped as
457 three packages: LTTng-tools, LTTng-modules, and LTTng-UST. LTTng-tools
458 contains everything needed to control tracing sessions, while
459 LTTng-modules is only needed for Linux kernel tracing and LTTng-UST is
460 only needed for user space tracing.
461
462 The tarballs are available in the
463 http://lttng.org/download#build-from-source[Download section]
464 of the LTTng website.
465
466 Please refer to the path:{README.md} files provided by each package to
467 properly build and install them.
468
469 TIP: The aforementioned path:{README.md} files
470 are rendered as rich text when https://github.com/lttng[viewed on GitHub].
471
472
473 [[getting-started]]
474 == Getting started with LTTng
475
476 This is a small guide to get started quickly with LTTng kernel and user
477 space tracing. For a more thorough understanding of LTTng and intermediate
478 to advanced use cases and, see <<understanding-lttng,Understanding LTTng>>
479 and <<using-lttng,Using LTTng>>.
480
481 Before reading this guide, make sure LTTng
482 <<installing-lttng,is installed>>. LTTng-tools is required. Also install
483 LTTng-modules for
484 <<tracing-the-linux-kernel,tracing the Linux kernel>> and LTTng-UST
485 for
486 <<tracing-your-own-user-application,tracing your own user space applications>>.
487 When the traces are finally written and complete, the
488 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
489 section of this chapter will help you analyze your tracepoint events
490 to investigate.
491
492
493 [[tracing-the-linux-kernel]]
494 === Tracing the Linux kernel
495
496 Make sure LTTng-tools and LTTng-modules packages
497 <<installing-lttng,are installed>>.
498
499 Since you're about to trace the Linux kernel itself, let's look at the
500 available kernel events using the `lttng` tool, which has a
501 Git-like command line structure:
502
503 [role="term"]
504 ----
505 lttng list --kernel
506 ----
507
508 Before tracing, you need to create a session:
509
510 [role="term"]
511 ----
512 sudo lttng create
513 ----
514
515 TIP: You can avoid using `sudo` in the previous and following commands
516 if your user is a member of the <<lttng-sessiond,tracing group>>.
517
518 Let's now enable some events for this session:
519
520 [role="term"]
521 ----
522 sudo lttng enable-event --kernel sched_switch,sched_process_fork
523 ----
524
525 Or you might want to simply enable all available kernel events (beware
526 that trace files grow rapidly when doing this):
527
528 [role="term"]
529 ----
530 sudo lttng enable-event --kernel --all
531 ----
532
533 Start tracing:
534
535 [role="term"]
536 ----
537 sudo lttng start
538 ----
539
540 By default, traces are saved in
541 +\~/lttng-traces/__name__-__date__-__time__+,
542 where +__name__+ is the session name.
543
544 When you're done tracing:
545
546 [role="term"]
547 ----
548 sudo lttng stop
549 sudo lttng destroy
550 ----
551
552 Although `destroy` looks scary here, it doesn't actually destroy the
553 written trace files: it only destroys the tracing session.
554
555 What's next? Have a look at
556 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
557 to view and analyze the trace you just recorded.
558
559
560 [[tracing-your-own-user-application]]
561 === Tracing your own user application
562
563 The previous section helped you create a trace out of Linux kernel
564 events. This section steps you through a simple example showing you how
565 to trace a _Hello world_ program written in C.
566
567 Make sure the LTTng-tools and LTTng-UST packages
568 <<installing-lttng,are installed>>.
569
570 Tracing is just like having `printf()` calls at specific locations of
571 your source code, albeit LTTng is much faster and more flexible than
572 `printf()`. In the LTTng realm, **`tracepoint()`** is analogous to
573 `printf()`.
574
575 Unlike `printf()`, though, `tracepoint()` does not use a format string to
576 know the types of its arguments: the formats of all tracepoints must be
577 defined before using them. So before even writing our _Hello world_ program,
578 we need to define the format of our tracepoint. This is done by creating a
579 **tracepoint provider**, which consists of a tracepoint provider header
580 (`.h` file) and a tracepoint provider definition (`.c` file).
581
582 The tracepoint provider header contains some boilerplate as well as a
583 list of tracepoint definitions and other optional definition entries
584 which we skip for this quickstart. Each tracepoint is defined using the
585 `TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:
586
587 * a **provider name**, which is the "scope" or namespace of this
588 tracepoint (this usually includes the company and project names)
589 * a **tracepoint name**
590 * a **list of arguments** for the eventual `tracepoint()` call, each
591 item being:
592 ** the argument C type
593 ** the argument name
594 * a **list of fields**, which correspond to the actual fields of the
595 recorded events for this tracepoint
596
597 Here's an example of a simple tracepoint provider header with two
598 arguments: an integer and a string:
599
600 [source,c]
601 ----
602 #undef TRACEPOINT_PROVIDER
603 #define TRACEPOINT_PROVIDER hello_world
604
605 #undef TRACEPOINT_INCLUDE
606 #define TRACEPOINT_INCLUDE "./hello-tp.h"
607
608 #if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
609 #define _HELLO_TP_H
610
611 #include <lttng/tracepoint.h>
612
613 TRACEPOINT_EVENT(
614 hello_world,
615 my_first_tracepoint,
616 TP_ARGS(
617 int, my_integer_arg,
618 char*, my_string_arg
619 ),
620 TP_FIELDS(
621 ctf_string(my_string_field, my_string_arg)
622 ctf_integer(int, my_integer_field, my_integer_arg)
623 )
624 )
625
626 #endif /* _HELLO_TP_H */
627
628 #include <lttng/tracepoint-event.h>
629 ----
630
631 The exact syntax is well explained in the
632 <<c-application,C application>> instrumentation guide of the
633 <<using-lttng,Using LTTng>> chapter, as well as in
634 man:lttng-ust(3).
635
636 Save the above snippet as path:{hello-tp.h}.
637
638 Write the tracepoint provider definition as path:{hello-tp.c}:
639
640 [source,c]
641 ----
642 #define TRACEPOINT_CREATE_PROBES
643 #define TRACEPOINT_DEFINE
644
645 #include "hello-tp.h"
646 ----
647
648 Create the tracepoint provider:
649
650 [role="term"]
651 ----
652 gcc -c -I. hello-tp.c
653 ----
654
655 Now, by including path:{hello-tp.h} in your own application, you may use the
656 tracepoint defined above by properly refering to it when calling
657 `tracepoint()`:
658
659 [source,c]
660 ----
661 #include <stdio.h>
662 #include "hello-tp.h"
663
664 int main(int argc, char *argv[])
665 {
666 int x;
667
668 puts("Hello, World!\nPress Enter to continue...");
669
670 /*
671 * The following getchar() call is only placed here for the purpose
672 * of this demonstration, for pausing the application in order for
673 * you to have time to list its events. It's not needed otherwise.
674 */
675 getchar();
676
677 /*
678 * A tracepoint() call. Arguments, as defined in hello-tp.h:
679 *
680 * 1st: provider name (always)
681 * 2nd: tracepoint name (always)
682 * 3rd: my_integer_arg (first user-defined argument)
683 * 4th: my_string_arg (second user-defined argument)
684 *
685 * Notice the provider and tracepoint names are NOT strings;
686 * they are in fact parts of variables created by macros in
687 * hello-tp.h.
688 */
689 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
690
691 for (x = 0; x < argc; ++x) {
692 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
693 }
694
695 puts("Quitting now!");
696
697 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
698
699 return 0;
700 }
701 ----
702
703 Save this as path:{hello.c}, next to path:{hello-tp.c}.
704
705 Notice path:{hello-tp.h}, the tracepoint provider header, is included
706 by path:{hello.c}.
707
708 You are now ready to compile the application with LTTng-UST support:
709
710 [role="term"]
711 ----
712 gcc -c hello.c
713 gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
714 ----
715
716 Here's the whole build process:
717
718 [role="img-100"]
719 .User space tracing's build process.
720 image::ust-flow.png[]
721
722 If you followed the
723 <<tracing-the-linux-kernel,Tracing the Linux kernel>> tutorial, the
724 following steps should look familiar.
725
726 First, run the application with a few arguments:
727
728 [role="term"]
729 ----
730 ./hello world and beyond
731 ----
732
733 You should see
734
735 ----
736 Hello, World!
737 Press Enter to continue...
738 ----
739
740 Use the `lttng` tool to list all available user space events:
741
742 [role="term"]
743 ----
744 lttng list --userspace
745 ----
746
747 You should see the `hello_world:my_first_tracepoint` tracepoint listed
748 under the `./hello` process.
749
750 Create a tracing session:
751
752 [role="term"]
753 ----
754 lttng create
755 ----
756
757 Enable the `hello_world:my_first_tracepoint` tracepoint:
758
759 [role="term"]
760 ----
761 lttng enable-event --userspace hello_world:my_first_tracepoint
762 ----
763
764 Start tracing:
765
766 [role="term"]
767 ----
768 lttng start
769 ----
770
771 Go back to the running `hello` application and press Enter. All `tracepoint()`
772 calls are executed and the program finally exits.
773
774 Stop tracing:
775
776 [role="term"]
777 ----
778 lttng stop
779 ----
780
781 Done! You may use `lttng view` to list the recorded events. This command
782 starts http://diamon.org/babeltrace[`babeltrace`]
783 in the background, if it's installed:
784
785 [role="term"]
786 ----
787 lttng view
788 ----
789
790 should output something like:
791
792 ----
793 [18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
794 [18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
795 [18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
796 [18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
797 [18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
798 [18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }
799 ----
800
801 When you're done, you may destroy the tracing session, which does _not_
802 destroy the generated trace files, leaving them available for further
803 analysis:
804
805 [role="term"]
806 ----
807 lttng destroy
808 ----
809
810 The next section presents other alternatives to view and analyze your
811 LTTng traces.
812
813
814 [[viewing-and-analyzing-your-traces]]
815 === Viewing and analyzing your traces
816
817 This section describes how to visualize the data gathered after tracing
818 the Linux kernel or a user space application.
819
820 Many ways exist to read LTTng traces:
821
822 * **`babeltrace`** is a command line utility which converts trace formats;
823 it supports the format used by LTTng,
824 CTF, as well as a basic
825 text output which may be ++grep++ed. The `babeltrace` command is
826 part of the
827 http://diamon.org/babeltrace[Babeltrace] project.
828 * Babeltrace also includes **Python bindings** so that you may
829 easily open and read an LTTng trace with your own script, benefiting
830 from the power of Python.
831 * **http://tracecompass.org/[Trace Compass]**
832 is an Eclipse plugin used to visualize and analyze various types of
833 traces, including LTTng's. It also comes as a standalone application.
834
835 LTTng trace files are usually recorded in the dir:{~/lttng-traces} directory.
836 Let's now view the trace and perform a basic analysis using
837 `babeltrace`.
838
839 The simplest way to list all the recorded events of a trace is to pass its
840 path to `babeltrace` with no options:
841
842 [role="term"]
843 ----
844 babeltrace ~/lttng-traces/my-session
845 ----
846
847 `babeltrace` finds all traces recursively within the given path and
848 prints all their events, merging them in order of time.
849
850 Listing all the system calls of a Linux kernel trace with their arguments is
851 easy with `babeltrace` and `grep`:
852
853 [role="term"]
854 ----
855 babeltrace ~/lttng-traces/my-kernel-session | grep sys_
856 ----
857
858 Counting events is also straightforward:
859
860 [role="term"]
861 ----
862 babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines
863 ----
864
865 The text output of `babeltrace` is useful for isolating events by simple
866 matching using `grep` and similar utilities. However, more elaborate filters
867 such as keeping only events with a field value falling within a specific range
868 are not trivial to write using a shell. Moreover, reductions and even the
869 most basic computations involving multiple events are virtually impossible
870 to implement.
871
872 Fortunately, Babeltrace ships with Python 3 bindings which makes it
873 really easy to read the events of an LTTng trace sequentially and compute
874 the desired information.
875
876 Here's a simple example using the Babeltrace Python bindings. The following
877 script accepts an LTTng Linux kernel trace path as its first argument and
878 prints the short names of the top 5 running processes on CPU 0 during the
879 whole trace:
880
881 [source,python]
882 ----
883 import sys
884 from collections import Counter
885 import babeltrace
886
887
888 def top5proc():
889 if len(sys.argv) != 2:
890 msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
891 raise ValueError(msg)
892
893 # a trace collection holds one to many traces
894 col = babeltrace.TraceCollection()
895
896 # add the trace provided by the user
897 # (LTTng traces always have the 'ctf' format)
898 if col.add_trace(sys.argv[1], 'ctf') is None:
899 raise RuntimeError('Cannot add trace')
900
901 # this counter dict will hold execution times:
902 #
903 # task command name -> total execution time (ns)
904 exec_times = Counter()
905
906 # this holds the last `sched_switch` timestamp
907 last_ts = None
908
909 # iterate events
910 for event in col.events:
911 # keep only `sched_switch` events
912 if event.name != 'sched_switch':
913 continue
914
915 # keep only events which happened on CPU 0
916 if event['cpu_id'] != 0:
917 continue
918
919 # event timestamp
920 cur_ts = event.timestamp
921
922 if last_ts is None:
923 # we start here
924 last_ts = cur_ts
925
926 # previous task command (short) name
927 prev_comm = event['prev_comm']
928
929 # initialize entry in our dict if not yet done
930 if prev_comm not in exec_times:
931 exec_times[prev_comm] = 0
932
933 # compute previous command execution time
934 diff = cur_ts - last_ts
935
936 # update execution time of this command
937 exec_times[prev_comm] += diff
938
939 # update last timestamp
940 last_ts = cur_ts
941
942 # display top 10
943 for name, ns in exec_times.most_common(5):
944 s = ns / 1000000000
945 print('{:20}{} s'.format(name, s))
946
947
948 if __name__ == '__main__':
949 top5proc()
950 ----
951
952 Save this script as path:{top5proc.py} and run it with Python 3, providing the
953 path to an LTTng Linux kernel trace as the first argument:
954
955 [role="term"]
956 ----
957 python3 top5proc.py ~/lttng-sessions/my-session-.../kernel
958 ----
959
960 Make sure the path you provide is the directory containing actual trace
961 files (`channel0_0`, `metadata`, and the rest): the `babeltrace` utility
962 recurses directories, but the Python bindings do not.
963
964 Here's an example of output:
965
966 ----
967 swapper/0 48.607245889 s
968 chromium 7.192738188 s
969 pavucontrol 0.709894415 s
970 Compositor 0.660867933 s
971 Xorg.bin 0.616753786 s
972 ----
973
974 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
975 weren't using the CPU that much when tracing, its first position in the list
976 makes sense.
977
978
979 [[understanding-lttng]]
980 == Understanding LTTng
981
982 If you're going to use LTTng in any serious way, it is fundamental that
983 you become familiar with its core concepts. Technical terms like
984 _tracing sessions_, _domains_, _channels_ and _events_ are used over
985 and over in the <<using-lttng,Using LTTng>> chapter,
986 and it is assumed that you understand what they mean when reading it.
987
988 LTTng, as you already know, is a _toolkit_. It would be wrong
989 to call it a simple _tool_ since it is composed of multiple interacting
990 components. This chapter also describes the latter, providing details
991 about their respective roles and how they connect together to form
992 the current LTTng ecosystem.
993
994
995 [[core-concepts]]
996 === Core concepts
997
998 This section explains the various elementary concepts a user has to deal
999 with when using LTTng. They are:
1000
1001 * <<tracing-session,tracing session>>
1002 * <<domain,domain>>
1003 * <<channel,channel>>
1004 * <<event,event>>
1005
1006
1007 [[tracing-session]]
1008 ==== Tracing session
1009
1010 A _tracing session_ is--like any session--a container of
1011 state. Anything that is done when tracing using LTTng happens in the
1012 scope of a tracing session. In this regard, it is analogous to a bank
1013 website's session: you can't interact online with your bank account
1014 unless you are logged in a session, except for reading a few static
1015 webpages (LTTng, too, can report some static information that does not
1016 need a created tracing session).
1017
1018 A tracing session holds the following attributes and objects (some of
1019 which are described in the following sections):
1020
1021 * a name
1022 * the tracing state (tracing started or stopped)
1023 * the trace data output path/URL (local path or sent over the network)
1024 * a mode (normal, snapshot or live)
1025 * the snapshot output paths/URLs (if applicable)
1026 * for each <<domain,domain>>, a list of <<channel,channels>>
1027 * for each channel:
1028 ** a name
1029 ** the channel state (enabled or disabled)
1030 ** its parameters (event loss mode, sub-buffers size and count,
1031 timer periods, output type, trace files size and count, and the rest)
1032 ** a list of added context information
1033 ** a list of <<event,events>>
1034 * for each event:
1035 ** its state (enabled or disabled)
1036 ** a list of instrumentation points (tracepoints, system calls,
1037 dynamic probes, other types of probes)
1038 ** associated log levels
1039 ** a filter expression
1040
1041 All this information is completely isolated between tracing sessions.
1042 As you can see in the list above, even the tracing state
1043 is a per-tracing session attribute, so that you may trace your target
1044 system/application in a given tracing session with a specific
1045 configuration while another one stays inactive.
1046
1047 [role="img-100"]
1048 .A _tracing session_ is a container of domains, channels, and events.
1049 image::concepts.png[]
1050
1051 Conceptually, a tracing session is a per-user object; the
1052 <<plumbing,Plumbing>> section shows how this is actually
1053 implemented. Any user may create as many concurrent tracing sessions
1054 as desired.
1055
1056 [role="img-100"]
1057 .Each user may create as many tracing sessions as desired.
1058 image::many-sessions.png[]
1059
1060 The trace data generated in a tracing session may be either saved
1061 to disk, sent over the network or not saved at all (in which case
1062 snapshots may still be saved to disk or sent to a remote machine).
1063
1064
1065 [[domain]]
1066 ==== Domain
1067
1068 A tracing _domain_ is the official term the LTTng project uses to
1069 designate a tracer category.
1070
1071 There are currently four known domains:
1072
1073 * Linux kernel
1074 * user space
1075 * `java.util.logging` (JUL)
1076 * log4j
1077
1078 Different tracers expose common features in their own interfaces, but,
1079 from a user's perspective, you still need to target a specific type of
1080 tracer to perform some actions. For example, since both kernel and user
1081 space tracers support named tracepoints (probes manually inserted in
1082 source code), you need to specify which one is concerned when enabling
1083 an event because both domains could have existing events with the same
1084 name.
1085
1086 Some features are not available in all domains. Filtering enabled
1087 events using custom expressions, for example, is currently not
1088 supported in the kernel domain, but support could be added in the
1089 future.
1090
1091
1092 [[channel]]
1093 ==== Channel
1094
1095 A _channel_ is a set of events with specific parameters and potential
1096 added context information. Channels have unique names per domain within
1097 a tracing session. A given event is always registered to at least one
1098 channel; having the same enabled event in two channels makes
1099 this event being recorded twice everytime it occurs.
1100
1101 Channels may be individually enabled or disabled. Occurring events of
1102 a disabled channel never make it to recorded events.
1103
1104 The fundamental role of a channel is to keep a shared ring buffer, where
1105 events are eventually recorded by the tracer and consumed by a consumer
1106 daemon. This internal ring buffer is divided into many sub-buffers of
1107 equal size.
1108
1109 Channels, when created, may be fine-tuned thanks to a few parameters,
1110 many of them related to sub-buffers. The following subsections explain
1111 what those parameters are and in which situations you should manually
1112 adjust them.
1113
1114
1115 [[channel-overwrite-mode-vs-discard-mode]]
1116 ===== Overwrite and discard event loss modes
1117
1118 As previously mentioned, a channel's ring buffer is divided into many
1119 equally sized sub-buffers.
1120
1121 As events occur, they are serialized as trace data into a specific
1122 sub-buffer (yellow arc in the following animation) until it is full:
1123 when this happens, the sub-buffer is marked as consumable (red) and
1124 another, _empty_ (white) sub-buffer starts receiving the following
1125 events. The marked sub-buffer is eventually consumed by a consumer
1126 daemon (returns to white).
1127
1128 [NOTE]
1129 [role="docsvg-channel-subbuf-anim"]
1130 ====
1131 {note-no-anim}
1132 ====
1133
1134 In an ideal world, sub-buffers are consumed faster than filled, like it
1135 is the case above. In the real world, however, all sub-buffers could be
1136 full at some point, leaving no space to record the following events. By
1137 design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer
1138 exists, losing events is acceptable when the alternative would be to
1139 cause substantial delays in the instrumented application's execution.
1140 LTTng privileges performance over integrity, aiming at perturbing the
1141 traced system as little as possible in order to make tracing of subtle
1142 race conditions and rare interrupt cascades possible.
1143
1144 When it comes to losing events because no empty sub-buffer is available,
1145 the channel's _event loss mode_ determines what to do amongst:
1146
1147 Discard::
1148 Drop the newest events until a sub-buffer is released.
1149
1150 Overwrite::
1151 Clear the sub-buffer containing the oldest recorded
1152 events and start recording the newest events there. This mode is
1153 sometimes called _flight recorder mode_ because it behaves like a
1154 flight recorder: always keep a fixed amount of the latest data.
1155
1156 Which mechanism you should choose depends on your context: prioritize
1157 the newest or the oldest events in the ring buffer?
1158
1159 Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon
1160 as a new event doesn't find an empty sub-buffer, whereas in discard
1161 mode, only the event that doesn't fit is discarded.
1162
1163 Also note that a count of lost events is incremented and saved in
1164 the trace itself when an event is lost in discard mode, whereas no
1165 information is kept when a sub-buffer gets overwritten before being
1166 committed.
1167
1168 There are known ways to decrease your probability of losing events. The
1169 next section shows how tuning the sub-buffers count and size can be
1170 used to virtually stop losing events.
1171
1172
1173 [[channel-subbuf-size-vs-subbuf-count]]
1174 ===== Sub-buffers count and size
1175
1176 For each channel, an LTTng user may set its number of sub-buffers and
1177 their size.
1178
1179 Note that there is a noticeable tracer's CPU overhead introduced when
1180 switching sub-buffers (marking a full one as consumable and switching
1181 to an empty one for the following events to be recorded). Knowing this,
1182 the following list presents a few practical situations along with how
1183 to configure sub-buffers for them:
1184
1185 High event throughput::
1186 In general, prefer bigger sub-buffers to
1187 lower the risk of losing events. Having bigger sub-buffers
1188 also ensures a lower sub-buffer switching frequency. The number of
1189 sub-buffers is only meaningful if the channel is enabled in
1190 overwrite mode: in this case, if a sub-buffer overwrite happens, the
1191 other sub-buffers are left unaltered.
1192
1193 Low event throughput::
1194 In general, prefer smaller sub-buffers
1195 since the risk of losing events is already low. Since events
1196 happen less frequently, the sub-buffer switching frequency should
1197 remain low and thus the tracer's overhead should not be a problem.
1198
1199 Low memory system::
1200 If your target system has a low memory
1201 limit, prefer fewer first, then smaller sub-buffers. Even if the
1202 system is limited in memory, you want to keep the sub-buffers as
1203 big as possible to avoid a high sub-buffer switching frequency.
1204
1205 You should know that LTTng uses CTF as its trace format, which means
1206 event data is very compact. For example, the average LTTng Linux kernel
1207 event weights about 32{nbsp}bytes. A sub-buffer size of 1{nbsp}MiB is
1208 thus considered big.
1209
1210 The previous situations highlight the major trade-off between a few big
1211 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1212 frequency vs. how much data is lost in overwrite mode. Assuming a
1213 constant event throughput and using the overwrite mode, the two
1214 following configurations have the same ring buffer total size:
1215
1216 [NOTE]
1217 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1218 ====
1219 {note-no-anim}
1220 ====
1221
1222 * **2 sub-buffers of 4 MiB each** lead to a very low sub-buffer
1223 switching frequency, but if a sub-buffer overwrite happens, half of
1224 the recorded events so far (4{nbsp}MiB) are definitely lost.
1225 * **8 sub-buffers of 1 MiB each** lead to 4{nbsp}times the tracer's
1226 overhead as the previous configuration, but if a sub-buffer
1227 overwrite happens, only the eighth of events recorded so far are
1228 definitely lost.
1229
1230 In discard mode, the sub-buffers count parameter is pointless: use two
1231 sub-buffers and set their size according to the requirements of your
1232 situation.
1233
1234
1235 [[channel-switch-timer]]
1236 ===== Switch timer
1237
1238 The _switch timer_ period is another important configurable feature of
1239 channels to ensure periodic sub-buffer flushing.
1240
1241 When the _switch timer_ fires, a sub-buffer switch happens. This timer
1242 may be used to ensure that event data is consumed and committed to
1243 trace files periodically in case of a low event throughput:
1244
1245 [NOTE]
1246 [role="docsvg-channel-switch-timer"]
1247 ====
1248 {note-no-anim}
1249 ====
1250
1251 It's also convenient when big sub-buffers are used to cope with
1252 sporadic high event throughput, even if the throughput is normally
1253 lower.
1254
1255
1256 [[channel-buffering-schemes]]
1257 ===== Buffering schemes
1258
1259 In the user space tracing domain, two **buffering schemes** are
1260 available when creating a channel:
1261
1262 Per-PID buffering::
1263 Keep one ring buffer per process.
1264
1265 Per-UID buffering::
1266 Keep one ring buffer for all processes of a single user.
1267
1268 The per-PID buffering scheme consumes more memory than the per-UID
1269 option if more than one process is instrumented for LTTng-UST. However,
1270 per-PID buffering ensures that one process having a high event
1271 throughput won't fill all the shared sub-buffers, only its own.
1272
1273 The Linux kernel tracing domain only has one available buffering scheme
1274 which is to use a single ring buffer for the whole system.
1275
1276
1277 [[event]]
1278 ==== Event
1279
1280 An _event_, in LTTng's realm, is a term often used metonymically,
1281 having multiple definitions depending on the context:
1282
1283 . When tracing, an event is a _point in space-time_. Space, in a
1284 tracing context, is the set of all executable positions of a
1285 compiled application by a logical processor. When a program is
1286 executed by a processor and some instrumentation point, or
1287 _probe_, is encountered, an event occurs. This event is accompanied
1288 by some contextual payload (values of specific variables at this
1289 point of execution) which may or may not be recorded.
1290 . In the context of a recorded trace file, the term _event_ implies
1291 a _recorded event_.
1292 . When configuring a tracing session, _enabled events_ refer to
1293 specific rules which could lead to the transfer of actual
1294 occurring events (1) to recorded events (2).
1295
1296 The whole <<core-concepts,Core concepts>> section focuses on the
1297 third definition. An event is always registered to _one or more_
1298 channels and may be enabled or disabled at will per channel. A disabled
1299 event never leads to a recorded event, even if its channel is enabled.
1300
1301 An event (3) is enabled with a few conditions that must _all_ be met
1302 when an event (1) happens in order to generate a recorded event (2):
1303
1304 . A _probe_ or group of probes in the traced application must be
1305 executed.
1306 . **Optionally**, the probe must have a log level matching a
1307 log level range specified when enabling the event.
1308 . **Optionally**, the occurring event must satisfy a custom
1309 expression, or _filter_, specified when enabling the event.
1310
1311
1312 [[plumbing]]
1313 === Plumbing
1314
1315 The previous section described the concepts at the heart of LTTng.
1316 This section summarizes LTTng's implementation: how those objects are
1317 managed by different applications and libraries working together to
1318 form the toolkit.
1319
1320
1321 [[plumbing-overview]]
1322 ==== Overview
1323
1324 As <<installing-lttng,mentioned previously>>, the whole LTTng suite
1325 is made of the LTTng-tools, LTTng-UST, and
1326 LTTng-modules packages. Together, they provide different daemons, libraries,
1327 kernel modules and command line interfaces. The following tree shows
1328 which usable component belongs to which package:
1329
1330 * **LTTng-tools**:
1331 ** session daemon (`lttng-sessiond`)
1332 ** consumer daemon (`lttng-consumerd`)
1333 ** relay daemon (`lttng-relayd`)
1334 ** tracing control library (`liblttng-ctl`)
1335 ** tracing control command line tool (`lttng`)
1336 * **LTTng-UST**:
1337 ** user space tracing library (`liblttng-ust`) and its headers
1338 ** preloadable user space tracing helpers
1339 (`liblttng-ust-libc-wrapper`, `liblttng-ust-pthread-wrapper`,
1340 `liblttng-ust-cyg-profile`, `liblttng-ust-cyg-profile-fast`
1341 and `liblttng-ust-dl`)
1342 ** user space tracepoint code generator command line tool
1343 (`lttng-gen-tp`)
1344 ** `java.util.logging`/log4j tracepoint providers
1345 (`liblttng-ust-jul-jni` and `liblttng-ust-log4j-jni`) and JAR
1346 file (path:{liblttng-ust-agent.jar})
1347 * **LTTng-modules**:
1348 ** LTTng Linux kernel tracer module
1349 ** tracing ring buffer kernel modules
1350 ** many LTTng probe kernel modules
1351
1352 The following diagram shows how the most important LTTng components
1353 interact. Plain purple arrows represent trace data paths while dashed
1354 red arrows indicate control communications. The LTTng relay daemon is
1355 shown running on a remote system, although it could as well run on the
1356 target (monitored) system.
1357
1358 [role="img-100"]
1359 .Control and data paths between LTTng components.
1360 image::plumbing-26.png[]
1361
1362 Each component is described in the following subsections.
1363
1364
1365 [[lttng-sessiond]]
1366 ==== Session daemon
1367
1368 At the heart of LTTng's plumbing is the _session daemon_, often called
1369 by its command name, `lttng-sessiond`.
1370
1371 The session daemon is responsible for managing tracing sessions and
1372 what they logically contain (channel properties, enabled/disabled
1373 events, and the rest). By communicating locally with instrumented
1374 applications (using LTTng-UST) and with the LTTng Linux kernel modules
1375 (LTTng-modules), it oversees all tracing activities.
1376
1377 One of the many things that `lttng-sessiond` does is to keep
1378 track of the available event types. User space applications and
1379 libraries actively connect and register to the session daemon when they
1380 start. By contrast, `lttng-sessiond` seeks out and loads the appropriate
1381 LTTng kernel modules as part of its own initialization. Kernel event
1382 types are _pulled_ by `lttng-sessiond`, whereas user space event types
1383 are _pushed_ to it by the various user space tracepoint providers.
1384
1385 Using a specific inter-process communication protocol with Linux kernel
1386 and user space tracers, the session daemon can send channel information
1387 so that they are initialized, enable/disable specific probes based on
1388 enabled/disabled events by the user, send event filters information to
1389 LTTng tracers so that filtering actually happens at the tracer site,
1390 start/stop tracing a specific application or the Linux kernel, and more.
1391
1392 The session daemon is not useful without some user controlling it,
1393 because it's only a sophisticated control interchange and thus
1394 doesn't make any decision on its own. `lttng-sessiond` opens a local
1395 socket for controlling it, albeit the preferred way to control it is
1396 using `liblttng-ctl`, an installed C library hiding the communication
1397 protocol behind an easy-to-use API. The `lttng` tool makes use of
1398 `liblttng-ctl` to implement a user-friendly command line interface.
1399
1400 `lttng-sessiond` does not receive any trace data from instrumented
1401 applications; the _consumer daemons_ are the programs responsible for
1402 collecting trace data using shared ring buffers. However, the session
1403 daemon is the one that must spawn a consumer daemon and establish
1404 a control communication with it.
1405
1406 Session daemons run on a per-user basis. Knowing this, multiple
1407 instances of `lttng-sessiond` may run simultaneously, each belonging
1408 to a different user and each operating independently of the others.
1409 Only `root`'s session daemon, however, may control LTTng kernel modules
1410 (that is, the kernel tracer). With that in mind, if a user has no root
1411 access on the target system, he cannot trace the system's kernel, but
1412 should still be able to trace its own instrumented applications.
1413
1414 It has to be noted that, although only `root`'s session daemon may
1415 control the kernel tracer, the `lttng-sessiond` command has a `--group`
1416 option which may be used to specify the name of a special user group
1417 allowed to communicate with `root`'s session daemon and thus record
1418 kernel traces. By default, this group is named `tracing`.
1419
1420 If not done yet, the `lttng` tool, by default, automatically starts a
1421 session daemon. `lttng-sessiond` may also be started manually:
1422
1423 [role="term"]
1424 ----
1425 lttng-sessiond
1426 ----
1427
1428 This starts the session daemon in foreground. Use
1429
1430 [role="term"]
1431 ----
1432 lttng-sessiond --daemonize
1433 ----
1434
1435 to start it as a true daemon.
1436
1437 To kill the current user's session daemon, `pkill` may be used:
1438
1439 [role="term"]
1440 ----
1441 pkill lttng-sessiond
1442 ----
1443
1444 The default `SIGTERM` signal terminates it cleanly.
1445
1446 Several other options are available and described in
1447 man:lttng-sessiond(8) or by running `lttng-sessiond --help`.
1448
1449
1450 [[lttng-consumerd]]
1451 ==== Consumer daemon
1452
1453 The _consumer daemon_, or `lttng-consumerd`, is a program sharing some
1454 ring buffers with user applications or the LTTng kernel modules to
1455 collect trace data and output it at some place (on disk or sent over
1456 the network to an LTTng relay daemon).
1457
1458 Consumer daemons are created by a session daemon as soon as events are
1459 enabled within a tracing session, well before tracing is activated
1460 for the latter. Entirely managed by session daemons,
1461 consumer daemons survive session destruction to be reused later,
1462 should a new tracing session be created. Consumer daemons are always
1463 owned by the same user as their session daemon. When its owner session
1464 daemon is killed, the consumer daemon also exits. This is because
1465 the consumer daemon is always the child process of a session daemon.
1466 Consumer daemons should never be started manually. For this reason,
1467 they are not installed in one of the usual locations listed in the
1468 `PATH` environment variable. `lttng-sessiond` has, however, a
1469 bunch of options (see man:lttng-sessiond(8)) to
1470 specify custom consumer daemon paths if, for some reason, a consumer
1471 daemon other than the default installed one is needed.
1472
1473 There are up to two running consumer daemons per user, whereas only one
1474 session daemon may run per user. This is because each process has
1475 independent bitness: if the target system runs a mixture of 32-bit and
1476 64-bit processes, it is more efficient to have separate corresponding
1477 32-bit and 64-bit consumer daemons. The `root` user is an exception: it
1478 may have up to _three_ running consumer daemons: 32-bit and 64-bit
1479 instances for its user space applications and one more reserved for
1480 collecting kernel trace data.
1481
1482 As new tracing domains are added to LTTng, the development community's
1483 intent is to minimize the need for additionnal consumer daemon instances
1484 dedicated to them. For instance, the `java.util.logging` (JUL) domain
1485 events are in fact mapped to the user space domain, thus tracing this
1486 particular domain is handled by existing user space domain consumer
1487 daemons.
1488
1489
1490 [[lttng-relayd]]
1491 ==== Relay daemon
1492
1493 When a tracing session is configured to send its trace data over the
1494 network, an LTTng _relay daemon_ must be used at the other end to
1495 receive trace packets and serialize them to trace files. This setup
1496 makes it possible to trace a target system without ever committing trace
1497 data to its local storage, a feature which is useful for embedded
1498 systems, amongst others. The command implementing the relay daemon
1499 is `lttng-relayd`.
1500
1501 The basic use case of `lttng-relayd` is to transfer trace data received
1502 over the network to trace files on the local file system. The relay
1503 daemon must listen on two TCP ports to achieve this: one control port,
1504 used by the target session daemon, and one data port, used by the
1505 target consumer daemon. The relay and session daemons agree on common
1506 default ports when custom ones are not specified.
1507
1508 Since the communication transport protocol for both ports is standard
1509 TCP, the relay daemon may be started either remotely or locally (on the
1510 target system).
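
For example, on the machine that is to collect the traces, the relay
daemon may simply be started in the foreground with its default
listening ports:

[role="term"]
----
lttng-relayd
----

See man:lttng-relayd(8) for the options used to specify custom control
and data ports.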
1511
While two instances of consumer daemons (32-bit and 64-bit) may run
concurrently for a given user, `lttng-relayd` only needs to be of its
host operating system's bitness.
1515
The other important feature of LTTng's relay daemon is its support of
_LTTng live_. LTTng live is an application protocol to view events as
they arrive. The relay daemon still records events in trace files,
but a _tee_ makes it possible for a live viewer to inspect incoming events.
1520
1521 [role="img-100"]
1522 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
1523 image::lttng-live.png[]
1524
Using LTTng live locally thus requires running a local relay daemon.
1526
1527
1528 [[liblttng-ctl-lttng]]
1529 ==== [[lttng-cli]]Control library and command line interface
1530
1531 The LTTng control library, `liblttng-ctl`, can be used to communicate
1532 with the session daemon using a C API that hides the underlying
1533 protocol's details. `liblttng-ctl` is part of LTTng-tools.
1534
1535 `liblttng-ctl` may be used by including its "master" header:
1536
1537 [source,c]
1538 ----
1539 #include <lttng/lttng.h>
1540 ----
1541
Some objects are referred to by name (C string), such as tracing sessions,
but most of them require creating a handle first using
`lttng_create_handle()`. For the moment, the best available developer
documentation for `liblttng-ctl` is its installed header files, in which
every function and structure is thoroughly documented.
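
As a quick sketch of what using `liblttng-ctl` looks like, the following
program lists the current user's tracing sessions. It assumes the
`lttng_list_sessions()` function and the `name` and `path` members of
`struct lttng_session` (check the installed headers for the exact
signatures):

[source,c]
----
#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    int count, i;

    /* ask the session daemon for the list of tracing sessions */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        fprintf(stderr, "error: %s\n", lttng_strerror(count));
        return 1;
    }

    for (i = 0; i < count; i++) {
        printf("%s (%s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);

    return 0;
}
----

Such a program is linked with `-llttng-ctl`.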
1547
1548 The `lttng` program is the _de facto_ standard user interface to
1549 control LTTng tracing sessions. `lttng` uses `liblttng-ctl` to
1550 communicate with session daemons behind the scenes.
1551 Its man page, man:lttng(1), is exhaustive, as well as its command
1552 line help (+lttng _cmd_ --help+, where +_cmd_+ is the command name).
1553
1554 The <<controlling-tracing,Controlling tracing>> section is a feature
1555 tour of the `lttng` tool.
1556
1557
1558 [[lttng-ust]]
1559 ==== User space tracing library
1560
1561 The user space tracing part of LTTng is possible thanks to the user
1562 space tracing library, `liblttng-ust`, which is part of the LTTng-UST
1563 package.
1564
1565 `liblttng-ust` provides header files containing macros used to define
1566 tracepoints and create tracepoint providers, as well as a shared object
1567 that must be linked to individual applications to connect to and
1568 communicate with a session daemon and a consumer daemon as soon as the
1569 application starts.
1570
1571 The exact mechanism by which an application is registered to the
1572 session daemon is beyond the scope of this documentation. The only thing
1573 you need to know is that, since the library constructor does this job
1574 automatically, tracepoints may be safely inserted anywhere in the source
1575 code without prior manual initialization of `liblttng-ust`.
1576
1577 The `liblttng-ust`-session daemon collaboration also provides an
1578 interesting feature: user space events may be enabled _before_
1579 applications actually start. By doing this and starting tracing before
1580 launching the instrumented application, you make sure that even the
1581 earliest occurring events can be recorded.
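
For example, assuming an application instrumented with a hypothetical
`my_provider:my_first_tracepoint` tracepoint, the following sequence
(covered in <<controlling-tracing,Controlling tracing>>) records its
very first events:

[role="term"]
----
lttng create
lttng enable-event --userspace my_provider:my_first_tracepoint
lttng start
./app
----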
1582
1583 The <<c-application,C application>> instrumenting guide of the
1584 <<using-lttng,Using LTTng>> chapter focuses on using `liblttng-ust`:
1585 instrumenting, building/linking and running a user application.
1586
1587
1588 [[lttng-modules]]
1589 ==== LTTng kernel modules
1590
1591 The LTTng Linux kernel modules provide everything needed to trace the
1592 Linux kernel: various probes, a ring buffer implementation for a
1593 consumer daemon to read trace data and the tracer itself.
1594
1595 Only in exceptional circumstances should you ever need to load the
LTTng kernel modules manually: it is normally the responsibility of
1597 `root`'s session daemon to do so. Even if you were to develop your
1598 own LTTng probe module--for tracing a custom kernel or some kernel
1599 module (this topic is covered in the
1600 <<instrumenting-linux-kernel,Linux kernel>> instrumenting guide of
1601 the <<using-lttng,Using LTTng>> chapter)&#8212;you
1602 should use the `--extra-kmod-probes` option of the session daemon to
1603 append your probe to the default list. The session and consumer daemons
1604 of regular users do not interact with the LTTng kernel modules at all.
1605
1606 LTTng kernel modules are installed, by default, in
1607 +/usr/lib/modules/_release_/extra+, where +_release_+ is the
1608 kernel release (see `uname --kernel-release`).
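
For example, assuming the default installation path mentioned above, you
can list the LTTng modules installed for the currently running kernel
with:

[role="term"]
----
ls /usr/lib/modules/$(uname --kernel-release)/extra
----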
1609
1610
1611 [[using-lttng]]
1612 == Using LTTng
1613
1614 Using LTTng involves two main activities: **instrumenting** and
1615 **controlling tracing**.
1616
1617 _<<instrumenting,Instrumenting>>_ is the process of inserting probes
1618 into some source code. It can be done manually, by writing tracepoint
1619 calls at specific locations in the source code of the program to trace,
1620 or more automatically using dynamic probes (address in assembled code,
1621 symbol name, function entry/return, and others).
1622
1623 It has to be noted that, as an LTTng user, you may not have to worry
1624 about the instrumentation process. Indeed, you may want to trace a
1625 program already instrumented. As an example, the Linux kernel is
1626 thoroughly instrumented, which is why you can trace it without caring
1627 about adding probes.
1628
1629 _<<controlling-tracing,Controlling tracing>>_ is everything
1630 that can be done by the LTTng session daemon, which is controlled using
1631 `liblttng-ctl` or its command line utility, `lttng`: creating tracing
1632 sessions, listing tracing sessions and events, enabling/disabling
1633 events, starting/stopping the tracers, taking snapshots, amongst many
1634 other commands.
1635
1636 This chapter is a complete user guide of both activities,
1637 with common use cases of LTTng exposed throughout the text. It is
1638 assumed that you are familiar with LTTng's concepts (events, channels,
1639 domains, tracing sessions) and that you understand the roles of its
1640 components (daemons, libraries, command line tools); if not, we invite
1641 you to read the <<understanding-lttng,Understanding LTTng>> chapter
1642 before you begin reading this one.
1643
1644 If you're new to LTTng, we suggest that you rather start with the
1645 <<getting-started,Getting started>> small guide first, then come
1646 back here to broaden your knowledge.
1647
1648 If you're only interested in tracing the Linux kernel with its current
1649 instrumentation, you may skip the
1650 <<instrumenting,Instrumenting>> section.
1651
1652
1653 [[instrumenting]]
1654 === Instrumenting
1655
1656 There are many examples of tracing and monitoring in our everyday life.
1657 You have access to real-time and historical weather reports and forecasts
1658 thanks to weather stations installed around the country. You know your
1659 possibly hospitalized friends' and family's hearts are safe thanks to
1660 electrocardiography. You make sure not to drive your car too fast
1661 and have enough fuel to reach your destination thanks to gauges visible
1662 on your dashboard.
1663
1664 All the previous examples have something in common: they rely on
1665 **probes**. Without electrodes attached to the surface of a body's
1666 skin, cardiac monitoring would be futile.
1667
LTTng, as a tracer, is no different from the real life examples above.
If you're about to trace a software system, that is, record its history
of execution, you need probes in the subject you're tracing: the actual
software. Various ways have been developed to do this. The most
straightforward one is to manually place probes, called _tracepoints_,
in the software's source code. The Linux kernel tracing domain also
allows probes to be added dynamically.
1675
1676 If you're only interested in tracing the Linux kernel, it may very well
1677 be that your tracing needs are already appropriately covered by LTTng's
1678 built-in Linux kernel tracepoints and other probes. Or you may be in
1679 possession of a user space application which has already been
1680 instrumented. In such cases, the work resides entirely in the design
1681 and execution of tracing sessions, allowing you to jump to
1682 <<controlling-tracing,Controlling tracing>> right now.
1683
1684 This chapter focuses on the following use cases of instrumentation:
1685
1686 * <<c-application,C>> and <<cxx-application,$$C++$$>> applications
1687 * <<prebuilt-ust-helpers,prebuilt user space tracing helpers>>
1688 * <<java-application,Java application>>
1689 * <<instrumenting-linux-kernel,Linux kernel>> module or the
1690 kernel itself
1691 * the <<proc-lttng-logger-abi,path:{/proc/lttng-logger} ABI>>
1692
1693 Some advanced techniques are also presented at the very end of this
1694 chapter.
1695
1696
1697 [[c-application]]
1698 ==== C application
1699
1700 Instrumenting a C (or $$C++$$) application, be it an executable program
1701 or a library, implies using LTTng-UST, the
1702 user space tracing component of LTTng. For C/$$C++$$ applications, the
1703 LTTng-UST package includes a dynamically loaded library
1704 (`liblttng-ust`), C headers and the `lttng-gen-tp` command line utility.
1705
Since C and $$C++$$ are the languages in which the runtimes of virtually
all other programming languages are implemented (the Java virtual machine
and the Python, Perl, PHP and Node.js interpreters, to name a few),
implementing user space tracing for an unsupported language is just a
matter of using the LTTng-UST C API at the right places.
1711
1712 The usual work flow to instrument a user space C application with
1713 LTTng-UST is:
1714
1715 . Define tracepoints (actual probes)
1716 . Write tracepoint providers
1717 . Insert tracepoints into target source code
1718 . Package (build) tracepoint providers
1719 . Build user application and link it with tracepoint providers
1720
1721 The steps above are discussed in greater detail in the following
1722 subsections.
1723
1724
1725 [[tracepoint-provider]]
1726 ===== Tracepoint provider
1727
1728 Before jumping into defining tracepoints and inserting
1729 them into the application source code, you must understand what a
1730 _tracepoint provider_ is.
1731
1732 For the sake of this guide, consider the following two files:
1733
1734 [source,c]
1735 .path:{tp.h}
1736 ----
1737 #undef TRACEPOINT_PROVIDER
1738 #define TRACEPOINT_PROVIDER my_provider
1739
1740 #undef TRACEPOINT_INCLUDE
1741 #define TRACEPOINT_INCLUDE "./tp.h"
1742
1743 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1744 #define _TP_H
1745
1746 #include <lttng/tracepoint.h>
1747
1748 TRACEPOINT_EVENT(
1749 my_provider,
1750 my_first_tracepoint,
1751 TP_ARGS(
1752 int, my_integer_arg,
1753 char*, my_string_arg
1754 ),
1755 TP_FIELDS(
1756 ctf_string(my_string_field, my_string_arg)
1757 ctf_integer(int, my_integer_field, my_integer_arg)
1758 )
1759 )
1760
1761 TRACEPOINT_EVENT(
1762 my_provider,
1763 my_other_tracepoint,
1764 TP_ARGS(
1765 int, my_int
1766 ),
1767 TP_FIELDS(
1768 ctf_integer(int, some_field, my_int)
1769 )
1770 )
1771
1772 #endif /* _TP_H */
1773
1774 #include <lttng/tracepoint-event.h>
1775 ----
1776
1777 [source,c]
1778 .path:{tp.c}
1779 ----
1780 #define TRACEPOINT_CREATE_PROBES
1781
1782 #include "tp.h"
1783 ----
1784
The two files above define a _tracepoint provider_. A tracepoint
provider is some sort of namespace for _tracepoint definitions_. Tracepoint
definitions are written above with the `TRACEPOINT_EVENT()` macro; they
allow `tracepoint()` calls respecting those definitions to be inserted
into the user application's C source code (we explore this in a
later section).
1791
1792 Many tracepoint definitions may be part of the same tracepoint provider
1793 and many tracepoint providers may coexist in a user space application. A
1794 tracepoint provider is packaged either:
1795
1796 * directly into an existing user application's C source file
1797 * as an object file
1798 * as a static library
1799 * as a shared library
1800
1801 The two files above, path:{tp.h} and path:{tp.c}, show a typical template for
1802 writing a tracepoint provider. LTTng-UST was designed so that two
1803 tracepoint providers should not be defined in the same header file.
1804
We will now go through the various parts of the above files and
explain what they mean. As you may have noticed, the LTTng-UST API for
1807 C/$$C++$$ applications is some preprocessor sorcery. The LTTng-UST macros
1808 used in your application and those in the LTTng-UST headers are
1809 combined to produce actual source code needed to make tracing possible
1810 using LTTng.
1811
1812 Let's start with the header file, path:{tp.h}. It begins with
1813
1814 [source,c]
1815 ----
1816 #undef TRACEPOINT_PROVIDER
1817 #define TRACEPOINT_PROVIDER my_provider
1818 ----
1819
1820 `TRACEPOINT_PROVIDER` defines the name of the provider to which the
1821 following tracepoint definitions belong. It is used internally by
1822 LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
1823 could have been defined by another header file also included by the same
1824 C source file, the best practice is to undefine it first.
1825
1826 NOTE: Names in LTTng-UST follow the C
1827 _identifier_ syntax (starting with a letter and containing either
1828 letters, numbers or underscores); they are _not_ C strings
1829 (not surrounded by double quotes). This is because LTTng-UST macros
1830 use those identifier-like strings to create symbols (named types and
1831 variables).
1832
1833 The tracepoint provider is a group of tracepoint definitions; its chosen
1834 name should reflect this. A hierarchy like Java packages is recommended,
1835 using underscores instead of dots, for example,
1836 `org_company_project_component`.
1837
1838 Next is `TRACEPOINT_INCLUDE`:
1839
1840 [source,c]
1841 ----
1842 #undef TRACEPOINT_INCLUDE
1843 #define TRACEPOINT_INCLUDE "./tp.h"
1844 ----
1845
This little bit of introspection is needed by LTTng-UST to include
1847 your header at various predefined places.
1848
1849 Include guard follows:
1850
1851 [source,c]
1852 ----
1853 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1854 #define _TP_H
1855 ----
1856
Add these preprocessor conditionals to ensure the tracepoint event
generation code can include this file more than once.
1859
The `TRACEPOINT_EVENT()` macro is defined in an LTTng-UST header file which
1861 must be included:
1862
1863 [source,c]
1864 ----
1865 #include <lttng/tracepoint.h>
1866 ----
1867
1868 This also allows the application to use the `tracepoint()` macro.
1869
1870 Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
1871 actual tracepoint definitions. We skip this for the moment and
1872 come back to how to use `TRACEPOINT_EVENT()`
1873 <<defining-tracepoints,in a later section>>. Just pay attention to
1874 the first argument: it's always the name of the tracepoint provider
1875 being defined in this header file.
1876
1877 End of include guard:
1878
1879 [source,c]
1880 ----
1881 #endif /* _TP_H */
1882 ----
1883
1884 Finally, include `<lttng/tracepoint-event.h>` to expand the macros:
1885
1886 [source,c]
1887 ----
1888 #include <lttng/tracepoint-event.h>
1889 ----
1890
1891 That's it for path:{tp.h}. Of course, this is only a header file; it must be
1892 included in some C source file to actually use it. This is the job of
1893 path:{tp.c}:
1894
1895 [source,c]
1896 ----
1897 #define TRACEPOINT_CREATE_PROBES
1898
1899 #include "tp.h"
1900 ----
1901
1902 When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in path:{tp.h},
1903 which is included just after, actually create the source code for
1904 LTTng-UST probes (global data structures and functions) out of your
1905 tracepoint definitions. How exactly this is done is out of this text's scope.
1906 `TRACEPOINT_CREATE_PROBES` is discussed further
1907 in
1908 <<building-tracepoint-providers-and-user-application,Building/linking
1909 tracepoint providers and the user application>>.
1910
1911 You could include other header files like path:{tp.h} here to create the probes
1912 of different tracepoint providers, for example:
1913
1914 [source,c]
1915 ----
1916 #define TRACEPOINT_CREATE_PROBES
1917
1918 #include "tp1.h"
1919 #include "tp2.h"
1920 ----
1921
1922 The rule is: probes of a given tracepoint provider
1923 must be created in exactly one source file. This source file could be one
1924 of your project's; it doesn't have to be on its own like
1925 path:{tp.c}, although
1926 <<building-tracepoint-providers-and-user-application,a later section>>
shows that doing so allows you to package the tracepoint providers
independently, keep them out of your application, and reuse them
between projects.
1930
1931 The following sections explain how to define tracepoints, how to use the
1932 `tracepoint()` macro to instrument your user space C application and how
1933 to build/link tracepoint providers and your application with LTTng-UST
1934 support.
1935
1936
1937 [[lttng-gen-tp]]
1938 ===== Using `lttng-gen-tp`
1939
1940 LTTng-UST ships with `lttng-gen-tp`, a handy command line utility for
1941 generating most of the stuff discussed above. It takes a _template file_,
1942 with a name usually ending with the `.tp` extension, containing only
1943 tracepoint definitions, and outputs a tracepoint provider (either a C
1944 source file or a precompiled object file) with its header file.
1945
1946 `lttng-gen-tp` should suffice in <<static-linking,static linking>>
1947 situations. When using it, write a template file containing a list of
1948 `TRACEPOINT_EVENT()` macro calls. The tool finds the provider names
used and generates the appropriate files, which are going to look a lot
1950 like path:{tp.h} and path:{tp.c} above.
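
For example, a minimal path:{my-template.tp} file could contain nothing
more than the tracepoint definition shown earlier, without any of the
surrounding boilerplate:

[source,c]
.path:{my-template.tp}
----
TRACEPOINT_EVENT(
    my_provider,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)
----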
1951
1952 Just call `lttng-gen-tp` like this:
1953
1954 [role="term"]
1955 ----
1956 lttng-gen-tp my-template.tp
1957 ----
1958
1959 path:{my-template.c}, path:{my-template.o} and path:{my-template.h}
1960 are created in the same directory.
1961
1962 You may specify custom C flags passed to the compiler invoked by
1963 `lttng-gen-tp` using the `CFLAGS` environment variable:
1964
1965 [role="term"]
1966 ----
1967 CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp
1968 ----
1969
1970 For more information on `lttng-gen-tp`, see man:lttng-gen-tp(1).
1971
1972
1973 [[defining-tracepoints]]
1974 ===== Defining tracepoints
1975
1976 As written in <<tracepoint-provider,Tracepoint provider>>,
1977 tracepoints are defined using the
1978 `TRACEPOINT_EVENT()` macro. Each tracepoint, when called using the
1979 `tracepoint()` macro in the actual application's source code, generates
1980 a specific event type with its own fields.
1981
1982 Let's have another look at the example above, with a few added comments:
1983
1984 [source,c]
1985 ----
1986 TRACEPOINT_EVENT(
1987 /* tracepoint provider name */
1988 my_provider,
1989
1990 /* tracepoint/event name */
1991 my_first_tracepoint,
1992
1993 /* list of tracepoint arguments */
1994 TP_ARGS(
1995 int, my_integer_arg,
1996 char*, my_string_arg
1997 ),
1998
1999 /* list of fields of eventual event */
2000 TP_FIELDS(
2001 ctf_string(my_string_field, my_string_arg)
2002 ctf_integer(int, my_integer_field, my_integer_arg)
2003 )
2004 )
2005 ----
2006
2007 The tracepoint provider name must match the name of the tracepoint
2008 provider in which this tracepoint is defined
2009 (see <<tracepoint-provider,Tracepoint provider>>). In other words,
2010 always use the same string as the value of `TRACEPOINT_PROVIDER` above.
2011
2012 The tracepoint name becomes the event name once events are recorded
2013 by the LTTng-UST tracer. It must follow the tracepoint provider name
2014 syntax: start with a letter and contain either letters, numbers or
2015 underscores. Two tracepoints under the same provider cannot have the
2016 same name. In other words, you cannot overload a tracepoint like you
2017 would overload functions and methods in $$C++$$/Java.
2018
2019 NOTE: The concatenation of the tracepoint
2020 provider name and the tracepoint name cannot exceed 254 characters. If
2021 it does, the instrumented application compiles and runs, but LTTng
2022 issues multiple warnings and you could experience serious problems.
2023
2024 The list of tracepoint arguments gives this tracepoint its signature:
2025 see it like the declaration of a C function. The format of `TP_ARGS()`
2026 arguments is: C type, then argument name; repeat as needed, up to ten
2027 times. For example, if we were to replicate the signature of C standard
2028 library's `fseek()`, the `TP_ARGS()` part would look like:
2029
2030 [source,c]
2031 ----
2032 TP_ARGS(
2033 FILE*, stream,
2034 long int, offset,
2035 int, origin
2036 ),
2037 ----
2038
2039 Of course, you need to include appropriate header files before
2040 the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.
2041
2042 `TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
2043 also accepted.
2044
2045 The list of fields is where the fun really begins. The fields defined
2046 in this list are the fields of the events generated by the execution
2047 of this tracepoint. Each tracepoint field definition has a C
2048 _argument expression_ which is evaluated when the execution reaches
2049 the tracepoint. Tracepoint arguments _may be_ used freely in those
2050 argument expressions, but they _don't_ have to.
2051
2052 There are several types of tracepoint fields available. The macros to
2053 define them are given and explained in the
2054 <<liblttng-ust-tp-fields,LTTng-UST library reference>> section.
2055
2056 Field names must follow the standard C identifier syntax: letter, then
2057 optional sequence of letters, numbers or underscores. Each field must have
2058 a different name.
2059
2060 Those `ctf_*()` macros are added to the `TP_FIELDS()` part of
2061 `TRACEPOINT_EVENT()`. Note that they are not delimited by commas.
2062 `TP_FIELDS()` may be empty, but the `TP_FIELDS(void)` form is _not_
2063 accepted.
2064
2065 The following snippet shows how argument expressions may be used in
2066 tracepoint fields and how they may refer freely to tracepoint arguments.
2067
2068 [source,c]
2069 ----
2070 /* for struct stat */
2071 #include <sys/types.h>
2072 #include <sys/stat.h>
2073 #include <unistd.h>
2074
2075 TRACEPOINT_EVENT(
2076 my_provider,
2077 my_tracepoint,
2078 TP_ARGS(
2079 int, my_int_arg,
2080 char*, my_str_arg,
2081 struct stat*, st
2082 ),
2083 TP_FIELDS(
2084 /* simple integer field with constant value */
2085 ctf_integer(
2086 int, /* field C type */
2087 my_constant_field, /* field name */
2088 23 + 17 /* argument expression */
2089 )
2090
2091 /* my_int_arg tracepoint argument */
2092 ctf_integer(
2093 int,
2094 my_int_arg_field,
2095 my_int_arg
2096 )
2097
2098 /* my_int_arg squared */
2099 ctf_integer(
2100 int,
2101 my_int_arg_field2,
2102 my_int_arg * my_int_arg
2103 )
2104
2105 /* sum of first 4 characters of my_str_arg */
2106 ctf_integer(
2107 int,
2108 sum4,
2109 my_str_arg[0] + my_str_arg[1] +
2110 my_str_arg[2] + my_str_arg[3]
2111 )
2112
2113 /* my_str_arg as string field */
2114 ctf_string(
2115 my_str_arg_field, /* field name */
2116 my_str_arg /* argument expression */
2117 )
2118
2119 /* st_size member of st tracepoint argument, hexadecimal */
2120 ctf_integer_hex(
2121 off_t, /* field C type */
2122 size_field, /* field name */
2123 st->st_size /* argument expression */
2124 )
2125
2126 /* st_size member of st tracepoint argument, as double */
2127 ctf_float(
2128 double, /* field C type */
2129 size_dbl_field, /* field name */
2130 (double) st->st_size /* argument expression */
2131 )
2132
2133 /* half of my_str_arg string as text sequence */
2134 ctf_sequence_text(
2135 char, /* element C type */
2136 half_my_str_arg_field, /* field name */
2137 my_str_arg, /* argument expression */
2138 size_t, /* length expression C type */
2139 strlen(my_str_arg) / 2 /* length expression */
2140 )
2141 )
2142 )
2143 ----
2144
2145 As you can see, having a custom argument expression for each field
2146 makes tracepoints very flexible for tracing a user space C application.
2147 This tracepoint definition is reused later in this guide, when
2148 actually using tracepoints in a user space application.
2149
2150
2151 [[using-tracepoint-classes]]
2152 ===== Using tracepoint classes
2153
2154 In LTTng-UST, a _tracepoint class_ is a class of tracepoints sharing the
2155 same field types and names. A _tracepoint instance_ is one instance of
2156 such a declared tracepoint class, with its own event name and tracepoint
2157 provider name.
2158
2159 What is documented in <<defining-tracepoints,Defining tracepoints>>
2160 is actually how to declare a _tracepoint class_ and define a
2161 _tracepoint instance_ at the same time. Without revealing the internals
2162 of LTTng-UST too much, it has to be noted that one serialization
2163 function is created for each tracepoint class. A serialization
2164 function is responsible for serializing the fields of a tracepoint
2165 into a sub-buffer when tracing. For various performance reasons, when
2166 your situation requires multiple tracepoints with different names, but
2167 with the same fields layout, the best practice is to manually create
2168 a tracepoint class and instantiate as many tracepoint instances as
2169 needed. One positive effect of such a design, amongst other advantages,
2170 is that all tracepoint instances of the same tracepoint class
2171 reuse the same serialization function, thus reducing cache pollution.
2172
2173 As an example, here are three tracepoint definitions as we know them:
2174
2175 [source,c]
2176 ----
2177 TRACEPOINT_EVENT(
2178 my_app,
2179 get_account,
2180 TP_ARGS(
2181 int, userid,
2182 size_t, len
2183 ),
2184 TP_FIELDS(
2185 ctf_integer(int, userid, userid)
2186 ctf_integer(size_t, len, len)
2187 )
2188 )
2189
2190 TRACEPOINT_EVENT(
2191 my_app,
2192 get_settings,
2193 TP_ARGS(
2194 int, userid,
2195 size_t, len
2196 ),
2197 TP_FIELDS(
2198 ctf_integer(int, userid, userid)
2199 ctf_integer(size_t, len, len)
2200 )
2201 )
2202
2203 TRACEPOINT_EVENT(
2204 my_app,
2205 get_transaction,
2206 TP_ARGS(
2207 int, userid,
2208 size_t, len
2209 ),
2210 TP_FIELDS(
2211 ctf_integer(int, userid, userid)
2212 ctf_integer(size_t, len, len)
2213 )
2214 )
2215 ----
2216
2217 In this case, three tracepoint classes are created, with one tracepoint
2218 instance for each of them: `get_account`, `get_settings` and
2219 `get_transaction`. However, they all share the same field names and
2220 types. Declaring one tracepoint class and three tracepoint instances of
2221 the latter is a better design choice:
2222
2223 [source,c]
2224 ----
2225 /* the tracepoint class */
2226 TRACEPOINT_EVENT_CLASS(
2227 /* tracepoint provider name */
2228 my_app,
2229
2230 /* tracepoint class name */
2231 my_class,
2232
2233 /* arguments */
2234 TP_ARGS(
2235 int, userid,
2236 size_t, len
2237 ),
2238
2239 /* fields */
2240 TP_FIELDS(
2241 ctf_integer(int, userid, userid)
2242 ctf_integer(size_t, len, len)
2243 )
2244 )
2245
2246 /* the tracepoint instances */
2247 TRACEPOINT_EVENT_INSTANCE(
2248 /* tracepoint provider name */
2249 my_app,
2250
2251 /* tracepoint class name */
2252 my_class,
2253
2254 /* tracepoint/event name */
2255 get_account,
2256
2257 /* arguments */
2258 TP_ARGS(
2259 int, userid,
2260 size_t, len
2261 )
2262 )
2263 TRACEPOINT_EVENT_INSTANCE(
2264 my_app,
2265 my_class,
2266 get_settings,
2267 TP_ARGS(
2268 int, userid,
2269 size_t, len
2270 )
2271 )
2272 TRACEPOINT_EVENT_INSTANCE(
2273 my_app,
2274 my_class,
2275 get_transaction,
2276 TP_ARGS(
2277 int, userid,
2278 size_t, len
2279 )
2280 )
2281 ----
2282
2283 Of course, all those names and `TP_ARGS()` invocations are redundant,
2284 but some C preprocessor magic can solve this:
2285
2286 [source,c]
2287 ----
2288 #define MY_TRACEPOINT_ARGS \
2289 TP_ARGS( \
2290 int, userid, \
2291 size_t, len \
2292 )
2293
2294 TRACEPOINT_EVENT_CLASS(
2295 my_app,
2296 my_class,
2297 MY_TRACEPOINT_ARGS,
2298 TP_FIELDS(
2299 ctf_integer(int, userid, userid)
2300 ctf_integer(size_t, len, len)
2301 )
2302 )
2303
2304 #define MY_APP_TRACEPOINT_INSTANCE(name) \
2305 TRACEPOINT_EVENT_INSTANCE( \
2306 my_app, \
2307 my_class, \
2308 name, \
2309 MY_TRACEPOINT_ARGS \
2310 )
2311
2312 MY_APP_TRACEPOINT_INSTANCE(get_account)
2313 MY_APP_TRACEPOINT_INSTANCE(get_settings)
2314 MY_APP_TRACEPOINT_INSTANCE(get_transaction)
2315 ----
2316
2317
2318 [[assigning-log-levels]]
2319 ===== Assigning log levels to tracepoints
2320
2321 Optionally, a log level can be assigned to a defined tracepoint.
2322 Assigning different levels of importance to tracepoints can be useful;
2323 when controlling tracing sessions,
2324 <<controlling-tracing,you can choose>> to only enable tracepoints
2325 falling into a specific log level range.
2326
2327 Log levels are assigned to defined tracepoints using the
2328 `TRACEPOINT_LOGLEVEL()` macro. The latter must be used _after_ having
2329 used `TRACEPOINT_EVENT()` for a given tracepoint. The
2330 `TRACEPOINT_LOGLEVEL()` macro has the following construct:
2331
2332 [source,c]
2333 ----
2334 TRACEPOINT_LOGLEVEL(PROVIDER_NAME, TRACEPOINT_NAME, LOG_LEVEL)
2335 ----
2336
2337 where the first two arguments are the same as the first two arguments
2338 of `TRACEPOINT_EVENT()` and `LOG_LEVEL` is one
2339 of the values given in the
2340 <<liblttng-ust-tracepoint-loglevel,LTTng-UST library reference>>
2341 section.
2342
2343 As an example, let's assign a `TRACE_DEBUG_UNIT` log level to our
2344 previous tracepoint definition:
2345
2346 [source,c]
2347 ----
2348 TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
2349 ----
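
When <<controlling-tracing,controlling tracing>>, you could then enable
only the `my_provider` events with a log level at least as severe as
`TRACE_DEBUG_UNIT`, for example (a sketch of a command covered later):

[role="term"]
----
lttng enable-event --userspace 'my_provider:*' --loglevel TRACE_DEBUG_UNIT
----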
2350
2351
2352 [[probing-the-application-source-code]]
2353 ===== Probing the application's source code
2354
2355 Once tracepoints are properly defined within a tracepoint provider,
2356 they may be inserted into the user application to be instrumented
2357 using the `tracepoint()` macro. Its first argument is the tracepoint
2358 provider name and its second is the tracepoint name. The next, optional
2359 arguments are defined by the `TP_ARGS()` part of the definition of
2360 the tracepoint to use.
2361
2362 As an example, let us again take the following tracepoint definition:
2363
2364 [source,c]
2365 ----
2366 TRACEPOINT_EVENT(
2367 /* tracepoint provider name */
2368 my_provider,
2369
2370 /* tracepoint/event name */
2371 my_first_tracepoint,
2372
2373 /* list of tracepoint arguments */
2374 TP_ARGS(
2375 int, my_integer_arg,
2376 char*, my_string_arg
2377 ),
2378
2379 /* list of fields of eventual event */
2380 TP_FIELDS(
2381 ctf_string(my_string_field, my_string_arg)
2382 ctf_integer(int, my_integer_field, my_integer_arg)
2383 )
2384 )
2385 ----
2386
2387 Assuming this is part of a file named path:{tp.h} which defines the tracepoint
2388 provider and which is included by path:{tp.c}, here's a complete C application
2389 calling this tracepoint (multiple times):
2390
2391 [source,c]
2392 ----
2393 #define TRACEPOINT_DEFINE
2394 #include "tp.h"
2395
2396 int main(int argc, char* argv[])
2397 {
2398 int i;
2399
2400 tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");
2401
2402 for (i = 0; i < argc; ++i) {
2403 tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
2404 }
2405
2406 return 0;
2407 }
2408 ----
2409
For each tracepoint provider, `TRACEPOINT_DEFINE` must be defined in
2411 exactly one translation unit (C source file) of the user application,
2412 before including the tracepoint provider header file. In other words,
2413 for a given tracepoint provider, you cannot define `TRACEPOINT_DEFINE`,
2414 and then include its header file in two separate C source files of
2415 the same application. `TRACEPOINT_DEFINE` is discussed further in
2416 <<building-tracepoint-providers-and-user-application,Building/linking
2417 tracepoint providers and the user application>>.
2418
2419 As another example, remember this definition we wrote in a previous
2420 section (comments are stripped):
2421
2422 [source,c]
2423 ----
2424 /* for struct stat */
2425 #include <sys/types.h>
2426 #include <sys/stat.h>
2427 #include <unistd.h>
2428
2429 TRACEPOINT_EVENT(
2430 my_provider,
2431 my_tracepoint,
2432 TP_ARGS(
2433 int, my_int_arg,
2434 char*, my_str_arg,
2435 struct stat*, st
2436 ),
2437 TP_FIELDS(
2438 ctf_integer(int, my_constant_field, 23 + 17)
2439 ctf_integer(int, my_int_arg_field, my_int_arg)
2440 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2441 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2442 my_str_arg[2] + my_str_arg[3])
2443 ctf_string(my_str_arg_field, my_str_arg)
2444 ctf_integer_hex(off_t, size_field, st->st_size)
2445 ctf_float(double, size_dbl_field, (double) st->st_size)
2446 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2447 size_t, strlen(my_str_arg) / 2)
2448 )
2449 )
2450 ----
2451
2452 Here's an example of calling it:
2453
2454 [source,c]
2455 ----
2456 #define TRACEPOINT_DEFINE
2457 #include "tp.h"
2458
2459 int main(void)
2460 {
2461 struct stat s;
2462
2463 stat("/etc/fstab", &s);
2464
2465 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2466
2467 return 0;
2468 }
2469 ----
2470
2471 When viewing the trace, assuming the file size of path:{/etc/fstab} is
2472 301{nbsp}bytes, the event generated by the execution of this tracepoint
2473 should have the following fields, in this order:
2474
2475 ----
2476 my_constant_field 40
2477 my_int_arg_field 23
2478 my_int_arg_field2 529
2479 sum4_field 389
2480 my_str_arg_field "Hello, World!"
2481 size_field 0x12d
2482 size_dbl_field 301.0
2483 half_my_str_arg_field "Hello,"
2484 ----
2485
2486
2487 [[building-tracepoint-providers-and-user-application]]
2488 ===== Building/linking tracepoint providers and the user application
2489
2490 The final step of using LTTng-UST for tracing a user space C application
2491 (beside running the application) is building and linking tracepoint
2492 providers and the application itself.
2493
2494 As discussed above, the macros used by the user-written tracepoint provider
2495 header file are useless until actually used to create probes code
2496 (global data structures and functions) in a translation unit (C source file).
2497 This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
2498 unit and then including the tracepoint provider header file.
2499 When `TRACEPOINT_CREATE_PROBES` is defined, macros used and included by
2500 the tracepoint provider header produce actual source code needed by any
application using the defined tracepoints. Defining
`TRACEPOINT_CREATE_PROBES` produces the code used to register the
tracepoint providers when the tracepoint provider package is loaded.
2504
2505 The other important definition is `TRACEPOINT_DEFINE`. This one creates
2506 global, per-tracepoint structures referencing the tracepoint providers
2507 data. Those structures are required by the actual functions inserted
2508 where `tracepoint()` macros are placed and need to be defined by the
2509 instrumented application.
2510
2511 Both `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` need to be defined
2512 at some places in order to trace a user space C application using LTTng.
2513 Although explaining their exact mechanism is beyond the scope of this
document, the reason they both exist separately is to allow the tracepoint
providers to be packaged as a shared object (dynamically loaded library).
2516
2517 There are two ways to compile and link the tracepoint providers
2518 with the application: _<<static-linking,statically>>_ or
2519 _<<dynamic-linking,dynamically>>_. Both methods are covered in the
2520 following subsections.
2521
2522
2523 [[static-linking]]
2524 ===== Static linking the tracepoint providers to the application
2525
2526 With the static linking method, compiled tracepoint providers are copied
2527 into the target application. There are three ways to do this:
2528
2529 . Use one of your **existing C source files** to create probes.
2530 . Create probes in a separate C source file and build it as an
2531 **object file** to be linked with the application (more decoupled).
2532 . Create probes in a separate C source file, build it as an
2533 object file and archive it to create a **static library**
2534 (more decoupled, more portable).
2535
2536 The first approach is to define `TRACEPOINT_CREATE_PROBES` and include
2537 your tracepoint provider(s) header file(s) directly into an existing C
2538 source file. Here's an example:
2539
2540 [source,c]
2541 ----
2542 #include <stdlib.h>
2543 #include <stdio.h>
2544 /* ... */
2545
2546 #define TRACEPOINT_CREATE_PROBES
2547 #define TRACEPOINT_DEFINE
2548 #include "tp.h"
2549
2550 /* ... */
2551
2552 int my_func(int a, const char* b)
2553 {
2554 /* ... */
2555
tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt);
2557
2558 /* ... */
2559 }
2560
2561 /* ... */
2562 ----
2563
2564 Again, before including a given tracepoint provider header file,
2565 `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` must be defined in
2566 one, **and only one**, translation unit. Other C source files of the
2567 same application may include path:{tp.h} to use tracepoints with
2568 the `tracepoint()` macro, but must not define
2569 `TRACEPOINT_CREATE_PROBES`/`TRACEPOINT_DEFINE` again.
2570
2571 This translation unit may be built as an object file by making sure to
2572 add `.` to the include path:
2573
2574 [role="term"]
2575 ----
2576 gcc -c -I. file.c
2577 ----
2578
2579 The second approach is to isolate the tracepoint provider code into a
2580 separate object file by using a dedicated C source file to create probes:
2581
2582 [source,c]
2583 ----
2584 #define TRACEPOINT_CREATE_PROBES
2585
2586 #include "tp.h"
2587 ----
2588
2589 `TRACEPOINT_DEFINE` must be defined by a translation unit of the
application. Since we're talking about static linking here, it could just
as well be defined directly in the file above, before `#include "tp.h"`:
2592
2593 [source,c]
2594 ----
2595 #define TRACEPOINT_CREATE_PROBES
2596 #define TRACEPOINT_DEFINE
2597
2598 #include "tp.h"
2599 ----
2600
2601 This is actually what <<lttng-gen-tp,`lttng-gen-tp`>> does, and is
2602 the recommended practice.
2603
2604 Build the tracepoint provider:
2605
2606 [role="term"]
2607 ----
2608 gcc -c -I. tp.c
2609 ----
2610
2611 Finally, the resulting object file may be archived to create a
2612 more portable tracepoint provider static library:
2613
2614 [role="term"]
2615 ----
2616 ar rc tp.a tp.o
2617 ----
2618
Using a static library does have the advantage of centralising the
tracepoint provider objects so they can be shared between multiple
applications. This way, when the tracepoint provider is modified, the
source code changes don't have to be patched into each application's source
code tree. The applications need to be relinked after each change, but need
not be otherwise recompiled (unless the tracepoint provider's API
changes).
2626
2627 Regardless of which method you choose, you end up with an object file
(potentially archived) containing the tracepoint providers' compiled code.
2629 To link this code with the rest of your application, you must also link
2630 with `liblttng-ust` and `libdl`:
2631
2632 [role="term"]
2633 ----
2634 gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl
2635 ----
2636
2637 or
2638
2639 [role="term"]
2640 ----
2641 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl
2642 ----
2643
2644 If you're using a BSD
2645 system, replace `-ldl` with `-lc`:
2646
2647 [role="term"]
2648 ----
2649 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc
2650 ----
2651
2652 The application can be started as usual, for example:
2653
2654 [role="term"]
2655 ----
2656 ./app
2657 ----
2658
2659 The `lttng` command line tool can be used to
2660 <<controlling-tracing,control tracing>>.
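
As a quick preview of that section, a complete round of tracing this
application could look like the following sketch, which assumes the
`my_provider` tracepoint provider used throughout this guide:

[role="term"]
----
lttng create my-session
lttng enable-event --userspace 'my_provider:*'
lttng start
./app
lttng stop
lttng view
lttng destroy
----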
2661
2662
2663 [[dynamic-linking]]
2664 ===== Dynamic linking the tracepoint providers to the application
2665
2666 The second approach to package the tracepoint providers is to use
2667 dynamic linking: the library and its member functions are explicitly
2668 sought, loaded and unloaded at runtime using `libdl`.
2669
It has to be noted that, for a variety of reasons, the created shared
library is dynamically _loaded_, as opposed to dynamically
_linked_. The tracepoint provider shared object is, however, linked
2673 with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
2674 as soon as the tracepoint provider is. If the tracepoint provider is
2675 not loaded, since the application itself is not linked with
2676 `liblttng-ust`, the latter is not loaded at all and the tracepoint calls
2677 become inert.
2678
2679 The process to create the tracepoint provider shared object is pretty
2680 much the same as the static library method, except that:
2681
2682 * since the tracepoint provider is not part of the application
2683 anymore, `TRACEPOINT_DEFINE` _must_ be defined, for each tracepoint
2684 provider, in exactly one translation unit (C source file) of the
2685 _application_;
2686 * `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` must be defined next to
2687 `TRACEPOINT_DEFINE`.
2688
2689 Regarding `TRACEPOINT_DEFINE` and `TRACEPOINT_PROBE_DYNAMIC_LINKAGE`,
2690 the recommended practice is to use a separate C source file in your
2691 application to define them, then include the tracepoint provider
2692 header files afterwards. For example:
2693
2694 [source,c]
2695 ----
2696 #define TRACEPOINT_DEFINE
2697 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2698
2699 /* include the header files of one or more tracepoint providers below */
2700 #include "tp1.h"
2701 #include "tp2.h"
2702 #include "tp3.h"
2703 ----
2704
2705 `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` makes the macros included afterwards
2706 (by including the tracepoint provider header, which itself includes
2707 LTTng-UST headers) aware that the tracepoint provider is to be loaded
2708 dynamically and not part of the application's executable.
2709
The tracepoint provider object file used to create the shared library
is built the same way as in the static library method, only with the
`-fpic` option added:
2713
2714 [role="term"]
2715 ----
2716 gcc -c -fpic -I. tp.c
2717 ----
2718
2719 It is then linked as a shared library like this:
2720
2721 [role="term"]
2722 ----
2723 gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o
2724 ----
2725
2726 As previously stated, this tracepoint provider shared object isn't
2727 linked with the user application: it's loaded manually. This is
2728 why the application is built with no mention of this tracepoint
2729 provider, but still needs `libdl`:
2730
2731 [role="term"]
2732 ----
2733 gcc -o app other.o files.o of.o your.o app.o -ldl
2734 ----
2735
2736 Now, to make LTTng-UST tracing available to the application, the
2737 `LD_PRELOAD` environment variable is used to preload the tracepoint
2738 provider shared library _before_ the application actually starts:
2739
2740 [role="term"]
2741 ----
2742 LD_PRELOAD=/path/to/tp.so ./app
2743 ----
2744
2745 [NOTE]
2746 ====
2747 It is not safe to use
2748 `dlclose()` on a tracepoint provider shared object that
2749 is being actively used for tracing, due to a lack of reference
2750 counting from LTTng-UST to the shared object.
2751
2752 For example, statically linking a tracepoint provider to a
2753 shared object which is to be dynamically loaded by an application
2754 (a plugin, for example) is not safe: the shared object, which
2755 contains the tracepoint provider, could be dynamically closed
2756 (`dlclose()`) at any time by the application.
2757
2758 To instrument a shared object, either:
2759
2760 * Statically link the tracepoint provider to the _application_, or
2761 * Build the tracepoint provider as a shared object (following
2762 the procedure shown in this section), and preload it when
2763 tracing is needed using the `LD_PRELOAD`
2764 environment variable.
2765 ====
2766
2767 Your application will still work without this preloading, albeit without
2768 LTTng-UST tracing support:
2769
2770 [role="term"]
2771 ----
2772 ./app
2773 ----
2774
2775
2776 [[using-lttng-ust-with-daemons]]
2777 ===== Using LTTng-UST with daemons
2778
2779 Some extra care is needed when using `liblttng-ust` with daemon
2780 applications that call `fork()`, `clone()` or BSD's `rfork()` without
2781 a following `exec()` family system call. The `liblttng-ust-fork`
2782 library must be preloaded for the application.
2783
2784 Example:
2785
2786 [role="term"]
2787 ----
2788 LD_PRELOAD=liblttng-ust-fork.so ./app
2789 ----
2790
2791 Or, if you're using a tracepoint provider shared library:
2792
2793 [role="term"]
2794 ----
2795 LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app
2796 ----
2797
2798
2799 [[lttng-ust-pkg-config]]
2800 ===== Using pkg-config
2801
2802 On some distributions, LTTng-UST is shipped with a pkg-config metadata
2803 file, so that you may use the `pkg-config` tool:
2804
2805 [role="term"]
2806 ----
2807 pkg-config --libs lttng-ust
2808 ----
2809
2810 This prints `-llttng-ust -ldl` on Linux systems.
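
For example, the `pkg-config` output may be embedded directly in a build
command (reusing the static linking example above):

[role="term"]
----
gcc -o app tp.o app.o $(pkg-config --cflags --libs lttng-ust)
----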
2811
2812 You may also check the LTTng-UST version using `pkg-config`:
2813
2814 [role="term"]
2815 ----
2816 pkg-config --modversion lttng-ust
2817 ----
2818
2819 For more information about pkg-config, see
2820 http://linux.die.net/man/1/pkg-config[its manpage].
2821
2822
2823 [role="since-2.5"]
2824 [[tracef]]
2825 ===== Using `tracef()`
2826
2827 `tracef()` is a small LTTng-UST API to avoid defining your own
2828 tracepoints and tracepoint providers. The signature of `tracef()` is
2829 the same as `printf()`'s.
2830
2831 The `tracef()` utility function was developed to make user space tracing
2832 super simple, albeit with notable disadvantages compared to custom,
2833 full-fledged tracepoint providers:
2834
2835 * All generated events have the same provider/event names, respectively
2836 `lttng_ust_tracef` and `event`.
2837 * There's no static type checking.
2838 * The only event field you actually get, named `msg`, is a string
2839 potentially containing the values you passed to the function
2840 using your own format. This also means that you cannot use filtering
2841 using a custom expression at runtime because there are no isolated
2842 fields.
2843 * Since `tracef()` uses C standard library's `vasprintf()` function
2844 in the background to format the strings at runtime, its
2845 expected performance is lower than using custom tracepoint providers
2846 with typed fields, which do not require a conversion to a string.
2847
2848 Thus, `tracef()` is useful for quick prototyping and debugging, but
2849 should not be considered for any permanent/serious application
2850 instrumentation.
2851
2852 To use `tracef()`, first include `<lttng/tracef.h>` in the C source file
2853 where you need to insert probes:
2854
2855 [source,c]
2856 ----
2857 #include <lttng/tracef.h>
2858 ----
2859
2860 Use `tracef()` like you would use `printf()` in your source code, for
2861 example:
2862
2863 [source,c]
2864 ----
2865 /* ... */
2866
2867 tracef("my message, my integer: %d", my_integer);
2868
2869 /* ... */
2870 ----
2871
2872 Link your application with `liblttng-ust`:
2873
2874 [role="term"]
2875 ----
2876 gcc -o app app.c -llttng-ust
2877 ----
2878
2879 Execute the application as usual:
2880
2881 [role="term"]
2882 ----
2883 ./app
2884 ----
2885
2886 Voilà! Use the `lttng` command line tool to
2887 <<controlling-tracing,control tracing>>. You can enable `tracef()`
2888 events like this:
2889
2890 [role="term"]
2891 ----
2892 lttng enable-event --userspace 'lttng_ust_tracef:*'
2893 ----
2894
2895
2896 [[lttng-ust-environment-variables-compiler-flags]]
2897 ===== LTTng-UST environment variables and special compilation flags
2898
2899 A few special environment variables and compile flags may affect the
2900 behavior of LTTng-UST.
2901
2902 LTTng-UST's debugging can be activated by setting the environment
2903 variable `LTTNG_UST_DEBUG` to `1` when launching the application. It
2904 can also be enabled at compile time by defining `LTTNG_UST_DEBUG` when
2905 compiling LTTng-UST (using the `-DLTTNG_UST_DEBUG` compiler option).
2906
2907 The environment variable `LTTNG_UST_REGISTER_TIMEOUT` can be used to
2908 specify how long the application should wait for the
2909 <<lttng-sessiond,session daemon>>'s _registration done_ command
2910 before proceeding to execute the main program. The timeout value is
2911 specified in milliseconds. 0 means _don't wait_. -1 means
2912 _wait forever_. Setting this environment variable to 0 is recommended
for applications with time constraints on the process startup time.
2914
2915 The default value of `LTTNG_UST_REGISTER_TIMEOUT` (when not defined)
2916 is **3000{nbsp}ms**.
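
For example, to run the instrumented application with LTTng-UST debugging
output enabled and without waiting for the session daemon at all:

[role="term"]
----
LTTNG_UST_DEBUG=1 LTTNG_UST_REGISTER_TIMEOUT=0 ./app
----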
2917
2918 The compilation definition `LTTNG_UST_DEBUG_VALGRIND` should be enabled
2919 at build time (`-DLTTNG_UST_DEBUG_VALGRIND`) to allow `liblttng-ust`
2920 to be used with http://valgrind.org/[Valgrind].
2921 The side effect of defining `LTTNG_UST_DEBUG_VALGRIND` is that per-CPU
2922 buffering is disabled.
2923
2924
2925 [[cxx-application]]
2926 ==== $$C++$$ application
2927
2928 Because of $$C++$$'s cross-compatibility with the C language, $$C++$$
2929 applications can be readily instrumented with the LTTng-UST C API.
2930
2931 Follow the <<c-application,C application>> user guide above. It
2932 should be noted that, in this case, tracepoint providers should have
2933 the typical `.cpp`, `.cxx` or `.cc` extension and be built with `g++`
2934 instead of `gcc`. This is the easiest way of avoiding linking errors
2935 due to symbol name mangling incompatibilities between both languages.
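
For example, following the <<static-linking,static linking>> method, the
build commands of the previous guide simply become (assuming the
tracepoint provider source file is named path:{tp.cpp}):

[role="term"]
----
g++ -c -I. tp.cpp
g++ -o app tp.o app.o -llttng-ust -ldl
----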
2936
2937
2938 [[prebuilt-ust-helpers]]
2939 ==== Prebuilt user space tracing helpers
2940
2941 The LTTng-UST package provides a few helpers that one may find
2942 useful in some situations. They all work the same way: you must
2943 preload the appropriate shared object before running the user
2944 application (using the `LD_PRELOAD` environment variable).
2945
2946 The shared objects are normally found in dir:{/usr/lib}.
2947
2948 The current installed helpers are:
2949
2950 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}::
2951 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
2952 and POSIX threads tracing>>.
2953
2954 path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}::
2955 <<liblttng-ust-cyg-profile,Function tracing>>.
2956
2957 path:{liblttng-ust-dl.so}::
2958 <<liblttng-ust-dl,Dynamic linker tracing>>.
2959
2960 The following subsections document what helpers instrument exactly
2961 and how to use them.
2962
2963
2964 [role="since-2.3"]
2965 [[liblttng-ust-libc-pthread-wrapper]]
2966 ===== C standard library and POSIX threads tracing
2967
2968 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}
can add instrumentation to some C standard library and POSIX threads
functions, respectively.
2971
2972 The following functions are traceable by path:{liblttng-ust-libc-wrapper.so}:
2973
2974 [role="growable"]
2975 .Functions instrumented by path:{liblttng-ust-libc-wrapper.so}
2976 |====
2977 |TP provider name |TP name |Instrumented function
2978
2979 .6+|`ust_libc` |`malloc` |`malloc()`
2980 |`calloc` |`calloc()`
2981 |`realloc` |`realloc()`
2982 |`free` |`free()`
2983 |`memalign` |`memalign()`
2984 |`posix_memalign` |`posix_memalign()`
2985 |====
2986
2987 The following functions are traceable by
2988 path:{liblttng-ust-pthread-wrapper.so}:
2989
2990 [role="growable"]
2991 .Functions instrumented by path:{liblttng-ust-pthread-wrapper.so}
2992 |====
2993 |TP provider name |TP name |Instrumented function
2994
2995 .4+|`ust_pthread` |`pthread_mutex_lock_req` |`pthread_mutex_lock()` (request time)
2996 |`pthread_mutex_lock_acq` |`pthread_mutex_lock()` (acquire time)
2997 |`pthread_mutex_trylock` |`pthread_mutex_trylock()`
2998 |`pthread_mutex_unlock` |`pthread_mutex_unlock()`
2999 |====
3000
3001 All tracepoints have fields corresponding to the arguments of the
3002 function they instrument.
3003
3004 To use one or the other with any user application, independently of
3005 how the latter is built, do:
3006
3007 [role="term"]
3008 ----
3009 LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
3010 ----
3011
3012 or
3013
3014 [role="term"]
3015 ----
3016 LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app
3017 ----
3018
3019 To use both, do:
3020
3021 [role="term"]
3022 ----
3023 LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app
3024 ----
3025
3026 When the shared object is preloaded, it effectively replaces the
3027 functions listed in the above tables by wrappers which add tracepoints
3028 and call the replaced functions.
3029
3030 Of course, like any other tracepoint, the ones above need to be enabled
3031 in order for LTTng-UST to generate events. This is done using the
3032 `lttng` command line tool
3033 (see <<controlling-tracing,Controlling tracing>>).
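
For example, the following commands enable all the events of both
wrappers' tracepoint providers:

[role="term"]
----
lttng enable-event --userspace 'ust_libc:*'
lttng enable-event --userspace 'ust_pthread:*'
----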
3034
3035
3036 [[liblttng-ust-cyg-profile]]
3037 ===== Function tracing
3038
3039 Function tracing is the recording of which functions are entered and
3040 left during the execution of an application. Like with any LTTng event,
3041 the precise time at which this happens is also kept.
3042
3043 GCC and clang have an option named
3044 https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/Code-Gen-Options.html[`-finstrument-functions`]
3045 which generates instrumentation calls for entry and exit to functions.
3046 The LTTng-UST function tracing helpers, path:{liblttng-ust-cyg-profile.so}
3047 and path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
3048 to add instrumentation to the two generated functions (which contain
3049 `cyg_profile` in their names, hence the shared object's name).
3050
3051 In order to use LTTng-UST function tracing, the translation units to
3052 instrument must be built using the `-finstrument-functions` compiler
3053 flag.
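
For example, a translation unit may be compiled with instrumentation
calls like this:

[role="term"]
----
gcc -c -finstrument-functions app.c
----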
3054
3055 LTTng-UST function tracing comes in two flavors, each providing
3056 different trade-offs: path:{liblttng-ust-cyg-profile-fast.so} and
3057 path:{liblttng-ust-cyg-profile.so}.
3058
3059 **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that
3060 should only be used where it can be _guaranteed_ that the complete event
3061 stream is recorded without any missing events. Any kind of duplicate
3062 information is left out. This version registers the following
3063 tracepoints:
3064
3065 [role="growable",options="header,autowidth"]
3066 .Functions instrumented by path:{liblttng-ust-cyg-profile-fast.so}
3067 |====
3068 |TP provider name |TP name |Instrumented function
3069
3070 .2+|`lttng_ust_cyg_profile_fast`
3071
3072 |`func_entry`
3073 a|Function entry
3074
3075 `addr`::
3076 Address of called function.
3077
3078 |`func_exit`
3079 |Function exit
3080 |====
3081
3082 Assuming no event is lost, having only the function addresses on entry
3083 is enough for creating a call graph (remember that a recorded event
3084 always contains the ID of the CPU that generated it). A tool like
3085 https://sourceware.org/binutils/docs/binutils/addr2line.html[`addr2line`]
3086 may be used to convert function addresses back to source files names
3087 and line numbers.
3088
3089 The other helper,
3090 **path:{liblttng-ust-cyg-profile.so}**,
3091 is a more robust variant which also works for use cases where
3092 events might get discarded or not recorded from application startup.
3093 In these cases, the trace analyzer needs extra information to be
3094 able to reconstruct the program flow. This version registers the
3095 following tracepoints:
3096
3097 [role="growable",options="header,autowidth"]
3098 .Functions instrumented by path:{liblttng-ust-cyg-profile.so}
3099 |====
3100 |TP provider name |TP name |Instrumented function
3101
3102 .2+|`lttng_ust_cyg_profile`
3103
3104 |`func_entry`
3105 a|Function entry
3106
3107 `addr`::
3108 Address of called function.
3109
3110 `call_site`::
3111 Call site address.
3112
3113 |`func_exit`
3114 a|Function exit
3115
3116 `addr`::
3117 Address of called function.
3118
3119 `call_site`::
3120 Call site address.
3121 |====
3122
3123 To use one or the other variant with any user application, assuming at
3124 least one translation unit of the latter is compiled with the
3125 `-finstrument-functions` option, do:
3126
3127 [role="term"]
3128 ----
3129 LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
3130 ----
3131
3132 or
3133
3134 [role="term"]
3135 ----
3136 LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
3137 ----
3138
It might be necessary to limit the number of source files where
`-finstrument-functions` is used to prevent an excessive amount of
trace data from being generated at runtime.
3142
TIP: With GCC, at least, you can use
the `-finstrument-functions-exclude-function-list`
option to avoid instrumenting the entries and exits of
specific functions, selected by symbol name.
3147
3148 All events generated from LTTng-UST function tracing are provided on
3149 log level `TRACE_DEBUG_FUNCTION`, which is useful to easily enable
3150 function tracing events in your tracing session using the
3151 `--loglevel-only` option of `lttng enable-event`
3152 (see <<controlling-tracing,Controlling tracing>>).
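
For example, the following sketch enables, in the current tracing
session, only the events declared at this specific log level, which
should correspond to the function tracing events:

[role="term"]
----
lttng enable-event --userspace --all --loglevel-only TRACE_DEBUG_FUNCTION
----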
3153
3154
3155 [role="since-2.4"]
3156 [[liblttng-ust-dl]]
3157 ===== Dynamic linker tracing
3158
3159 This LTTng-UST helper causes all calls to `dlopen()` and `dlclose()`
3160 in the target application to be traced with LTTng.
3161
3162 The helper's shared object, path:{liblttng-ust-dl.so}, registers the
3163 following tracepoints when preloaded:
3164
3165 [role="growable",options="header,autowidth"]
3166 .Functions instrumented by path:{liblttng-ust-dl.so}
3167 |====
3168 |TP provider name |TP name |Instrumented function
3169
3170 .2+|`ust_baddr`
3171
3172 |`push`
3173 a|`dlopen()` call
3174
3175 `baddr`::
3176 Memory base address (where the dynamic linker placed the shared
3177 object).
3178
3179 `sopath`::
3180 File system path to the loaded shared object.
3181
3182 `size`::
File size of the loaded shared object.
3184
3185 `mtime`::
3186 Last modification time (seconds since Epoch time) of the loaded shared
3187 object.
3188
3189 |`pop`
a|`dlclose()` call
3191
3192 `baddr`::
3193 Memory base address (where the dynamic linker placed the shared
3194 object).
3195 |====
3196
3197 To use this LTTng-UST helper with any user application, independently of
3198 how the latter is built, do:
3199
3200 [role="term"]
3201 ----
3202 LD_PRELOAD=liblttng-ust-dl.so my-app
3203 ----
3204
3205 Of course, like any other tracepoint, the ones above need to be enabled
3206 in order for LTTng-UST to generate events. This is done using the
3207 `lttng` command line tool
3208 (see <<controlling-tracing,Controlling tracing>>).
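
For example, using the provider name shown in the table above, you
could enable both tracepoints at once:

[role="term"]
----
lttng enable-event --userspace 'ust_baddr:*'
----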
3209
3210
3211 [role="since-2.4"]
3212 [[java-application]]
3213 ==== Java application
3214
3215 LTTng-UST provides a _logging_ back-end for Java applications using either
3216 http://docs.oracle.com/javase/7/docs/api/java/util/logging/Logger.html[`java.util.logging`]
3217 (JUL) or
http://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
This back-end is called the _LTTng-UST Java agent_, and it is responsible
for communicating with an LTTng session daemon.
3221
3222 From the user's point of view, once the LTTng-UST Java agent has been
3223 initialized, JUL and log4j loggers may be created and used as usual.
3224 The agent adds its own handler to the _root logger_, so that all
3225 loggers may generate LTTng events with no effort.
3226
3227 Common JUL/log4j features are supported using the `lttng` tool
3228 (see <<controlling-tracing,Controlling tracing>>):
3229
3230 * listing all logger names
3231 * enabling/disabling events per logger name
3232 * JUL/log4j log levels
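
For example, while an instrumented Java application is running, the
following sketch lists the detected logger names, then enables one of
them by name (`my_logger` is illustrative):

[role="term"]
----
lttng list --jul
lttng list --log4j
lttng enable-event --jul my_logger
----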
3233
3234
3235 [role="since-2.1"]
3236 [[jul]]
3237 ===== `java.util.logging`
3238
3239 Here's an example of tracing a Java application which is using
3240 **`java.util.logging`**:
3241
3242 [source,java]
3243 ----
3244 import java.util.logging.Logger;
3245 import org.lttng.ust.agent.LTTngAgent;
3246
3247 public class Test
3248 {
3249 private static final int answer = 42;
3250
3251 public static void main(String[] argv) throws Exception
3252 {
3253 // create a logger
3254 Logger logger = Logger.getLogger("jello");
3255
3256 // call this as soon as possible (before logging)
3257 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3258
3259 // log at will!
3260 logger.info("some info");
3261 logger.warning("some warning");
3262 Thread.sleep(500);
3263 logger.finer("finer information; the answer is " + answer);
3264 Thread.sleep(123);
3265 logger.severe("error!");
3266
3267 // not mandatory, but cleaner
3268 lttngAgent.dispose();
3269 }
3270 }
3271 ----
3272
3273 The LTTng-UST Java agent is packaged in a JAR file named
`liblttng-ust-agent.jar`. It is typically located in
3275 dir:{/usr/lib/lttng/java}. To compile the snippet above
3276 (saved as `Test.java`), do:
3277
3278 [role="term"]
3279 ----
3280 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar Test.java
3281 ----
3282
3283 You can run the resulting compiled class like this:
3284
3285 [role="term"]
3286 ----
3287 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
3288 ----
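
To actually record the statements of the `jello` logger as LTTng
events, create a tracing session and enable the corresponding JUL
event before running the application. A minimal sequence could look
like this:

[role="term"]
----
lttng create
lttng enable-event --jul jello
lttng start
java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
lttng stop
lttng view
----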
3289
3290 NOTE: http://openjdk.java.net/[OpenJDK] 7 is used for development and
3291 continuous integration, thus this version is directly supported.
3292 However, the LTTng-UST Java agent has also been tested with OpenJDK 6.
3293
3294
3295 [role="since-2.6"]
3296 [[log4j]]
3297 ===== Apache log4j 1.2
3298
3299 LTTng features an Apache log4j 1.2 agent, which means your existing
3300 Java applications using log4j 1.2 for logging can record events to
3301 LTTng traces with just a minor source code modification.
3302
3303 NOTE: This version of LTTng does not support Log4j 2.
3304
3305 Here's an example:
3306
3307 [source,java]
3308 ----
3309 import org.apache.log4j.Logger;
3310 import org.apache.log4j.BasicConfigurator;
3311 import org.lttng.ust.agent.LTTngAgent;
3312
3313 public class Test
3314 {
3315 private static final int answer = 42;
3316
3317 public static void main(String[] argv) throws Exception
3318 {
3319 // create and configure a logger
3320 Logger logger = Logger.getLogger(Test.class);
3321 BasicConfigurator.configure();
3322
3323 // call this as soon as possible (before logging)
3324 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3325
3326 // log at will!
3327 logger.info("some info");
3328 logger.warn("some warning");
3329 Thread.sleep(500);
3330 logger.debug("debug information; the answer is " + answer);
3331 Thread.sleep(123);
3332 logger.error("error!");
3333 logger.fatal("fatal error!");
3334
3335 // not mandatory, but cleaner
3336 lttngAgent.dispose();
3337 }
3338 }
3339 ----
3340
3341 To compile the snippet above, do:
3342
3343 [role="term"]
3344 ----
3345 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP Test.java
3346 ----
3347
3348 where `$LOG4JCP` is the log4j 1.2 JAR file path.
3349
3350 You can run the resulting compiled class like this:
3351
3352 [role="term"]
3353 ----
3354 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
3355 ----
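
As with JUL, the log4j logger of the snippet above (named `Test`,
after its class) may be enabled in a tracing session before running
the application; for example:

[role="term"]
----
lttng create
lttng enable-event --log4j Test
lttng start
java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
lttng stop
lttng view
----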
3356
3357
3358 [[instrumenting-linux-kernel]]
3359 ==== Linux kernel
3360
3361 The Linux kernel can be instrumented for LTTng tracing, either its core
source code or a kernel module. Note that Linux is readily
traceable using LTTng since many parts of its source code are
already instrumented: this is the job of the upstream
3365 http://git.lttng.org/?p=lttng-modules.git[LTTng-modules]
3366 package. This section presents how to add LTTng instrumentation where it
3367 does not currently exist and how to instrument custom kernel modules.
3368
3369 All LTTng instrumentation in the Linux kernel is based on an existing
3370 infrastructure which bears the name of its main macro, `TRACE_EVENT()`.
3371 This macro is used to define tracepoints,
3372 each tracepoint having a name, usually with the
3373 +__subsys__&#95;__name__+ format,
+__subsys__+ being the subsystem name and
+__name__+ the specific event name.
3376
3377 Tracepoints defined with `TRACE_EVENT()` may be inserted anywhere in
the Linux kernel source code, after which callbacks, called _probes_,
may be registered to execute some action when a tracepoint is
hit. This mechanism is directly used by ftrace and perf,
3381 but cannot be used as is by LTTng: an adaptation layer is added to
3382 satisfy LTTng's specific needs.
3383
3384 With that in mind, this documentation does not cover the `TRACE_EVENT()`
3385 format and how to use it, but it is mandatory to understand it and use
3386 it to instrument Linux for LTTng. A series of
LWN articles explains
`TRACE_EVENT()` in detail:
3389 http://lwn.net/Articles/379903/[part 1],
3390 http://lwn.net/Articles/381064/[part 2], and
3391 http://lwn.net/Articles/383362/[part 3].
3392 Once you master `TRACE_EVENT()` enough for your use case, continue
3393 reading this section so that you can add the LTTng adaptation layer of
3394 instrumentation.
3395
3396 This section first discusses the general method of instrumenting the
3397 Linux kernel for LTTng. This method is then reused for the specific
3398 case of instrumenting a kernel module.
3399
3400
3401 [[instrumenting-linux-kernel-itself]]
3402 ===== Instrumenting the Linux kernel for LTTng
3403
3404 The following subsections explain strictly how to add custom LTTng
3405 instrumentation to the Linux kernel. They do not explain how the
3406 macros actually work and the internal mechanics of the tracer.
3407
3408 You should have a Linux kernel source code tree to work with.
3409 Throughout this section, all file paths are relative to the root of
3410 this tree unless otherwise stated.
3411
3412 You need a copy of the LTTng-modules Git repository:
3413
3414 [role="term"]
3415 ----
3416 git clone git://git.lttng.org/lttng-modules.git
3417 ----
3418
3419 The steps to add custom LTTng instrumentation to a Linux kernel
involve defining and using the mainline `TRACE_EVENT()` tracepoints
3421 first, then writing and using the LTTng adaptation layer.
3422
3423
3424 [[mainline-trace-event]]
3425 ===== Defining/using tracepoints with mainline `TRACE_EVENT()` infrastructure
3426
3427 The first step is to define tracepoints using the mainline Linux
3428 `TRACE_EVENT()` macro and insert tracepoints where you want them.
3429 Your tracepoint definitions reside in a header file in
3430 dir:{include/trace/events}. If you're adding tracepoints to an existing
3431 subsystem, edit its appropriate header file.
3432
3433 As an example, the following header file (let's call it
path:{include/trace/events/hello.h}) defines one tracepoint using
3435 `TRACE_EVENT()`:
3436
3437 [source,c]
3438 ----
3439 /* subsystem name is "hello" */
3440 #undef TRACE_SYSTEM
3441 #define TRACE_SYSTEM hello
3442
3443 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3444 #define _TRACE_HELLO_H
3445
3446 #include <linux/tracepoint.h>
3447
3448 TRACE_EVENT(
3449 /* "hello" is the subsystem name, "world" is the event name */
3450 hello_world,
3451
3452 /* tracepoint function prototype */
3453 TP_PROTO(int foo, const char* bar),
3454
3455 /* arguments for this tracepoint */
3456 TP_ARGS(foo, bar),
3457
3458 /* LTTng doesn't need those */
3459 TP_STRUCT__entry(),
3460 TP_fast_assign(),
3461 TP_printk("", 0)
3462 );
3463
3464 #endif
3465
3466 /* this part must be outside protection */
3467 #include <trace/define_trace.h>
3468 ----
3469
3470 Notice that we don't use any of the last three arguments: they
3471 are left empty here because LTTng doesn't need them. You would only fill
3472 `TP_STRUCT__entry()`, `TP_fast_assign()` and `TP_printk()` if you were
3473 to also use this tracepoint for ftrace/perf.
3474
3475 Once this is done, you may place calls to `trace_hello_world()`
3476 wherever you want in the Linux source code. As an example, let us place
3477 such a tracepoint in the `usb_probe_device()` static function
3478 (path:{drivers/usb/core/driver.c}):
3479
3480 [source,c]
3481 ----
3482 /* called from driver core with dev locked */
3483 static int usb_probe_device(struct device *dev)
3484 {
3485 struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
3486 struct usb_device *udev = to_usb_device(dev);
3487 int error = 0;
3488
3489 trace_hello_world(udev->devnum, udev->product);
3490
3491 /* ... */
3492 }
3493 ----
3494
3495 This tracepoint should fire every time a USB device is plugged in.
3496
3497 At the top of path:{driver.c}, we need to include our actual tracepoint
3498 definition and, in this case (one place per subsystem), define
3499 `CREATE_TRACE_POINTS`, which creates our tracepoint:
3500
3501 [source,c]
3502 ----
3503 /* ... */
3504
3505 #include "usb.h"
3506
3507 #define CREATE_TRACE_POINTS
3508 #include <trace/events/hello.h>
3509
3510 /* ... */
3511 ----
3512
3513 Build your custom Linux kernel. In order to use LTTng, make sure the
3514 following kernel configuration options are enabled:
3515
3516 * `CONFIG_MODULES` (loadable module support)
3517 * `CONFIG_KALLSYMS` (load all symbols for debugging/kksymoops)
3518 * `CONFIG_HIGH_RES_TIMERS` (high resolution timer support)
3519 * `CONFIG_TRACEPOINTS` (kernel tracepoint instrumentation)
3520
3521 Boot the custom kernel. The directory
3522 dir:{/sys/kernel/debug/tracing/events/hello} should exist if everything
3523 went right, with a dir:{hello_world} subdirectory.
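
Assuming debugfs is mounted at dir:{/sys/kernel/debug}, you may verify
this as root:

[role="term"]
----
sudo ls /sys/kernel/debug/tracing/events/hello
----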
3524
3525
3526 [[lttng-adaptation-layer]]
3527 ===== Adding the LTTng adaptation layer
3528
3529 The steps to write the LTTng adaptation layer are, in your
3530 LTTng-modules copy's source code tree:
3531
3532 . In dir:{instrumentation/events/lttng-module},
3533 add a header +__subsys__.h+ for your custom
3534 subsystem +__subsys__+ and write your
3535 tracepoint definitions using LTTng-modules macros in it.
3536 Those macros look like the mainline kernel equivalents,
3537 but they present subtle, yet important differences.
3538 . In dir:{probes}, create the C source file of the LTTng probe kernel
3539 module for your subsystem. It should be named
3540 +lttng-probe-__subsys__.c+.
3541 . Edit path:{probes/Makefile} so that the LTTng-modules project
3542 builds your custom LTTng probe kernel module.
3543 . Build and install LTTng kernel modules.
3544
3545 Following our `hello_world` event example, here's the content of
3546 path:{instrumentation/events/lttng-module/hello.h}:
3547
3548 [source,c]
3549 ----
3550 #undef TRACE_SYSTEM
3551 #define TRACE_SYSTEM hello
3552
3553 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3554 #define _TRACE_HELLO_H
3555
3556 #include "../../../probes/lttng-tracepoint-event.h"
3557 #include <linux/tracepoint.h>
3558
3559 LTTNG_TRACEPOINT_EVENT(
3560 /* format identical to mainline version for those */
3561 hello_world,
3562 TP_PROTO(int foo, const char* bar),
3563 TP_ARGS(foo, bar),
3564
3565 /* possible differences */
3566 TP_STRUCT__entry(
3567 __field(int, my_int)
3568 __field(char, char0)
3569 __field(char, char1)
3570 __string(product, bar)
3571 ),
3572
3573 /* notice the use of tp_assign()/tp_strcpy() and no semicolons */
3574 TP_fast_assign(
3575 tp_assign(my_int, foo)
3576 tp_assign(char0, bar[0])
3577 tp_assign(char1, bar[1])
3578 tp_strcpy(product, bar)
3579 ),
3580
3581 /* This one is actually not used by LTTng either, but must be
3582 * present for the moment.
3583 */
3584 TP_printk("", 0)
3585
3586 /* no semicolon after this either */
3587 )
3588
3589 #endif
3590
3591 /* other difference: do NOT include <trace/define_trace.h> */
3592 #include "../../../probes/define_trace.h"
3593 ----
3594
3595 Some possible entries for `TP_STRUCT__entry()` and `TP_fast_assign()`,
3596 in the case of LTTng-modules, are shown in the
3597 <<lttng-modules-ref,LTTng-modules reference>> section.
3598
3599 The best way to learn how to use the above macros is to inspect
3600 existing LTTng tracepoint definitions in
3601 dir:{instrumentation/events/lttng-module} header files. Compare
3602 them with the Linux kernel mainline versions in
3603 dir:{include/trace/events}.
3604
3605 The next step is writing the LTTng probe kernel module C source file.
3606 This one is named +lttng-probe-__subsys__.c+
3607 in dir:{probes}. You may always use the following template:
3608
3609 [source,c]
3610 ----
3611 #include <linux/module.h>
3612 #include "../lttng-tracer.h"
3613
3614 /* Build time verification of mismatch between mainline TRACE_EVENT()
3615 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3616 */
3617 #include <trace/events/hello.h>
3618
3619 /* create LTTng tracepoint probes */
3620 #define LTTNG_PACKAGE_BUILD
3621 #define CREATE_TRACE_POINTS
3622 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
3623
3624 #include "../instrumentation/events/lttng-module/hello.h"
3625
3626 MODULE_LICENSE("GPL and additional rights");
3627 MODULE_AUTHOR("Your name <your-email>");
3628 MODULE_DESCRIPTION("LTTng hello probes");
3629 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
3630 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
3631 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
3632 LTTNG_MODULES_EXTRAVERSION);
3633 ----
3634
3635 Just replace `hello` with your subsystem name. In this example,
3636 `<trace/events/hello.h>`, which is the original mainline tracepoint
3637 definition header, is included for verification purposes: the
3638 LTTng-modules build system is able to emit an error at build time when
3639 the arguments of the mainline `TRACE_EVENT()` definitions do not match
3640 the ones of the LTTng-modules adaptation layer
3641 (`LTTNG_TRACEPOINT_EVENT()`).
3642
3643 Edit path:{probes/Makefile} and add your new kernel module object
3644 next to existing ones:
3645
3646 [source,make]
3647 ----
3648 # ...
3649
3650 obj-m += lttng-probe-module.o
3651 obj-m += lttng-probe-power.o
3652
3653 obj-m += lttng-probe-hello.o
3654
3655 # ...
3656 ----
3657
3658 Time to build! Point to your custom Linux kernel source tree using
3659 the `KERNELDIR` variable:
3660
3661 [role="term"]
3662 ----
3663 make KERNELDIR=/path/to/custom/linux
3664 ----
3665
3666 Finally, install modules:
3667
3668 [role="term"]
3669 ----
3670 sudo make modules_install
3671 ----
3672
3673
3674 [[instrumenting-linux-kernel-tracing]]
3675 ===== Tracing
3676
3677 The <<controlling-tracing,Controlling tracing>> section explains
3678 how to use the `lttng` tool to create and control tracing sessions.
3679 Although the `lttng` tool loads the appropriate _known_ LTTng kernel
3680 modules when needed (by launching `root`'s session daemon), it won't
3681 load your custom `lttng-probe-hello` module by default. You need to
3682 manually start an LTTng session daemon as `root` and use the
3683 `--extra-kmod-probes` option to append your custom probe module to the
3684 default list:
3685
3686 [role="term"]
3687 ----
3688 sudo pkill -u root lttng-sessiond
3689 sudo lttng-sessiond --extra-kmod-probes=hello
3690 ----
3691
3692 The first command makes sure any existing instance is killed. If
3693 you're not interested in using the default probes, or if you only
3694 want to use a few of them, you could use `--kmod-probes` instead,
3695 which specifies an absolute list:
3696
3697 [role="term"]
3698 ----
3699 sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched
3700 ----
3701
3702 Confirm the custom probe module is loaded:
3703
3704 [role="term"]
3705 ----
3706 lsmod | grep lttng_probe_hello
3707 ----
3708
3709 The `hello_world` event should appear in the list when doing
3710
3711 [role="term"]
3712 ----
3713 lttng list --kernel | grep hello
3714 ----
3715
3716 You may now create an LTTng tracing session, enable the `hello_world`
3717 kernel event (and others if you wish) and start tracing:
3718
3719 [role="term"]
3720 ----
3721 sudo lttng create my-session
3722 sudo lttng enable-event --kernel hello_world
3723 sudo lttng start
3724 ----
3725
3726 Plug a few USB devices, then stop tracing and inspect the trace (if
3727 http://diamon.org/babeltrace[Babeltrace]
3728 is installed):
3729
3730 [role="term"]
3731 ----
3732 sudo lttng stop
3733 sudo lttng view
3734 ----
3735
3736 Here's a sample output:
3737
3738 ----
3739 [15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3740 [15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
3741 [15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3742 ----
3743
3744 Two USB flash drives were used for this test.
3745
3746 You may change your LTTng custom probe, rebuild it and reload it at
3747 any time when not tracing. Make sure you remove the old module
3748 (either by killing the root LTTng session daemon which loaded the
3749 module in the first place, or by using `modprobe --remove` directly)
3750 before loading the updated one.
3751
3752
3753 [[instrumenting-out-of-tree-linux-kernel]]
3754 ===== Advanced: Instrumenting an out-of-tree Linux kernel module for LTTng
3755
3756 Instrumenting a custom Linux kernel module for LTTng follows the exact
3757 same steps as
3758 <<instrumenting-linux-kernel-itself,adding instrumentation
3759 to the Linux kernel itself>>,
3760 the only difference being that your mainline tracepoint definition
3761 header doesn't reside in the mainline source tree, but in your
3762 kernel module source tree.
3763
3764 The only reference to this mainline header is in the LTTng custom
3765 probe's source code (path:{probes/lttng-probe-hello.c} in our example),
3766 for build time verification:
3767
3768 [source,c]
3769 ----
3770 /* ... */
3771
3772 /* Build time verification of mismatch between mainline TRACE_EVENT()
3773 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3774 */
3775 #include <trace/events/hello.h>
3776
3777 /* ... */
3778 ----
3779
3780 The preferred, flexible way to include your module's mainline
3781 tracepoint definition header is to put it in a specific directory
3782 relative to your module's root (`tracepoints`, for example) and include it
3783 relative to your module's root directory in the LTTng custom probe's
3784 source:
3785
3786 [source,c]
3787 ----
3788 #include <tracepoints/hello.h>
3789 ----
3790
3791 You may then build LTTng-modules by adding your module's root
3792 directory as an include path to the extra C flags:
3793
3794 [role="term"]
3795 ----
3796 make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux
3797 ----
3798
3799 Using `ccflags-y` allows you to move your kernel module to another
3800 directory and rebuild the LTTng-modules project with no change to
3801 source files.
3802
3803
3804 [role="since-2.5"]
3805 [[proc-lttng-logger-abi]]
3806 ==== LTTng logger ABI
3807
3808 The `lttng-tracer` Linux kernel module, installed by the LTTng-modules
3809 package, creates a special LTTng logger ABI file path:{/proc/lttng-logger}
3810 when loaded. Writing text data to this file generates an LTTng kernel
3811 domain event named `lttng_logger`.
3812
3813 Unlike other kernel domain events, `lttng_logger` may be enabled by
3814 any user, not only root users or members of the tracing group.
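
For example, to enable it in the current tracing session:

[role="term"]
----
lttng enable-event --kernel lttng_logger
----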
3815
3816 To use the LTTng logger ABI, simply write a string to
3817 path:{/proc/lttng-logger}:
3818
3819 [role="term"]
3820 ----
3821 echo -n 'Hello, World!' > /proc/lttng-logger
3822 ----
3823
3824 The `msg` field of the `lttng_logger` event contains the recorded
3825 message.
3826
NOTE: Messages are split into chunks of 1024{nbsp}bytes.
3828
3829 The LTTng logger ABI is a quick and easy way to trace some events from
3830 user space through the kernel tracer. However, it is much more basic
than LTTng-UST: it is slower (it involves a system call round-trip to
the kernel) and it only supports logging strings. The LTTng logger ABI is
3833 particularly useful for recording logs as LTTng traces from shell
3834 scripts, potentially combining them with other Linux kernel/user space
3835 events.
3836
3837
3838 [[instrumenting-32-bit-app-on-64-bit-system]]
3839 ==== Advanced: Instrumenting a 32-bit application on a 64-bit system
3840
3841 [[advanced-instrumenting-techniques]]In order to trace a 32-bit
3842 application running on a 64-bit system,
3843 LTTng must use a dedicated 32-bit
3844 <<lttng-consumerd,consumer daemon>>. This section discusses how to
3845 build that daemon (which is _not_ part of the default 64-bit LTTng
3846 build) and the LTTng 32-bit tracing libraries, and how to instrument
3847 a 32-bit application in that context.
3848
Make sure you install the 32-bit versions of all the LTTng dependencies.
3850 Their names can be found in the `README.md` files of each LTTng package
3851 source. How to find and install them depends on your target's
3852 Linux distribution. `gcc-multilib` is a common package name for the
3853 multilib version of GCC, which you also need.
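
For example, on Debian or Ubuntu, installing the multilib compiler
could look like this (package names vary across distributions):

[role="term"]
----
sudo apt-get install gcc-multilib
----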
3854
3855 The following packages will be built for 32-bit support on a 64-bit
3856 system: http://urcu.so/[Userspace RCU],
3857 LTTng-UST and LTTng-tools.
3858
3859
3860 [[building-32-bit-userspace-rcu]]
3861 ===== Building 32-bit Userspace RCU
3862
3863 Follow this:
3864
3865 [role="term"]
3866 ----
3867 git clone git://git.urcu.so/urcu.git
3868 cd urcu
3869 ./bootstrap
3870 ./configure --libdir=/usr/lib32 CFLAGS=-m32
3871 make
3872 sudo make install
3873 sudo ldconfig
3874 ----
3875
3876 The `-m32` C compiler flag creates 32-bit object files and `--libdir`
3877 indicates where to install the resulting libraries.
3878
3879
3880 [[building-32-bit-lttng-ust]]
3881 ===== Building 32-bit LTTng-UST
3882
3883 Follow this:
3884
3885 [role="term"]
3886 ----
3887 git clone http://git.lttng.org/lttng-ust.git
3888 cd lttng-ust
3889 ./bootstrap
3890 ./configure --prefix=/usr \
3891 --libdir=/usr/lib32 \
3892 CFLAGS=-m32 CXXFLAGS=-m32 \
3893 LDFLAGS=-L/usr/lib32
3894 make
3895 sudo make install
3896 sudo ldconfig
3897 ----
3898
3899 `-L/usr/lib32` is required for the build to find the 32-bit versions
3900 of Userspace RCU and other dependencies.
3901
3902 [NOTE]
3903 ====
3904 Depending on your Linux distribution,
3905 32-bit libraries could be installed at a different location than
3906 dir:{/usr/lib32}. For example, Debian is known to install
3907 some 32-bit libraries in dir:{/usr/lib/i386-linux-gnu}.
3908
3909 In this case, make sure to set `LDFLAGS` to all the
3910 relevant 32-bit library paths, for example,
3911 `LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"`.
3912 ====
3913
3914 NOTE: You may add options to path:{./configure} if you need them, e.g., for
3915 Java and SystemTap support. Look at `./configure --help` for more
3916 information.
3917
3918
3919 [[building-32-bit-lttng-tools]]
3920 ===== Building 32-bit LTTng-tools
3921
3922 Since the host is a 64-bit system, most 32-bit binaries and libraries of
3923 LTTng-tools are not needed; the host uses their 64-bit counterparts.
3924 The required step here is building and installing a 32-bit consumer
3925 daemon.
3926
3927 Follow this:
3928
3929 [role="term"]
3930 ----
3931 git clone http://git.lttng.org/lttng-tools.git
cd lttng-tools
3933 ./bootstrap
3934 ./configure --prefix=/usr \
3935 --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3936 LDFLAGS=-L/usr/lib32
3937 make
3938 cd src/bin/lttng-consumerd
3939 sudo make install
3940 sudo ldconfig
3941 ----
3942
The above commands build the whole LTTng-tools project as 32-bit
applications, but only install the 32-bit consumer daemon.
3945
3946
3947 [[building-64-bit-lttng-tools]]
3948 ===== Building 64-bit LTTng-tools
3949
Finally, back at the root of your LTTng-tools source tree, build a
64-bit version of LTTng-tools which is aware of the 32-bit consumer
daemon previously built and installed:
3952
3953 [role="term"]
3954 ----
3955 make clean
3956 ./bootstrap
3957 ./configure --prefix=/usr \
3958 --with-consumerd32-libdir=/usr/lib32 \
3959 --with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
3960 make
3961 sudo make install
3962 sudo ldconfig
3963 ----
3964
3965 Henceforth, the 64-bit session daemon automatically finds the
3966 32-bit consumer daemon if required.
3967
3968
3969 [[building-instrumented-32-bit-c-application]]
3970 ===== Building an instrumented 32-bit C application
3971
3972 Let us reuse the _Hello world_ example of
3973 <<tracing-your-own-user-application,Tracing your own user application>>
3974 (<<getting-started,Getting started>> chapter).
3975
3976 The instrumentation process is unaltered.
3977
3978 First, a typical 64-bit build (assuming you're running a 64-bit system):
3979
3980 [role="term"]
3981 ----
3982 gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust
3983 ----
3984
3985 Now, a 32-bit build:
3986
3987 [role="term"]
3988 ----
3989 gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
3990 -ldl -llttng-ust -Wl,-rpath,/usr/lib32
3991 ----
3992
3993 The `-rpath` option, passed to the linker, makes the dynamic loader
3994 check for libraries in dir:{/usr/lib32} before looking in its default paths,
3995 where it should find the 32-bit version of `liblttng-ust`.
3996
3997
3998 [[running-32-bit-and-64-bit-c-applications]]
3999 ===== Running 32-bit and 64-bit versions of an instrumented C application
4000
4001 Now, both 32-bit and 64-bit versions of the _Hello world_ example above
4002 can be traced in the same tracing session. Use the `lttng` tool as usual
4003 to create a tracing session and start tracing:
4004
4005 [role="term"]
4006 ----
4007 lttng create session-3264
lttng enable-event -u -a
lttng start
4009 ./hello32
4010 ./hello64
4011 lttng stop
4012 ----
4013
4014 Use `lttng view` to verify both processes were
4015 successfully traced.
4016
4017
4018 [[controlling-tracing]]
4019 === Controlling tracing
4020
Once you're in possession of software that is properly
<<instrumenting,instrumented>> for LTTng tracing, be it thanks to
the built-in LTTng probes for the Linux kernel, a custom user
application or a custom Linux kernel, all that is left is actually
tracing it. As a user, you control LTTng tracing using a single command
line interface: the `lttng` tool. This tool uses `liblttng-ctl` behind
the scenes to connect to and communicate with session daemons. LTTng
4028 session daemons may either be started manually (`lttng-sessiond`) or
4029 automatically by the `lttng` command when needed. Trace data may
4030 be forwarded to the network and used elsewhere using an LTTng relay
4031 daemon (`lttng-relayd`).
4032
4033 The manpages of `lttng`, `lttng-sessiond` and `lttng-relayd` are pretty
4034 complete, thus this section is not an online copy of the latter (we
4035 leave this contents for the
4036 <<online-lttng-manpages,Online LTTng manpages>> section).
4037 This section is rather a tour of LTTng
4038 features through practical examples and tips.
4039
4040 If not already done, make sure you understand the core concepts
4041 and how LTTng components connect together by reading the
4042 <<understanding-lttng,Understanding LTTng>> chapter; this section
4043 assumes you are familiar with them.
4044
4045
4046 [[creating-destroying-tracing-sessions]]
4047 ==== Creating and destroying tracing sessions
4048
4049 Whatever you want to do with `lttng`, it has to happen inside a
4050 **tracing session**, created beforehand. A session, in general, is a
per-user container of state. A tracing session is no different; it
keeps track of specific state, such as:
4053
4054 * session name
4055 * enabled/disabled channels with associated parameters
4056 * enabled/disabled events with associated log levels and filters
4057 * context information added to channels
4058 * tracing activity (started or stopped)
4059
4060 and more.
4061
4062 A single user may have many active tracing sessions. LTTng session
4063 daemons are the ultimate owners and managers of tracing sessions. For
user space tracing, each user has their own session daemon. Since Linux
4065 kernel tracing requires root privileges, only `root`'s session daemon
4066 may enable and trace kernel events. However, `lttng` has a `--group`
4067 option (which is passed to `lttng-sessiond` when starting it) to
4068 specify the name of a _tracing group_ which selected users may be part
4069 of to be allowed to communicate with `root`'s session daemon. By
4070 default, the tracing group name is `tracing`.
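
For example, creating the tracing group and adding a user to it could
look like this (the user name is illustrative; the user needs to log
out and back in for the new group to take effect):

[role="term"]
----
sudo groupadd tracing
sudo usermod --append --groups tracing alice
----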
4071
4072 To create a tracing session, do:
4073
4074 [role="term"]
4075 ----
4076 lttng create my-session
4077 ----
4078
This creates a new tracing session named `my-session` and makes it
4080 the current one. If you don't specify a name (running only
4081 `lttng create`), your tracing session is named `auto` followed by the
4082 current date and time. Traces
4083 are written in +\~/lttng-traces/__session__-+ followed
4084 by the tracing session's creation date/time by default, where
4085 +__session__+ is the tracing session name. To save them
4086 at a different location, use the `--output` option:
4087
4088 [role="term"]
4089 ----
4090 lttng create --output /tmp/some-directory my-session
4091 ----
4092
4093 You may create as many tracing sessions as you wish:
4094
4095 [role="term"]
4096 ----
4097 lttng create other-session
4098 lttng create yet-another-session
4099 ----
4100
4101 You may view all existing tracing sessions using the `list` command:
4102
4103 [role="term"]
4104 ----
4105 lttng list
4106 ----
4107
4108 The state of a _current tracing session_ is kept in path:{~/.lttngrc}. Each
4109 invocation of `lttng` reads this file to set its current tracing
4110 session name so that you don't have to specify a session name for each
4111 command. You could edit this file manually, but the preferred way to
4112 set the current tracing session is to use the `set-session` command:
4113
4114 [role="term"]
4115 ----
4116 lttng set-session other-session
4117 ----
4118
4119 Most `lttng` commands accept a `--session` option to specify the name
4120 of the target tracing session.
4121
4122 Any existing tracing session may be destroyed using the `destroy`
4123 command:
4124
4125 [role="term"]
4126 ----
4127 lttng destroy my-session
4128 ----
4129
Providing no argument to `lttng destroy` destroys the current
tracing session. Destroying a tracing session stops any tracing
running within it and frees the resources acquired by the session
daemon and the tracers, making sure all trace data is flushed.
4135
4136 You can't do much with LTTng using only the `create`, `set-session`
4137 and `destroy` commands of `lttng`, but it is essential to know them in
order to control LTTng tracing, which always happens within the scope of
4139 a tracing session.
4140
4141
4142 [[enabling-disabling-events]]
4143 ==== Enabling and disabling events
4144
4145 Inside a tracing session, individual events may be enabled or disabled
4146 so that tracing them may or may not generate trace data.
4147
4148 We sometimes use the term _event_ metonymically throughout this text to
4149 refer to a specific condition, or _rule_, that could lead, when
4150 satisfied, to an actual occurring event (a point at a specific position
4151 in source code/binary program, logical processor and time capturing
4152 some payload) being recorded as trace data. This specific condition is
4153 composed of:
4154
4155 . A **domain** (kernel, user space, `java.util.logging`, or log4j)
4156 (required).
4157 . One or many **instrumentation points** in source code or binary
4158 program (tracepoint name, address, symbol name, function name,
4159 logger name, amongst other types of probes) to be executed (required).
4160 . A **log level** (each instrumentation point declares its own log
4161 level) or log level range to match (optional; only valid for user
4162 space domain).
4163 . A **custom user expression**, or **filter**, that must evaluate to
4164 _true_ when a tracepoint is executed (optional; only valid for user
4165 space domain).
4166
4167 All conditions are specified using arguments passed to the
4168 `enable-event` command of the `lttng` tool.
4169
4170 Condition 1 is specified using either `--kernel`/`-k` (kernel),
4171 `--userspace`/`-u` (user space), `--jul`/`-j`
4172 (JUL), or `--log4j`/`-l` (log4j).
4173 Exactly one of those four arguments must be specified.
4174
4175 Condition 2 is specified using one of:
4176
4177 `--tracepoint`::
4178 Tracepoint.
4179
4180 `--probe`::
4181 Dynamic probe (address, symbol name or combination
4182 of both in binary program; only valid for kernel domain).
4183
4184 `--function`::
Function entry/exit (address, symbol name or
4186 combination of both in binary program; only valid for kernel domain).
4187
4188 `--syscall`::
4189 System call entry/exit (only valid for kernel domain).
4190
4191 When none of the above is specified, `enable-event` defaults to
4192 using `--tracepoint`.
4193
4194 Condition 3 is specified using one of:
4195
4196 `--loglevel`::
4197 Log level range from the specified level to the most severe
4198 level.
4199
4200 `--loglevel-only`::
4201 Specific log level.
4202
4203 See `lttng enable-event --help` for the complete list of log level
4204 names.
4205
4206 Condition 4 is specified using the `--filter` option. This filter is
4207 a C-like expression, potentially reading real-time values of event
4208 fields, that has to evaluate to _true_ for the condition to be satisfied.
4209 Event fields are read using plain identifiers while context fields
4210 must be prefixed with `$ctx.`. See `lttng enable-event --help` for
4211 all usage details.
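
For instance, a filter combining an event field and a context field
could look like the following sketch (the tracepoint and field names
are illustrative):

[role="term"]
----
lttng enable-event --userspace my_app:my_tracepoint \
    --filter 'some_field > 10 && $ctx.vtid == 1234'
----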
4212
4213 The aforementioned arguments are combined to create and enable events.
4214 Each unique combination of arguments leads to a different
4215 _enabled event_. The log level and filter arguments are optional, their
4216 default values being respectively all log levels and a filter which
4217 always returns _true_.
4218
4219 Here are a few examples (you must
4220 <<creating-destroying-tracing-sessions,create a tracing session>>
4221 first):
4222
4223 [role="term"]
4224 ----
4225 lttng enable-event -u --tracepoint my_app:hello_world
4226 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
4227 lttng enable-event -u --tracepoint 'my_other_app:*'
4228 lttng enable-event -u --tracepoint my_app:foo_bar \
4229 --filter 'some_field <= 23 && !other_field'
4230 lttng enable-event -k --tracepoint sched_switch
4231 lttng enable-event -k --tracepoint gpio_value
4232 lttng enable-event -k --function usb_probe_device usb_probe_device
4233 lttng enable-event -k --syscall --all
4234 ----
4235
4236 The wildcard symbol, `*`, matches _anything_ and may only be used at
4237 the end of the string when specifying a _tracepoint_. Make sure to
4238 use it between single quotes in your favorite shell to avoid
4239 undesired shell expansion.
4240
4241 System call events can be enabled individually, too:
4242
4243 [role="term"]
4244 ----
4245 lttng enable-event -k --syscall open
4246 lttng enable-event -k --syscall read
4247 lttng enable-event -k --syscall fork,chdir,pipe
4248 ----
4249
4250 The complete list of available system call events can be
4251 obtained using
4252
4253 [role="term"]
4254 ----
4255 lttng list --kernel --syscall
4256 ----
4257
4258 You can see a list of events (enabled or disabled) using
4259
4260 [role="term"]
4261 ----
4262 lttng list some-session
4263 ----
4264
4265 where `some-session` is the name of the desired tracing session.
4266
4267 What you're actually doing when enabling events with specific conditions
4268 is creating a **whitelist** of traceable events for a given channel.
4269 Thus, the following case presents redundancy:
4270
4271 [role="term"]
4272 ----
4273 lttng enable-event -u --tracepoint my_app:hello_you
4274 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG
4275 ----
4276
4277 The second command, matching a log level range, is useless since the first
4278 command enables all tracepoints matching the same name,
4279 `my_app:hello_you`.
4280
4281 Disabling an event is simpler: you only need to provide the event
4282 name to the `disable-event` command:
4283
4284 [role="term"]
4285 ----
4286 lttng disable-event --userspace my_app:hello_you
4287 ----
4288
4289 This name has to match a name previously given to `enable-event` (it
4290 has to be listed in the output of `lttng list some-session`).
4291 The `*` wildcard is supported, as long as you also used it in a
4292 previous `enable-event` invocation.
4293
4294 Disabling an event does not add it to some blacklist: it simply removes
4295 it from its channel's whitelist. This is why you cannot disable an event
4296 which wasn't previously enabled.
4297
4298 A disabled event doesn't generate any trace data, even if all its
4299 specified conditions are met.
4300
4301 Events may be enabled and disabled at will, either when LTTng tracers
4302 are active or not. Events may be enabled before a user space application
4303 is even started.
4304
4305
4306 [[basic-tracing-session-control]]
4307 ==== Basic tracing session control
4308
4309 Once you have
4310 <<creating-destroying-tracing-sessions,created a tracing session>>
4311 and <<enabling-disabling-events,enabled one or more events>>,
4312 you may activate the LTTng tracers for the current tracing session at
4313 any time:
4314
4315 [role="term"]
4316 ----
4317 lttng start
4318 ----
4319
4320 Subsequently, you may stop the tracers:
4321
4322 [role="term"]
4323 ----
4324 lttng stop
4325 ----
4326
4327 LTTng is very flexible: user space applications may be launched before
4328 or after the tracers are started. Events are only recorded if they
4329 are properly enabled and if they occur while tracers are active.
4330
4331 A tracing session name may be passed to both the `start` and `stop`
4332 commands to start/stop tracing a session other than the current one.
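
For example, assuming the tracing session `other-session` created
earlier exists:

[role="term"]
----
lttng start other-session
lttng stop other-session
----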
4333
4334
4335 [[enabling-disabling-channels]]
4336 ==== Enabling and disabling channels
4337
4338 <<event,As mentioned>> in the
4339 <<understanding-lttng,Understanding LTTng>> chapter, enabled
4340 events are contained in a specific channel, itself contained in a
4341 specific tracing session. A channel is a group of events with
4342 tunable parameters (event loss mode, sub-buffer size, number of
4343 sub-buffers, trace file sizes and count, to name a few). A given channel
4344 may only be responsible for enabled events belonging to one domain:
4345 either kernel or user space.
4346
4347 If you only used the `create`, `enable-event` and `start`/`stop`
4348 commands of the `lttng` tool so far, one or two channels were
4349 automatically created for you (one for the kernel domain and/or one
4350 for the user space domain). The default channels are both named
4351 `channel0`; channels from different domains may have the same name.
4352
4353 The current channels of a given tracing session can be viewed with
4354
4355 [role="term"]
4356 ----
4357 lttng list some-session
4358 ----
4359
4360 where `some-session` is the name of the desired tracing session.
4361
4362 To create and enable a channel, use the `enable-channel` command:
4363
4364 [role="term"]
4365 ----
4366 lttng enable-channel --kernel my-channel
4367 ----
4368
4369 This creates a kernel domain channel named `my-channel` with
4370 default parameters in the current tracing session.
4371
4372 [NOTE]
4373 ====
4374 Because of a current limitation, all
4375 channels must be _created_ prior to beginning tracing in a
4376 given tracing session, that is before the first time you do
4377 `lttng start`.
4378
4379 Since a channel is automatically created by
4380 `enable-event` only for the specified domain, you cannot,
4381 for example, enable a kernel domain event, start tracing and then
4382 enable a user space domain event because no user space channel
4383 exists yet and it's too late to create one.
4384
4385 For this reason, make sure to configure your channels properly
4386 before starting the tracers for the first time!
4387 ====
4388
4389 Here's another example:
4390
4391 [role="term"]
4392 ----
4393 lttng enable-channel --userspace --session other-session --overwrite \
4394 --tracefile-size 1048576 1mib-channel
4395 ----
4396
This creates a user space domain channel named `1mib-channel` in
the tracing session named `other-session` which, when its sub-buffers
are full, overwrites the oldest recorded events with the new ones
(instead of the default mode of discarding the newest events) and
saves trace files with a maximum size of 1{nbsp}MiB each.
4402
4403 Note that channels may also be created using the `--channel` option of
4404 the `enable-event` command when the provided channel name doesn't exist
4405 for the specified domain:
4406
4407 [role="term"]
4408 ----
4409 lttng enable-event --kernel --channel some-channel sched_switch
4410 ----
4411
4412 If no kernel domain channel named `some-channel` existed before calling
4413 the above command, it would be created with default parameters.
4414
4415 You may enable the same event in two different channels:
4416
4417 [role="term"]
4418 ----
4419 lttng enable-event --userspace --channel my-channel app:tp
4420 lttng enable-event --userspace --channel other-channel app:tp
4421 ----
4422
4423 If both channels are enabled, the occurring `app:tp` event
4424 generates two recorded events, one for each channel.
4425
Disabling a channel is done with the `disable-channel` command:
4427
4428 [role="term"]
4429 ----
lttng disable-channel --kernel some-channel
4431 ----
4432
4433 The state of a channel precedes the individual states of events within
4434 it: events belonging to a disabled channel, even if they are
4435 enabled, won't be recorded.
4436
4437
4438
4439 [[fine-tuning-channels]]
4440 ===== Fine-tuning channels
4441
4442 There are various parameters that may be fine-tuned with the
4443 `enable-channel` command. The latter are well documented in
4444 man:lttng(1) and in the <<channel,Channel>> section of the
4445 <<understanding-lttng,Understanding LTTng>> chapter. For basic
4446 tracing needs, their default values should be just fine, but here are a
4447 few examples to break the ice.
4448
As the frequency of recorded events increases, either because the
event throughput is actually higher or because you enabled more events
than usual, _event loss_ might be experienced. Since LTTng never
4452 waits, by design, for sub-buffer space availability (non-blocking
4453 tracer), when a sub-buffer is full and no empty sub-buffers are left,
4454 there are two possible outcomes: either the new events that do not fit
4455 are rejected, or they start replacing the oldest recorded events.
4456 The choice of which algorithm to use is a per-channel parameter, the
4457 default being discarding the newest events until there is some space
4458 left. If your situation always needs the latest events at the expense
4459 of writing over the oldest ones, create a channel with the `--overwrite`
4460 option:
4461
4462 [role="term"]
4463 ----
4464 lttng enable-channel --kernel --overwrite my-channel
4465 ----
4466
4467 When an event is lost, it means no space was available in any
4468 sub-buffer to accommodate it. Thus, if you want to cope with sporadic
4469 high event throughput situations and avoid losing events, you need to
4470 allocate more room for storing them in memory. This can be done by
4471 either increasing the size of sub-buffers or by adding sub-buffers.
4472 The following example creates a user space domain channel with
4473 16{nbsp}sub-buffers of 512{nbsp}kiB each:
4474
4475 [role="term"]
4476 ----
4477 lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel
4478 ----
4479
Both values need to be powers of two; otherwise, they are rounded up
to the next power of two.
4482
4483 Two other interesting available parameters of `enable-channel` are
4484 `--tracefile-size` and `--tracefile-count`, which respectively limit
the size of each trace file and their count for a given channel.
4486 When the number of written trace files reaches its limit for a given
4487 channel-CPU pair, the next trace file overwrites the very first
4488 one. The following example creates a kernel domain channel with a
4489 maximum of three trace files of 1{nbsp}MiB each:
4490
4491 [role="term"]
4492 ----
4493 lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel
4494 ----
4495
4496 An efficient way to make sure lots of events are generated is enabling
4497 all kernel events in this channel and starting the tracer:
4498
4499 [role="term"]
4500 ----
4501 lttng enable-event --kernel --all --channel my-channel
4502 lttng start
4503 ----
4504
4505 After a few seconds, look at trace files in your tracing session
4506 output directory. For two CPUs, it should look like:
4507
4508 ----
4509 my-channel_0_0 my-channel_1_0
4510 my-channel_0_1 my-channel_1_1
4511 my-channel_0_2 my-channel_1_2
4512 ----
4513
Amongst the files above, you might see one file in each group with a
size lower than 1{nbsp}MiB: those are the files currently being written.
4516
4517 Since all those small files are valid LTTng trace files, LTTng trace
4518 viewers may read them. It is the viewer's responsibility to properly
4519 merge the streams so as to present an ordered list to the user.
4520 http://diamon.org/babeltrace[Babeltrace]
4521 merges LTTng trace files correctly and is fast at doing it.
4522
4523
4524 [[adding-context]]
4525 ==== Adding some context to channels
4526
4527 If you read all the sections of
4528 <<controlling-tracing,Controlling tracing>> so far, you should be
4529 able to create tracing sessions, create and enable channels and events
4530 within them and start/stop the LTTng tracers. Event fields recorded in
4531 trace files provide important information about occurring events, but
4532 sometimes external context may help you solve a problem faster. This
4533 section discusses how to add context information to events of a
4534 specific channel using the `lttng` tool.
4535
4536 There are various available context values which can accompany events
4537 recorded by LTTng, for example:
4538
4539 * **process information**:
4540 ** identifier (PID)
4541 ** name
4542 ** priority
4543 ** scheduling priority (niceness)
4544 ** thread identifier (TID)
4545 * the **hostname** of the system on which the event occurred
4546 * plenty of **performance counters** using perf, for example:
4547 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types
4548 ** cache misses
4549 ** branch instructions, misses, loads
4550 ** CPU faults
4551
4552 The full list is available in the output of `lttng add-context --help`.
4553 Some of them are reserved for a specific domain (kernel or
4554 user space) while others are available for both.
4555
4556 To add context information to one or all channels of a given tracing
4557 session, use the `add-context` command:
4558
4559 [role="term"]
4560 ----
4561 lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles
4562 ----
4563
4564 The above example adds the virtual process identifier and per-thread
4565 CPU cycles count values to all recorded user space domain events of the
4566 current tracing session. Use the `--channel` option to select a specific
4567 channel:
4568
4569 [role="term"]
4570 ----
4571 lttng add-context --kernel --channel my-channel --type tid
4572 ----
4573
4574 adds the thread identifier value to all recorded kernel domain events
4575 in the channel `my-channel` of the current tracing session.
4576
4577 Beware that context information cannot be removed from channels once
4578 it's added for a given tracing session.
4579
4580
4581 [role="since-2.5"]
4582 [[saving-loading-tracing-session]]
4583 ==== Saving and loading tracing session configurations
4584
4585 Configuring a tracing session may be long: creating and enabling
4586 channels with specific parameters, enabling kernel and user space
4587 domain events with specific log levels and filters, and adding context
4588 to some channels are just a few of the many possible operations using
4589 the `lttng` command line tool. If you're going to use LTTng to solve real
4590 world problems, chances are you're going to have to record events using
4591 the same tracing session setup over and over, modifying a few variables
4592 each time in your instrumented program or environment. To avoid
4593 constant tracing session reconfiguration, the `lttng` tool is able to
4594 save and load tracing session configurations to/from XML files.
4595
4596 To save a given tracing session configuration, do:
4597
4598 [role="term"]
4599 ----
4600 lttng save my-session
4601 ----
4602
4603 where `my-session` is the name of the tracing session to save. Tracing
4604 session configurations are saved to dir:{~/.lttng/sessions} by default;
4605 use the `--output-path` option to change this destination directory.
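
For example (the output path is illustrative):

[role="term"]
----
lttng save --output-path /path/to/sessions my-session
----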
4606
4607 All configuration parameters are saved:
4608
4609 * tracing session name
4610 * trace data output path
4611 * channels with their state and all their parameters
4612 * context information added to channels
4613 * events with their state, log level and filter
4614 * tracing activity (started or stopped)
4615
4616 To load a tracing session, simply do:
4617
4618 [role="term"]
4619 ----
4620 lttng load my-session
4621 ----
4622
4623 or, if you used a custom path:
4624
4625 [role="term"]
4626 ----
4627 lttng load --input-path /path/to/my-session.lttng
4628 ----
4629
4630 Your saved tracing session is restored as if you just configured
4631 it manually.
4632
4633
4634 [[sending-trace-data-over-the-network]]
4635 ==== Sending trace data over the network
4636
4637 The possibility of sending trace data over the network comes as a
4638 built-in feature of LTTng-tools. For this to be possible, an LTTng
4639 _relay daemon_ must be executed and listening on the machine where
4640 trace data is to be received, and the user must create a tracing
4641 session using appropriate options to forward trace data to the remote
4642 relay daemon.
4643
4644 The relay daemon listens on two different TCP ports: one for control
4645 information and the other for actual trace data.
4646
4647 Starting the relay daemon on the remote machine is easy:
4648
4649 [role="term"]
4650 ----
4651 lttng-relayd
4652 ----
4653
4654 This makes it listen to its default ports: 5342 for control and
4655 5343 for trace data. The `--control-port` and `--data-port` options may
4656 be used to specify different ports.
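
For example, to make the relay daemon listen on other ports (the port
numbers here are arbitrary; the URL form of the arguments is described
in the `lttng-relayd` manpage):

[role="term"]
----
lttng-relayd --control-port tcp://0.0.0.0:5400 --data-port tcp://0.0.0.0:5401
----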
4657
4658 Traces written by `lttng-relayd` are written to
4659 +\~/lttng-traces/__hostname__/__session__+ by
4660 default, where +__hostname__+ is the host name of the
4661 traced (monitored) system and +__session__+ is the
4662 tracing session name. Use the `--output` option to write trace data
4663 outside dir:{~/lttng-traces}.
4664
4665 On the sending side, a tracing session must be created using the
4666 `lttng` tool with the `--set-url` option to connect to the distant
4667 relay daemon:
4668
4669 [role="term"]
4670 ----
4671 lttng create my-session --set-url net://distant-host
4672 ----
4673
4674 The URL format is described in the output of `lttng create --help`.
4675 The above example uses the default ports; the `--ctrl-url` and
4676 `--data-url` options may be used to set the control and data URLs
4677 individually.
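
For example, assuming the relay daemon listens on the custom ports
shown earlier (host name and ports are illustrative and must match the
relay daemon's configuration):

[role="term"]
----
lttng create my-session --ctrl-url tcp://distant-host:5400 \
    --data-url tcp://distant-host:5401
----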
4678
4679 Once this basic setup is completed and the connection is established,
4680 you may use the `lttng` tool on the target machine as usual; everything
4681 you do is transparently forwarded to the remote machine if needed.
4682 For example, a parameter changing the maximum size of trace files
4683 only has an effect on the distant relay daemon actually writing
4684 the trace.
4685
4686
4687 [role="since-2.4"]
4688 [[lttng-live]]
4689 ==== Viewing events as they arrive
4690
4691 We have seen how trace files may be produced by LTTng out of generated
4692 application and Linux kernel events. We have seen that those trace files
4693 may be either recorded locally by consumer daemons or remotely using
4694 a relay daemon. And we have seen that the maximum size and count of
4695 trace files is configurable for each channel. With all those features,
4696 it's still not possible to read a trace file as it is being written
4697 because it could be incomplete and appear corrupted to the viewer.
4698 There is a way to view events as they arrive, however: using
4699 _LTTng live_.
4700
4701 LTTng live is implemented, in LTTng, solely on the relay daemon side.
4702 As trace data is sent over the network to a relay daemon by a (possibly
4703 remote) consumer daemon, a _tee_ is created: trace data is recorded to
4704 trace files _as well as_ being transmitted to a connected live viewer:
4705
4706 [role="img-90"]
4707 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
4708 image::lttng-live.png[]
4709
In order to use this feature, a tracing session must be created in live
mode on the target system:
4712
4713 [role="term"]
4714 ----
4715 lttng create --live
4716 ----
4717
4718 An optional parameter may be passed to `--live` to set the period
4719 (in microseconds) between flushes to the network
4720 (1{nbsp}second is the default). With:
4721
4722 [role="term"]
4723 ----
4724 lttng create --live 100000
4725 ----
4726
4727 the daemons flush their data every 100{nbsp}ms.
4728
4729 If no network output is specified to the `create` command, a local
4730 relay daemon is spawned. In this very common case, viewing a live
4731 trace is easy: enable events and start tracing as usual, then use
4732 `lttng view` to start the default live viewer:
4733
4734 [role="term"]
4735 ----
4736 lttng view
4737 ----
4738
4739 The correct arguments are passed to the live viewer so that it
4740 may connect to the local relay daemon and start reading live events.
4741
You may also wish to use a live viewer not running on the target
system. In this case, you should specify a network output when using
the `create` command (`--set-url` or `--ctrl-url`/`--data-url` options).
A distant LTTng relay daemon should also be started to receive control
and trace data. By default, `lttng-relayd` listens on TCP port 5344
for LTTng live connections; a different listening URL may be
specified using its `--live-port` option.
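
For example, the following invocation makes the relay daemon listen for
live viewer connections on a different, arbitrarily chosen TCP port
(this sketch assumes the +tcp://__address__:__port__+ URL form):

[role="term"]
----
lttng-relayd --live-port tcp://0.0.0.0:4567
----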
4749
4750 The
4751 http://diamon.org/babeltrace[`babeltrace`]
4752 viewer supports LTTng live as one of its input formats. `babeltrace` is
4753 the default viewer when using `lttng view`. To use it manually, first
4754 list active tracing sessions by doing the following (assuming the relay
4755 daemon to connect to runs on the same host):
4756
4757 [role="term"]
4758 ----
4759 babeltrace --input-format lttng-live net://localhost
4760 ----
4761
4762 Then, choose a tracing session and start viewing events as they arrive
4763 using LTTng live:
4764
4765 [role="term"]
4766 ----
4767 babeltrace --input-format lttng-live net://localhost/host/hostname/my-session
4768 ----
4769
4770
4771 [role="since-2.3"]
4772 [[taking-a-snapshot]]
4773 ==== Taking a snapshot
4774
4775 The normal behavior of LTTng is to record trace data as trace files.
4776 This is ideal for keeping a long history of events that occurred on
the target system and applications, but may produce too much data in some
4778 situations. For example, you may wish to trace your application
4779 continuously until some critical situation happens, in which case you
4780 would only need the latest few recorded events to perform the desired
4781 analysis, not multi-gigabyte trace files.
4782
4783 LTTng has an interesting feature called _snapshots_. When creating
4784 a tracing session in snapshot mode, no trace files are written; the
4785 tracers' sub-buffers are constantly overwriting the oldest recorded
events with the newest. At any time, whether the tracers are started
or stopped, you may take a snapshot of those sub-buffers.
4788
4789 There is no difference between the format of a normal trace file and the
4790 format of a snapshot: viewers of LTTng traces also support LTTng
4791 snapshots. By default, snapshots are written to disk, but they may also
4792 be sent over the network.
4793
4794 To create a tracing session in snapshot mode, do:
4795
4796 [role="term"]
4797 ----
4798 lttng create --snapshot my-snapshot-session
4799 ----
4800
Next, enable channels and events, and add context to channels as usual.
4802 Once a tracing session is created in snapshot mode, channels are
4803 forced to use the
4804 <<channel-overwrite-mode-vs-discard-mode,overwrite>> mode
4805 (`--overwrite` option of the `enable-channel` command; also called
4806 _flight recorder mode_) and have an `mmap()` channel type
4807 (`--output mmap`).
4808
4809 Start tracing. When you're ready to take a snapshot, do:
4810
4811 [role="term"]
4812 ----
4813 lttng snapshot record --name my-snapshot
4814 ----
4815
4816 This records a snapshot named `my-snapshot` of all channels of
all domains of the current tracing session. By default, snapshot files
4818 are recorded in the path returned by `lttng snapshot list-output`. You
4819 may change this path or decide to send snapshots over the network
4820 using either:
4821
4822 . an output path/URL specified when creating the tracing session
4823 (`lttng create`)
4824 . an added snapshot output path/URL using
4825 `lttng snapshot add-output`
4826 . an output path/URL provided directly to the
4827 `lttng snapshot record` command
4828
Method 3 overrides method 2, which overrides method 1, as shown in the
example below. When specifying a URL, a relay daemon must be listening
on some machine (see
<<sending-trace-data-over-the-network,Sending trace data over the network>>).
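
For example, assuming a relay daemon runs on a hypothetical host named
`my-relay-host`, you could add a network snapshot output (method 2
above), then record a snapshot to it:

[role="term"]
----
lttng snapshot add-output net://my-relay-host
lttng snapshot record --name my-snapshot
----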
4832
If you need to make sure that the recorded snapshot won't be
larger than a certain limit, you can set a maximum snapshot size when
taking it with the `--max-size` option:
4836
4837 [role="term"]
4838 ----
4839 lttng snapshot record --name my-snapshot --max-size 2M
4840 ----
4841
4842 Older recorded events are discarded in order to respect this
4843 maximum size.
4844
4845
4846 [role="since-2.6"]
4847 [[mi]]
4848 ==== Machine interface
4849
The `lttng` tool aims to provide command output that is as human-readable
as possible. While this output is easy for a human being to parse,
machines have a hard time with it.
4853
This is why the `lttng` tool provides the general `--mi` option, whose
argument selects a machine interface output format. As of the latest
LTTng stable release, only the `xml` format is supported. A schema
4857 definition (XSD) is made
4858 https://github.com/lttng/lttng-tools/blob/master/src/common/mi_lttng.xsd[available]
4859 to ease the integration with external tools as much as possible.
4860
4861 The `--mi` option can be used in conjunction with all `lttng` commands.
4862 Here are some examples:
4863
4864 [role="term"]
4865 ----
4866 lttng --mi xml create some-session
4867 lttng --mi xml list some-session
4868 lttng --mi xml list --kernel
4869 lttng --mi xml enable-event --kernel --syscall open
4870 lttng --mi xml start
4871 ----
4872
4873
4874 [[reference]]
4875 == Reference
4876
4877 This chapter presents various references for LTTng packages such as links
4878 to online manpages, tables needed by the rest of the text, descriptions
4879 of library functions, and more.
4880
4881
4882 [[online-lttng-manpages]]
4883 === Online LTTng manpages
4884
4885 LTTng packages currently install the following link:/man[man pages],
4886 available online using the links below:
4887
4888 * **LTTng-tools**
4889 ** man:lttng(1)
4890 ** man:lttng-sessiond(8)
4891 ** man:lttng-relayd(8)
4892 * **LTTng-UST**
4893 ** man:lttng-gen-tp(1)
4894 ** man:lttng-ust(3)
4895 ** man:lttng-ust-cyg-profile(3)
4896 ** man:lttng-ust-dl(3)
4897
4898
4899 [[lttng-ust-ref]]
4900 === LTTng-UST
4901
4902 This section presents references of the LTTng-UST package.
4903
4904
4905 [[liblttng-ust]]
4906 ==== LTTng-UST library (+liblttng&#8209;ust+)
4907
4908 The LTTng-UST library, or `liblttng-ust`, is the main shared object
4909 against which user applications are linked to make LTTng user space
4910 tracing possible.
4911
4912 The <<c-application,C application>> guide shows the complete
4913 process to instrument, build and run a C/$$C++$$ application using
4914 LTTng-UST, while this section contains a few important tables.
4915
4916
4917 [[liblttng-ust-tp-fields]]
4918 ===== Tracepoint fields macros (for `TP_FIELDS()`)
4919
4920 The available macros to define tracepoint fields, which should be listed
4921 within `TP_FIELDS()` in `TRACEPOINT_EVENT()`, are:
4922
4923 [role="growable func-desc",cols="asciidoc,asciidoc"]
4924 .Available macros to define LTTng-UST tracepoint fields
4925 |====
4926 |Macro |Description and parameters
4927
4928 |
4929 +ctf_integer(__t__, __n__, __e__)+
4930
4931 +ctf_integer_nowrite(__t__, __n__, __e__)+
4932 |
4933 Standard integer, displayed in base 10.
4934
4935 +__t__+::
4936 Integer C type (`int`, `long`, `size_t`, ...).
4937
4938 +__n__+::
4939 Field name.
4940
4941 +__e__+::
4942 Argument expression.
4943
4944 |+ctf_integer_hex(__t__, __n__, __e__)+
4945 |
4946 Standard integer, displayed in base 16.
4947
4948 +__t__+::
4949 Integer C type.
4950
4951 +__n__+::
4952 Field name.
4953
4954 +__e__+::
4955 Argument expression.
4956
4957 |+ctf_integer_network(__t__, __n__, __e__)+
4958 |
4959 Integer in network byte order (big endian), displayed in base 10.
4960
4961 +__t__+::
4962 Integer C type.
4963
4964 +__n__+::
4965 Field name.
4966
4967 +__e__+::
4968 Argument expression.
4969
4970 |+ctf_integer_network_hex(__t__, __n__, __e__)+
4971 |
4972 Integer in network byte order, displayed in base 16.
4973
4974 +__t__+::
4975 Integer C type.
4976
4977 +__n__+::
4978 Field name.
4979
4980 +__e__+::
4981 Argument expression.
4982
4983 |
4984 +ctf_float(__t__, __n__, __e__)+
4985
4986 +ctf_float_nowrite(__t__, __n__, __e__)+
4987 |
4988 Floating point number.
4989
4990 +__t__+::
4991 Floating point number C type (`float` or `double`).
4992
4993 +__n__+::
4994 Field name.
4995
4996 +__e__+::
4997 Argument expression.
4998
4999 |
5000 +ctf_string(__n__, __e__)+
5001
5002 +ctf_string_nowrite(__n__, __e__)+
5003 |
5004 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
5005
5006 +__n__+::
5007 Field name.
5008
5009 +__e__+::
5010 Argument expression.
5011
5012 |
5013 +ctf_array(__t__, __n__, __e__, __s__)+
5014
5015 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
5016 |
Statically-sized array of integers.
5018
5019 +__t__+::
5020 Array element C type.
5021
5022 +__n__+::
5023 Field name.
5024
5025 +__e__+::
5026 Argument expression.
5027
5028 +__s__+::
5029 Number of elements.
5030
5031 |
5032 +ctf_array_text(__t__, __n__, __e__, __s__)+
5033
5034 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
5035 |
5036 Statically-sized array, printed as text.
5037
5038 The string does not need to be null-terminated.
5039
5040 +__t__+::
5041 Array element C type (always `char`).
5042
5043 +__n__+::
5044 Field name.
5045
5046 +__e__+::
5047 Argument expression.
5048
5049 +__s__+::
5050 Number of elements.
5051
5052 |
5053 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
5054
5055 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
5056 |
5057 Dynamically-sized array of integers.
5058
5059 The type of +__E__+ needs to be unsigned.
5060
5061 +__t__+::
5062 Array element C type.
5063
5064 +__n__+::
5065 Field name.
5066
5067 +__e__+::
5068 Argument expression.
5069
5070 +__T__+::
5071 Length expression C type.
5072
5073 +__E__+::
5074 Length expression.
5075
5076 |
5077 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
5078
5079 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
5080 |
5081 Dynamically-sized array, displayed as text.
5082
5083 The string does not need to be null-terminated.
5084
5085 The type of +__E__+ needs to be unsigned.
5086
5087 The behaviour is undefined if +__e__+ is `NULL`.
5088
5089 +__t__+::
5090 Sequence element C type (always `char`).
5091
5092 +__n__+::
5093 Field name.
5094
5095 +__e__+::
5096 Argument expression.
5097
5098 +__T__+::
5099 Length expression C type.
5100
5101 +__E__+::
5102 Length expression.
5103 |====
5104
The `_nowrite` versions are identical to their counterparts, except that
the field is not written to the recorded trace. Their primary purpose is
to make some of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.
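
As an illustration, here is a minimal sketch of a tracepoint definition
using a few of the macros above; the provider name, tracepoint name,
argument names and field names are all hypothetical:

[source,c]
----
TRACEPOINT_EVENT(
    /* Tracepoint provider name (hypothetical) */
    my_provider,

    /* Tracepoint name (hypothetical) */
    my_tracepoint,

    /* Input arguments of the tracepoint */
    TP_ARGS(
        int, request_id,
        const char *, url,
        double, duration
    ),

    /* Fields recorded in the trace */
    TP_FIELDS(
        ctf_integer(int, request_id, request_id)
        ctf_integer_hex(int, request_id_hex, request_id)
        ctf_string(url, url)
        ctf_float(double, duration, duration)

        /* Available to event filters, but not written to the trace */
        ctf_integer_nowrite(int, request_id_filter, request_id)
    )
)
----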
5111
5112
5113 [[liblttng-ust-tracepoint-loglevel]]
5114 ===== Tracepoint log levels (for `TRACEPOINT_LOGLEVEL()`)
5115
5116 The following table shows the available log level values for the
5117 `TRACEPOINT_LOGLEVEL()` macro:
5118
5119 `TRACE_EMERG`::
5120 System is unusable.
5121
5122 `TRACE_ALERT`::
5123 Action must be taken immediately.
5124
5125 `TRACE_CRIT`::
5126 Critical conditions.
5127
5128 `TRACE_ERR`::
5129 Error conditions.
5130
5131 `TRACE_WARNING`::
5132 Warning conditions.
5133
5134 `TRACE_NOTICE`::
5135 Normal, but significant, condition.
5136
5137 `TRACE_INFO`::
5138 Informational message.
5139
5140 `TRACE_DEBUG_SYSTEM`::
5141 Debug information with system-level scope (set of programs).
5142
5143 `TRACE_DEBUG_PROGRAM`::
5144 Debug information with program-level scope (set of processes).
5145
5146 `TRACE_DEBUG_PROCESS`::
5147 Debug information with process-level scope (set of modules).
5148
5149 `TRACE_DEBUG_MODULE`::
5150 Debug information with module (executable/library) scope (set of units).
5151
5152 `TRACE_DEBUG_UNIT`::
5153 Debug information with compilation unit scope (set of functions).
5154
5155 `TRACE_DEBUG_FUNCTION`::
5156 Debug information with function-level scope.
5157
5158 `TRACE_DEBUG_LINE`::
5159 Debug information with line-level scope (TRACEPOINT_EVENT default).
5160
5161 `TRACE_DEBUG`::
5162 Debug-level message.
5163
5164 Log levels `TRACE_EMERG` through `TRACE_INFO` and `TRACE_DEBUG` match
5165 http://man7.org/linux/man-pages/man3/syslog.3.html[syslog]
5166 level semantics. Log levels `TRACE_DEBUG_SYSTEM` through `TRACE_DEBUG`
5167 offer more fine-grained selection of debug information.
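
For example, assuming the hypothetical provider and tracepoint names
used above, a log level is assigned to a tracepoint like this:

[source,c]
----
/*
 * Assign the TRACE_DEBUG_UNIT log level to the my_provider:my_tracepoint
 * tracepoint (hypothetical names), after its TRACEPOINT_EVENT() definition.
 */
TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
----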
5168
5169
5170 [[lttng-modules-ref]]
5171 === LTTng-modules
5172
5173 This section presents references of the LTTng-modules package.
5174
5175
5176 [[lttng-modules-tp-struct-entry]]
5177 ==== Tracepoint fields macros (for `TP_STRUCT__entry()`)
5178
5179 This table describes possible entries for the `TP_STRUCT__entry()` part
5180 of `LTTNG_TRACEPOINT_EVENT()`:
5181
5182 [role="growable func-desc",cols="asciidoc,asciidoc"]
5183 .Available entries for `TP_STRUCT__entry()` (in `LTTNG_TRACEPOINT_EVENT()`)
5184 |====
5185 |Macro |Description and parameters
5186
5187 |+\__field(__t__, __n__)+
5188 |
5189 Standard integer, displayed in base 10.
5190
5191 +__t__+::
5192 Integer C type (`int`, `unsigned char`, `size_t`, ...).
5193
5194 +__n__+::
5195 Field name.
5196
5197 |+\__field_hex(__t__, __n__)+
5198 |
5199 Standard integer, displayed in base 16.
5200
5201 +__t__+::
5202 Integer C type.
5203
5204 +__n__+::
5205 Field name.
5206
5207 |+\__field_oct(__t__, __n__)+
5208 |
5209 Standard integer, displayed in base 8.
5210
5211 +__t__+::
5212 Integer C type.
5213
5214 +__n__+::
5215 Field name.
5216
5217 |+\__field_network(__t__, __n__)+
5218 |
5219 Integer in network byte order (big endian), displayed in base 10.
5220
5221 +__t__+::
5222 Integer C type.
5223
5224 +__n__+::
5225 Field name.
5226
5227 |+\__field_network_hex(__t__, __n__)+
5228 |
5229 Integer in network byte order (big endian), displayed in base 16.
5230
5231 +__t__+::
5232 Integer C type.
5233
5234 +__n__+::
5235 Field name.
5236
5237 |+\__array(__t__, __n__, __s__)+
5238 |
5239 Statically-sized array, elements displayed in base 10.
5240
5241 +__t__+::
5242 Array element C type.
5243
5244 +__n__+::
5245 Field name.
5246
5247 +__s__+::
5248 Number of elements.
5249
5250 |+\__array_hex(__t__, __n__, __s__)+
5251 |
5252 Statically-sized array, elements displayed in base 16.
5253
+__t__+::
  Array element C type.

+__n__+::
  Field name.

+__s__+::
  Number of elements.
5260
5261 |+\__array_text(__t__, __n__, __s__)+
5262 |
5263 Statically-sized array, displayed as text.
5264
5265 +__t__+::
5266 Array element C type (always char).
5267
5268 +__n__+::
5269 Field name.
5270
5271 +__s__+::
5272 Number of elements.
5273
5274 |+\__dynamic_array(__t__, __n__, __s__)+
5275 |
5276 Dynamically-sized array, displayed in base 10.
5277
5278 +__t__+::
5279 Array element C type.
5280
5281 +__n__+::
5282 Field name.
5283
5284 +__s__+::
5285 Length C expression.
5286
5287 |+\__dynamic_array_hex(__t__, __n__, __s__)+
5288 |
5289 Dynamically-sized array, displayed in base 16.
5290
5291 +__t__+::
5292 Array element C type.
5293
5294 +__n__+::
5295 Field name.
5296
5297 +__s__+::
5298 Length C expression.
5299
5300 |+\__dynamic_array_text(__t__, __n__, __s__)+
5301 |
5302 Dynamically-sized array, displayed as text.
5303
5304 +__t__+::
5305 Array element C type (always char).
5306
5307 +__n__+::
5308 Field name.
5309
5310 +__s__+::
5311 Length C expression.
5312
|+\__string(__n__, __s__)+
|
Null-terminated string.

The behaviour is undefined if +__s__+ is `NULL`.
5318
5319 +__n__+::
5320 Field name.
5321
5322 +__s__+::
5323 String source (pointer).
5324 |====
5325
5326 The above macros should cover the majority of cases. For advanced items,
5327 see path:{probes/lttng-events.h}.
5328
5329
5330 [[lttng-modules-tp-fast-assign]]
5331 ==== Tracepoint assignment macros (for `TP_fast_assign()`)
5332
5333 This table describes possible entries for the `TP_fast_assign()` part
5334 of `LTTNG_TRACEPOINT_EVENT()`:
5335
5336 [role="growable func-desc",cols="asciidoc,asciidoc"]
5337 .Available entries for `TP_fast_assign()` (in `LTTNG_TRACEPOINT_EVENT()`)
5338 |====
5339 |Macro |Description and parameters
5340
5341 |+tp_assign(__d__, __s__)+
5342 |
5343 Assignment of C expression +__s__+ to tracepoint field +__d__+.
5344
5345 +__d__+::
5346 Name of destination tracepoint field.
5347
5348 +__s__+::
5349 Source C expression (may refer to tracepoint arguments).
5350
5351 |+tp_memcpy(__d__, __s__, __l__)+
5352 |
5353 Memory copy of +__l__+ bytes from +__s__+ to tracepoint field
5354 +__d__+ (use with array fields).
5355
5356 +__d__+::
5357 Name of destination tracepoint field.
5358
5359 +__s__+::
5360 Source C expression (may refer to tracepoint arguments).
5361
5362 +__l__+::
5363 Number of bytes to copy.
5364
5365 |+tp_memcpy_from_user(__d__, __s__, __l__)+
5366 |
5367 Memory copy of +__l__+ bytes from user space +__s__+ to tracepoint
5368 field +__d__+ (use with array fields).
5369
5370 +__d__+::
5371 Name of destination tracepoint field.
5372
5373 +__s__+::
5374 Source C expression (may refer to tracepoint arguments).
5375
5376 +__l__+::
5377 Number of bytes to copy.
5378
5379 |+tp_memcpy_dyn(__d__, __s__)+
5380 |
5381 Memory copy of dynamically-sized array from +__s__+ to tracepoint field
5382 +__d__+.
5383
5384 The number of bytes is known from the field's length expression
5385 (use with dynamically-sized array fields).
5386
5387 +__d__+::
5388 Name of destination tracepoint field.
5389
5390 +__s__+::
5391 Source C expression (may refer to tracepoint arguments).
5392
5396 |+tp_strcpy(__d__, __s__)+
5397 |
5398 String copy of +__s__+ to tracepoint field +__d__+ (use with string
5399 fields).
5400
5401 +__d__+::
5402 Name of destination tracepoint field.
5403
5404 +__s__+::
5405 Source C expression (may refer to tracepoint arguments).
5406 |====
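
To illustrate how `TP_STRUCT__entry()` and `TP_fast_assign()` fit
together, here is a rough sketch of a kernel tracepoint definition. The
tracepoint name, prototype and field names are hypothetical, and the
trailing `TP_printk()` placeholder is an assumption about the macro
layout expected by this LTTng-modules version:

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    /* Tracepoint name (hypothetical) */
    my_subsys_my_event,

    /* Tracepoint prototype */
    TP_PROTO(int my_int, const char *my_string_arg),

    /* Tracepoint arguments */
    TP_ARGS(my_int, my_string_arg),

    /* Fields recorded in the trace */
    TP_STRUCT__entry(
        __field(int, my_int_field)
        __field_hex(int, my_int_field_hex)
        __string(my_string_field, my_string_arg)
    ),

    /* How each field is assigned from the tracepoint arguments */
    TP_fast_assign(
        tp_assign(my_int_field, my_int)
        tp_assign(my_int_field_hex, my_int)
        tp_strcpy(my_string_field, my_string_arg)
    ),

    /* Unused by LTTng; assumed to be required by the macro layout */
    TP_printk("", 0)
)
----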