1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.6, May 26, 2016
5
6
7 include::../common/copyright.txt[]
8
9
10 include::../common/warning-not-maintained.txt[]
11
12
13 include::../common/welcome.txt[]
14
15
16 include::../common/audience.txt[]
17
18
19 [[chapters]]
20 === Chapter descriptions
21
22 What follows is a list of brief descriptions of this documentation's
chapters, which are ordered in such a way as to make the reading
24 as linear as possible.
25
26 . <<nuts-and-bolts,Nuts and bolts>> explains the
27 rudiments of software tracing and the rationale behind the
28 LTTng project.
29 . <<installing-lttng,Installing LTTng>> is divided into
30 sections describing the steps needed to get a working installation
31 of LTTng packages for common Linux distributions and from its
32 source.
33 . <<getting-started,Getting started>> is a very concise guide to
34 get started quickly with LTTng kernel and user space tracing. This
35 chapter is recommended if you're new to LTTng or software tracing
36 in general.
37 . <<understanding-lttng,Understanding LTTng>> deals with some
38 core concepts and components of the LTTng suite. Understanding
39 those is important since the next chapter assumes you're familiar
40 with them.
41 . <<using-lttng,Using LTTng>> is a complete user guide of the
LTTng project. It shows in great detail how to instrument user
applications and the Linux kernel, how to control tracing sessions
using the `lttng` command line tool, and covers miscellaneous
practical use cases.
. <<reference,Reference>> contains reference material for LTTng
components, such as links to online manpages and various APIs.
48
49 We recommend that you read the above chapters in this order, although
50 some of them may be skipped depending on your situation. You may skip
51 <<nuts-and-bolts,Nuts and bolts>> if you're familiar with tracing
52 and LTTng. Also, you may jump over <<installing-lttng,Installing LTTng>>
53 if LTTng is already properly installed on your target system.
54
55
56 include::../common/convention.txt[]
57
58
59 include::../common/acknowledgements.txt[]
60
61
62 [[whats-new]]
63 == What's new in LTTng {revision}?
64
65 Most of the changes of LTTng {revision} are bug fixes, making the toolchain
66 more stable than ever before. Still, LTTng {revision} adds some interesting
67 features to the project.
68
69 LTTng 2.5 already supported the instrumentation and tracing of
70 <<java-application,Java applications>> through `java.util.logging`
71 (JUL). LTTng {revision} goes one step further by supporting
72 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
73 The new log4j domain is selected using the `--log4j` option in various
74 commands of the `lttng` tool.
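
For example, here is a sketch of enabling a log4j event for a logger
named `org.example.MyLogger` (a placeholder name):

[role="term"]
----
lttng enable-event --log4j org.example.MyLogger
----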
75
76 LTTng-modules has supported system call tracing for a long time,
77 but until now, it was only possible to record either all of them,
78 or none of them. LTTng {revision} allows the user to record specific
79 system call events, for example:
80
81 [role="term"]
82 ----
83 lttng enable-event --kernel --syscall open,fork,chdir,pipe
84 ----
85
86 Finally, the `lttng` command line tool is not only able to communicate
87 with humans as it used to do, but also with machines thanks to its new
88 <<mi,machine interface>> feature.
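
For example, here is a sketch, assuming `xml` is accepted as the
machine interface output type (see the <<mi,machine interface>>
section for details), which makes the `list` command print its output
as an XML document that another program can parse:

[role="term"]
----
lttng --mi xml list
----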
89
90 To learn more about the new features of LTTng {revision}, see the
91 http://lttng.org/blog/2015/02/27/lttng-2.6-released/[release announcement].
92
93
94 [[nuts-and-bolts]]
95 == Nuts and bolts
96
97 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
98 generation_ is a modern toolkit for tracing Linux systems and
99 applications. So your first question might rather be: **what is
100 tracing?**
101
102
103 [[what-is-tracing]]
104 === What is tracing?
105
106 As the history of software engineering progressed and led to what
107 we now take for granted--complex, numerous and
108 interdependent software applications running in parallel on
109 sophisticated operating systems like Linux--the authors of such
components, or software developers, began to feel a natural
urge to have tools to ensure the robustness and good performance
112 of their masterpieces.
113
114 One major achievement in this field is, inarguably, the
115 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
116 which is an essential tool for developers to find and fix
117 bugs. But even the best debugger won't help make your software run
118 faster, and nowadays, faster software means either more work done by
119 the same hardware, or cheaper hardware for the same work.
120
121 A _profiler_ is often the tool of choice to identify performance
122 bottlenecks. Profiling is suitable to identify _where_ performance is
123 lost in a given software; the profiler outputs a profile, a
124 statistical summary of observed events, which you may use to discover
125 which functions took the most time to execute. However, a profiler
126 won't report _why_ some identified functions are the bottleneck.
Bottlenecks might only occur when specific conditions are met, conditions
sometimes almost impossible to capture with a statistical profiler, or
impossible to reproduce with an application altered by the overhead of an event-based
130 profiler. For a thorough investigation of software performance issues,
131 a history of execution, with the recorded values of chosen variables
132 and context, is essential. This is where tracing comes in handy.
133
134 _Tracing_ is a technique used to understand what goes on in a running
135 software system. The software used for tracing is called a _tracer_,
136 which is conceptually similar to a tape recorder. When recording,
137 specific probes placed in the software source code generate events
138 that are saved on a giant tape: a _trace_ file. Both user applications
139 and the operating system may be traced at the same time, opening the
140 possibility of resolving a wide range of problems that are otherwise
141 extremely challenging.
142
143 Tracing is often compared to _logging_. However, tracers and loggers
144 are two different tools, serving two different purposes. Tracers are
145 designed to record much lower-level events that occur much more
146 frequently than log messages, often in the thousands per second range,
147 with very little execution overhead. Logging is more appropriate for
148 very high-level analysis of less frequent events: user accesses,
149 exceptional conditions (errors and warnings, for example), database
150 transactions, instant messaging communications, and such. More formally,
151 logging is one of several use cases that can be accomplished with
152 tracing.
153
154 The list of recorded events inside a trace file may be read manually
155 like a log file for the maximum level of detail, but it is generally
156 much more interesting to perform application-specific analyses to
157 produce reduced statistics and graphs that are useful to resolve a
158 given problem. Trace viewers and analysers are specialized tools
159 designed to do this.
160
161 So, in the end, this is what LTTng is: a powerful, open source set of
162 tools to trace the Linux kernel and user applications at the same time.
163 LTTng is composed of several components actively maintained and
164 developed by its link:/community/#where[community].
165
166
167 [[lttng-alternatives]]
168 === Alternatives to LTTng
169
170 Excluding proprietary solutions, a few competing software tracers
171 exist for Linux:
172
173 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
174 is the de facto function tracer of the Linux kernel. Its user
interface is a set of special files in debugfs.
176 * https://perf.wiki.kernel.org/[perf] is
177 a performance analyzing tool for Linux which supports hardware
178 performance counters, tracepoints, as well as other counters and
179 types of probes. perf's controlling utility is the `perf` command
180 line/curses tool.
181 * http://linux.die.net/man/1/strace[strace]
182 is a command line utility which records system calls made by a
183 user process, as well as signal deliveries and changes of process
184 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
185 to fulfill its function.
186 * https://sourceware.org/systemtap/[SystemTap]
187 is a Linux kernel and user space tracer which uses custom user scripts
188 to produce plain text traces. Scripts are converted to the C language,
189 then compiled as Linux kernel modules which are loaded to produce
190 trace data. SystemTap's primary user interface is the `stap`
191 command line tool.
192 * http://www.sysdig.org/[sysdig], like
193 SystemTap, uses scripts to analyze Linux kernel events. Scripts,
194 or _chisels_ in sysdig's jargon, are written in Lua and executed
195 while the system is being traced, or afterwards. sysdig's interface
196 is the `sysdig` command line tool as well as the curses-based
197 `csysdig` tool.
198
The main distinctive feature of LTTng is that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead of the available solutions. It produces trace files in the
http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data. LTTng is the
204 result of close to 10 years of
205 active development by a community of passionate developers. LTTng {revision}
206 is currently available on some major desktop, server, and embedded Linux
207 distributions.
208
209 The main interface for tracing control is a single command line tool
210 named `lttng`. The latter can create several tracing sessions,
211 enable/disable events on the fly, filter them efficiently with custom
212 user expressions, start/stop tracing, and do much more. Traces can be
213 recorded on disk or sent over the network, kept totally or partially,
214 and viewed once tracing becomes inactive or in real-time.
215
216 <<installing-lttng,Install LTTng now>> and start tracing!
217
218
219 [[installing-lttng]]
220 == Installing LTTng
221
222 **LTTng** is a set of software components which interact to allow
223 instrumenting the Linux kernel and user applications as well as
224 controlling tracing sessions (starting/stopping tracing,
225 enabling/disabling events, and more). Those components are bundled into
226 the following packages:
227
228 LTTng-tools::
229 Libraries and command line interface to control tracing sessions.
230
231 LTTng-modules::
232 Linux kernel modules for tracing the kernel.
233
234 LTTng-UST::
235 User space tracing library.
236
237 Most distributions mark the LTTng-modules and LTTng-UST packages as
238 optional. In the following sections, the steps to install all three are
239 always provided, but note that LTTng-modules is only required if
240 you intend to trace the Linux kernel and LTTng-UST is only required if
241 you intend to trace user space applications.
242
243 This chapter shows how to install the above packages on a Linux system.
244 The easiest way is to use the package manager of the system's
245 distribution (<<desktop-distributions,desktop>> or
246 <<embedded-distributions,embedded>>). Support is also available for
247 <<enterprise-distributions,enterprise distributions>>, such as Red Hat
248 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
249 Otherwise, you can
250 <<building-from-source,build the LTTng packages from source>>.
251
252
253 [[desktop-distributions]]
254 === Desktop distributions
255
256 Official LTTng {revision} packages are available for
257 <<ubuntu,Ubuntu>>, <<fedora,Fedora>>, and
258 <<opensuse,openSUSE>> (and other RPM-based distributions).
259
260 More recent versions of LTTng are available for Debian and Arch Linux.
261
262 Should any issue arise when
263 following the procedures below, please inform the
264 link:/community[community] about it.
265
266
267 [[ubuntu]]
268 ==== Ubuntu
269
270 LTTng {revision} is packaged in Ubuntu 15.10 _Wily Werewolf_. For other
271 releases of Ubuntu, you need to build and install LTTng {revision}
272 <<building-from-source,from source>>. Ubuntu 15.04 _Vivid Vervet_
273 ships with link:/docs/v2.5/[LTTng 2.5], whilst
274 Ubuntu 16.04 _Xenial Xerus_ ships with
275 link:/docs/v2.7/[LTTng 2.7].
276
277 To install LTTng {revision} from the official Ubuntu repositories,
278 simply use `apt-get`:
279
280 [role="term"]
281 ----
282 sudo apt-get install lttng-tools
283 sudo apt-get install lttng-modules-dkms
284 sudo apt-get install liblttng-ust-dev
285 ----
286
287 If you need to trace
288 <<java-application,Java applications>>,
you also need to install the LTTng-UST Java agent:
290
291 [role="term"]
292 ----
293 sudo apt-get install liblttng-ust-agent-java
294 ----
295
296
297 [[fedora]]
298 ==== Fedora
299
300 Fedora 22 and Fedora 23 ship with official LTTng-tools {revision} and
301 LTTng-UST {revision} packages. Simply use `yum`:
302
303 [role="term"]
304 ----
305 sudo yum install lttng-tools
306 sudo yum install lttng-ust
307 sudo yum install lttng-ust-devel
308 ----
309
310 LTTng-modules {revision} still needs to be built and installed from
source. For that, make sure that the `kernel-devel` package is
installed beforehand:
313
314 [role="term"]
315 ----
316 sudo yum install kernel-devel
317 ----
318
319 Proceed on to fetch
320 <<building-from-source,LTTng-modules {revision}'s source>>. Build and
321 install it as follows:
322
323 [role="term"]
324 ----
325 KERNELDIR=/usr/src/kernels/$(uname -r) make
326 sudo make modules_install
327 ----
328
329 NOTE: If you need to trace <<java-application,Java applications>> on
330 Fedora, you need to build and install LTTng-UST {revision}
331 <<building-from-source,from source>> and use the
332 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
333 `--enable-java-agent-all` options.
334
335
336 [[opensuse]]
337 ==== openSUSE/RPM
338
339 openSUSE 13.1 and openSUSE 13.2 have LTTng {revision} packages. To install
340 LTTng {revision}, you first need to add an entry to your repository
341 configuration. All LTTng repositories are available
342 http://download.opensuse.org/repositories/devel:/tools:/lttng/[here].
For example, the following command adds the LTTng repository for
344 openSUSE{nbsp}13.1:
345
346 [role="term"]
347 ----
348 sudo zypper addrepo http://download.opensuse.org/repositories/devel:/tools:/lttng/openSUSE_13.1/devel:tools:lttng.repo
349 ----
350
351 Then, refresh the package database:
352
353 [role="term"]
354 ----
355 sudo zypper refresh
356 ----
357
358 and install `lttng-tools`, `lttng-modules` and `lttng-ust-devel`:
359
360 [role="term"]
361 ----
362 sudo zypper install lttng-tools
363 sudo zypper install lttng-modules
364 sudo zypper install lttng-ust-devel
365 ----
366
367 NOTE: If you need to trace <<java-application,Java applications>> on
368 openSUSE, you need to build and install LTTng-UST {revision}
369 <<building-from-source,from source>> and use the
370 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
371 `--enable-java-agent-all` options.
372
373
374 [[embedded-distributions]]
375 === Embedded distributions
376
377 LTTng is packaged by two popular
378 embedded Linux distributions: <<buildroot,Buildroot>> and
379 <<oe-yocto,OpenEmbedded/Yocto>>.
380
381
382 [[buildroot]]
383 ==== Buildroot
384
385 LTTng {revision} is available in Buildroot since Buildroot 2015.05. The
386 LTTng packages are named `lttng-tools`, `lttng-modules`, and `lttng-libust`.
387
388 To enable them, start the Buildroot configuration menu as usual:
389
390 [role="term"]
391 ----
392 make menuconfig
393 ----
394
395 In:
396
397 * _Kernel_: make sure _Linux kernel_ is enabled
398 * _Toolchain_: make sure the following options are enabled:
399 ** _Enable large file (files > 2GB) support_
400 ** _Enable WCHAR support_
401
402 In _Target packages_/_Debugging, profiling and benchmark_, enable
403 _lttng-modules_ and _lttng-tools_. In
404 _Target packages_/_Libraries_/_Other_, enable _lttng-libust_.
405
406 NOTE: If you need to trace <<java-application,Java applications>> on
407 Buildroot, you need to build and install LTTng-UST {revision}
408 <<building-from-source,from source>> and use the
409 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
410 `--enable-java-agent-all` options.
411
412
413 [[oe-yocto]]
414 ==== OpenEmbedded/Yocto
415
416 LTTng {revision} recipes are available in the
417 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
418 layer of OpenEmbedded since February 8th, 2015 under the following names:
419
420 * `lttng-tools`
421 * `lttng-modules`
422 * `lttng-ust`
423
424 Using BitBake, the simplest way to include LTTng recipes in your
425 target image is to add them to `IMAGE_INSTALL_append` in
426 path:{conf/local.conf}:
427
428 ----
429 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
430 ----
431
432 If you're using Hob, click _Edit image recipe_ once you have selected
433 a machine and an image recipe. Then, under the _All recipes_ tab, search
434 for `lttng` and include the three LTTng recipes.
435
436 NOTE: If you need to trace <<java-application,Java applications>> on
437 OpenEmbedded/Yocto, you need to build and install LTTng-UST {revision}
438 <<building-from-source,from source>> and use the
439 `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
440 `--enable-java-agent-all` options.
441
442
443 [[enterprise-distributions]]
444 === Enterprise distributions (RHEL, SLES)
445
446 To install LTTng on enterprise Linux distributions
447 (such as RHEL and SLES), please see
448 http://packages.efficios.com/[EfficiOS Enterprise Packages].
449
450
451 [[building-from-source]]
452 === Building from source
453
454 As <<installing-lttng,previously stated>>, LTTng is shipped as
455 three packages: LTTng-tools, LTTng-modules, and LTTng-UST. LTTng-tools
456 contains everything needed to control tracing sessions, while
457 LTTng-modules is only needed for Linux kernel tracing and LTTng-UST is
458 only needed for user space tracing.
459
460 The tarballs are available in the
461 http://lttng.org/download#build-from-source[Download section]
462 of the LTTng website.
463
464 Please refer to the path:{README.md} files provided by each package to
465 properly build and install them.
466
467 TIP: The aforementioned path:{README.md} files
468 are rendered as rich text when https://github.com/lttng[viewed on GitHub].
469
470
471 [[getting-started]]
472 == Getting started with LTTng
473
474 This is a small guide to get started quickly with LTTng kernel and user
space tracing. For a more thorough understanding of LTTng and for
intermediate to advanced use cases, see <<understanding-lttng,Understanding LTTng>>
477 and <<using-lttng,Using LTTng>>.
478
479 Before reading this guide, make sure LTTng
480 <<installing-lttng,is installed>>. LTTng-tools is required. Also install
481 LTTng-modules for
482 <<tracing-the-linux-kernel,tracing the Linux kernel>> and LTTng-UST
483 for
484 <<tracing-your-own-user-application,tracing your own user space applications>>.
Once the traces are written and complete, the
<<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
section of this chapter helps you view and analyze the recorded
events.
489
490
491 [[tracing-the-linux-kernel]]
492 === Tracing the Linux kernel
493
494 Make sure LTTng-tools and LTTng-modules packages
495 <<installing-lttng,are installed>>.
496
497 Since you're about to trace the Linux kernel itself, let's look at the
498 available kernel events using the `lttng` tool, which has a
499 Git-like command line structure:
500
501 [role="term"]
502 ----
503 lttng list --kernel
504 ----
505
506 Before tracing, you need to create a session:
507
508 [role="term"]
509 ----
510 sudo lttng create
511 ----
512
513 TIP: You can avoid using `sudo` in the previous and following commands
514 if your user is a member of the <<lttng-sessiond,tracing group>>.
515
516 Let's now enable some events for this session:
517
518 [role="term"]
519 ----
520 sudo lttng enable-event --kernel sched_switch,sched_process_fork
521 ----
522
523 Or you might want to simply enable all available kernel events (beware
524 that trace files grow rapidly when doing this):
525
526 [role="term"]
527 ----
528 sudo lttng enable-event --kernel --all
529 ----
530
531 Start tracing:
532
533 [role="term"]
534 ----
535 sudo lttng start
536 ----
537
538 By default, traces are saved in
539 +\~/lttng-traces/__name__-__date__-__time__+,
540 where +__name__+ is the session name.
541
542 When you're done tracing:
543
544 [role="term"]
545 ----
546 sudo lttng stop
547 sudo lttng destroy
548 ----
549
550 Although `destroy` looks scary here, it doesn't actually destroy the
551 written trace files: it only destroys the tracing session.
552
553 What's next? Have a look at
554 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
555 to view and analyze the trace you just recorded.
556
557
558 [[tracing-your-own-user-application]]
559 === Tracing your own user application
560
561 The previous section helped you create a trace out of Linux kernel
562 events. This section steps you through a simple example showing you how
563 to trace a _Hello world_ program written in C.
564
565 Make sure the LTTng-tools and LTTng-UST packages
566 <<installing-lttng,are installed>>.
567
568 Tracing is just like having `printf()` calls at specific locations of
your source code, although LTTng is much faster and more flexible than
570 `printf()`. In the LTTng realm, **`tracepoint()`** is analogous to
571 `printf()`.
572
573 Unlike `printf()`, though, `tracepoint()` does not use a format string to
574 know the types of its arguments: the formats of all tracepoints must be
575 defined before using them. So before even writing our _Hello world_ program,
576 we need to define the format of our tracepoint. This is done by creating a
577 **tracepoint provider**, which consists of a tracepoint provider header
578 (`.h` file) and a tracepoint provider definition (`.c` file).
579
580 The tracepoint provider header contains some boilerplate as well as a
581 list of tracepoint definitions and other optional definition entries
582 which we skip for this quickstart. Each tracepoint is defined using the
583 `TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:
584
585 * a **provider name**, which is the "scope" or namespace of this
586 tracepoint (this usually includes the company and project names)
587 * a **tracepoint name**
588 * a **list of arguments** for the eventual `tracepoint()` call, each
589 item being:
590 ** the argument C type
591 ** the argument name
592 * a **list of fields**, which correspond to the actual fields of the
593 recorded events for this tracepoint
594
595 Here's an example of a simple tracepoint provider header with two
596 arguments: an integer and a string:
597
[source,c]
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
628
629 The exact syntax is well explained in the
630 <<c-application,C application>> instrumentation guide of the
631 <<using-lttng,Using LTTng>> chapter, as well as in
632 man:lttng-ust(3).
633
634 Save the above snippet as path:{hello-tp.h}.
635
636 Write the tracepoint provider definition as path:{hello-tp.c}:
637
638 [source,c]
639 ----
640 #define TRACEPOINT_CREATE_PROBES
641 #define TRACEPOINT_DEFINE
642
643 #include "hello-tp.h"
644 ----
645
646 Create the tracepoint provider:
647
648 [role="term"]
649 ----
650 gcc -c -I. hello-tp.c
651 ----
652
653 Now, by including path:{hello-tp.h} in your own application, you may use the
tracepoint defined above by properly referring to it when calling
655 `tracepoint()`:
656
[source,c]
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    int x;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call is only placed here for the purpose
     * of this demonstration, for pausing the application in order for
     * you to have time to list its events. It's not needed otherwise.
     */
    getchar();

    /*
     * A tracepoint() call. Arguments, as defined in hello-tp.h:
     *
     * 1st: provider name (always)
     * 2nd: tracepoint name (always)
     * 3rd: my_integer_arg (first user-defined argument)
     * 4th: my_string_arg (second user-defined argument)
     *
     * Notice the provider and tracepoint names are NOT strings;
     * they are in fact parts of variables created by macros in
     * hello-tp.h.
     */
    tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (x = 0; x < argc; ++x) {
        tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
    }

    puts("Quitting now!");

    tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

    return 0;
}
----
700
701 Save this as path:{hello.c}, next to path:{hello-tp.c}.
702
703 Notice path:{hello-tp.h}, the tracepoint provider header, is included
704 by path:{hello.c}.
705
706 You are now ready to compile the application with LTTng-UST support:
707
708 [role="term"]
709 ----
710 gcc -c hello.c
711 gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
712 ----
713
714 Here's the whole build process:
715
716 [role="img-100"]
717 .User space tracing's build process.
718 image::ust-flow.png[]
719
720 If you followed the
721 <<tracing-the-linux-kernel,Tracing the Linux kernel>> tutorial, the
722 following steps should look familiar.
723
724 First, run the application with a few arguments:
725
726 [role="term"]
727 ----
728 ./hello world and beyond
729 ----
730
731 You should see
732
733 ----
734 Hello, World!
735 Press Enter to continue...
736 ----
737
738 Use the `lttng` tool to list all available user space events:
739
740 [role="term"]
741 ----
742 lttng list --userspace
743 ----
744
745 You should see the `hello_world:my_first_tracepoint` tracepoint listed
746 under the `./hello` process.
747
748 Create a tracing session:
749
750 [role="term"]
751 ----
752 lttng create
753 ----
754
755 Enable the `hello_world:my_first_tracepoint` tracepoint:
756
757 [role="term"]
758 ----
759 lttng enable-event --userspace hello_world:my_first_tracepoint
760 ----
761
762 Start tracing:
763
764 [role="term"]
765 ----
766 lttng start
767 ----
768
769 Go back to the running `hello` application and press Enter. All `tracepoint()`
770 calls are executed and the program finally exits.
771
772 Stop tracing:
773
774 [role="term"]
775 ----
776 lttng stop
777 ----
778
779 Done! You may use `lttng view` to list the recorded events. This command
780 starts http://diamon.org/babeltrace[`babeltrace`]
781 in the background, if it's installed:
782
783 [role="term"]
784 ----
785 lttng view
786 ----
787
788 should output something like:
789
790 ----
791 [18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
792 [18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
793 [18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
794 [18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
795 [18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
796 [18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }
797 ----
798
799 When you're done, you may destroy the tracing session, which does _not_
800 destroy the generated trace files, leaving them available for further
801 analysis:
802
803 [role="term"]
804 ----
805 lttng destroy
806 ----
807
808 The next section presents other alternatives to view and analyze your
809 LTTng traces.
810
811
812 [[viewing-and-analyzing-your-traces]]
813 === Viewing and analyzing your traces
814
815 This section describes how to visualize the data gathered after tracing
816 the Linux kernel or a user space application.
817
818 Many ways exist to read LTTng traces:
819
820 * **`babeltrace`** is a command line utility which converts trace formats;
821 it supports the format used by LTTng,
822 CTF, as well as a basic
823 text output which may be ++grep++ed. The `babeltrace` command is
824 part of the
825 http://diamon.org/babeltrace[Babeltrace] project.
826 * Babeltrace also includes **Python bindings** so that you may
827 easily open and read an LTTng trace with your own script, benefiting
828 from the power of Python.
829 * **http://tracecompass.org/[Trace Compass]**
830 is an Eclipse plugin used to visualize and analyze various types of
831 traces, including LTTng's. It also comes as a standalone application.
832
833 LTTng trace files are usually recorded in the dir:{~/lttng-traces} directory.
834 Let's now view the trace and perform a basic analysis using
835 `babeltrace`.
836
837 The simplest way to list all the recorded events of a trace is to pass its
838 path to `babeltrace` with no options:
839
840 [role="term"]
841 ----
842 babeltrace ~/lttng-traces/my-session
843 ----
844
845 `babeltrace` finds all traces recursively within the given path and
846 prints all their events, merging them in order of time.
847
848 Listing all the system calls of a Linux kernel trace with their arguments is
849 easy with `babeltrace` and `grep`:
850
851 [role="term"]
852 ----
853 babeltrace ~/lttng-traces/my-kernel-session | grep sys_
854 ----
855
856 Counting events is also straightforward:
857
858 [role="term"]
859 ----
860 babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines
861 ----
862
863 The text output of `babeltrace` is useful for isolating events by simple
864 matching using `grep` and similar utilities. However, more elaborate filters
865 such as keeping only events with a field value falling within a specific range
866 are not trivial to write using a shell. Moreover, reductions and even the
867 most basic computations involving multiple events are virtually impossible
868 to implement.
869
Fortunately, Babeltrace ships with Python 3 bindings which make it
easy to read the events of an LTTng trace sequentially and compute
872 the desired information.
873
874 Here's a simple example using the Babeltrace Python bindings. The following
875 script accepts an LTTng Linux kernel trace path as its first argument and
876 prints the short names of the top 5 running processes on CPU 0 during the
877 whole trace:
878
[source,python]
----
import sys
from collections import Counter
import babeltrace


def top5proc():
    if len(sys.argv) != 2:
        msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
        raise ValueError(msg)

    # a trace collection holds one to many traces
    col = babeltrace.TraceCollection()

    # add the trace provided by the user
    # (LTTng traces always have the 'ctf' format)
    if col.add_trace(sys.argv[1], 'ctf') is None:
        raise RuntimeError('Cannot add trace')

    # this counter dict will hold execution times:
    #
    # task command name -> total execution time (ns)
    exec_times = Counter()

    # this holds the last `sched_switch` timestamp
    last_ts = None

    # iterate events
    for event in col.events:
        # keep only `sched_switch` events
        if event.name != 'sched_switch':
            continue

        # keep only events which happened on CPU 0
        if event['cpu_id'] != 0:
            continue

        # event timestamp
        cur_ts = event.timestamp

        if last_ts is None:
            # we start here
            last_ts = cur_ts

        # previous task command (short) name
        prev_comm = event['prev_comm']

        # initialize entry in our dict if not yet done
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # compute previous command execution time
        diff = cur_ts - last_ts

        # update execution time of this command
        exec_times[prev_comm] += diff

        # update last timestamp
        last_ts = cur_ts

    # display top 5
    for name, ns in exec_times.most_common(5):
        s = ns / 1000000000
        print('{:20}{} s'.format(name, s))


if __name__ == '__main__':
    top5proc()
----
949
950 Save this script as path:{top5proc.py} and run it with Python 3, providing the
951 path to an LTTng Linux kernel trace as the first argument:
952
953 [role="term"]
954 ----
python3 top5proc.py ~/lttng-traces/my-session-.../kernel
956 ----
957
958 Make sure the path you provide is the directory containing actual trace
959 files (`channel0_0`, `metadata`, and the rest): the `babeltrace` utility
960 recurses directories, but the Python bindings do not.
961
962 Here's an example of output:
963
964 ----
965 swapper/0 48.607245889 s
966 chromium 7.192738188 s
967 pavucontrol 0.709894415 s
968 Compositor 0.660867933 s
969 Xorg.bin 0.616753786 s
970 ----
971
972 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
973 weren't using the CPU that much when tracing, its first position in the list
974 makes sense.
975
976
977 [[understanding-lttng]]
978 == Understanding LTTng
979
980 If you're going to use LTTng in any serious way, it is fundamental that
981 you become familiar with its core concepts. Technical terms like
982 _tracing sessions_, _domains_, _channels_ and _events_ are used over
983 and over in the <<using-lttng,Using LTTng>> chapter,
984 and it is assumed that you understand what they mean when reading it.
985
986 LTTng, as you already know, is a _toolkit_. It would be wrong
987 to call it a simple _tool_ since it is composed of multiple interacting
988 components. This chapter also describes the latter, providing details
989 about their respective roles and how they connect together to form
990 the current LTTng ecosystem.
991
992
993 [[core-concepts]]
994 === Core concepts
995
996 This section explains the various elementary concepts a user has to deal
997 with when using LTTng. They are:
998
999 * <<tracing-session,tracing session>>
1000 * <<domain,domain>>
1001 * <<channel,channel>>
1002 * <<event,event>>
1003
1004
1005 [[tracing-session]]
1006 ==== Tracing session
1007
1008 A _tracing session_ is--like any session--a container of
1009 state. Anything that is done when tracing using LTTng happens in the
1010 scope of a tracing session. In this regard, it is analogous to a bank
1011 website's session: you can't interact online with your bank account
unless you are logged into a session, except for reading a few static
1013 webpages (LTTng, too, can report some static information that does not
1014 need a created tracing session).
1015
1016 A tracing session holds the following attributes and objects (some of
1017 which are described in the following sections):
1018
1019 * a name
1020 * the tracing state (tracing started or stopped)
1021 * the trace data output path/URL (local path or sent over the network)
1022 * a mode (normal, snapshot or live)
1023 * the snapshot output paths/URLs (if applicable)
1024 * for each <<domain,domain>>, a list of <<channel,channels>>
1025 * for each channel:
1026 ** a name
1027 ** the channel state (enabled or disabled)
1028 ** its parameters (event loss mode, sub-buffers size and count,
1029 timer periods, output type, trace files size and count, and the rest)
1030 ** a list of added context information
1031 ** a list of <<event,events>>
1032 * for each event:
1033 ** its state (enabled or disabled)
1034 ** a list of instrumentation points (tracepoints, system calls,
1035 dynamic probes, other types of probes)
1036 ** associated log levels
1037 ** a filter expression
1038
1039 All this information is completely isolated between tracing sessions.
1040 As you can see in the list above, even the tracing state
1041 is a per-tracing session attribute, so that you may trace your target
1042 system/application in a given tracing session with a specific
1043 configuration while another one stays inactive.
1044
1045 [role="img-100"]
1046 .A _tracing session_ is a container of domains, channels, and events.
1047 image::concepts.png[]
1048
1049 Conceptually, a tracing session is a per-user object; the
1050 <<plumbing,Plumbing>> section shows how this is actually
1051 implemented. Any user may create as many concurrent tracing sessions
1052 as desired.
1053
1054 [role="img-100"]
1055 .Each user may create as many tracing sessions as desired.
1056 image::many-sessions.png[]
1057
1058 The trace data generated in a tracing session may be either saved
1059 to disk, sent over the network or not saved at all (in which case
1060 snapshots may still be saved to disk or sent to a remote machine).
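
As a rough sketch of how those choices map to the command line (see
man:lttng(1) for the authoritative option names; the session names and
paths below are placeholders):

[role="term"]
----
lttng create my-session --output=/tmp/my-traces
lttng create my-session --set-url=net://remote-host
lttng create my-session --snapshot
lttng create my-session --live
----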
1061
1062
1063 [[domain]]
1064 ==== Domain
1065
1066 A tracing _domain_ is the official term the LTTng project uses to
1067 designate a tracer category.
1068
1069 There are currently four known domains:
1070
1071 * Linux kernel
1072 * user space
1073 * `java.util.logging` (JUL)
1074 * log4j
1075
1076 Different tracers expose common features in their own interfaces, but,
1077 from a user's perspective, you still need to target a specific type of
1078 tracer to perform some actions. For example, since both kernel and user
1079 space tracers support named tracepoints (probes manually inserted in
1080 source code), you need to specify which one is concerned when enabling
1081 an event because both domains could have existing events with the same
1082 name.
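
For example, here is a sketch of how the domain is selected with a
dedicated option of `lttng enable-event` (the event and logger names
below are placeholders):

[role="term"]
----
lttng enable-event --kernel sched_switch
lttng enable-event --userspace hello_world:my_first_tracepoint
lttng enable-event --jul my_logger
lttng enable-event --log4j my_logger
----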
1083
1084 Some features are not available in all domains. Filtering enabled
1085 events using custom expressions, for example, is currently not
1086 supported in the kernel domain, but support could be added in the
1087 future.
1088
1089
1090 [[channel]]
1091 ==== Channel
1092
1093 A _channel_ is a set of events with specific parameters and potential
1094 added context information. Channels have unique names per domain within
1095 a tracing session. A given event is always registered to at least one
channel; having the same event enabled in two channels causes it to be
recorded twice every time it occurs.
1098
Channels may be individually enabled or disabled. Events occurring in
a disabled channel are never recorded.
1101
1102 The fundamental role of a channel is to keep a shared ring buffer, where
1103 events are eventually recorded by the tracer and consumed by a consumer
1104 daemon. This internal ring buffer is divided into many sub-buffers of
1105 equal size.
1106
1107 Channels, when created, may be fine-tuned thanks to a few parameters,
1108 many of them related to sub-buffers. The following subsections explain
1109 what those parameters are and in which situations you should manually
1110 adjust them.
1111
1112
1113 [[channel-overwrite-mode-vs-discard-mode]]
1114 ===== Overwrite and discard event loss modes
1115
1116 As previously mentioned, a channel's ring buffer is divided into many
1117 equally sized sub-buffers.
1118
1119 As events occur, they are serialized as trace data into a specific
1120 sub-buffer (yellow arc in the following animation) until it is full:
1121 when this happens, the sub-buffer is marked as consumable (red) and
1122 another, _empty_ (white) sub-buffer starts receiving the following
1123 events. The marked sub-buffer is eventually consumed by a consumer
1124 daemon (returns to white).
1125
1126 [NOTE]
1127 [role="docsvg-channel-subbuf-anim"]
1128 ====
1129 {note-no-anim}
1130 ====
1131
1132 In an ideal world, sub-buffers are consumed faster than filled, like it
1133 is the case above. In the real world, however, all sub-buffers could be
1134 full at some point, leaving no space to record the following events. By
1135 design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer
1136 exists, losing events is acceptable when the alternative would be to
1137 cause substantial delays in the instrumented application's execution.
1138 LTTng privileges performance over integrity, aiming at perturbing the
1139 traced system as little as possible in order to make tracing of subtle
1140 race conditions and rare interrupt cascades possible.
1141
1142 When it comes to losing events because no empty sub-buffer is available,
1143 the channel's _event loss mode_ determines what to do amongst:
1144
1145 Discard::
1146 Drop the newest events until a sub-buffer is released.
1147
1148 Overwrite::
1149 Clear the sub-buffer containing the oldest recorded
1150 events and start recording the newest events there. This mode is
1151 sometimes called _flight recorder mode_ because it behaves like a
1152 flight recorder: always keep a fixed amount of the latest data.
1153
1154 Which mechanism you should choose depends on your context: prioritize
1155 the newest or the oldest events in the ring buffer?
1156
1157 Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon
1158 as a new event doesn't find an empty sub-buffer, whereas in discard
1159 mode, only the event that doesn't fit is discarded.
1160
1161 Also note that a count of lost events is incremented and saved in
1162 the trace itself when an event is lost in discard mode, whereas no
1163 information is kept when a sub-buffer gets overwritten before being
1164 committed.
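
The event loss mode is chosen when creating a channel. As a minimal
sketch (the channel name is a placeholder; discard is the default mode
and the `--overwrite` option selects the overwrite mode):

[role="term"]
----
lttng enable-channel --kernel --overwrite my-channel
----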
1165
1166 There are known ways to decrease your probability of losing events. The
1167 next section shows how tuning the sub-buffers count and size can be
1168 used to virtually stop losing events.
1169
1170
1171 [[channel-subbuf-size-vs-subbuf-count]]
1172 ===== Sub-buffers count and size
1173
1174 For each channel, an LTTng user may set its number of sub-buffers and
1175 their size.
1176
Note that the tracer introduces noticeable CPU overhead when
switching sub-buffers (marking a full one as consumable and switching
to an empty one for the following events to be recorded). Knowing this,
1180 the following list presents a few practical situations along with how
1181 to configure sub-buffers for them:
1182
1183 High event throughput::
1184 In general, prefer bigger sub-buffers to
1185 lower the risk of losing events. Having bigger sub-buffers
1186 also ensures a lower sub-buffer switching frequency. The number of
1187 sub-buffers is only meaningful if the channel is enabled in
1188 overwrite mode: in this case, if a sub-buffer overwrite happens, the
1189 other sub-buffers are left unaltered.
1190
1191 Low event throughput::
1192 In general, prefer smaller sub-buffers
1193 since the risk of losing events is already low. Since events
1194 happen less frequently, the sub-buffer switching frequency should
1195 remain low and thus the tracer's overhead should not be a problem.
1196
1197 Low memory system::
1198 If your target system has a low memory
1199 limit, prefer fewer first, then smaller sub-buffers. Even if the
1200 system is limited in memory, you want to keep the sub-buffers as
1201 big as possible to avoid a high sub-buffer switching frequency.
1202
1203 You should know that LTTng uses CTF as its trace format, which means
1204 event data is very compact. For example, the average LTTng Linux kernel
event weighs about 32{nbsp}bytes. A sub-buffer size of 1{nbsp}MiB is
1206 thus considered big.
1207
1208 The previous situations highlight the major trade-off between a few big
1209 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1210 frequency vs. how much data is lost in overwrite mode. Assuming a
1211 constant event throughput and using the overwrite mode, the two
1212 following configurations have the same ring buffer total size:
1213
1214 [NOTE]
1215 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1216 ====
1217 {note-no-anim}
1218 ====
1219
1220 * **2 sub-buffers of 4 MiB each** lead to a very low sub-buffer
1221 switching frequency, but if a sub-buffer overwrite happens, half of
1222 the recorded events so far (4{nbsp}MiB) are definitely lost.
1223 * **8 sub-buffers of 1 MiB each** lead to 4{nbsp}times the tracer's
1224 overhead as the previous configuration, but if a sub-buffer
overwrite happens, only one eighth of the events recorded so far are
1226 definitely lost.
1227
1228 In discard mode, the sub-buffers count parameter is pointless: use two
1229 sub-buffers and set their size according to the requirements of your
1230 situation.
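
As a sketch of how such a configuration is applied when creating a
channel (option names follow man:lttng(1); the channel name is a
placeholder), the _8 sub-buffers of 1{nbsp}MiB each_ case above could
be set up with:

[role="term"]
----
lttng enable-channel --kernel --overwrite --num-subbuf=8 --subbuf-size=1M my-channel
----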
1231
1232
1233 [[channel-switch-timer]]
1234 ===== Switch timer
1235
1236 The _switch timer_ period is another important configurable feature of
1237 channels to ensure periodic sub-buffer flushing.
1238
1239 When the _switch timer_ fires, a sub-buffer switch happens. This timer
1240 may be used to ensure that event data is consumed and committed to
1241 trace files periodically in case of a low event throughput:
1242
1243 [NOTE]
1244 [role="docsvg-channel-switch-timer"]
1245 ====
1246 {note-no-anim}
1247 ====
1248
1249 It's also convenient when big sub-buffers are used to cope with
1250 sporadic high event throughput, even if the throughput is normally
1251 lower.
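
The switch timer period is also set when creating the channel. A
minimal sketch, assuming the period is expressed in microseconds
(check man:lttng(1) to confirm the unit), which requests a sub-buffer
switch every second:

[role="term"]
----
lttng enable-channel --kernel --switch-timer=1000000 my-channel
----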
1252
1253
1254 [[channel-buffering-schemes]]
1255 ===== Buffering schemes
1256
1257 In the user space tracing domain, two **buffering schemes** are
1258 available when creating a channel:
1259
1260 Per-PID buffering::
1261 Keep one ring buffer per process.
1262
1263 Per-UID buffering::
1264 Keep one ring buffer for all processes of a single user.
1265
1266 The per-PID buffering scheme consumes more memory than the per-UID
1267 option if more than one process is instrumented for LTTng-UST. However,
1268 per-PID buffering ensures that one process having a high event
1269 throughput won't fill all the shared sub-buffers, only its own.
1270
1271 The Linux kernel tracing domain only has one available buffering scheme
1272 which is to use a single ring buffer for the whole system.
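
The buffering scheme is selected when creating a user space channel. A
minimal sketch, assuming the `--buffers-pid` and `--buffers-uid`
options of `lttng enable-channel` (the channel names are placeholders):

[role="term"]
----
lttng enable-channel --userspace --buffers-pid my-pid-channel
lttng enable-channel --userspace --buffers-uid my-uid-channel
----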
1273
1274
1275 [[event]]
1276 ==== Event
1277
1278 An _event_, in LTTng's realm, is a term often used metonymically,
1279 having multiple definitions depending on the context:
1280
1281 . When tracing, an event is a _point in space-time_. Space, in a
1282 tracing context, is the set of all executable positions of a
1283 compiled application by a logical processor. When a program is
1284 executed by a processor and some instrumentation point, or
1285 _probe_, is encountered, an event occurs. This event is accompanied
1286 by some contextual payload (values of specific variables at this
1287 point of execution) which may or may not be recorded.
1288 . In the context of a recorded trace file, the term _event_ implies
1289 a _recorded event_.
1290 . When configuring a tracing session, _enabled events_ refer to
1291 specific rules which could lead to the transfer of actual
1292 occurring events (1) to recorded events (2).
1293
1294 The whole <<core-concepts,Core concepts>> section focuses on the
1295 third definition. An event is always registered to _one or more_
1296 channels and may be enabled or disabled at will per channel. A disabled
1297 event never leads to a recorded event, even if its channel is enabled.
1298
1299 An event (3) is enabled with a few conditions that must _all_ be met
1300 when an event (1) happens in order to generate a recorded event (2):
1301
1302 . A _probe_ or group of probes in the traced application must be
1303 executed.
1304 . **Optionally**, the probe must have a log level matching a
1305 log level range specified when enabling the event.
1306 . **Optionally**, the occurring event must satisfy a custom
1307 expression, or _filter_, specified when enabling the event.
1308
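
As an illustrative sketch combining those conditions (the provider,
log level, and field names below are placeholders):

[role="term"]
----
lttng enable-event --userspace 'my_provider:*' --loglevel=TRACE_INFO --filter='my_field > 42'
----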
1309
1310 [[plumbing]]
1311 === Plumbing
1312
1313 The previous section described the concepts at the heart of LTTng.
1314 This section summarizes LTTng's implementation: how those objects are
1315 managed by different applications and libraries working together to
1316 form the toolkit.
1317
1318
1319 [[plumbing-overview]]
1320 ==== Overview
1321
1322 As <<installing-lttng,mentioned previously>>, the whole LTTng suite
1323 is made of the LTTng-tools, LTTng-UST, and
1324 LTTng-modules packages. Together, they provide different daemons, libraries,
1325 kernel modules and command line interfaces. The following tree shows
1326 which usable component belongs to which package:
1327
1328 * **LTTng-tools**:
1329 ** session daemon (`lttng-sessiond`)
1330 ** consumer daemon (`lttng-consumerd`)
1331 ** relay daemon (`lttng-relayd`)
1332 ** tracing control library (`liblttng-ctl`)
1333 ** tracing control command line tool (`lttng`)
1334 * **LTTng-UST**:
1335 ** user space tracing library (`liblttng-ust`) and its headers
1336 ** preloadable user space tracing helpers
1337 (`liblttng-ust-libc-wrapper`, `liblttng-ust-pthread-wrapper`,
1338 `liblttng-ust-cyg-profile`, `liblttng-ust-cyg-profile-fast`
1339 and `liblttng-ust-dl`)
1340 ** user space tracepoint code generator command line tool
1341 (`lttng-gen-tp`)
1342 ** `java.util.logging`/log4j tracepoint providers
1343 (`liblttng-ust-jul-jni` and `liblttng-ust-log4j-jni`) and JAR
1344 file (path:{liblttng-ust-agent.jar})
1345 * **LTTng-modules**:
1346 ** LTTng Linux kernel tracer module
1347 ** tracing ring buffer kernel modules
1348 ** many LTTng probe kernel modules
1349
1350 The following diagram shows how the most important LTTng components
1351 interact. Plain purple arrows represent trace data paths while dashed
1352 red arrows indicate control communications. The LTTng relay daemon is
1353 shown running on a remote system, although it could as well run on the
1354 target (monitored) system.
1355
1356 [role="img-100"]
1357 .Control and data paths between LTTng components.
1358 image::plumbing-26.png[]
1359
1360 Each component is described in the following subsections.
1361
1362
1363 [[lttng-sessiond]]
1364 ==== Session daemon
1365
1366 At the heart of LTTng's plumbing is the _session daemon_, often called
1367 by its command name, `lttng-sessiond`.
1368
1369 The session daemon is responsible for managing tracing sessions and
1370 what they logically contain (channel properties, enabled/disabled
1371 events, and the rest). By communicating locally with instrumented
1372 applications (using LTTng-UST) and with the LTTng Linux kernel modules
1373 (LTTng-modules), it oversees all tracing activities.
1374
1375 One of the many things that `lttng-sessiond` does is to keep
1376 track of the available event types. User space applications and
1377 libraries actively connect and register to the session daemon when they
1378 start. By contrast, `lttng-sessiond` seeks out and loads the appropriate
1379 LTTng kernel modules as part of its own initialization. Kernel event
1380 types are _pulled_ by `lttng-sessiond`, whereas user space event types
1381 are _pushed_ to it by the various user space tracepoint providers.
1382
1383 Using a specific inter-process communication protocol with Linux kernel
1384 and user space tracers, the session daemon can send channel information
1385 so that they are initialized, enable/disable specific probes based on
1386 enabled/disabled events by the user, send event filters information to
1387 LTTng tracers so that filtering actually happens at the tracer site,
1388 start/stop tracing a specific application or the Linux kernel, and more.
1389
1390 The session daemon is not useful without some user controlling it,
1391 because it's only a sophisticated control interchange and thus
1392 doesn't make any decision on its own. `lttng-sessiond` opens a local
socket for controlling it, although the preferred way to control it is
1394 using `liblttng-ctl`, an installed C library hiding the communication
1395 protocol behind an easy-to-use API. The `lttng` tool makes use of
1396 `liblttng-ctl` to implement a user-friendly command line interface.
1397
1398 `lttng-sessiond` does not receive any trace data from instrumented
1399 applications; the _consumer daemons_ are the programs responsible for
1400 collecting trace data using shared ring buffers. However, the session
1401 daemon is the one that must spawn a consumer daemon and establish
1402 a control communication with it.
1403
1404 Session daemons run on a per-user basis. Knowing this, multiple
1405 instances of `lttng-sessiond` may run simultaneously, each belonging
1406 to a different user and each operating independently of the others.
1407 Only `root`'s session daemon, however, may control LTTng kernel modules
1408 (that is, the kernel tracer). With that in mind, if a user has no root
access on the target system, they cannot trace the system's kernel, but
they can still trace their own instrumented applications.
1411
1412 It has to be noted that, although only `root`'s session daemon may
1413 control the kernel tracer, the `lttng-sessiond` command has a `--group`
1414 option which may be used to specify the name of a special user group
1415 allowed to communicate with `root`'s session daemon and thus record
1416 kernel traces. By default, this group is named `tracing`.
1417
1418 If not done yet, the `lttng` tool, by default, automatically starts a
1419 session daemon. `lttng-sessiond` may also be started manually:
1420
1421 [role="term"]
1422 ----
1423 lttng-sessiond
1424 ----
1425
1426 This starts the session daemon in foreground. Use
1427
1428 [role="term"]
1429 ----
1430 lttng-sessiond --daemonize
1431 ----
1432
1433 to start it as a true daemon.
1434
1435 To kill the current user's session daemon, `pkill` may be used:
1436
1437 [role="term"]
1438 ----
1439 pkill lttng-sessiond
1440 ----
1441
1442 The default `SIGTERM` signal terminates it cleanly.
1443
1444 Several other options are available and described in
1445 man:lttng-sessiond(8) or by running `lttng-sessiond --help`.
1446
1447
1448 [[lttng-consumerd]]
1449 ==== Consumer daemon
1450
1451 The _consumer daemon_, or `lttng-consumerd`, is a program sharing some
1452 ring buffers with user applications or the LTTng kernel modules to
1453 collect trace data and output it at some place (on disk or sent over
1454 the network to an LTTng relay daemon).
1455
1456 Consumer daemons are created by a session daemon as soon as events are
1457 enabled within a tracing session, well before tracing is activated
1458 for the latter. Entirely managed by session daemons,
1459 consumer daemons survive session destruction to be reused later,
1460 should a new tracing session be created. Consumer daemons are always
1461 owned by the same user as their session daemon. When its owner session
1462 daemon is killed, the consumer daemon also exits. This is because
1463 the consumer daemon is always the child process of a session daemon.
1464 Consumer daemons should never be started manually. For this reason,
1465 they are not installed in one of the usual locations listed in the
1466 `PATH` environment variable. `lttng-sessiond` has, however, a
1467 bunch of options (see man:lttng-sessiond(8)) to
1468 specify custom consumer daemon paths if, for some reason, a consumer
1469 daemon other than the default installed one is needed.
1470
1471 There are up to two running consumer daemons per user, whereas only one
1472 session daemon may run per user. This is because each process has
1473 independent bitness: if the target system runs a mixture of 32-bit and
1474 64-bit processes, it is more efficient to have separate corresponding
1475 32-bit and 64-bit consumer daemons. The `root` user is an exception: it
1476 may have up to _three_ running consumer daemons: 32-bit and 64-bit
1477 instances for its user space applications and one more reserved for
1478 collecting kernel trace data.
1479
1480 As new tracing domains are added to LTTng, the development community's
intent is to minimize the need for additional consumer daemon instances
1482 dedicated to them. For instance, the `java.util.logging` (JUL) domain
1483 events are in fact mapped to the user space domain, thus tracing this
1484 particular domain is handled by existing user space domain consumer
1485 daemons.
1486
1487
1488 [[lttng-relayd]]
1489 ==== Relay daemon
1490
1491 When a tracing session is configured to send its trace data over the
1492 network, an LTTng _relay daemon_ must be used at the other end to
1493 receive trace packets and serialize them to trace files. This setup
1494 makes it possible to trace a target system without ever committing trace
1495 data to its local storage, a feature which is useful for embedded
1496 systems, amongst others. The command implementing the relay daemon
1497 is `lttng-relayd`.
1498
1499 The basic use case of `lttng-relayd` is to transfer trace data received
1500 over the network to trace files on the local file system. The relay
1501 daemon must listen on two TCP ports to achieve this: one control port,
1502 used by the target session daemon, and one data port, used by the
1503 target consumer daemon. The relay and session daemons agree on common
1504 default ports when custom ones are not specified.
1505
1506 Since the communication transport protocol for both ports is standard
1507 TCP, the relay daemon may be started either remotely or locally (on the
1508 target system).
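
As a minimal sketch, assuming `lttng-relayd` is already running with its
default ports on a hypothetical remote host named `remote-host`, a tracing
session on the target system could be created to send its trace data there:

[role="term"]
----
lttng create my-session --set-url net://remote-host
----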
1509
While two consumer daemon instances (32-bit and 64-bit) may run
concurrently for a given user, a single `lttng-relayd` instance is enough:
it only needs to match the bitness of its host operating system.
1513
The other important feature of LTTng's relay daemon is its support of
_LTTng live_, an application protocol for viewing events as
they arrive. The relay daemon still records events in trace files,
but a _tee_ makes it possible to inspect incoming events as well.
1518
1519 [role="img-100"]
1520 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
1521 image::lttng-live.png[]
1522
Using LTTng live locally thus requires running a local relay daemon.
1524
1525
1526 [[liblttng-ctl-lttng]]
1527 ==== [[lttng-cli]]Control library and command line interface
1528
1529 The LTTng control library, `liblttng-ctl`, can be used to communicate
1530 with the session daemon using a C API that hides the underlying
1531 protocol's details. `liblttng-ctl` is part of LTTng-tools.
1532
1533 `liblttng-ctl` may be used by including its "master" header:
1534
1535 [source,c]
1536 ----
1537 #include <lttng/lttng.h>
1538 ----
1539
Some objects are referred to by name (C string), such as tracing sessions,
but most of them require creating a handle first using
`lttng_create_handle()`. The best available developer documentation for
`liblttng-ctl` is, for the moment, its installed header files:
every function and structure is thoroughly documented there.
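
The following is a minimal sketch of a `liblttng-ctl` client; the session
name, output location and event name are arbitrary examples and error
handling is omitted for brevity:

[source,c]
----
#include <string.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_domain domain;
    struct lttng_event event;
    struct lttng_handle *handle;

    /* create a tracing session writing to a local directory */
    lttng_create_session("my-session", "file:///tmp/my-traces");

    /* get a handle on the user space domain of this session */
    memset(&domain, 0, sizeof(domain));
    domain.type = LTTNG_DOMAIN_UST;
    handle = lttng_create_handle("my-session", &domain);

    /* enable one specific user space tracepoint event */
    memset(&event, 0, sizeof(event));
    event.type = LTTNG_EVENT_TRACEPOINT;
    strcpy(event.name, "my_provider:my_first_tracepoint");
    lttng_enable_event(handle, &event, NULL);

    /* start tracing, let instrumented applications run, then stop */
    lttng_start_tracing("my-session");
    /* ... */
    lttng_stop_tracing("my-session");

    lttng_destroy_session("my-session");
    lttng_destroy_handle(handle);

    return 0;
}
----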
1545
1546 The `lttng` program is the _de facto_ standard user interface to
1547 control LTTng tracing sessions. `lttng` uses `liblttng-ctl` to
1548 communicate with session daemons behind the scenes.
1549 Its man page, man:lttng(1), is exhaustive, as well as its command
1550 line help (+lttng _cmd_ --help+, where +_cmd_+ is the command name).
1551
1552 The <<controlling-tracing,Controlling tracing>> section is a feature
1553 tour of the `lttng` tool.
1554
1555
1556 [[lttng-ust]]
1557 ==== User space tracing library
1558
1559 The user space tracing part of LTTng is possible thanks to the user
1560 space tracing library, `liblttng-ust`, which is part of the LTTng-UST
1561 package.
1562
1563 `liblttng-ust` provides header files containing macros used to define
1564 tracepoints and create tracepoint providers, as well as a shared object
1565 that must be linked to individual applications to connect to and
1566 communicate with a session daemon and a consumer daemon as soon as the
1567 application starts.
1568
1569 The exact mechanism by which an application is registered to the
1570 session daemon is beyond the scope of this documentation. The only thing
1571 you need to know is that, since the library constructor does this job
1572 automatically, tracepoints may be safely inserted anywhere in the source
1573 code without prior manual initialization of `liblttng-ust`.
1574
1575 The `liblttng-ust`-session daemon collaboration also provides an
1576 interesting feature: user space events may be enabled _before_
1577 applications actually start. By doing this and starting tracing before
1578 launching the instrumented application, you make sure that even the
1579 earliest occurring events can be recorded.
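
As a brief sketch, assuming an application instrumented with a hypothetical
`my_provider` tracepoint provider, the following sequence enables its
events before the application is launched:

[role="term"]
----
lttng create
lttng enable-event --userspace 'my_provider:*'
lttng start
./app
----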
1580
1581 The <<c-application,C application>> instrumenting guide of the
1582 <<using-lttng,Using LTTng>> chapter focuses on using `liblttng-ust`:
1583 instrumenting, building/linking and running a user application.
1584
1585
1586 [[lttng-modules]]
1587 ==== LTTng kernel modules
1588
1589 The LTTng Linux kernel modules provide everything needed to trace the
1590 Linux kernel: various probes, a ring buffer implementation for a
1591 consumer daemon to read trace data and the tracer itself.
1592
1593 Only in exceptional circumstances should you ever need to load the
LTTng kernel modules manually: it is normally the responsibility of
1595 `root`'s session daemon to do so. Even if you were to develop your
1596 own LTTng probe module--for tracing a custom kernel or some kernel
1597 module (this topic is covered in the
1598 <<instrumenting-linux-kernel,Linux kernel>> instrumenting guide of
1599 the <<using-lttng,Using LTTng>> chapter)&#8212;you
1600 should use the `--extra-kmod-probes` option of the session daemon to
1601 append your probe to the default list. The session and consumer daemons
1602 of regular users do not interact with the LTTng kernel modules at all.
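
For example, assuming a hypothetical extra probe module named
`lttng-probe-my-driver`, `root` could start the session daemon with
something along these lines (see man:lttng-sessiond(8) for the exact
value format of this option):

[role="term"]
----
lttng-sessiond --daemonize --extra-kmod-probes=my-driver
----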
1603
1604 LTTng kernel modules are installed, by default, in
1605 +/usr/lib/modules/_release_/extra+, where +_release_+ is the
1606 kernel release (see `uname --kernel-release`).
1607
1608
1609 [[using-lttng]]
1610 == Using LTTng
1611
1612 Using LTTng involves two main activities: **instrumenting** and
1613 **controlling tracing**.
1614
1615 _<<instrumenting,Instrumenting>>_ is the process of inserting probes
1616 into some source code. It can be done manually, by writing tracepoint
1617 calls at specific locations in the source code of the program to trace,
or more automatically, using dynamic probes (an address in assembled code,
a symbol name, function entry/return, and others).
1620
1621 It has to be noted that, as an LTTng user, you may not have to worry
1622 about the instrumentation process. Indeed, you may want to trace a
1623 program already instrumented. As an example, the Linux kernel is
1624 thoroughly instrumented, which is why you can trace it without caring
1625 about adding probes.
1626
1627 _<<controlling-tracing,Controlling tracing>>_ is everything
1628 that can be done by the LTTng session daemon, which is controlled using
1629 `liblttng-ctl` or its command line utility, `lttng`: creating tracing
1630 sessions, listing tracing sessions and events, enabling/disabling
1631 events, starting/stopping the tracers, taking snapshots, amongst many
1632 other commands.
1633
1634 This chapter is a complete user guide of both activities,
1635 with common use cases of LTTng exposed throughout the text. It is
1636 assumed that you are familiar with LTTng's concepts (events, channels,
1637 domains, tracing sessions) and that you understand the roles of its
1638 components (daemons, libraries, command line tools); if not, we invite
1639 you to read the <<understanding-lttng,Understanding LTTng>> chapter
1640 before you begin reading this one.
1641
1642 If you're new to LTTng, we suggest that you rather start with the
1643 <<getting-started,Getting started>> small guide first, then come
1644 back here to broaden your knowledge.
1645
1646 If you're only interested in tracing the Linux kernel with its current
1647 instrumentation, you may skip the
1648 <<instrumenting,Instrumenting>> section.
1649
1650
1651 [[instrumenting]]
1652 === Instrumenting
1653
1654 There are many examples of tracing and monitoring in our everyday life.
1655 You have access to real-time and historical weather reports and forecasts
1656 thanks to weather stations installed around the country. You know your
1657 possibly hospitalized friends' and family's hearts are safe thanks to
1658 electrocardiography. You make sure not to drive your car too fast
1659 and have enough fuel to reach your destination thanks to gauges visible
1660 on your dashboard.
1661
1662 All the previous examples have something in common: they rely on
1663 **probes**. Without electrodes attached to the surface of a body's
1664 skin, cardiac monitoring would be futile.
1665
1666 LTTng, as a tracer, is no different from the real life examples above.
If you're about to trace a software system or, in other words, record its
history of execution, you had better have probes in the subject you're
tracing: the actual software. Various ways have been developed to do this.
The most straightforward one is to manually place probes, called
_tracepoints_, in the software's source code. The Linux kernel tracing
domain also allows probes to be added dynamically.
1673
1674 If you're only interested in tracing the Linux kernel, it may very well
1675 be that your tracing needs are already appropriately covered by LTTng's
1676 built-in Linux kernel tracepoints and other probes. Or you may be in
1677 possession of a user space application which has already been
1678 instrumented. In such cases, the work resides entirely in the design
1679 and execution of tracing sessions, allowing you to jump to
1680 <<controlling-tracing,Controlling tracing>> right now.
1681
1682 This chapter focuses on the following use cases of instrumentation:
1683
1684 * <<c-application,C>> and <<cxx-application,$$C++$$>> applications
1685 * <<prebuilt-ust-helpers,prebuilt user space tracing helpers>>
1686 * <<java-application,Java application>>
1687 * <<instrumenting-linux-kernel,Linux kernel>> module or the
1688 kernel itself
1689 * the <<proc-lttng-logger-abi,path:{/proc/lttng-logger} ABI>>
1690
1691 Some advanced techniques are also presented at the very end of this
1692 chapter.
1693
1694
1695 [[c-application]]
1696 ==== C application
1697
1698 Instrumenting a C (or $$C++$$) application, be it an executable program
1699 or a library, implies using LTTng-UST, the
1700 user space tracing component of LTTng. For C/$$C++$$ applications, the
1701 LTTng-UST package includes a dynamically loaded library
1702 (`liblttng-ust`), C headers and the `lttng-gen-tp` command line utility.
1703
Since C and $$C++$$ are the languages in which the runtimes of virtually
all other programming languages are implemented
(the Java virtual machine and the Python, Perl, PHP and Node.js
interpreters, to name a few), implementing user space tracing for an
unsupported language is just a matter of using the LTTng-UST C API in the
right places.
1709
1710 The usual work flow to instrument a user space C application with
1711 LTTng-UST is:
1712
1713 . Define tracepoints (actual probes)
1714 . Write tracepoint providers
1715 . Insert tracepoints into target source code
1716 . Package (build) tracepoint providers
1717 . Build user application and link it with tracepoint providers
1718
1719 The steps above are discussed in greater detail in the following
1720 subsections.
1721
1722
1723 [[tracepoint-provider]]
1724 ===== Tracepoint provider
1725
1726 Before jumping into defining tracepoints and inserting
1727 them into the application source code, you must understand what a
1728 _tracepoint provider_ is.
1729
1730 For the sake of this guide, consider the following two files:
1731
1732 [source,c]
1733 .path:{tp.h}
1734 ----
1735 #undef TRACEPOINT_PROVIDER
1736 #define TRACEPOINT_PROVIDER my_provider
1737
1738 #undef TRACEPOINT_INCLUDE
1739 #define TRACEPOINT_INCLUDE "./tp.h"
1740
1741 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1742 #define _TP_H
1743
1744 #include <lttng/tracepoint.h>
1745
1746 TRACEPOINT_EVENT(
1747 my_provider,
1748 my_first_tracepoint,
1749 TP_ARGS(
1750 int, my_integer_arg,
1751 char*, my_string_arg
1752 ),
1753 TP_FIELDS(
1754 ctf_string(my_string_field, my_string_arg)
1755 ctf_integer(int, my_integer_field, my_integer_arg)
1756 )
1757 )
1758
1759 TRACEPOINT_EVENT(
1760 my_provider,
1761 my_other_tracepoint,
1762 TP_ARGS(
1763 int, my_int
1764 ),
1765 TP_FIELDS(
1766 ctf_integer(int, some_field, my_int)
1767 )
1768 )
1769
1770 #endif /* _TP_H */
1771
1772 #include <lttng/tracepoint-event.h>
1773 ----
1774
1775 [source,c]
1776 .path:{tp.c}
1777 ----
1778 #define TRACEPOINT_CREATE_PROBES
1779
1780 #include "tp.h"
1781 ----
1782
The two files above define a _tracepoint provider_. A tracepoint
provider is a sort of namespace for _tracepoint definitions_. Tracepoint
definitions are written above with the `TRACEPOINT_EVENT()` macro, and allow
`tracepoint()` calls matching those definitions to be inserted
into the user application's C source code (we explore this in a
later section).
1789
1790 Many tracepoint definitions may be part of the same tracepoint provider
1791 and many tracepoint providers may coexist in a user space application. A
1792 tracepoint provider is packaged either:
1793
1794 * directly into an existing user application's C source file
1795 * as an object file
1796 * as a static library
1797 * as a shared library
1798
1799 The two files above, path:{tp.h} and path:{tp.c}, show a typical template for
1800 writing a tracepoint provider. LTTng-UST was designed so that two
1801 tracepoint providers should not be defined in the same header file.
1802
1803 We will now go through the various parts of the above files and
1804 give them a meaning. As you may have noticed, the LTTng-UST API for
1805 C/$$C++$$ applications is some preprocessor sorcery. The LTTng-UST macros
1806 used in your application and those in the LTTng-UST headers are
1807 combined to produce actual source code needed to make tracing possible
1808 using LTTng.
1809
1810 Let's start with the header file, path:{tp.h}. It begins with
1811
1812 [source,c]
1813 ----
1814 #undef TRACEPOINT_PROVIDER
1815 #define TRACEPOINT_PROVIDER my_provider
1816 ----
1817
1818 `TRACEPOINT_PROVIDER` defines the name of the provider to which the
1819 following tracepoint definitions belong. It is used internally by
1820 LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
1821 could have been defined by another header file also included by the same
1822 C source file, the best practice is to undefine it first.
1823
1824 NOTE: Names in LTTng-UST follow the C
1825 _identifier_ syntax (starting with a letter and containing either
1826 letters, numbers or underscores); they are _not_ C strings
1827 (not surrounded by double quotes). This is because LTTng-UST macros
1828 use those identifier-like strings to create symbols (named types and
1829 variables).
1830
1831 The tracepoint provider is a group of tracepoint definitions; its chosen
1832 name should reflect this. A hierarchy like Java packages is recommended,
1833 using underscores instead of dots, for example,
1834 `org_company_project_component`.
1835
1836 Next is `TRACEPOINT_INCLUDE`:
1837
1838 [source,c]
1839 ----
1840 #undef TRACEPOINT_INCLUDE
1841 #define TRACEPOINT_INCLUDE "./tp.h"
1842 ----
1843
This little bit of introspection is needed by LTTng-UST to include
1845 your header at various predefined places.
1846
1847 Include guard follows:
1848
1849 [source,c]
1850 ----
1851 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1852 #define _TP_H
1853 ----
1854
These preprocessor conditionals ensure that the tracepoint event
generation mechanism can include this file more than once.
1857
The `TRACEPOINT_EVENT()` macro is defined in an LTTng-UST header file which
1859 must be included:
1860
1861 [source,c]
1862 ----
1863 #include <lttng/tracepoint.h>
1864 ----
1865
1866 This also allows the application to use the `tracepoint()` macro.
1867
1868 Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
1869 actual tracepoint definitions. We skip this for the moment and
1870 come back to how to use `TRACEPOINT_EVENT()`
1871 <<defining-tracepoints,in a later section>>. Just pay attention to
1872 the first argument: it's always the name of the tracepoint provider
1873 being defined in this header file.
1874
1875 End of include guard:
1876
1877 [source,c]
1878 ----
1879 #endif /* _TP_H */
1880 ----
1881
1882 Finally, include `<lttng/tracepoint-event.h>` to expand the macros:
1883
1884 [source,c]
1885 ----
1886 #include <lttng/tracepoint-event.h>
1887 ----
1888
1889 That's it for path:{tp.h}. Of course, this is only a header file; it must be
1890 included in some C source file to actually use it. This is the job of
1891 path:{tp.c}:
1892
1893 [source,c]
1894 ----
1895 #define TRACEPOINT_CREATE_PROBES
1896
1897 #include "tp.h"
1898 ----
1899
1900 When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in path:{tp.h},
1901 which is included just after, actually create the source code for
1902 LTTng-UST probes (global data structures and functions) out of your
1903 tracepoint definitions. How exactly this is done is out of this text's scope.
1904 `TRACEPOINT_CREATE_PROBES` is discussed further
1905 in
1906 <<building-tracepoint-providers-and-user-application,Building/linking
1907 tracepoint providers and the user application>>.
1908
1909 You could include other header files like path:{tp.h} here to create the probes
1910 of different tracepoint providers, for example:
1911
1912 [source,c]
1913 ----
1914 #define TRACEPOINT_CREATE_PROBES
1915
1916 #include "tp1.h"
1917 #include "tp2.h"
1918 ----
1919
1920 The rule is: probes of a given tracepoint provider
1921 must be created in exactly one source file. This source file could be one
1922 of your project's; it doesn't have to be on its own like
1923 path:{tp.c}, although
1924 <<building-tracepoint-providers-and-user-application,a later section>>
shows that doing so allows you to package the tracepoint providers
independently and keep them out of your application, also making it
1927 possible to reuse them between projects.
1928
1929 The following sections explain how to define tracepoints, how to use the
1930 `tracepoint()` macro to instrument your user space C application and how
1931 to build/link tracepoint providers and your application with LTTng-UST
1932 support.
1933
1934
1935 [[lttng-gen-tp]]
1936 ===== Using `lttng-gen-tp`
1937
1938 LTTng-UST ships with `lttng-gen-tp`, a handy command line utility for
1939 generating most of the stuff discussed above. It takes a _template file_,
1940 with a name usually ending with the `.tp` extension, containing only
1941 tracepoint definitions, and outputs a tracepoint provider (either a C
1942 source file or a precompiled object file) with its header file.
1943
1944 `lttng-gen-tp` should suffice in <<static-linking,static linking>>
1945 situations. When using it, write a template file containing a list of
1946 `TRACEPOINT_EVENT()` macro calls. The tool finds the provider names
used and generates the appropriate files, which look a lot
like path:{tp.h} and path:{tp.c} above.
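
As a brief illustration, a template file reusing the first tracepoint
definition shown earlier could look like this:

[source,c]
.path:{my-template.tp}
----
TRACEPOINT_EVENT(
    my_provider,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)
----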
1949
1950 Just call `lttng-gen-tp` like this:
1951
1952 [role="term"]
1953 ----
1954 lttng-gen-tp my-template.tp
1955 ----
1956
1957 path:{my-template.c}, path:{my-template.o} and path:{my-template.h}
1958 are created in the same directory.
1959
1960 You may specify custom C flags passed to the compiler invoked by
1961 `lttng-gen-tp` using the `CFLAGS` environment variable:
1962
1963 [role="term"]
1964 ----
1965 CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp
1966 ----
1967
1968 For more information on `lttng-gen-tp`, see man:lttng-gen-tp(1).
1969
1970
1971 [[defining-tracepoints]]
1972 ===== Defining tracepoints
1973
1974 As written in <<tracepoint-provider,Tracepoint provider>>,
1975 tracepoints are defined using the
1976 `TRACEPOINT_EVENT()` macro. Each tracepoint, when called using the
1977 `tracepoint()` macro in the actual application's source code, generates
1978 a specific event type with its own fields.
1979
1980 Let's have another look at the example above, with a few added comments:
1981
1982 [source,c]
1983 ----
1984 TRACEPOINT_EVENT(
1985 /* tracepoint provider name */
1986 my_provider,
1987
1988 /* tracepoint/event name */
1989 my_first_tracepoint,
1990
1991 /* list of tracepoint arguments */
1992 TP_ARGS(
1993 int, my_integer_arg,
1994 char*, my_string_arg
1995 ),
1996
1997 /* list of fields of eventual event */
1998 TP_FIELDS(
1999 ctf_string(my_string_field, my_string_arg)
2000 ctf_integer(int, my_integer_field, my_integer_arg)
2001 )
2002 )
2003 ----
2004
2005 The tracepoint provider name must match the name of the tracepoint
2006 provider in which this tracepoint is defined
2007 (see <<tracepoint-provider,Tracepoint provider>>). In other words,
2008 always use the same string as the value of `TRACEPOINT_PROVIDER` above.
2009
2010 The tracepoint name becomes the event name once events are recorded
2011 by the LTTng-UST tracer. It must follow the tracepoint provider name
2012 syntax: start with a letter and contain either letters, numbers or
2013 underscores. Two tracepoints under the same provider cannot have the
2014 same name. In other words, you cannot overload a tracepoint like you
2015 would overload functions and methods in $$C++$$/Java.
2016
2017 NOTE: The concatenation of the tracepoint
2018 provider name and the tracepoint name cannot exceed 254 characters. If
2019 it does, the instrumented application compiles and runs, but LTTng
2020 issues multiple warnings and you could experience serious problems.
2021
2022 The list of tracepoint arguments gives this tracepoint its signature:
2023 see it like the declaration of a C function. The format of `TP_ARGS()`
2024 arguments is: C type, then argument name; repeat as needed, up to ten
2025 times. For example, if we were to replicate the signature of C standard
2026 library's `fseek()`, the `TP_ARGS()` part would look like:
2027
2028 [source,c]
2029 ----
2030 TP_ARGS(
2031 FILE*, stream,
2032 long int, offset,
2033 int, origin
2034 ),
2035 ----
2036
2037 Of course, you need to include appropriate header files before
2038 the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.
2039
2040 `TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
2041 also accepted.
2042
2043 The list of fields is where the fun really begins. The fields defined
2044 in this list are the fields of the events generated by the execution
2045 of this tracepoint. Each tracepoint field definition has a C
2046 _argument expression_ which is evaluated when the execution reaches
2047 the tracepoint. Tracepoint arguments _may be_ used freely in those
2048 argument expressions, but they _don't_ have to.
2049
2050 There are several types of tracepoint fields available. The macros to
2051 define them are given and explained in the
2052 <<liblttng-ust-tp-fields,LTTng-UST library reference>> section.
2053
2054 Field names must follow the standard C identifier syntax: letter, then
2055 optional sequence of letters, numbers or underscores. Each field must have
2056 a different name.
2057
2058 Those `ctf_*()` macros are added to the `TP_FIELDS()` part of
2059 `TRACEPOINT_EVENT()`. Note that they are not delimited by commas.
2060 `TP_FIELDS()` may be empty, but the `TP_FIELDS(void)` form is _not_
2061 accepted.
2062
2063 The following snippet shows how argument expressions may be used in
2064 tracepoint fields and how they may refer freely to tracepoint arguments.
2065
2066 [source,c]
2067 ----
2068 /* for struct stat */
2069 #include <sys/types.h>
2070 #include <sys/stat.h>
2071 #include <unistd.h>
2072
2073 TRACEPOINT_EVENT(
2074 my_provider,
2075 my_tracepoint,
2076 TP_ARGS(
2077 int, my_int_arg,
2078 char*, my_str_arg,
2079 struct stat*, st
2080 ),
2081 TP_FIELDS(
2082 /* simple integer field with constant value */
2083 ctf_integer(
2084 int, /* field C type */
2085 my_constant_field, /* field name */
2086 23 + 17 /* argument expression */
2087 )
2088
2089 /* my_int_arg tracepoint argument */
2090 ctf_integer(
2091 int,
2092 my_int_arg_field,
2093 my_int_arg
2094 )
2095
2096 /* my_int_arg squared */
2097 ctf_integer(
2098 int,
2099 my_int_arg_field2,
2100 my_int_arg * my_int_arg
2101 )
2102
2103 /* sum of first 4 characters of my_str_arg */
2104 ctf_integer(
2105 int,
            sum4_field,
2107 my_str_arg[0] + my_str_arg[1] +
2108 my_str_arg[2] + my_str_arg[3]
2109 )
2110
2111 /* my_str_arg as string field */
2112 ctf_string(
2113 my_str_arg_field, /* field name */
2114 my_str_arg /* argument expression */
2115 )
2116
2117 /* st_size member of st tracepoint argument, hexadecimal */
2118 ctf_integer_hex(
2119 off_t, /* field C type */
2120 size_field, /* field name */
2121 st->st_size /* argument expression */
2122 )
2123
2124 /* st_size member of st tracepoint argument, as double */
2125 ctf_float(
2126 double, /* field C type */
2127 size_dbl_field, /* field name */
2128 (double) st->st_size /* argument expression */
2129 )
2130
2131 /* half of my_str_arg string as text sequence */
2132 ctf_sequence_text(
2133 char, /* element C type */
2134 half_my_str_arg_field, /* field name */
2135 my_str_arg, /* argument expression */
2136 size_t, /* length expression C type */
2137 strlen(my_str_arg) / 2 /* length expression */
2138 )
2139 )
2140 )
2141 ----
2142
2143 As you can see, having a custom argument expression for each field
2144 makes tracepoints very flexible for tracing a user space C application.
2145 This tracepoint definition is reused later in this guide, when
2146 actually using tracepoints in a user space application.
2147
2148
2149 [[using-tracepoint-classes]]
2150 ===== Using tracepoint classes
2151
2152 In LTTng-UST, a _tracepoint class_ is a class of tracepoints sharing the
2153 same field types and names. A _tracepoint instance_ is one instance of
2154 such a declared tracepoint class, with its own event name and tracepoint
2155 provider name.
2156
2157 What is documented in <<defining-tracepoints,Defining tracepoints>>
2158 is actually how to declare a _tracepoint class_ and define a
2159 _tracepoint instance_ at the same time. Without revealing the internals
2160 of LTTng-UST too much, it has to be noted that one serialization
2161 function is created for each tracepoint class. A serialization
2162 function is responsible for serializing the fields of a tracepoint
2163 into a sub-buffer when tracing. For various performance reasons, when
2164 your situation requires multiple tracepoints with different names, but
2165 with the same fields layout, the best practice is to manually create
2166 a tracepoint class and instantiate as many tracepoint instances as
2167 needed. One positive effect of such a design, amongst other advantages,
2168 is that all tracepoint instances of the same tracepoint class
2169 reuse the same serialization function, thus reducing cache pollution.
2170
2171 As an example, here are three tracepoint definitions as we know them:
2172
2173 [source,c]
2174 ----
2175 TRACEPOINT_EVENT(
2176 my_app,
2177 get_account,
2178 TP_ARGS(
2179 int, userid,
2180 size_t, len
2181 ),
2182 TP_FIELDS(
2183 ctf_integer(int, userid, userid)
2184 ctf_integer(size_t, len, len)
2185 )
2186 )
2187
2188 TRACEPOINT_EVENT(
2189 my_app,
2190 get_settings,
2191 TP_ARGS(
2192 int, userid,
2193 size_t, len
2194 ),
2195 TP_FIELDS(
2196 ctf_integer(int, userid, userid)
2197 ctf_integer(size_t, len, len)
2198 )
2199 )
2200
2201 TRACEPOINT_EVENT(
2202 my_app,
2203 get_transaction,
2204 TP_ARGS(
2205 int, userid,
2206 size_t, len
2207 ),
2208 TP_FIELDS(
2209 ctf_integer(int, userid, userid)
2210 ctf_integer(size_t, len, len)
2211 )
2212 )
2213 ----
2214
2215 In this case, three tracepoint classes are created, with one tracepoint
2216 instance for each of them: `get_account`, `get_settings` and
2217 `get_transaction`. However, they all share the same field names and
2218 types. Declaring one tracepoint class and three tracepoint instances of
2219 the latter is a better design choice:
2220
2221 [source,c]
2222 ----
2223 /* the tracepoint class */
2224 TRACEPOINT_EVENT_CLASS(
2225 /* tracepoint provider name */
2226 my_app,
2227
2228 /* tracepoint class name */
2229 my_class,
2230
2231 /* arguments */
2232 TP_ARGS(
2233 int, userid,
2234 size_t, len
2235 ),
2236
2237 /* fields */
2238 TP_FIELDS(
2239 ctf_integer(int, userid, userid)
2240 ctf_integer(size_t, len, len)
2241 )
2242 )
2243
2244 /* the tracepoint instances */
2245 TRACEPOINT_EVENT_INSTANCE(
2246 /* tracepoint provider name */
2247 my_app,
2248
2249 /* tracepoint class name */
2250 my_class,
2251
2252 /* tracepoint/event name */
2253 get_account,
2254
2255 /* arguments */
2256 TP_ARGS(
2257 int, userid,
2258 size_t, len
2259 )
2260 )
2261 TRACEPOINT_EVENT_INSTANCE(
2262 my_app,
2263 my_class,
2264 get_settings,
2265 TP_ARGS(
2266 int, userid,
2267 size_t, len
2268 )
2269 )
2270 TRACEPOINT_EVENT_INSTANCE(
2271 my_app,
2272 my_class,
2273 get_transaction,
2274 TP_ARGS(
2275 int, userid,
2276 size_t, len
2277 )
2278 )
2279 ----
2280
2281 Of course, all those names and `TP_ARGS()` invocations are redundant,
2282 but some C preprocessor magic can solve this:
2283
2284 [source,c]
2285 ----
2286 #define MY_TRACEPOINT_ARGS \
2287 TP_ARGS( \
2288 int, userid, \
2289 size_t, len \
2290 )
2291
2292 TRACEPOINT_EVENT_CLASS(
2293 my_app,
2294 my_class,
2295 MY_TRACEPOINT_ARGS,
2296 TP_FIELDS(
2297 ctf_integer(int, userid, userid)
2298 ctf_integer(size_t, len, len)
2299 )
2300 )
2301
2302 #define MY_APP_TRACEPOINT_INSTANCE(name) \
2303 TRACEPOINT_EVENT_INSTANCE( \
2304 my_app, \
2305 my_class, \
2306 name, \
2307 MY_TRACEPOINT_ARGS \
2308 )
2309
2310 MY_APP_TRACEPOINT_INSTANCE(get_account)
2311 MY_APP_TRACEPOINT_INSTANCE(get_settings)
2312 MY_APP_TRACEPOINT_INSTANCE(get_transaction)
2313 ----
2314
2315
2316 [[assigning-log-levels]]
2317 ===== Assigning log levels to tracepoints
2318
2319 Optionally, a log level can be assigned to a defined tracepoint.
2320 Assigning different levels of importance to tracepoints can be useful;
2321 when controlling tracing sessions,
2322 <<controlling-tracing,you can choose>> to only enable tracepoints
2323 falling into a specific log level range.
2324
2325 Log levels are assigned to defined tracepoints using the
2326 `TRACEPOINT_LOGLEVEL()` macro. The latter must be used _after_ having
2327 used `TRACEPOINT_EVENT()` for a given tracepoint. The
2328 `TRACEPOINT_LOGLEVEL()` macro has the following construct:
2329
2330 [source,c]
2331 ----
2332 TRACEPOINT_LOGLEVEL(PROVIDER_NAME, TRACEPOINT_NAME, LOG_LEVEL)
2333 ----
2334
2335 where the first two arguments are the same as the first two arguments
2336 of `TRACEPOINT_EVENT()` and `LOG_LEVEL` is one
2337 of the values given in the
2338 <<liblttng-ust-tracepoint-loglevel,LTTng-UST library reference>>
2339 section.
2340
2341 As an example, let's assign a `TRACE_DEBUG_UNIT` log level to our
2342 previous tracepoint definition:
2343
2344 [source,c]
2345 ----
2346 TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
2347 ----
2348
2349
2350 [[probing-the-application-source-code]]
2351 ===== Probing the application's source code
2352
2353 Once tracepoints are properly defined within a tracepoint provider,
2354 they may be inserted into the user application to be instrumented
2355 using the `tracepoint()` macro. Its first argument is the tracepoint
2356 provider name and its second is the tracepoint name. The next, optional
2357 arguments are defined by the `TP_ARGS()` part of the definition of
2358 the tracepoint to use.
2359
2360 As an example, let us again take the following tracepoint definition:
2361
2362 [source,c]
2363 ----
2364 TRACEPOINT_EVENT(
2365 /* tracepoint provider name */
2366 my_provider,
2367
2368 /* tracepoint/event name */
2369 my_first_tracepoint,
2370
2371 /* list of tracepoint arguments */
2372 TP_ARGS(
2373 int, my_integer_arg,
2374 char*, my_string_arg
2375 ),
2376
2377 /* list of fields of eventual event */
2378 TP_FIELDS(
2379 ctf_string(my_string_field, my_string_arg)
2380 ctf_integer(int, my_integer_field, my_integer_arg)
2381 )
2382 )
2383 ----
2384
2385 Assuming this is part of a file named path:{tp.h} which defines the tracepoint
2386 provider and which is included by path:{tp.c}, here's a complete C application
2387 calling this tracepoint (multiple times):
2388
2389 [source,c]
2390 ----
2391 #define TRACEPOINT_DEFINE
2392 #include "tp.h"
2393
2394 int main(int argc, char* argv[])
2395 {
2396 int i;
2397
2398 tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");
2399
2400 for (i = 0; i < argc; ++i) {
2401 tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
2402 }
2403
2404 return 0;
2405 }
2406 ----
2407
For each tracepoint provider, `TRACEPOINT_DEFINE` must be defined in
2409 exactly one translation unit (C source file) of the user application,
2410 before including the tracepoint provider header file. In other words,
2411 for a given tracepoint provider, you cannot define `TRACEPOINT_DEFINE`,
2412 and then include its header file in two separate C source files of
2413 the same application. `TRACEPOINT_DEFINE` is discussed further in
2414 <<building-tracepoint-providers-and-user-application,Building/linking
2415 tracepoint providers and the user application>>.
2416
2417 As another example, remember this definition we wrote in a previous
2418 section (comments are stripped):
2419
2420 [source,c]
2421 ----
2422 /* for struct stat */
2423 #include <sys/types.h>
2424 #include <sys/stat.h>
2425 #include <unistd.h>
2426
2427 TRACEPOINT_EVENT(
2428 my_provider,
2429 my_tracepoint,
2430 TP_ARGS(
2431 int, my_int_arg,
2432 char*, my_str_arg,
2433 struct stat*, st
2434 ),
2435 TP_FIELDS(
2436 ctf_integer(int, my_constant_field, 23 + 17)
2437 ctf_integer(int, my_int_arg_field, my_int_arg)
2438 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2439 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2440 my_str_arg[2] + my_str_arg[3])
2441 ctf_string(my_str_arg_field, my_str_arg)
2442 ctf_integer_hex(off_t, size_field, st->st_size)
2443 ctf_float(double, size_dbl_field, (double) st->st_size)
2444 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2445 size_t, strlen(my_str_arg) / 2)
2446 )
2447 )
2448 ----
2449
2450 Here's an example of calling it:
2451
2452 [source,c]
2453 ----
2454 #define TRACEPOINT_DEFINE
2455 #include "tp.h"
2456
2457 int main(void)
2458 {
2459 struct stat s;
2460
2461 stat("/etc/fstab", &s);
2462
2463 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2464
2465 return 0;
2466 }
2467 ----
2468
2469 When viewing the trace, assuming the file size of path:{/etc/fstab} is
2470 301{nbsp}bytes, the event generated by the execution of this tracepoint
2471 should have the following fields, in this order:
2472
2473 ----
2474 my_constant_field 40
2475 my_int_arg_field 23
2476 my_int_arg_field2 529
2477 sum4_field 389
2478 my_str_arg_field "Hello, World!"
2479 size_field 0x12d
2480 size_dbl_field 301.0
2481 half_my_str_arg_field "Hello,"
2482 ----
2483
2484
2485 [[building-tracepoint-providers-and-user-application]]
2486 ===== Building/linking tracepoint providers and the user application
2487
2488 The final step of using LTTng-UST for tracing a user space C application
2489 (beside running the application) is building and linking tracepoint
2490 providers and the application itself.
2491
2492 As discussed above, the macros used by the user-written tracepoint provider
header file are useless until actually used to create the probe code
2494 (global data structures and functions) in a translation unit (C source file).
2495 This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
2496 unit and then including the tracepoint provider header file.
2497 When `TRACEPOINT_CREATE_PROBES` is defined, macros used and included by
2498 the tracepoint provider header produce actual source code needed by any
2499 application using the defined tracepoints. Defining
`TRACEPOINT_CREATE_PROBES` also produces the code which registers the
tracepoint providers when the tracepoint provider package is loaded.
2502
2503 The other important definition is `TRACEPOINT_DEFINE`. This one creates
global, per-tracepoint structures referencing the tracepoint providers'
2505 data. Those structures are required by the actual functions inserted
2506 where `tracepoint()` macros are placed and need to be defined by the
2507 instrumented application.
2508
2509 Both `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` need to be defined
in the right places in order to trace a user space C application using LTTng.
Although explaining their exact mechanism is beyond the scope of this
document, the reason they both exist separately is to allow the tracepoint
providers to be packaged as a shared object (dynamically loaded library).
2514
2515 There are two ways to compile and link the tracepoint providers
2516 with the application: _<<static-linking,statically>>_ or
2517 _<<dynamic-linking,dynamically>>_. Both methods are covered in the
2518 following subsections.
2519
2520
2521 [[static-linking]]
2522 ===== Static linking the tracepoint providers to the application
2523
2524 With the static linking method, compiled tracepoint providers are copied
2525 into the target application. There are three ways to do this:
2526
2527 . Use one of your **existing C source files** to create probes.
2528 . Create probes in a separate C source file and build it as an
2529 **object file** to be linked with the application (more decoupled).
2530 . Create probes in a separate C source file, build it as an
2531 object file and archive it to create a **static library**
2532 (more decoupled, more portable).
2533
2534 The first approach is to define `TRACEPOINT_CREATE_PROBES` and include
2535 your tracepoint provider(s) header file(s) directly into an existing C
2536 source file. Here's an example:
2537
2538 [source,c]
2539 ----
2540 #include <stdlib.h>
2541 #include <stdio.h>
2542 /* ... */
2543
2544 #define TRACEPOINT_CREATE_PROBES
2545 #define TRACEPOINT_DEFINE
2546 #include "tp.h"
2547
2548 /* ... */
2549
2550 int my_func(int a, const char* b)
2551 {
2552 /* ... */
2553
    tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt);
2555
2556 /* ... */
2557 }
2558
2559 /* ... */
2560 ----
2561
2562 Again, before including a given tracepoint provider header file,
2563 `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` must be defined in
2564 one, **and only one**, translation unit. Other C source files of the
2565 same application may include path:{tp.h} to use tracepoints with
2566 the `tracepoint()` macro, but must not define
2567 `TRACEPOINT_CREATE_PROBES`/`TRACEPOINT_DEFINE` again.
2568
2569 This translation unit may be built as an object file by making sure to
2570 add `.` to the include path:
2571
2572 [role="term"]
2573 ----
2574 gcc -c -I. file.c
2575 ----
2576
2577 The second approach is to isolate the tracepoint provider code into a
2578 separate object file by using a dedicated C source file to create probes:
2579
2580 [source,c]
2581 ----
2582 #define TRACEPOINT_CREATE_PROBES
2583
2584 #include "tp.h"
2585 ----
2586
2587 `TRACEPOINT_DEFINE` must be defined by a translation unit of the
2588 application. Since we're talking about static linking here, it could as
2589 well be defined directly in the file above, before `#include "tp.h"`:
2590
2591 [source,c]
2592 ----
2593 #define TRACEPOINT_CREATE_PROBES
2594 #define TRACEPOINT_DEFINE
2595
2596 #include "tp.h"
2597 ----
2598
2599 This is actually what <<lttng-gen-tp,`lttng-gen-tp`>> does, and is
2600 the recommended practice.
2601
2602 Build the tracepoint provider:
2603
2604 [role="term"]
2605 ----
2606 gcc -c -I. tp.c
2607 ----
2608
2609 Finally, the resulting object file may be archived to create a
2610 more portable tracepoint provider static library:
2611
2612 [role="term"]
2613 ----
2614 ar rc tp.a tp.o
2615 ----
2616
Using a static library does have the advantage of centralizing the
tracepoint provider objects so they can be shared between multiple
applications. This way, when the tracepoint provider is modified, the
source code changes don't have to be patched into each application's source
code tree. The applications need to be relinked after each change, but need
not be otherwise recompiled (unless the tracepoint provider's API
2623 changes).
2624
2625 Regardless of which method you choose, you end up with an object file
(potentially archived) containing the compiled code of the tracepoint providers.
2627 To link this code with the rest of your application, you must also link
2628 with `liblttng-ust` and `libdl`:
2629
2630 [role="term"]
2631 ----
2632 gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl
2633 ----
2634
2635 or
2636
2637 [role="term"]
2638 ----
2639 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl
2640 ----
2641
2642 If you're using a BSD
2643 system, replace `-ldl` with `-lc`:
2644
2645 [role="term"]
2646 ----
2647 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc
2648 ----
2649
2650 The application can be started as usual, for example:
2651
2652 [role="term"]
2653 ----
2654 ./app
2655 ----
2656
2657 The `lttng` command line tool can be used to
2658 <<controlling-tracing,control tracing>>.
2659
2660
2661 [[dynamic-linking]]
2662 ===== Dynamic linking the tracepoint providers to the application
2663
2664 The second approach to package the tracepoint providers is to use
2665 dynamic linking: the library and its member functions are explicitly
2666 sought, loaded and unloaded at runtime using `libdl`.
2667
2668 It has to be noted that, for a variety of reasons, the created shared
library must be dynamically _loaded_, as opposed to dynamically
2670 _linked_. The tracepoint provider shared object is, however, linked
2671 with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
2672 as soon as the tracepoint provider is. If the tracepoint provider is
2673 not loaded, since the application itself is not linked with
2674 `liblttng-ust`, the latter is not loaded at all and the tracepoint calls
2675 become inert.
2676
2677 The process to create the tracepoint provider shared object is pretty
2678 much the same as the static library method, except that:
2679
2680 * since the tracepoint provider is not part of the application
2681 anymore, `TRACEPOINT_DEFINE` _must_ be defined, for each tracepoint
2682 provider, in exactly one translation unit (C source file) of the
2683 _application_;
2684 * `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` must be defined next to
2685 `TRACEPOINT_DEFINE`.
2686
2687 Regarding `TRACEPOINT_DEFINE` and `TRACEPOINT_PROBE_DYNAMIC_LINKAGE`,
2688 the recommended practice is to use a separate C source file in your
2689 application to define them, then include the tracepoint provider
2690 header files afterwards. For example:
2691
2692 [source,c]
2693 ----
2694 #define TRACEPOINT_DEFINE
2695 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2696
2697 /* include the header files of one or more tracepoint providers below */
2698 #include "tp1.h"
2699 #include "tp2.h"
2700 #include "tp3.h"
2701 ----
2702
2703 `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` makes the macros included afterwards
2704 (by including the tracepoint provider header, which itself includes
2705 LTTng-UST headers) aware that the tracepoint provider is to be loaded
2706 dynamically and not part of the application's executable.
2707
2708 The tracepoint provider object file used to create the shared library
is built as in the static library method, except that the
`-fpic` option is added:
2711
2712 [role="term"]
2713 ----
2714 gcc -c -fpic -I. tp.c
2715 ----
2716
2717 It is then linked as a shared library like this:
2718
2719 [role="term"]
2720 ----
2721 gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o
2722 ----
2723
2724 As previously stated, this tracepoint provider shared object isn't
2725 linked with the user application: it's loaded manually. This is
2726 why the application is built with no mention of this tracepoint
2727 provider, but still needs `libdl`:
2728
2729 [role="term"]
2730 ----
2731 gcc -o app other.o files.o of.o your.o app.o -ldl
2732 ----
2733
2734 Now, to make LTTng-UST tracing available to the application, the
2735 `LD_PRELOAD` environment variable is used to preload the tracepoint
2736 provider shared library _before_ the application actually starts:
2737
2738 [role="term"]
2739 ----
2740 LD_PRELOAD=/path/to/tp.so ./app
2741 ----
2742
2743 [NOTE]
2744 ====
2745 It is not safe to use
2746 `dlclose()` on a tracepoint provider shared object that
2747 is being actively used for tracing, due to a lack of reference
2748 counting from LTTng-UST to the shared object.
2749
2750 For example, statically linking a tracepoint provider to a
2751 shared object which is to be dynamically loaded by an application
2752 (a plugin, for example) is not safe: the shared object, which
2753 contains the tracepoint provider, could be dynamically closed
2754 (`dlclose()`) at any time by the application.
2755
2756 To instrument a shared object, either:
2757
2758 * Statically link the tracepoint provider to the _application_, or
2759 * Build the tracepoint provider as a shared object (following
2760 the procedure shown in this section), and preload it when
2761 tracing is needed using the `LD_PRELOAD`
2762 environment variable.
2763 ====
2764
2765 Your application will still work without this preloading, albeit without
2766 LTTng-UST tracing support:
2767
2768 [role="term"]
2769 ----
2770 ./app
2771 ----
2772
2773
2774 [[using-lttng-ust-with-daemons]]
2775 ===== Using LTTng-UST with daemons
2776
2777 Some extra care is needed when using `liblttng-ust` with daemon
2778 applications that call `fork()`, `clone()` or BSD's `rfork()` without
2779 a following `exec()` family system call. The `liblttng-ust-fork`
2780 library must be preloaded for the application.
2781
2782 Example:
2783
2784 [role="term"]
2785 ----
2786 LD_PRELOAD=liblttng-ust-fork.so ./app
2787 ----
2788
2789 Or, if you're using a tracepoint provider shared library:
2790
2791 [role="term"]
2792 ----
2793 LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app
2794 ----
2795
2796
2797 [[lttng-ust-pkg-config]]
2798 ===== Using pkg-config
2799
2800 On some distributions, LTTng-UST is shipped with a pkg-config metadata
2801 file, so that you may use the `pkg-config` tool:
2802
2803 [role="term"]
2804 ----
2805 pkg-config --libs lttng-ust
2806 ----
2807
2808 This prints `-llttng-ust -ldl` on Linux systems.
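
The output of `pkg-config` may be used directly when linking; for example,
reusing the object files of the previous sections:

[role="term"]
----
gcc -o app tp.o app.o $(pkg-config --libs lttng-ust)
----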
2809
2810 You may also check the LTTng-UST version using `pkg-config`:
2811
2812 [role="term"]
2813 ----
2814 pkg-config --modversion lttng-ust
2815 ----
2816
2817 For more information about pkg-config, see
2818 http://linux.die.net/man/1/pkg-config[its manpage].
2819
2820
2821 [role="since-2.5"]
2822 [[tracef]]
2823 ===== Using `tracef()`
2824
2825 `tracef()` is a small LTTng-UST API to avoid defining your own
2826 tracepoints and tracepoint providers. The signature of `tracef()` is
2827 the same as `printf()`'s.
2828
2829 The `tracef()` utility function was developed to make user space tracing
2830 super simple, albeit with notable disadvantages compared to custom,
2831 full-fledged tracepoint providers:
2832
2833 * All generated events have the same provider/event names, respectively
2834 `lttng_ust_tracef` and `event`.
2835 * There's no static type checking.
2836 * The only event field you actually get, named `msg`, is a string
2837 potentially containing the values you passed to the function
2838 using your own format. This also means that you cannot use filtering
2839 using a custom expression at runtime because there are no isolated
2840 fields.
* Since `tracef()` uses the C standard library's `vasprintf()` function
2842 in the background to format the strings at runtime, its
2843 expected performance is lower than using custom tracepoint providers
2844 with typed fields, which do not require a conversion to a string.
2845
2846 Thus, `tracef()` is useful for quick prototyping and debugging, but
2847 should not be considered for any permanent/serious application
2848 instrumentation.
2849
2850 To use `tracef()`, first include `<lttng/tracef.h>` in the C source file
2851 where you need to insert probes:
2852
2853 [source,c]
2854 ----
2855 #include <lttng/tracef.h>
2856 ----
2857
2858 Use `tracef()` like you would use `printf()` in your source code, for
2859 example:
2860
2861 [source,c]
2862 ----
2863 /* ... */
2864
2865 tracef("my message, my integer: %d", my_integer);
2866
2867 /* ... */
2868 ----
2869
2870 Link your application with `liblttng-ust`:
2871
2872 [role="term"]
2873 ----
2874 gcc -o app app.c -llttng-ust
2875 ----
2876
2877 Execute the application as usual:
2878
2879 [role="term"]
2880 ----
2881 ./app
2882 ----
2883
2884 Voilà! Use the `lttng` command line tool to
2885 <<controlling-tracing,control tracing>>. You can enable `tracef()`
2886 events like this:
2887
2888 [role="term"]
2889 ----
2890 lttng enable-event --userspace 'lttng_ust_tracef:*'
2891 ----
2892
2893
2894 [[lttng-ust-environment-variables-compiler-flags]]
2895 ===== LTTng-UST environment variables and special compilation flags
2896
2897 A few special environment variables and compile flags may affect the
2898 behavior of LTTng-UST.
2899
2900 LTTng-UST's debugging can be activated by setting the environment
2901 variable `LTTNG_UST_DEBUG` to `1` when launching the application. It
2902 can also be enabled at compile time by defining `LTTNG_UST_DEBUG` when
2903 compiling LTTng-UST (using the `-DLTTNG_UST_DEBUG` compiler option).
2904
2905 The environment variable `LTTNG_UST_REGISTER_TIMEOUT` can be used to
2906 specify how long the application should wait for the
2907 <<lttng-sessiond,session daemon>>'s _registration done_ command
2908 before proceeding to execute the main program. The timeout value is
2909 specified in milliseconds. 0 means _don't wait_. -1 means
2910 _wait forever_. Setting this environment variable to 0 is recommended
for applications with time constraints on the process startup time.
2912
2913 The default value of `LTTNG_UST_REGISTER_TIMEOUT` (when not defined)
2914 is **3000{nbsp}ms**.
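
For example, the following invocation enables LTTng-UST debugging output
and makes a hypothetical `app` program proceed immediately without waiting
for the session daemon's _registration done_ command:

[role="term"]
----
LTTNG_UST_DEBUG=1 LTTNG_UST_REGISTER_TIMEOUT=0 ./app
----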
2915
2916 The compilation definition `LTTNG_UST_DEBUG_VALGRIND` should be enabled
2917 at build time (`-DLTTNG_UST_DEBUG_VALGRIND`) to allow `liblttng-ust`
2918 to be used with http://valgrind.org/[Valgrind].
2919 The side effect of defining `LTTNG_UST_DEBUG_VALGRIND` is that per-CPU
2920 buffering is disabled.
2921
2922
2923 [[cxx-application]]
2924 ==== $$C++$$ application
2925
2926 Because of $$C++$$'s cross-compatibility with the C language, $$C++$$
2927 applications can be readily instrumented with the LTTng-UST C API.
2928
2929 Follow the <<c-application,C application>> user guide above. It
2930 should be noted that, in this case, tracepoint providers should have
2931 the typical `.cpp`, `.cxx` or `.cc` extension and be built with `g++`
2932 instead of `gcc`. This is the easiest way of avoiding linking errors
2933 due to symbol name mangling incompatibilities between both languages.
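
As a short sketch, and assuming the file names of the C examples above
simply renamed with a `.cpp` extension, building could look like this:

[role="term"]
----
g++ -c -I. tp.cpp
g++ -o app app.cpp tp.o -llttng-ust -ldl
----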
2934
2935
2936 [[prebuilt-ust-helpers]]
2937 ==== Prebuilt user space tracing helpers
2938
2939 The LTTng-UST package provides a few helpers that one may find
2940 useful in some situations. They all work the same way: you must
2941 preload the appropriate shared object before running the user
2942 application (using the `LD_PRELOAD` environment variable).
2943
2944 The shared objects are normally found in dir:{/usr/lib}.
2945
2946 The current installed helpers are:
2947
2948 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}::
2949 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
2950 and POSIX threads tracing>>.
2951
2952 path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}::
2953 <<liblttng-ust-cyg-profile,Function tracing>>.
2954
2955 path:{liblttng-ust-dl.so}::
2956 <<liblttng-ust-dl,Dynamic linker tracing>>.
2957
2958 The following subsections document what helpers instrument exactly
2959 and how to use them.
2960
2961
2962 [role="since-2.3"]
2963 [[liblttng-ust-libc-pthread-wrapper]]
2964 ===== C standard library and POSIX threads tracing
2965
2966 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}
can add instrumentation to some C standard library functions and some
POSIX threads functions, respectively.
2969
2970 The following functions are traceable by path:{liblttng-ust-libc-wrapper.so}:
2971
2972 [role="growable"]
2973 .Functions instrumented by path:{liblttng-ust-libc-wrapper.so}
2974 |====
2975 |TP provider name |TP name |Instrumented function
2976
2977 .6+|`ust_libc` |`malloc` |`malloc()`
2978 |`calloc` |`calloc()`
2979 |`realloc` |`realloc()`
2980 |`free` |`free()`
2981 |`memalign` |`memalign()`
2982 |`posix_memalign` |`posix_memalign()`
2983 |====
2984
2985 The following functions are traceable by
2986 path:{liblttng-ust-pthread-wrapper.so}:
2987
2988 [role="growable"]
2989 .Functions instrumented by path:{liblttng-ust-pthread-wrapper.so}
2990 |====
2991 |TP provider name |TP name |Instrumented function
2992
2993 .4+|`ust_pthread` |`pthread_mutex_lock_req` |`pthread_mutex_lock()` (request time)
2994 |`pthread_mutex_lock_acq` |`pthread_mutex_lock()` (acquire time)
2995 |`pthread_mutex_trylock` |`pthread_mutex_trylock()`
2996 |`pthread_mutex_unlock` |`pthread_mutex_unlock()`
2997 |====
2998
2999 All tracepoints have fields corresponding to the arguments of the
3000 function they instrument.
3001
3002 To use one or the other with any user application, independently of
3003 how the latter is built, do:
3004
3005 [role="term"]
3006 ----
3007 LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
3008 ----
3009
3010 or
3011
3012 [role="term"]
3013 ----
3014 LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app
3015 ----
3016
3017 To use both, do:
3018
3019 [role="term"]
3020 ----
3021 LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app
3022 ----
3023
3024 When the shared object is preloaded, it effectively replaces the
functions listed in the above tables with wrappers which add tracepoints
3026 and call the replaced functions.
3027
3028 Of course, like any other tracepoint, the ones above need to be enabled
3029 in order for LTTng-UST to generate events. This is done using the
3030 `lttng` command line tool
3031 (see <<controlling-tracing,Controlling tracing>>).
3032
3033
3034 [[liblttng-ust-cyg-profile]]
3035 ===== Function tracing
3036
3037 Function tracing is the recording of which functions are entered and
3038 left during the execution of an application. Like with any LTTng event,
3039 the precise time at which this happens is also kept.
3040
3041 GCC and clang have an option named
3042 https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/Code-Gen-Options.html[`-finstrument-functions`]
3043 which generates instrumentation calls for entry and exit to functions.
3044 The LTTng-UST function tracing helpers, path:{liblttng-ust-cyg-profile.so}
3045 and path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
3046 to add instrumentation to the two generated functions (which contain
3047 `cyg_profile` in their names, hence the shared object's name).
3048
3049 In order to use LTTng-UST function tracing, the translation units to
3050 instrument must be built using the `-finstrument-functions` compiler
3051 flag.
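
For example, a translation unit to instrument may be compiled like this
(the file name is an arbitrary example):

[role="term"]
----
gcc -c -finstrument-functions my-file.c
----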
3052
3053 LTTng-UST function tracing comes in two flavors, each providing
3054 different trade-offs: path:{liblttng-ust-cyg-profile-fast.so} and
3055 path:{liblttng-ust-cyg-profile.so}.
3056
3057 **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that
3058 should only be used where it can be _guaranteed_ that the complete event
3059 stream is recorded without any missing events. Any kind of duplicate
3060 information is left out. This version registers the following
3061 tracepoints:
3062
3063 [role="growable",options="header,autowidth"]
3064 .Functions instrumented by path:{liblttng-ust-cyg-profile-fast.so}
3065 |====
3066 |TP provider name |TP name |Instrumented function
3067
3068 .2+|`lttng_ust_cyg_profile_fast`
3069
3070 |`func_entry`
3071 a|Function entry
3072
3073 `addr`::
3074 Address of called function.
3075
3076 |`func_exit`
3077 |Function exit
3078 |====
3079
3080 Assuming no event is lost, having only the function addresses on entry
3081 is enough for creating a call graph (remember that a recorded event
3082 always contains the ID of the CPU that generated it). A tool like
3083 https://sourceware.org/binutils/docs/binutils/addr2line.html[`addr2line`]
3084 may be used to convert function addresses back to source files names
3085 and line numbers.
3086
3087 The other helper,
3088 **path:{liblttng-ust-cyg-profile.so}**,
3089 is a more robust variant which also works for use cases where
3090 events might get discarded or not recorded from application startup.
3091 In these cases, the trace analyzer needs extra information to be
3092 able to reconstruct the program flow. This version registers the
3093 following tracepoints:
3094
3095 [role="growable",options="header,autowidth"]
3096 .Functions instrumented by path:{liblttng-ust-cyg-profile.so}
3097 |====
3098 |TP provider name |TP name |Instrumented function
3099
3100 .2+|`lttng_ust_cyg_profile`
3101
3102 |`func_entry`
3103 a|Function entry
3104
3105 `addr`::
3106 Address of called function.
3107
3108 `call_site`::
3109 Call site address.
3110
3111 |`func_exit`
3112 a|Function exit
3113
3114 `addr`::
3115 Address of called function.
3116
3117 `call_site`::
3118 Call site address.
3119 |====
3120
3121 To use one or the other variant with any user application, assuming at
3122 least one translation unit of the latter is compiled with the
3123 `-finstrument-functions` option, do:
3124
3125 [role="term"]
3126 ----
3127 LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
3128 ----
3129
3130 or
3131
3132 [role="term"]
3133 ----
3134 LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
3135 ----
3136
It might be necessary to limit the number of source files where
`-finstrument-functions` is used to prevent an excessive amount of
trace data from being generated at runtime.
3140
TIP: With GCC, you can use the
`-finstrument-functions-exclude-function-list`
option to avoid instrumenting the entries and exits of specific
symbols.
3145
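For example, a hedged sketch (the source file and symbol names are
hypothetical):

[role="term"]
----
gcc -finstrument-functions \
    -finstrument-functions-exclude-function-list=main,log_msg \
    -o my-app my-app.c
----
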
3146 All events generated from LTTng-UST function tracing are provided on
3147 log level `TRACE_DEBUG_FUNCTION`, which is useful to easily enable
3148 function tracing events in your tracing session using the
3149 `--loglevel-only` option of `lttng enable-event`
3150 (see <<controlling-tracing,Controlling tracing>>).
3151
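For example, the following sketch enables the events of both helpers at
this specific log level (the trailing wildcard matches both tracepoint
provider names):

[role="term"]
----
lttng enable-event --userspace 'lttng_ust_cyg_profile*' \
    --loglevel-only TRACE_DEBUG_FUNCTION
----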
3152
3153 [role="since-2.4"]
3154 [[liblttng-ust-dl]]
3155 ===== Dynamic linker tracing
3156
3157 This LTTng-UST helper causes all calls to `dlopen()` and `dlclose()`
3158 in the target application to be traced with LTTng.
3159
3160 The helper's shared object, path:{liblttng-ust-dl.so}, registers the
3161 following tracepoints when preloaded:
3162
3163 [role="growable",options="header,autowidth"]
3164 .Functions instrumented by path:{liblttng-ust-dl.so}
3165 |====
3166 |TP provider name |TP name |Instrumented function
3167
3168 .2+|`ust_baddr`
3169
3170 |`push`
3171 a|`dlopen()` call
3172
3173 `baddr`::
3174 Memory base address (where the dynamic linker placed the shared
3175 object).
3176
3177 `sopath`::
3178 File system path to the loaded shared object.
3179
3180 `size`::
File size of the loaded shared object.
3182
3183 `mtime`::
3184 Last modification time (seconds since Epoch time) of the loaded shared
3185 object.
3186
3187 |`pop`
a|`dlclose()` call
3189
3190 `baddr`::
3191 Memory base address (where the dynamic linker placed the shared
3192 object).
3193 |====
3194
3195 To use this LTTng-UST helper with any user application, independently of
3196 how the latter is built, do:
3197
3198 [role="term"]
3199 ----
3200 LD_PRELOAD=liblttng-ust-dl.so my-app
3201 ----
3202
3203 Of course, like any other tracepoint, the ones above need to be enabled
3204 in order for LTTng-UST to generate events. This is done using the
3205 `lttng` command line tool
3206 (see <<controlling-tracing,Controlling tracing>>).
3207
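For example, the following enables both `ust_baddr` tracepoints listed
in the table above:

[role="term"]
----
lttng enable-event --userspace 'ust_baddr:*'
----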
3208
3209 [role="since-2.4"]
3210 [[java-application]]
3211 ==== Java application
3212
3213 LTTng-UST provides a _logging_ back-end for Java applications using either
3214 http://docs.oracle.com/javase/7/docs/api/java/util/logging/Logger.html[`java.util.logging`]
3215 (JUL) or
http://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
3217 This back-end is called the _LTTng-UST Java agent_, and it is responsible
3218 for the communications with an LTTng session daemon.
3219
3220 From the user's point of view, once the LTTng-UST Java agent has been
3221 initialized, JUL and log4j loggers may be created and used as usual.
3222 The agent adds its own handler to the _root logger_, so that all
3223 loggers may generate LTTng events with no effort.
3224
3225 Common JUL/log4j features are supported using the `lttng` tool
3226 (see <<controlling-tracing,Controlling tracing>>):
3227
3228 * listing all logger names
3229 * enabling/disabling events per logger name
3230 * JUL/log4j log levels
3231
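For example, here is a sketch using the logger names created in the
examples below (`jello` for JUL, `Test` for log4j); the log level name
shown is an assumption, so check `lttng enable-event --help` for the
exact list:

[role="term"]
----
lttng list --jul
lttng enable-event --jul jello --loglevel JUL_WARNING
lttng enable-event --log4j Test
----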
3232
3233 [role="since-2.1"]
3234 [[jul]]
3235 ===== `java.util.logging`
3236
3237 Here's an example of tracing a Java application which is using
3238 **`java.util.logging`**:
3239
3240 [source,java]
3241 ----
3242 import java.util.logging.Logger;
3243 import org.lttng.ust.agent.LTTngAgent;
3244
3245 public class Test
3246 {
3247 private static final int answer = 42;
3248
3249 public static void main(String[] argv) throws Exception
3250 {
3251 // create a logger
3252 Logger logger = Logger.getLogger("jello");
3253
3254 // call this as soon as possible (before logging)
3255 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3256
3257 // log at will!
3258 logger.info("some info");
3259 logger.warning("some warning");
3260 Thread.sleep(500);
3261 logger.finer("finer information; the answer is " + answer);
3262 Thread.sleep(123);
3263 logger.severe("error!");
3264
3265 // not mandatory, but cleaner
3266 lttngAgent.dispose();
3267 }
3268 }
3269 ----
3270
3271 The LTTng-UST Java agent is packaged in a JAR file named
`liblttng-ust-agent.jar`. It is typically located in
3273 dir:{/usr/lib/lttng/java}. To compile the snippet above
3274 (saved as `Test.java`), do:
3275
3276 [role="term"]
3277 ----
3278 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar Test.java
3279 ----
3280
3281 You can run the resulting compiled class like this:
3282
3283 [role="term"]
3284 ----
3285 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
3286 ----
3287
3288 NOTE: http://openjdk.java.net/[OpenJDK] 7 is used for development and
3289 continuous integration, thus this version is directly supported.
3290 However, the LTTng-UST Java agent has also been tested with OpenJDK 6.
3291
3292
3293 [role="since-2.6"]
3294 [[log4j]]
3295 ===== Apache log4j 1.2
3296
3297 LTTng features an Apache log4j 1.2 agent, which means your existing
3298 Java applications using log4j 1.2 for logging can record events to
3299 LTTng traces with just a minor source code modification.
3300
3301 NOTE: This version of LTTng does not support Log4j 2.
3302
3303 Here's an example:
3304
3305 [source,java]
3306 ----
3307 import org.apache.log4j.Logger;
3308 import org.apache.log4j.BasicConfigurator;
3309 import org.lttng.ust.agent.LTTngAgent;
3310
3311 public class Test
3312 {
3313 private static final int answer = 42;
3314
3315 public static void main(String[] argv) throws Exception
3316 {
3317 // create and configure a logger
3318 Logger logger = Logger.getLogger(Test.class);
3319 BasicConfigurator.configure();
3320
3321 // call this as soon as possible (before logging)
3322 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3323
3324 // log at will!
3325 logger.info("some info");
3326 logger.warn("some warning");
3327 Thread.sleep(500);
3328 logger.debug("debug information; the answer is " + answer);
3329 Thread.sleep(123);
3330 logger.error("error!");
3331 logger.fatal("fatal error!");
3332
3333 // not mandatory, but cleaner
3334 lttngAgent.dispose();
3335 }
3336 }
3337 ----
3338
3339 To compile the snippet above, do:
3340
3341 [role="term"]
3342 ----
3343 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP Test.java
3344 ----
3345
3346 where `$LOG4JCP` is the log4j 1.2 JAR file path.
3347
3348 You can run the resulting compiled class like this:
3349
3350 [role="term"]
3351 ----
3352 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
3353 ----
3354
3355
3356 [[instrumenting-linux-kernel]]
3357 ==== Linux kernel
3358
3359 The Linux kernel can be instrumented for LTTng tracing, either its core
3360 source code or a kernel module. It has to be noted that Linux is
3361 readily traceable using LTTng since many parts of its source code are
3362 already instrumented: this is the job of the upstream
3363 http://git.lttng.org/?p=lttng-modules.git[LTTng-modules]
3364 package. This section presents how to add LTTng instrumentation where it
3365 does not currently exist and how to instrument custom kernel modules.
3366
3367 All LTTng instrumentation in the Linux kernel is based on an existing
3368 infrastructure which bears the name of its main macro, `TRACE_EVENT()`.
3369 This macro is used to define tracepoints,
3370 each tracepoint having a name, usually with the
3371 +__subsys__&#95;__name__+ format,
3372 +_subsys_+ being the subsystem name and
3373 +_name_+ the specific event name.
3374
3375 Tracepoints defined with `TRACE_EVENT()` may be inserted anywhere in
the Linux kernel source code, after which callbacks, called _probes_,
3377 may be registered to execute some action when a tracepoint is
3378 executed. This mechanism is directly used by ftrace and perf,
3379 but cannot be used as is by LTTng: an adaptation layer is added to
3380 satisfy LTTng's specific needs.
3381
With that in mind, this documentation does not cover the `TRACE_EVENT()`
format and how to use it, but understanding and using it is mandatory to
instrument Linux for LTTng. A series of
LWN articles explains
`TRACE_EVENT()` in detail:
3387 http://lwn.net/Articles/379903/[part 1],
3388 http://lwn.net/Articles/381064/[part 2], and
3389 http://lwn.net/Articles/383362/[part 3].
3390 Once you master `TRACE_EVENT()` enough for your use case, continue
3391 reading this section so that you can add the LTTng adaptation layer of
3392 instrumentation.
3393
3394 This section first discusses the general method of instrumenting the
3395 Linux kernel for LTTng. This method is then reused for the specific
3396 case of instrumenting a kernel module.
3397
3398
3399 [[instrumenting-linux-kernel-itself]]
3400 ===== Instrumenting the Linux kernel for LTTng
3401
3402 The following subsections explain strictly how to add custom LTTng
3403 instrumentation to the Linux kernel. They do not explain how the
3404 macros actually work and the internal mechanics of the tracer.
3405
3406 You should have a Linux kernel source code tree to work with.
3407 Throughout this section, all file paths are relative to the root of
3408 this tree unless otherwise stated.
3409
3410 You need a copy of the LTTng-modules Git repository:
3411
3412 [role="term"]
3413 ----
3414 git clone git://git.lttng.org/lttng-modules.git
3415 ----
3416
3417 The steps to add custom LTTng instrumentation to a Linux kernel
involve defining and using the mainline `TRACE_EVENT()` tracepoints
3419 first, then writing and using the LTTng adaptation layer.
3420
3421
3422 [[mainline-trace-event]]
3423 ===== Defining/using tracepoints with mainline `TRACE_EVENT()` infrastructure
3424
3425 The first step is to define tracepoints using the mainline Linux
3426 `TRACE_EVENT()` macro and insert tracepoints where you want them.
3427 Your tracepoint definitions reside in a header file in
3428 dir:{include/trace/events}. If you're adding tracepoints to an existing
3429 subsystem, edit its appropriate header file.
3430
3431 As an example, the following header file (let's call it
3432 dir:{include/trace/events/hello.h}) defines one tracepoint using
3433 `TRACE_EVENT()`:
3434
3435 [source,c]
3436 ----
3437 /* subsystem name is "hello" */
3438 #undef TRACE_SYSTEM
3439 #define TRACE_SYSTEM hello
3440
3441 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3442 #define _TRACE_HELLO_H
3443
3444 #include <linux/tracepoint.h>
3445
3446 TRACE_EVENT(
3447 /* "hello" is the subsystem name, "world" is the event name */
3448 hello_world,
3449
3450 /* tracepoint function prototype */
3451 TP_PROTO(int foo, const char* bar),
3452
3453 /* arguments for this tracepoint */
3454 TP_ARGS(foo, bar),
3455
3456 /* LTTng doesn't need those */
3457 TP_STRUCT__entry(),
3458 TP_fast_assign(),
3459 TP_printk("", 0)
3460 );
3461
3462 #endif
3463
3464 /* this part must be outside protection */
3465 #include <trace/define_trace.h>
3466 ----
3467
3468 Notice that we don't use any of the last three arguments: they
3469 are left empty here because LTTng doesn't need them. You would only fill
3470 `TP_STRUCT__entry()`, `TP_fast_assign()` and `TP_printk()` if you were
3471 to also use this tracepoint for ftrace/perf.
3472
3473 Once this is done, you may place calls to `trace_hello_world()`
3474 wherever you want in the Linux source code. As an example, let us place
3475 such a tracepoint in the `usb_probe_device()` static function
3476 (path:{drivers/usb/core/driver.c}):
3477
3478 [source,c]
3479 ----
3480 /* called from driver core with dev locked */
3481 static int usb_probe_device(struct device *dev)
3482 {
3483 struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
3484 struct usb_device *udev = to_usb_device(dev);
3485 int error = 0;
3486
3487 trace_hello_world(udev->devnum, udev->product);
3488
3489 /* ... */
3490 }
3491 ----
3492
3493 This tracepoint should fire every time a USB device is plugged in.
3494
3495 At the top of path:{driver.c}, we need to include our actual tracepoint
3496 definition and, in this case (one place per subsystem), define
3497 `CREATE_TRACE_POINTS`, which creates our tracepoint:
3498
3499 [source,c]
3500 ----
3501 /* ... */
3502
3503 #include "usb.h"
3504
3505 #define CREATE_TRACE_POINTS
3506 #include <trace/events/hello.h>
3507
3508 /* ... */
3509 ----
3510
3511 Build your custom Linux kernel. In order to use LTTng, make sure the
3512 following kernel configuration options are enabled:
3513
3514 * `CONFIG_MODULES` (loadable module support)
3515 * `CONFIG_KALLSYMS` (load all symbols for debugging/kksymoops)
3516 * `CONFIG_HIGH_RES_TIMERS` (high resolution timer support)
3517 * `CONFIG_TRACEPOINTS` (kernel tracepoint instrumentation)
3518
3519 Boot the custom kernel. The directory
3520 dir:{/sys/kernel/debug/tracing/events/hello} should exist if everything
3521 went right, with a dir:{hello_world} subdirectory.
3522
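For a quick check (reading debugfs typically requires root privileges):

[role="term"]
----
sudo ls /sys/kernel/debug/tracing/events/hello
----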
3523
3524 [[lttng-adaptation-layer]]
3525 ===== Adding the LTTng adaptation layer
3526
3527 The steps to write the LTTng adaptation layer are, in your
3528 LTTng-modules copy's source code tree:
3529
3530 . In dir:{instrumentation/events/lttng-module},
3531 add a header +__subsys__.h+ for your custom
3532 subsystem +__subsys__+ and write your
3533 tracepoint definitions using LTTng-modules macros in it.
3534 Those macros look like the mainline kernel equivalents,
3535 but they present subtle, yet important differences.
3536 . In dir:{probes}, create the C source file of the LTTng probe kernel
3537 module for your subsystem. It should be named
3538 +lttng-probe-__subsys__.c+.
3539 . Edit path:{probes/Makefile} so that the LTTng-modules project
3540 builds your custom LTTng probe kernel module.
3541 . Build and install LTTng kernel modules.
3542
3543 Following our `hello_world` event example, here's the content of
3544 path:{instrumentation/events/lttng-module/hello.h}:
3545
3546 [source,c]
3547 ----
3548 #undef TRACE_SYSTEM
3549 #define TRACE_SYSTEM hello
3550
3551 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3552 #define _TRACE_HELLO_H
3553
3554 #include "../../../probes/lttng-tracepoint-event.h"
3555 #include <linux/tracepoint.h>
3556
3557 LTTNG_TRACEPOINT_EVENT(
3558 /* format identical to mainline version for those */
3559 hello_world,
3560 TP_PROTO(int foo, const char* bar),
3561 TP_ARGS(foo, bar),
3562
3563 /* possible differences */
3564 TP_STRUCT__entry(
3565 __field(int, my_int)
3566 __field(char, char0)
3567 __field(char, char1)
3568 __string(product, bar)
3569 ),
3570
3571 /* notice the use of tp_assign()/tp_strcpy() and no semicolons */
3572 TP_fast_assign(
3573 tp_assign(my_int, foo)
3574 tp_assign(char0, bar[0])
3575 tp_assign(char1, bar[1])
3576 tp_strcpy(product, bar)
3577 ),
3578
3579 /* This one is actually not used by LTTng either, but must be
3580 * present for the moment.
3581 */
3582 TP_printk("", 0)
3583
3584 /* no semicolon after this either */
3585 )
3586
3587 #endif
3588
3589 /* other difference: do NOT include <trace/define_trace.h> */
3590 #include "../../../probes/define_trace.h"
3591 ----
3592
3593 Some possible entries for `TP_STRUCT__entry()` and `TP_fast_assign()`,
3594 in the case of LTTng-modules, are shown in the
3595 <<lttng-modules-ref,LTTng-modules reference>> section.
3596
3597 The best way to learn how to use the above macros is to inspect
3598 existing LTTng tracepoint definitions in
3599 dir:{instrumentation/events/lttng-module} header files. Compare
3600 them with the Linux kernel mainline versions in
3601 dir:{include/trace/events}.
3602
3603 The next step is writing the LTTng probe kernel module C source file.
3604 This one is named +lttng-probe-__subsys__.c+
3605 in dir:{probes}. You may always use the following template:
3606
3607 [source,c]
3608 ----
3609 #include <linux/module.h>
3610 #include "../lttng-tracer.h"
3611
3612 /* Build time verification of mismatch between mainline TRACE_EVENT()
3613 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3614 */
3615 #include <trace/events/hello.h>
3616
3617 /* create LTTng tracepoint probes */
3618 #define LTTNG_PACKAGE_BUILD
3619 #define CREATE_TRACE_POINTS
3620 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
3621
3622 #include "../instrumentation/events/lttng-module/hello.h"
3623
3624 MODULE_LICENSE("GPL and additional rights");
3625 MODULE_AUTHOR("Your name <your-email>");
3626 MODULE_DESCRIPTION("LTTng hello probes");
3627 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
3628 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
3629 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
3630 LTTNG_MODULES_EXTRAVERSION);
3631 ----
3632
3633 Just replace `hello` with your subsystem name. In this example,
3634 `<trace/events/hello.h>`, which is the original mainline tracepoint
3635 definition header, is included for verification purposes: the
3636 LTTng-modules build system is able to emit an error at build time when
3637 the arguments of the mainline `TRACE_EVENT()` definitions do not match
3638 the ones of the LTTng-modules adaptation layer
3639 (`LTTNG_TRACEPOINT_EVENT()`).
3640
3641 Edit path:{probes/Makefile} and add your new kernel module object
3642 next to existing ones:
3643
3644 [source,make]
3645 ----
3646 # ...
3647
3648 obj-m += lttng-probe-module.o
3649 obj-m += lttng-probe-power.o
3650
3651 obj-m += lttng-probe-hello.o
3652
3653 # ...
3654 ----
3655
3656 Time to build! Point to your custom Linux kernel source tree using
3657 the `KERNELDIR` variable:
3658
3659 [role="term"]
3660 ----
3661 make KERNELDIR=/path/to/custom/linux
3662 ----
3663
3664 Finally, install modules:
3665
3666 [role="term"]
3667 ----
3668 sudo make modules_install
3669 ----
3670
3671
3672 [[instrumenting-linux-kernel-tracing]]
3673 ===== Tracing
3674
3675 The <<controlling-tracing,Controlling tracing>> section explains
3676 how to use the `lttng` tool to create and control tracing sessions.
3677 Although the `lttng` tool loads the appropriate _known_ LTTng kernel
3678 modules when needed (by launching `root`'s session daemon), it won't
3679 load your custom `lttng-probe-hello` module by default. You need to
3680 manually start an LTTng session daemon as `root` and use the
3681 `--extra-kmod-probes` option to append your custom probe module to the
3682 default list:
3683
3684 [role="term"]
3685 ----
3686 sudo pkill -u root lttng-sessiond
3687 sudo lttng-sessiond --extra-kmod-probes=hello
3688 ----
3689
3690 The first command makes sure any existing instance is killed. If
3691 you're not interested in using the default probes, or if you only
3692 want to use a few of them, you could use `--kmod-probes` instead,
3693 which specifies an absolute list:
3694
3695 [role="term"]
3696 ----
3697 sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched
3698 ----
3699
3700 Confirm the custom probe module is loaded:
3701
3702 [role="term"]
3703 ----
3704 lsmod | grep lttng_probe_hello
3705 ----
3706
3707 The `hello_world` event should appear in the list when doing
3708
3709 [role="term"]
3710 ----
3711 lttng list --kernel | grep hello
3712 ----
3713
3714 You may now create an LTTng tracing session, enable the `hello_world`
3715 kernel event (and others if you wish) and start tracing:
3716
3717 [role="term"]
3718 ----
3719 sudo lttng create my-session
3720 sudo lttng enable-event --kernel hello_world
3721 sudo lttng start
3722 ----
3723
3724 Plug a few USB devices, then stop tracing and inspect the trace (if
3725 http://diamon.org/babeltrace[Babeltrace]
3726 is installed):
3727
3728 [role="term"]
3729 ----
3730 sudo lttng stop
3731 sudo lttng view
3732 ----
3733
3734 Here's a sample output:
3735
3736 ----
3737 [15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3738 [15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
3739 [15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3740 ----
3741
3742 Two USB flash drives were used for this test.
3743
3744 You may change your LTTng custom probe, rebuild it and reload it at
3745 any time when not tracing. Make sure you remove the old module
3746 (either by killing the root LTTng session daemon which loaded the
3747 module in the first place, or by using `modprobe --remove` directly)
3748 before loading the updated one.
3749
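For example, one possible update cycle, reusing the build and install
commands shown above, is:

[role="term"]
----
sudo pkill -u root lttng-sessiond
make KERNELDIR=/path/to/custom/linux
sudo make modules_install
sudo lttng-sessiond --extra-kmod-probes=hello
----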
3750
3751 [[instrumenting-out-of-tree-linux-kernel]]
3752 ===== Advanced: Instrumenting an out-of-tree Linux kernel module for LTTng
3753
3754 Instrumenting a custom Linux kernel module for LTTng follows the exact
3755 same steps as
3756 <<instrumenting-linux-kernel-itself,adding instrumentation
3757 to the Linux kernel itself>>,
3758 the only difference being that your mainline tracepoint definition
3759 header doesn't reside in the mainline source tree, but in your
3760 kernel module source tree.
3761
3762 The only reference to this mainline header is in the LTTng custom
3763 probe's source code (path:{probes/lttng-probe-hello.c} in our example),
3764 for build time verification:
3765
3766 [source,c]
3767 ----
3768 /* ... */
3769
3770 /* Build time verification of mismatch between mainline TRACE_EVENT()
3771 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3772 */
3773 #include <trace/events/hello.h>
3774
3775 /* ... */
3776 ----
3777
3778 The preferred, flexible way to include your module's mainline
3779 tracepoint definition header is to put it in a specific directory
3780 relative to your module's root (`tracepoints`, for example) and include it
3781 relative to your module's root directory in the LTTng custom probe's
3782 source:
3783
3784 [source,c]
3785 ----
3786 #include <tracepoints/hello.h>
3787 ----
3788
3789 You may then build LTTng-modules by adding your module's root
3790 directory as an include path to the extra C flags:
3791
3792 [role="term"]
3793 ----
3794 make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux
3795 ----
3796
3797 Using `ccflags-y` allows you to move your kernel module to another
3798 directory and rebuild the LTTng-modules project with no change to
3799 source files.
3800
3801
3802 [role="since-2.5"]
3803 [[proc-lttng-logger-abi]]
3804 ==== LTTng logger ABI
3805
3806 The `lttng-tracer` Linux kernel module, installed by the LTTng-modules
3807 package, creates a special LTTng logger ABI file path:{/proc/lttng-logger}
3808 when loaded. Writing text data to this file generates an LTTng kernel
3809 domain event named `lttng_logger`.
3810
3811 Unlike other kernel domain events, `lttng_logger` may be enabled by
3812 any user, not only root users or members of the tracing group.
3813
3814 To use the LTTng logger ABI, simply write a string to
3815 path:{/proc/lttng-logger}:
3816
3817 [role="term"]
3818 ----
3819 echo -n 'Hello, World!' > /proc/lttng-logger
3820 ----
3821
3822 The `msg` field of the `lttng_logger` event contains the recorded
3823 message.
3824
NOTE: Messages are split into chunks of 1024{nbsp}bytes.
3826
The LTTng logger ABI is a quick and easy way to trace some events from
user space through the kernel tracer. However, it is much more basic
than LTTng-UST: it is slower (each write involves a system call
round-trip to the kernel) and it only supports logging strings. The
LTTng logger ABI is particularly useful for recording logs as LTTng
traces from shell scripts, potentially combining them with other Linux
kernel/user space events.
3834
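For example, a sketch of a shell script recording its own progress
(`do-backup.sh` is a hypothetical command):

[role="term"]
----
lttng create logger-demo
lttng enable-event --kernel lttng_logger
lttng start
echo -n 'backup started' > /proc/lttng-logger
./do-backup.sh
echo -n 'backup done' > /proc/lttng-logger
lttng stop
----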
3835
3836 [[instrumenting-32-bit-app-on-64-bit-system]]
3837 ==== Advanced: Instrumenting a 32-bit application on a 64-bit system
3838
3839 [[advanced-instrumenting-techniques]]In order to trace a 32-bit
3840 application running on a 64-bit system,
3841 LTTng must use a dedicated 32-bit
3842 <<lttng-consumerd,consumer daemon>>. This section discusses how to
3843 build that daemon (which is _not_ part of the default 64-bit LTTng
3844 build) and the LTTng 32-bit tracing libraries, and how to instrument
3845 a 32-bit application in that context.
3846
3847 Make sure you install all 32-bit versions of LTTng dependencies.
3848 Their names can be found in the `README.md` files of each LTTng package
3849 source. How to find and install them depends on your target's
3850 Linux distribution. `gcc-multilib` is a common package name for the
3851 multilib version of GCC, which you also need.
3852
3853 The following packages will be built for 32-bit support on a 64-bit
3854 system: http://urcu.so/[Userspace RCU],
3855 LTTng-UST and LTTng-tools.
3856
3857
3858 [[building-32-bit-userspace-rcu]]
3859 ===== Building 32-bit Userspace RCU
3860
3861 Follow this:
3862
3863 [role="term"]
3864 ----
3865 git clone git://git.urcu.so/urcu.git
3866 cd urcu
3867 ./bootstrap
3868 ./configure --libdir=/usr/lib32 CFLAGS=-m32
3869 make
3870 sudo make install
3871 sudo ldconfig
3872 ----
3873
3874 The `-m32` C compiler flag creates 32-bit object files and `--libdir`
3875 indicates where to install the resulting libraries.
3876
3877
3878 [[building-32-bit-lttng-ust]]
3879 ===== Building 32-bit LTTng-UST
3880
3881 Follow this:
3882
3883 [role="term"]
3884 ----
3885 git clone http://git.lttng.org/lttng-ust.git
3886 cd lttng-ust
3887 ./bootstrap
3888 ./configure --prefix=/usr \
3889 --libdir=/usr/lib32 \
3890 CFLAGS=-m32 CXXFLAGS=-m32 \
3891 LDFLAGS=-L/usr/lib32
3892 make
3893 sudo make install
3894 sudo ldconfig
3895 ----
3896
3897 `-L/usr/lib32` is required for the build to find the 32-bit versions
3898 of Userspace RCU and other dependencies.
3899
3900 [NOTE]
3901 ====
3902 Depending on your Linux distribution,
3903 32-bit libraries could be installed at a different location than
3904 dir:{/usr/lib32}. For example, Debian is known to install
3905 some 32-bit libraries in dir:{/usr/lib/i386-linux-gnu}.
3906
3907 In this case, make sure to set `LDFLAGS` to all the
3908 relevant 32-bit library paths, for example,
3909 `LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"`.
3910 ====
3911
3912 NOTE: You may add options to path:{./configure} if you need them, e.g., for
3913 Java and SystemTap support. Look at `./configure --help` for more
3914 information.
3915
3916
3917 [[building-32-bit-lttng-tools]]
3918 ===== Building 32-bit LTTng-tools
3919
3920 Since the host is a 64-bit system, most 32-bit binaries and libraries of
3921 LTTng-tools are not needed; the host uses their 64-bit counterparts.
3922 The required step here is building and installing a 32-bit consumer
3923 daemon.
3924
3925 Follow this:
3926
3927 [role="term"]
3928 ----
3929 git clone http://git.lttng.org/lttng-tools.git
cd lttng-tools
3931 ./bootstrap
3932 ./configure --prefix=/usr \
3933 --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3934 LDFLAGS=-L/usr/lib32
3935 make
3936 cd src/bin/lttng-consumerd
3937 sudo make install
3938 sudo ldconfig
3939 ----
3940
The above commands build the whole LTTng-tools project as 32-bit
applications, but only install the 32-bit consumer daemon.
3943
3944
3945 [[building-64-bit-lttng-tools]]
3946 ===== Building 64-bit LTTng-tools
3947
3948 Finally, you need to build a 64-bit version of LTTng-tools which is
3949 aware of the 32-bit consumer daemon previously built and installed:
3950
3951 [role="term"]
3952 ----
3953 make clean
3954 ./bootstrap
3955 ./configure --prefix=/usr \
3956 --with-consumerd32-libdir=/usr/lib32 \
3957 --with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
3958 make
3959 sudo make install
3960 sudo ldconfig
3961 ----
3962
3963 Henceforth, the 64-bit session daemon automatically finds the
3964 32-bit consumer daemon if required.
3965
3966
3967 [[building-instrumented-32-bit-c-application]]
3968 ===== Building an instrumented 32-bit C application
3969
3970 Let us reuse the _Hello world_ example of
3971 <<tracing-your-own-user-application,Tracing your own user application>>
3972 (<<getting-started,Getting started>> chapter).
3973
3974 The instrumentation process is unaltered.
3975
3976 First, a typical 64-bit build (assuming you're running a 64-bit system):
3977
3978 [role="term"]
3979 ----
3980 gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust
3981 ----
3982
3983 Now, a 32-bit build:
3984
3985 [role="term"]
3986 ----
3987 gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
3988 -ldl -llttng-ust -Wl,-rpath,/usr/lib32
3989 ----
3990
3991 The `-rpath` option, passed to the linker, makes the dynamic loader
3992 check for libraries in dir:{/usr/lib32} before looking in its default paths,
3993 where it should find the 32-bit version of `liblttng-ust`.
3994
3995
3996 [[running-32-bit-and-64-bit-c-applications]]
3997 ===== Running 32-bit and 64-bit versions of an instrumented C application
3998
3999 Now, both 32-bit and 64-bit versions of the _Hello world_ example above
4000 can be traced in the same tracing session. Use the `lttng` tool as usual
4001 to create a tracing session and start tracing:
4002
4003 [role="term"]
4004 ----
lttng create session-3264
lttng enable-event -u -a
lttng start
./hello32
./hello64
lttng stop
4010 ----
4011
4012 Use `lttng view` to verify both processes were
4013 successfully traced.
4014
4015
4016 [[controlling-tracing]]
4017 === Controlling tracing
4018
Once you're in possession of software that is properly
<<instrumenting,instrumented>> for LTTng tracing, be it thanks to
the built-in LTTng probes for the Linux kernel, a custom user
application or a custom Linux kernel, all that is left is actually
tracing it. As a user, you control LTTng tracing using a single command
line interface: the `lttng` tool. This tool uses `liblttng-ctl` behind
the scenes to connect to and communicate with session daemons.
4026 session daemons may either be started manually (`lttng-sessiond`) or
4027 automatically by the `lttng` command when needed. Trace data may
4028 be forwarded to the network and used elsewhere using an LTTng relay
4029 daemon (`lttng-relayd`).
4030
The manpages of `lttng`, `lttng-sessiond` and `lttng-relayd` are quite
complete, so this section is not a copy of them (we leave that content
for the
<<online-lttng-manpages,Online LTTng manpages>> section).
4035 This section is rather a tour of LTTng
4036 features through practical examples and tips.
4037
4038 If not already done, make sure you understand the core concepts
4039 and how LTTng components connect together by reading the
4040 <<understanding-lttng,Understanding LTTng>> chapter; this section
4041 assumes you are familiar with them.
4042
4043
4044 [[creating-destroying-tracing-sessions]]
4045 ==== Creating and destroying tracing sessions
4046
4047 Whatever you want to do with `lttng`, it has to happen inside a
4048 **tracing session**, created beforehand. A session, in general, is a
per-user container of state. A tracing session is no different; it
keeps track of specific state, such as:
4051
4052 * session name
4053 * enabled/disabled channels with associated parameters
4054 * enabled/disabled events with associated log levels and filters
4055 * context information added to channels
4056 * tracing activity (started or stopped)
4057
4058 and more.
4059
4060 A single user may have many active tracing sessions. LTTng session
4061 daemons are the ultimate owners and managers of tracing sessions. For
user space tracing, each user has their own session daemon. Since Linux
4063 kernel tracing requires root privileges, only `root`'s session daemon
4064 may enable and trace kernel events. However, `lttng` has a `--group`
4065 option (which is passed to `lttng-sessiond` when starting it) to
4066 specify the name of a _tracing group_ which selected users may be part
4067 of to be allowed to communicate with `root`'s session daemon. By
4068 default, the tracing group name is `tracing`.
4069
4070 To create a tracing session, do:
4071
4072 [role="term"]
4073 ----
4074 lttng create my-session
4075 ----
4076
This creates a new tracing session named `my-session` and makes it
4078 the current one. If you don't specify a name (running only
4079 `lttng create`), your tracing session is named `auto` followed by the
4080 current date and time. Traces
4081 are written in +\~/lttng-traces/__session__-+ followed
4082 by the tracing session's creation date/time by default, where
4083 +__session__+ is the tracing session name. To save them
4084 at a different location, use the `--output` option:
4085
4086 [role="term"]
4087 ----
4088 lttng create --output /tmp/some-directory my-session
4089 ----
4090
4091 You may create as many tracing sessions as you wish:
4092
4093 [role="term"]
4094 ----
4095 lttng create other-session
4096 lttng create yet-another-session
4097 ----
4098
4099 You may view all existing tracing sessions using the `list` command:
4100
4101 [role="term"]
4102 ----
4103 lttng list
4104 ----
4105
4106 The state of a _current tracing session_ is kept in path:{~/.lttngrc}. Each
4107 invocation of `lttng` reads this file to set its current tracing
4108 session name so that you don't have to specify a session name for each
4109 command. You could edit this file manually, but the preferred way to
4110 set the current tracing session is to use the `set-session` command:
4111
4112 [role="term"]
4113 ----
4114 lttng set-session other-session
4115 ----
4116
4117 Most `lttng` commands accept a `--session` option to specify the name
4118 of the target tracing session.
4119
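For example, with the `enable-event` command described in the
<<enabling-disabling-events,next section>>, you could target
`other-session` without making it the current tracing session first;
a sketch:

[role="term"]
----
lttng enable-event --session other-session --kernel sched_switch
----
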
4120 Any existing tracing session may be destroyed using the `destroy`
4121 command:
4122
4123 [role="term"]
4124 ----
4125 lttng destroy my-session
4126 ----
4127
Providing no argument to `lttng destroy` destroys the current
tracing session. Destroying a tracing session stops any tracing
running within it and frees the resources acquired by the session
daemon and tracer, making sure all trace data is flushed.
4133
4134 You can't do much with LTTng using only the `create`, `set-session`
4135 and `destroy` commands of `lttng`, but it is essential to know them in
order to control LTTng tracing, which always happens within the scope of
4137 a tracing session.
4138
4139
4140 [[enabling-disabling-events]]
4141 ==== Enabling and disabling events
4142
4143 Inside a tracing session, individual events may be enabled or disabled
4144 so that tracing them may or may not generate trace data.
4145
4146 We sometimes use the term _event_ metonymically throughout this text to
4147 refer to a specific condition, or _rule_, that could lead, when
4148 satisfied, to an actual occurring event (a point at a specific position
4149 in source code/binary program, logical processor and time capturing
4150 some payload) being recorded as trace data. This specific condition is
4151 composed of:
4152
4153 . A **domain** (kernel, user space, `java.util.logging`, or log4j)
4154 (required).
4155 . One or many **instrumentation points** in source code or binary
4156 program (tracepoint name, address, symbol name, function name,
4157 logger name, amongst other types of probes) to be executed (required).
4158 . A **log level** (each instrumentation point declares its own log
4159 level) or log level range to match (optional; only valid for user
4160 space domain).
4161 . A **custom user expression**, or **filter**, that must evaluate to
4162 _true_ when a tracepoint is executed (optional; only valid for user
4163 space domain).
4164
4165 All conditions are specified using arguments passed to the
4166 `enable-event` command of the `lttng` tool.
4167
4168 Condition 1 is specified using either `--kernel`/`-k` (kernel),
4169 `--userspace`/`-u` (user space), `--jul`/`-j`
4170 (JUL), or `--log4j`/`-l` (log4j).
4171 Exactly one of those four arguments must be specified.
4172
4173 Condition 2 is specified using one of:
4174
4175 `--tracepoint`::
4176 Tracepoint.
4177
4178 `--probe`::
4179 Dynamic probe (address, symbol name or combination
4180 of both in binary program; only valid for kernel domain).
4181
4182 `--function`::
Function entry/exit (address, symbol name or
4184 combination of both in binary program; only valid for kernel domain).
4185
4186 `--syscall`::
4187 System call entry/exit (only valid for kernel domain).
4188
4189 When none of the above is specified, `enable-event` defaults to
4190 using `--tracepoint`.
4191
4192 Condition 3 is specified using one of:
4193
4194 `--loglevel`::
4195 Log level range from the specified level to the most severe
4196 level.
4197
4198 `--loglevel-only`::
4199 Specific log level.
4200
4201 See `lttng enable-event --help` for the complete list of log level
4202 names.
4203
4204 Condition 4 is specified using the `--filter` option. This filter is
4205 a C-like expression, potentially reading real-time values of event
4206 fields, that has to evaluate to _true_ for the condition to be satisfied.
4207 Event fields are read using plain identifiers while context fields
4208 must be prefixed with `$ctx.`. See `lttng enable-event --help` for
4209 all usage details.
4210
4211 The aforementioned arguments are combined to create and enable events.
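For instance, here is a hedged sketch combining an event field with a
context field (`my_app:foo_bar` and `some_field` are hypothetical
names):

[role="term"]
----
lttng enable-event --userspace my_app:foo_bar \
    --filter '$ctx.procname == "my-app" && some_field > 10'
----
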
4212 Each unique combination of arguments leads to a different
4213 _enabled event_. The log level and filter arguments are optional, their
4214 default values being respectively all log levels and a filter which
4215 always returns _true_.
4216
4217 Here are a few examples (you must
4218 <<creating-destroying-tracing-sessions,create a tracing session>>
4219 first):
4220
4221 [role="term"]
4222 ----
4223 lttng enable-event -u --tracepoint my_app:hello_world
4224 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
4225 lttng enable-event -u --tracepoint 'my_other_app:*'
4226 lttng enable-event -u --tracepoint my_app:foo_bar \
4227 --filter 'some_field <= 23 && !other_field'
4228 lttng enable-event -k --tracepoint sched_switch
4229 lttng enable-event -k --tracepoint gpio_value
4230 lttng enable-event -k --function usb_probe_device usb_probe_device
4231 lttng enable-event -k --syscall --all
4232 ----
4233
4234 The wildcard symbol, `*`, matches _anything_ and may only be used at
4235 the end of the string when specifying a _tracepoint_. Make sure to
4236 use it between single quotes in your favorite shell to avoid
4237 undesired shell expansion.
4238
4239 System call events can be enabled individually, too:
4240
4241 [role="term"]
4242 ----
4243 lttng enable-event -k --syscall open
4244 lttng enable-event -k --syscall read
4245 lttng enable-event -k --syscall fork,chdir,pipe
4246 ----
4247
4248 The complete list of available system call events can be
4249 obtained using
4250
4251 [role="term"]
4252 ----
4253 lttng list --kernel --syscall
4254 ----
4255
4256 You can see a list of events (enabled or disabled) using
4257
4258 [role="term"]
4259 ----
4260 lttng list some-session
4261 ----
4262
4263 where `some-session` is the name of the desired tracing session.
4264
4265 What you're actually doing when enabling events with specific conditions
4266 is creating a **whitelist** of traceable events for a given channel.
4267 Thus, the following case presents redundancy:
4268
4269 [role="term"]
4270 ----
4271 lttng enable-event -u --tracepoint my_app:hello_you
4272 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG
4273 ----
4274
4275 The second command, matching a log level range, is useless since the first
4276 command enables all tracepoints matching the same name,
4277 `my_app:hello_you`.
4278
4279 Disabling an event is simpler: you only need to provide the event
4280 name to the `disable-event` command:
4281
4282 [role="term"]
4283 ----
4284 lttng disable-event --userspace my_app:hello_you
4285 ----
4286
4287 This name has to match a name previously given to `enable-event` (it
4288 has to be listed in the output of `lttng list some-session`).
4289 The `*` wildcard is supported, as long as you also used it in a
4290 previous `enable-event` invocation.
4291
4292 Disabling an event does not add it to some blacklist: it simply removes
4293 it from its channel's whitelist. This is why you cannot disable an event
4294 which wasn't previously enabled.
4295
4296 A disabled event doesn't generate any trace data, even if all its
4297 specified conditions are met.
4298
4299 Events may be enabled and disabled at will, either when LTTng tracers
4300 are active or not. Events may be enabled before a user space application
4301 is even started.
4302
4303
4304 [[basic-tracing-session-control]]
4305 ==== Basic tracing session control
4306
4307 Once you have
4308 <<creating-destroying-tracing-sessions,created a tracing session>>
4309 and <<enabling-disabling-events,enabled one or more events>>,
4310 you may activate the LTTng tracers for the current tracing session at
4311 any time:
4312
4313 [role="term"]
4314 ----
4315 lttng start
4316 ----
4317
4318 Subsequently, you may stop the tracers:
4319
4320 [role="term"]
4321 ----
4322 lttng stop
4323 ----
4324
4325 LTTng is very flexible: user space applications may be launched before
4326 or after the tracers are started. Events are only recorded if they
4327 are properly enabled and if they occur while tracers are active.
4328
4329 A tracing session name may be passed to both the `start` and `stop`
4330 commands to start/stop tracing a session other than the current one.
4331
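For example, assuming a tracing session named `other-session` exists:

[role="term"]
----
lttng start other-session
lttng stop other-session
----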
4332
4333 [[enabling-disabling-channels]]
4334 ==== Enabling and disabling channels
4335
4336 <<event,As mentioned>> in the
4337 <<understanding-lttng,Understanding LTTng>> chapter, enabled
4338 events are contained in a specific channel, itself contained in a
4339 specific tracing session. A channel is a group of events with
4340 tunable parameters (event loss mode, sub-buffer size, number of
4341 sub-buffers, trace file sizes and count, to name a few). A given channel
4342 may only be responsible for enabled events belonging to one domain:
4343 either kernel or user space.
4344
4345 If you only used the `create`, `enable-event` and `start`/`stop`
4346 commands of the `lttng` tool so far, one or two channels were
4347 automatically created for you (one for the kernel domain and/or one
4348 for the user space domain). The default channels are both named
4349 `channel0`; channels from different domains may have the same name.
4350
4351 The current channels of a given tracing session can be viewed with
4352
4353 [role="term"]
4354 ----
4355 lttng list some-session
4356 ----
4357
4358 where `some-session` is the name of the desired tracing session.
4359
4360 To create and enable a channel, use the `enable-channel` command:
4361
4362 [role="term"]
4363 ----
4364 lttng enable-channel --kernel my-channel
4365 ----
4366
4367 This creates a kernel domain channel named `my-channel` with
4368 default parameters in the current tracing session.
4369
4370 [NOTE]
4371 ====
4372 Because of a current limitation, all
4373 channels must be _created_ prior to beginning tracing in a
4374 given tracing session, that is before the first time you do
4375 `lttng start`.
4376
4377 Since a channel is automatically created by
4378 `enable-event` only for the specified domain, you cannot,
4379 for example, enable a kernel domain event, start tracing and then
4380 enable a user space domain event because no user space channel
4381 exists yet and it's too late to create one.
4382
4383 For this reason, make sure to configure your channels properly
4384 before starting the tracers for the first time!
4385 ====
4386
4387 Here's another example:
4388
4389 [role="term"]
4390 ----
4391 lttng enable-channel --userspace --session other-session --overwrite \
4392 --tracefile-size 1048576 1mib-channel
4393 ----
4394
This creates a user space domain channel named `1mib-channel` in
the tracing session named `other-session`. When its sub-buffers are
full, this channel overwrites the oldest recorded events instead of
discarding the newest ones (the default mode), and it saves trace
files with a maximum size of 1{nbsp}MiB each.
4400
4401 Note that channels may also be created using the `--channel` option of
4402 the `enable-event` command when the provided channel name doesn't exist
4403 for the specified domain:
4404
4405 [role="term"]
4406 ----
4407 lttng enable-event --kernel --channel some-channel sched_switch
4408 ----
4409
4410 If no kernel domain channel named `some-channel` existed before calling
4411 the above command, it would be created with default parameters.
4412
4413 You may enable the same event in two different channels:
4414
4415 [role="term"]
4416 ----
4417 lttng enable-event --userspace --channel my-channel app:tp
4418 lttng enable-event --userspace --channel other-channel app:tp
4419 ----
4420
4421 If both channels are enabled, the occurring `app:tp` event
4422 generates two recorded events, one for each channel.
4423
Disabling a channel is done with the `disable-channel` command:
4425
4426 [role="term"]
4427 ----
lttng disable-channel --kernel some-channel
4429 ----
4430
The state of a channel takes precedence over the individual states of events within
4432 it: events belonging to a disabled channel, even if they are
4433 enabled, won't be recorded.
4434
4435
4436
4437 [[fine-tuning-channels]]
4438 ===== Fine-tuning channels
4439
4440 There are various parameters that may be fine-tuned with the
4441 `enable-channel` command. The latter are well documented in
4442 man:lttng(1) and in the <<channel,Channel>> section of the
4443 <<understanding-lttng,Understanding LTTng>> chapter. For basic
4444 tracing needs, their default values should be just fine, but here are a
4445 few examples to break the ice.
4446
As the frequency of recorded events increases&#8212;either because the
4448 event throughput is actually higher or because you enabled more events
4449 than usual&#8212;__event loss__ might be experienced. Since LTTng never
4450 waits, by design, for sub-buffer space availability (non-blocking
4451 tracer), when a sub-buffer is full and no empty sub-buffers are left,
4452 there are two possible outcomes: either the new events that do not fit
4453 are rejected, or they start replacing the oldest recorded events.
4454 The choice of which algorithm to use is a per-channel parameter, the
4455 default being discarding the newest events until there is some space
4456 left. If your situation always needs the latest events at the expense
4457 of writing over the oldest ones, create a channel with the `--overwrite`
4458 option:
4459
4460 [role="term"]
4461 ----
4462 lttng enable-channel --kernel --overwrite my-channel
4463 ----
4464
4465 When an event is lost, it means no space was available in any
4466 sub-buffer to accommodate it. Thus, if you want to cope with sporadic
4467 high event throughput situations and avoid losing events, you need to
4468 allocate more room for storing them in memory. This can be done by
4469 either increasing the size of sub-buffers or by adding sub-buffers.
4470 The following example creates a user space domain channel with
4471 16{nbsp}sub-buffers of 512{nbsp}kiB each:
4472
4473 [role="term"]
4474 ----
4475 lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel
4476 ----
4477
4478 Both values need to be powers of two, otherwise they are rounded up
4479 to the next one.
4480
4481 Two other interesting available parameters of `enable-channel` are
4482 `--tracefile-size` and `--tracefile-count`, which respectively limit
the size of each trace file and their count for a given channel.
4484 When the number of written trace files reaches its limit for a given
4485 channel-CPU pair, the next trace file overwrites the very first
4486 one. The following example creates a kernel domain channel with a
4487 maximum of three trace files of 1{nbsp}MiB each:
4488
4489 [role="term"]
4490 ----
4491 lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel
4492 ----
4493
4494 An efficient way to make sure lots of events are generated is enabling
4495 all kernel events in this channel and starting the tracer:
4496
4497 [role="term"]
4498 ----
4499 lttng enable-event --kernel --all --channel my-channel
4500 lttng start
4501 ----
4502
4503 After a few seconds, look at trace files in your tracing session
4504 output directory. For two CPUs, it should look like:
4505
4506 ----
4507 my-channel_0_0 my-channel_1_0
4508 my-channel_0_1 my-channel_1_1
4509 my-channel_0_2 my-channel_1_2
4510 ----
4511
4512 Amongst the files above, you might see one in each group with a size
4513 lower than 1{nbsp}MiB: they are the files currently being written.
4514
4515 Since all those small files are valid LTTng trace files, LTTng trace
4516 viewers may read them. It is the viewer's responsibility to properly
4517 merge the streams so as to present an ordered list to the user.
4518 http://diamon.org/babeltrace[Babeltrace]
4519 merges LTTng trace files correctly and is fast at doing it.
4520
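For example, assuming the default output path, a sketch reading a whole
tracing session output directory (the exact directory name includes the
creation date/time):

[role="term"]
----
babeltrace ~/lttng-traces/my-session-20160526-120000
----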
4521
4522 [[adding-context]]
4523 ==== Adding some context to channels
4524
4525 If you read all the sections of
4526 <<controlling-tracing,Controlling tracing>> so far, you should be
4527 able to create tracing sessions, create and enable channels and events
4528 within them and start/stop the LTTng tracers. Event fields recorded in
4529 trace files provide important information about occurring events, but
4530 sometimes external context may help you solve a problem faster. This
4531 section discusses how to add context information to events of a
4532 specific channel using the `lttng` tool.
4533
4534 There are various available context values which can accompany events
4535 recorded by LTTng, for example:
4536
4537 * **process information**:
4538 ** identifier (PID)
4539 ** name
4540 ** priority
4541 ** scheduling priority (niceness)
4542 ** thread identifier (TID)
4543 * the **hostname** of the system on which the event occurred
4544 * plenty of **performance counters** using perf, for example:
4545 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types
4546 ** cache misses
4547 ** branch instructions, misses, loads
4548 ** CPU faults
4549
4550 The full list is available in the output of `lttng add-context --help`.
4551 Some of them are reserved for a specific domain (kernel or
4552 user space) while others are available for both.
4553
4554 To add context information to one or all channels of a given tracing
4555 session, use the `add-context` command:
4556
4557 [role="term"]
4558 ----
4559 lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles
4560 ----
4561
4562 The above example adds the virtual process identifier and per-thread
4563 CPU cycles count values to all recorded user space domain events of the
4564 current tracing session. Use the `--channel` option to select a specific
4565 channel:
4566
4567 [role="term"]
4568 ----
4569 lttng add-context --kernel --channel my-channel --type tid
4570 ----
4571
4572 adds the thread identifier value to all recorded kernel domain events
4573 in the channel `my-channel` of the current tracing session.
4574
4575 Beware that context information cannot be removed from channels once
4576 it's added for a given tracing session.
4577
4578
4579 [role="since-2.5"]
4580 [[saving-loading-tracing-session]]
4581 ==== Saving and loading tracing session configurations
4582
4583 Configuring a tracing session may be long: creating and enabling
4584 channels with specific parameters, enabling kernel and user space
4585 domain events with specific log levels and filters, and adding context
4586 to some channels are just a few of the many possible operations using
4587 the `lttng` command line tool. If you're going to use LTTng to solve real
4588 world problems, chances are you're going to have to record events using
4589 the same tracing session setup over and over, modifying a few variables
4590 each time in your instrumented program or environment. To avoid
4591 constant tracing session reconfiguration, the `lttng` tool is able to
4592 save and load tracing session configurations to/from XML files.
4593
4594 To save a given tracing session configuration, do:
4595
4596 [role="term"]
4597 ----
4598 lttng save my-session
4599 ----
4600
4601 where `my-session` is the name of the tracing session to save. Tracing
4602 session configurations are saved to dir:{~/.lttng/sessions} by default;
4603 use the `--output-path` option to change this destination directory.
4604
4605 All configuration parameters are saved:
4606
4607 * tracing session name
4608 * trace data output path
4609 * channels with their state and all their parameters
4610 * context information added to channels
4611 * events with their state, log level and filter
4612 * tracing activity (started or stopped)
4613
4614 To load a tracing session, simply do:
4615
4616 [role="term"]
4617 ----
4618 lttng load my-session
4619 ----
4620
4621 or, if you used a custom path:
4622
4623 [role="term"]
4624 ----
4625 lttng load --input-path /path/to/my-session.lttng
4626 ----
4627
4628 Your saved tracing session is restored as if you just configured
4629 it manually.
4630
4631
4632 [[sending-trace-data-over-the-network]]
4633 ==== Sending trace data over the network
4634
4635 The possibility of sending trace data over the network comes as a
4636 built-in feature of LTTng-tools. For this to be possible, an LTTng
4637 _relay daemon_ must be executed and listening on the machine where
4638 trace data is to be received, and the user must create a tracing
4639 session using appropriate options to forward trace data to the remote
4640 relay daemon.
4641
4642 The relay daemon listens on two different TCP ports: one for control
4643 information and the other for actual trace data.
4644
4645 Starting the relay daemon on the remote machine is easy:
4646
4647 [role="term"]
4648 ----
4649 lttng-relayd
4650 ----
4651
4652 This makes it listen to its default ports: 5342 for control and
4653 5343 for trace data. The `--control-port` and `--data-port` options may
4654 be used to specify different ports.
4655
4656 Traces written by `lttng-relayd` are written to
4657 +\~/lttng-traces/__hostname__/__session__+ by
4658 default, where +__hostname__+ is the host name of the
4659 traced (monitored) system and +__session__+ is the
4660 tracing session name. Use the `--output` option to write trace data
4661 outside dir:{~/lttng-traces}.
4662
4663 On the sending side, a tracing session must be created using the
4664 `lttng` tool with the `--set-url` option to connect to the distant
4665 relay daemon:
4666
4667 [role="term"]
4668 ----
4669 lttng create my-session --set-url net://distant-host
4670 ----
4671
4672 The URL format is described in the output of `lttng create --help`.
4673 The above example uses the default ports; the `--ctrl-url` and
4674 `--data-url` options may be used to set the control and data URLs
4675 individually.
4676
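For example, one possible form, using the default ports mentioned above
(the exact URL syntax is described in `lttng create --help`):

[role="term"]
----
lttng create my-session --ctrl-url tcp://distant-host:5342 \
    --data-url tcp://distant-host:5343
----
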
4677 Once this basic setup is completed and the connection is established,
4678 you may use the `lttng` tool on the target machine as usual; everything
4679 you do is transparently forwarded to the remote machine if needed.
4680 For example, a parameter changing the maximum size of trace files
4681 only has an effect on the distant relay daemon actually writing
4682 the trace.
4683
4684
4685 [role="since-2.4"]
4686 [[lttng-live]]
4687 ==== Viewing events as they arrive
4688
4689 We have seen how trace files may be produced by LTTng out of generated
4690 application and Linux kernel events. We have seen that those trace files
4691 may be either recorded locally by consumer daemons or remotely using
4692 a relay daemon. And we have seen that the maximum size and count of
4693 trace files is configurable for each channel. With all those features,
4694 it's still not possible to read a trace file as it is being written
4695 because it could be incomplete and appear corrupted to the viewer.
4696 There is a way to view events as they arrive, however: using
4697 _LTTng live_.
4698
4699 LTTng live is implemented, in LTTng, solely on the relay daemon side.
4700 As trace data is sent over the network to a relay daemon by a (possibly
4701 remote) consumer daemon, a _tee_ is created: trace data is recorded to
4702 trace files _as well as_ being transmitted to a connected live viewer:
4703
4704 [role="img-90"]
4705 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
4706 image::lttng-live.png[]
4707
4708 In order to use this feature, a tracing session must be created in live
4709 mode on the target system:
4710
4711 [role="term"]
4712 ----
4713 lttng create --live
4714 ----
4715
4716 An optional parameter may be passed to `--live` to set the period
4717 (in microseconds) between flushes to the network
4718 (1{nbsp}second is the default). With:
4719
4720 [role="term"]
4721 ----
4722 lttng create --live 100000
4723 ----
4724
4725 the daemons flush their data every 100{nbsp}ms.
4726
4727 If no network output is specified to the `create` command, a local
4728 relay daemon is spawned. In this very common case, viewing a live
4729 trace is easy: enable events and start tracing as usual, then use
4730 `lttng view` to start the default live viewer:
4731
4732 [role="term"]
4733 ----
4734 lttng view
4735 ----
4736
4737 The correct arguments are passed to the live viewer so that it
4738 may connect to the local relay daemon and start reading live events.
4739
4740 You may also wish to use a live viewer not running on the target
4741 system. In this case, you should specify a network output when using
4742 the `create` command (`--set-url` or `--ctrl-url`/`--data-url` options).
4743 A distant LTTng relay daemon should also be started to receive control
4744 and trace data. By default, `lttng-relayd` listens on 127.0.0.1:5344
4745 for an LTTng live connection. A different listening URL may be
4746 specified using its `--live-port` option.
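
For example, a live tracing session streaming its trace data to a
distant relay daemon could be created as follows (the host name is
illustrative and the default flush period is kept):

[role="term"]
----
lttng create my-session --live --set-url net://distant-host
----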
4747
4748 The
4749 http://diamon.org/babeltrace[`babeltrace`]
4750 viewer supports LTTng live as one of its input formats. `babeltrace` is
4751 the default viewer when using `lttng view`. To use it manually, first
4752 list active tracing sessions by doing the following (assuming the relay
4753 daemon to connect to runs on the same host):
4754
4755 [role="term"]
4756 ----
4757 babeltrace --input-format lttng-live net://localhost
4758 ----
4759
4760 Then, choose a tracing session and start viewing events as they arrive
4761 using LTTng live:
4762
4763 [role="term"]
4764 ----
4765 babeltrace --input-format lttng-live net://localhost/host/hostname/my-session
4766 ----
4767
4768
4769 [role="since-2.3"]
4770 [[taking-a-snapshot]]
4771 ==== Taking a snapshot
4772
4773 The normal behavior of LTTng is to record trace data as trace files.
4774 This is ideal for keeping a long history of events that occurred on
4775 the target system and applications, but may be too much data in some
4776 situations. For example, you may wish to trace your application
4777 continuously until some critical situation happens, in which case you
4778 would only need the latest few recorded events to perform the desired
4779 analysis, not multi-gigabyte trace files.
4780
4781 LTTng has an interesting feature called _snapshots_. When creating
4782 a tracing session in snapshot mode, no trace files are written; the
4783 tracers' sub-buffers are constantly overwriting the oldest recorded
4784 events with the newest. You may take a snapshot of those sub-buffers
4785 at any time, whether the tracers are started or stopped.
4786
4787 There is no difference between the format of a normal trace file and the
4788 format of a snapshot: viewers of LTTng traces also support LTTng
4789 snapshots. By default, snapshots are written to disk, but they may also
4790 be sent over the network.
4791
4792 To create a tracing session in snapshot mode, do:
4793
4794 [role="term"]
4795 ----
4796 lttng create --snapshot my-snapshot-session
4797 ----
4798
4799 Next, enable channels, events and add context to channels as usual.
4800 Once a tracing session is created in snapshot mode, channels are
4801 forced to use the
4802 <<channel-overwrite-mode-vs-discard-mode,overwrite>> mode
4803 (`--overwrite` option of the `enable-channel` command; also called
4804 _flight recorder mode_) and have an `mmap()` channel type
4805 (`--output mmap`).
4806
4807 Start tracing. When you're ready to take a snapshot, do:
4808
4809 [role="term"]
4810 ----
4811 lttng snapshot record --name my-snapshot
4812 ----
4813
4814 This records a snapshot named `my-snapshot` of all channels of
4815 all domains of the current tracing session. By default, snapshot files
4816 are recorded in the path returned by `lttng snapshot list-output`. You
4817 may change this path or decide to send snapshots over the network
4818 using either:
4819
4820 . an output path/URL specified when creating the tracing session
4821 (`lttng create`)
4822 . an added snapshot output path/URL using
4823 `lttng snapshot add-output`
4824 . an output path/URL provided directly to the
4825 `lttng snapshot record` command
4826
4827 Method 3 overrides method 2 which overrides method 1. When specifying
4828 a URL, a relay daemon must be listening on some machine (see
4829 <<sending-trace-data-over-the-network,Sending trace data over the network>>).
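
For example, assuming a relay daemon is listening on a machine named
`distant-host` (method 2; the host name is illustrative):

[role="term"]
----
lttng snapshot add-output net://distant-host
lttng snapshot record --name my-snapshot
----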
4830
4831 If you need to make absolutely sure that the output file won't be
4832 larger than a certain limit, you can set a maximum snapshot size when
4833 taking it with the `--max-size` option:
4834
4835 [role="term"]
4836 ----
4837 lttng snapshot record --name my-snapshot --max-size 2M
4838 ----
4839
4840 Older recorded events are discarded in order to respect this
4841 maximum size.
4842
4843
4844 [role="since-2.6"]
4845 [[mi]]
4846 ==== Machine interface
4847
4848 The `lttng` tool aims to provide command output that is as human-readable
4849 as possible. While this output is easy for a human being to read, machines
4850 have a hard time parsing it reliably.
4851
4852 This is why the `lttng` tool provides the general `--mi` option, which
4853 takes a machine interface output format as its argument. As of the latest
4854 LTTng stable release, only the `xml` format is supported. A schema
4855 definition (XSD) is made
4856 https://github.com/lttng/lttng-tools/blob/master/src/common/mi_lttng.xsd[available]
4857 to ease the integration with external tools as much as possible.
4858
4859 The `--mi` option can be used in conjunction with all `lttng` commands.
4860 Here are some examples:
4861
4862 [role="term"]
4863 ----
4864 lttng --mi xml create some-session
4865 lttng --mi xml list some-session
4866 lttng --mi xml list --kernel
4867 lttng --mi xml enable-event --kernel --syscall open
4868 lttng --mi xml start
4869 ----
4870
4871
4872 [[reference]]
4873 == Reference
4874
4875 This chapter presents various references for LTTng packages such as links
4876 to online manpages, tables needed by the rest of the text, descriptions
4877 of library functions, and more.
4878
4879
4880 [[online-lttng-manpages]]
4881 === Online LTTng manpages
4882
4883 LTTng packages currently install the following link:/man[man pages],
4884 available online using the links below:
4885
4886 * **LTTng-tools**
4887 ** man:lttng(1)
4888 ** man:lttng-sessiond(8)
4889 ** man:lttng-relayd(8)
4890 * **LTTng-UST**
4891 ** man:lttng-gen-tp(1)
4892 ** man:lttng-ust(3)
4893 ** man:lttng-ust-cyg-profile(3)
4894 ** man:lttng-ust-dl(3)
4895
4896
4897 [[lttng-ust-ref]]
4898 === LTTng-UST
4899
4900 This section presents references of the LTTng-UST package.
4901
4902
4903 [[liblttng-ust]]
4904 ==== LTTng-UST library (+liblttng&#8209;ust+)
4905
4906 The LTTng-UST library, or `liblttng-ust`, is the main shared object
4907 against which user applications are linked to make LTTng user space
4908 tracing possible.
4909
4910 The <<c-application,C application>> guide shows the complete
4911 process to instrument, build and run a C/$$C++$$ application using
4912 LTTng-UST, while this section contains a few important tables.
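
As a reminder, an instrumented application is typically linked against
`liblttng-ust` and `libdl`; the file names below are illustrative:

[role="term"]
----
gcc -o app app.o tp.o -llttng-ust -ldl
----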
4913
4914
4915 [[liblttng-ust-tp-fields]]
4916 ===== Tracepoint fields macros (for `TP_FIELDS()`)
4917
4918 The available macros to define tracepoint fields, which should be listed
4919 within `TP_FIELDS()` in `TRACEPOINT_EVENT()`, are:
4920
4921 [role="growable func-desc",cols="asciidoc,asciidoc"]
4922 .Available macros to define LTTng-UST tracepoint fields
4923 |====
4924 |Macro |Description and parameters
4925
4926 |
4927 +ctf_integer(__t__, __n__, __e__)+
4928
4929 +ctf_integer_nowrite(__t__, __n__, __e__)+
4930 |
4931 Standard integer, displayed in base 10.
4932
4933 +__t__+::
4934 Integer C type (`int`, `long`, `size_t`, ...).
4935
4936 +__n__+::
4937 Field name.
4938
4939 +__e__+::
4940 Argument expression.
4941
4942 |+ctf_integer_hex(__t__, __n__, __e__)+
4943 |
4944 Standard integer, displayed in base 16.
4945
4946 +__t__+::
4947 Integer C type.
4948
4949 +__n__+::
4950 Field name.
4951
4952 +__e__+::
4953 Argument expression.
4954
4955 |+ctf_integer_network(__t__, __n__, __e__)+
4956 |
4957 Integer in network byte order (big endian), displayed in base 10.
4958
4959 +__t__+::
4960 Integer C type.
4961
4962 +__n__+::
4963 Field name.
4964
4965 +__e__+::
4966 Argument expression.
4967
4968 |+ctf_integer_network_hex(__t__, __n__, __e__)+
4969 |
4970 Integer in network byte order, displayed in base 16.
4971
4972 +__t__+::
4973 Integer C type.
4974
4975 +__n__+::
4976 Field name.
4977
4978 +__e__+::
4979 Argument expression.
4980
4981 |
4982 +ctf_float(__t__, __n__, __e__)+
4983
4984 +ctf_float_nowrite(__t__, __n__, __e__)+
4985 |
4986 Floating point number.
4987
4988 +__t__+::
4989 Floating point number C type (`float` or `double`).
4990
4991 +__n__+::
4992 Field name.
4993
4994 +__e__+::
4995 Argument expression.
4996
4997 |
4998 +ctf_string(__n__, __e__)+
4999
5000 +ctf_string_nowrite(__n__, __e__)+
5001 |
5002 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
5003
5004 +__n__+::
5005 Field name.
5006
5007 +__e__+::
5008 Argument expression.
5009
5010 |
5011 +ctf_array(__t__, __n__, __e__, __s__)+
5012
5013 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
5014 |
5015 Statically-sized array of integers.
5016
5017 +__t__+::
5018 Array element C type.
5019
5020 +__n__+::
5021 Field name.
5022
5023 +__e__+::
5024 Argument expression.
5025
5026 +__s__+::
5027 Number of elements.
5028
5029 |
5030 +ctf_array_text(__t__, __n__, __e__, __s__)+
5031
5032 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
5033 |
5034 Statically-sized array, printed as text.
5035
5036 The string does not need to be null-terminated.
5037
5038 +__t__+::
5039 Array element C type (always `char`).
5040
5041 +__n__+::
5042 Field name.
5043
5044 +__e__+::
5045 Argument expression.
5046
5047 +__s__+::
5048 Number of elements.
5049
5050 |
5051 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
5052
5053 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
5054 |
5055 Dynamically-sized array of integers.
5056
5057 The type of +__E__+ needs to be unsigned.
5058
5059 +__t__+::
5060 Array element C type.
5061
5062 +__n__+::
5063 Field name.
5064
5065 +__e__+::
5066 Argument expression.
5067
5068 +__T__+::
5069 Length expression C type.
5070
5071 +__E__+::
5072 Length expression.
5073
5074 |
5075 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
5076
5077 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
5078 |
5079 Dynamically-sized array, displayed as text.
5080
5081 The string does not need to be null-terminated.
5082
5083 The type of +__E__+ needs to be unsigned.
5084
5085 The behavior is undefined if +__e__+ is `NULL`.
5086
5087 +__t__+::
5088 Sequence element C type (always `char`).
5089
5090 +__n__+::
5091 Field name.
5092
5093 +__e__+::
5094 Argument expression.
5095
5096 +__T__+::
5097 Length expression C type.
5098
5099 +__E__+::
5100 Length expression.
5101 |====
5102
5103 The `_nowrite` versions are otherwise identical to their counterparts,
5104 but the fields they define are not written to the recorded trace.
5105 Their primary purpose is to make some of the event context available
5106 to the <<enabling-disabling-events,event filters>> without having to
5107 commit the data to sub-buffers.
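
As an illustration of how these macros combine, here is a minimal,
hypothetical tracepoint definition using a few of them. It would be
placed in a tracepoint provider as shown in the
<<c-application,C application>> guide; the provider, event and field
names are made up:

[source,c]
----
TRACEPOINT_EVENT(
    /* Hypothetical tracepoint provider and event names */
    my_provider,
    my_event,

    /* Tracepoint arguments (input) */
    TP_ARGS(
        int, request_id,
        double, ratio,
        const char *, query
    ),

    /* Event fields (output), using macros from the table above */
    TP_FIELDS(
        ctf_integer(int, request_id, request_id)
        ctf_float(double, ratio, ratio)
        ctf_string(query, query)
    )
)
----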
5109
5110
5111 [[liblttng-ust-tracepoint-loglevel]]
5112 ===== Tracepoint log levels (for `TRACEPOINT_LOGLEVEL()`)
5113
5114 The following are the available log level values for the
5115 `TRACEPOINT_LOGLEVEL()` macro:
5116
5117 `TRACE_EMERG`::
5118 System is unusable.
5119
5120 `TRACE_ALERT`::
5121 Action must be taken immediately.
5122
5123 `TRACE_CRIT`::
5124 Critical conditions.
5125
5126 `TRACE_ERR`::
5127 Error conditions.
5128
5129 `TRACE_WARNING`::
5130 Warning conditions.
5131
5132 `TRACE_NOTICE`::
5133 Normal, but significant, condition.
5134
5135 `TRACE_INFO`::
5136 Informational message.
5137
5138 `TRACE_DEBUG_SYSTEM`::
5139 Debug information with system-level scope (set of programs).
5140
5141 `TRACE_DEBUG_PROGRAM`::
5142 Debug information with program-level scope (set of processes).
5143
5144 `TRACE_DEBUG_PROCESS`::
5145 Debug information with process-level scope (set of modules).
5146
5147 `TRACE_DEBUG_MODULE`::
5148 Debug information with module (executable/library) scope (set of units).
5149
5150 `TRACE_DEBUG_UNIT`::
5151 Debug information with compilation unit scope (set of functions).
5152
5153 `TRACE_DEBUG_FUNCTION`::
5154 Debug information with function-level scope.
5155
5156 `TRACE_DEBUG_LINE`::
5157 Debug information with line-level scope (`TRACEPOINT_EVENT()` default).
5158
5159 `TRACE_DEBUG`::
5160 Debug-level message.
5161
5162 Log levels `TRACE_EMERG` through `TRACE_INFO` and `TRACE_DEBUG` match
5163 http://man7.org/linux/man-pages/man3/syslog.3.html[syslog]
5164 level semantics. Log levels `TRACE_DEBUG_SYSTEM` through `TRACE_DEBUG`
5165 offer more fine-grained selection of debug information.
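
For example, assuming the hypothetical `my_provider:my_event`
tracepoint shown earlier, a log level is assigned in the tracepoint
provider, after the `TRACEPOINT_EVENT()` definition:

[source,c]
----
/* Assign the TRACE_DEBUG_UNIT log level to my_provider:my_event */
TRACEPOINT_LOGLEVEL(my_provider, my_event, TRACE_DEBUG_UNIT)
----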
5166
5167
5168 [[lttng-modules-ref]]
5169 === LTTng-modules
5170
5171 This section presents references of the LTTng-modules package.
5172
5173
5174 [[lttng-modules-tp-struct-entry]]
5175 ==== Tracepoint fields macros (for `TP_STRUCT__entry()`)
5176
5177 This table describes possible entries for the `TP_STRUCT__entry()` part
5178 of `LTTNG_TRACEPOINT_EVENT()`:
5179
5180 [role="growable func-desc",cols="asciidoc,asciidoc"]
5181 .Available entries for `TP_STRUCT__entry()` (in `LTTNG_TRACEPOINT_EVENT()`)
5182 |====
5183 |Macro |Description and parameters
5184
5185 |+\__field(__t__, __n__)+
5186 |
5187 Standard integer, displayed in base 10.
5188
5189 +__t__+::
5190 Integer C type (`int`, `unsigned char`, `size_t`, ...).
5191
5192 +__n__+::
5193 Field name.
5194
5195 |+\__field_hex(__t__, __n__)+
5196 |
5197 Standard integer, displayed in base 16.
5198
5199 +__t__+::
5200 Integer C type.
5201
5202 +__n__+::
5203 Field name.
5204
5205 |+\__field_oct(__t__, __n__)+
5206 |
5207 Standard integer, displayed in base 8.
5208
5209 +__t__+::
5210 Integer C type.
5211
5212 +__n__+::
5213 Field name.
5214
5215 |+\__field_network(__t__, __n__)+
5216 |
5217 Integer in network byte order (big endian), displayed in base 10.
5218
5219 +__t__+::
5220 Integer C type.
5221
5222 +__n__+::
5223 Field name.
5224
5225 |+\__field_network_hex(__t__, __n__)+
5226 |
5227 Integer in network byte order (big endian), displayed in base 16.
5228
5229 +__t__+::
5230 Integer C type.
5231
5232 +__n__+::
5233 Field name.
5234
5235 |+\__array(__t__, __n__, __s__)+
5236 |
5237 Statically-sized array, elements displayed in base 10.
5238
5239 +__t__+::
5240 Array element C type.
5241
5242 +__n__+::
5243 Field name.
5244
5245 +__s__+::
5246 Number of elements.
5247
5248 |+\__array_hex(__t__, __n__, __s__)+
5249 |
5250 Statically-sized array, elements displayed in base 16.
5251
5252 +__t__+::
5253 Array element C type.
5254 +__n__+::
5255 Field name.
5256 +__s__+::
5257 Number of elements.
5258
5259 |+\__array_text(__t__, __n__, __s__)+
5260 |
5261 Statically-sized array, displayed as text.
5262
5263 +__t__+::
5264 Array element C type (always `char`).
5265
5266 +__n__+::
5267 Field name.
5268
5269 +__s__+::
5270 Number of elements.
5271
5272 |+\__dynamic_array(__t__, __n__, __s__)+
5273 |
5274 Dynamically-sized array, displayed in base 10.
5275
5276 +__t__+::
5277 Array element C type.
5278
5279 +__n__+::
5280 Field name.
5281
5282 +__s__+::
5283 Length C expression.
5284
5285 |+\__dynamic_array_hex(__t__, __n__, __s__)+
5286 |
5287 Dynamically-sized array, displayed in base 16.
5288
5289 +__t__+::
5290 Array element C type.
5291
5292 +__n__+::
5293 Field name.
5294
5295 +__s__+::
5296 Length C expression.
5297
5298 |+\__dynamic_array_text(__t__, __n__, __s__)+
5299 |
5300 Dynamically-sized array, displayed as text.
5301
5302 +__t__+::
5303 Array element C type (always `char`).
5304
5305 +__n__+::
5306 Field name.
5307
5308 +__s__+::
5309 Length C expression.
5310
5311 |+\__string(__n__, __s__)+
5312 |
5313 Null-terminated string.
5314
5315 The behavior is undefined if +__s__+ is `NULL`.
5316
5317 +__n__+::
5318 Field name.
5319
5320 +__s__+::
5321 String source (pointer).
5322 |====
5323
5324 The above macros should cover the majority of cases. For advanced items,
5325 see path:{probes/lttng-events.h}.
5326
5327
5328 [[lttng-modules-tp-fast-assign]]
5329 ==== Tracepoint assignment macros (for `TP_fast_assign()`)
5330
5331 This table describes possible entries for the `TP_fast_assign()` part
5332 of `LTTNG_TRACEPOINT_EVENT()`:
5333
5334 [role="growable func-desc",cols="asciidoc,asciidoc"]
5335 .Available entries for `TP_fast_assign()` (in `LTTNG_TRACEPOINT_EVENT()`)
5336 |====
5337 |Macro |Description and parameters
5338
5339 |+tp_assign(__d__, __s__)+
5340 |
5341 Assignment of C expression +__s__+ to tracepoint field +__d__+.
5342
5343 +__d__+::
5344 Name of destination tracepoint field.
5345
5346 +__s__+::
5347 Source C expression (may refer to tracepoint arguments).
5348
5349 |+tp_memcpy(__d__, __s__, __l__)+
5350 |
5351 Memory copy of +__l__+ bytes from +__s__+ to tracepoint field
5352 +__d__+ (use with array fields).
5353
5354 +__d__+::
5355 Name of destination tracepoint field.
5356
5357 +__s__+::
5358 Source C expression (may refer to tracepoint arguments).
5359
5360 +__l__+::
5361 Number of bytes to copy.
5362
5363 |+tp_memcpy_from_user(__d__, __s__, __l__)+
5364 |
5365 Memory copy of +__l__+ bytes from user space +__s__+ to tracepoint
5366 field +__d__+ (use with array fields).
5367
5368 +__d__+::
5369 Name of destination tracepoint field.
5370
5371 +__s__+::
5372 Source C expression (may refer to tracepoint arguments).
5373
5374 +__l__+::
5375 Number of bytes to copy.
5376
5377 |+tp_memcpy_dyn(__d__, __s__)+
5378 |
5379 Memory copy of dynamically-sized array from +__s__+ to tracepoint field
5380 +__d__+.
5381
5382 The number of bytes is known from the field's length expression
5383 (use with dynamically-sized array fields).
5384
5385 +__d__+::
5386 Name of destination tracepoint field.
5387
5388 +__s__+::
5389 Source C expression (may refer to tracepoint arguments).
5393
5394 |+tp_strcpy(__d__, __s__)+
5395 |
5396 String copy of +__s__+ to tracepoint field +__d__+ (use with string
5397 fields).
5398
5399 +__d__+::
5400 Name of destination tracepoint field.
5401
5402 +__s__+::
5403 Source C expression (may refer to tracepoint arguments).
5404 |====
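
To show how the two previous tables fit together, here is a
hypothetical fragment of an `LTTNG_TRACEPOINT_EVENT()` definition.
Only the `TP_STRUCT__entry()` and `TP_fast_assign()` parts described
above are shown; the surrounding arguments of the macro, the `task`
and `addr` tracepoint arguments and the field names are assumptions:

[source,c]
----
    /*
     * Partial sketch: the other arguments of LTTNG_TRACEPOINT_EVENT()
     * (event name, TP_PROTO(), TP_ARGS(), ...) are omitted here.
     */
    TP_STRUCT__entry(
        __field(pid_t, pid)              /* integer, displayed in base 10 */
        __field_hex(unsigned long, addr) /* integer, displayed in base 16 */
        __string(comm, task->comm)       /* null-terminated string */
    ),

    TP_fast_assign(
        tp_assign(pid, task->pid)        /* scalar assignment */
        tp_assign(addr, addr)
        tp_strcpy(comm, task->comm)      /* string copy */
    ),
----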