The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
v2.6, May 26, 2016


include::../common/copyright.txt[]


include::../common/warning-not-maintained.txt[]


[[welcome]]
== Welcome!

Welcome to the **LTTng Documentation**!

The _Linux Trace Toolkit: next generation_ is an open source software
toolkit which you can use to simultaneously trace the Linux kernel, user
applications, and user libraries.

LTTng consists of:

* Kernel modules to trace the Linux kernel.
* Shared libraries to trace user applications written in C or C++.
* Java packages to trace Java applications which use `java.util.logging`
  or Apache log4j 1.2.
* A kernel module to trace shell scripts and other user applications
  without a dedicated instrumentation mechanism.
* Daemons and a command-line tool, cmd:lttng, to control the
  LTTng tracers.

[NOTE]
.Open source documentation
====
This is an **open documentation**: its source is available in a
https://github.com/lttng/lttng-docs[public Git repository].

Should you find any error in the content of this text, any grammatical
mistake, or any dead link, we would be very grateful if you would file a
GitHub issue for it or, even better, contribute a patch to this
documentation by creating a pull request.
====


include::../common/audience.txt[]


[[chapters]]
=== Chapter descriptions

What follows are brief descriptions of this documentation's chapters,
ordered to make the reading as linear as possible.

55
56 . <<nuts-and-bolts,Nuts and bolts>> explains the
57 rudiments of software tracing and the rationale behind the
58 LTTng project.
59 . <<installing-lttng,Installing LTTng>> is divided into
60 sections describing the steps needed to get a working installation
61 of LTTng packages for common Linux distributions and from its
62 source.
63 . <<getting-started,Getting started>> is a very concise guide to
64 get started quickly with LTTng kernel and user space tracing. This
65 chapter is recommended if you're new to LTTng or software tracing
66 in general.
67 . <<understanding-lttng,Understanding LTTng>> deals with some
68 core concepts and components of the LTTng suite. Understanding
69 those is important since the next chapter assumes you're familiar
70 with them.
71 . <<using-lttng,Using LTTng>> is a complete user guide of the
72 LTTng project. It shows in great details how to instrument user
73 applications and the Linux kernel, how to control tracing sessions
74 using the `lttng` command line tool and miscellaneous practical use
75 cases.
76 . <<reference,Reference>> contains references of LTTng components,
77 like links to online manpages and various APIs.
78
We recommend that you read the above chapters in this order, although
some of them may be skipped depending on your situation. You may skip
<<nuts-and-bolts,Nuts and bolts>> if you're familiar with tracing
and LTTng. Also, you may jump over <<installing-lttng,Installing LTTng>>
if LTTng is already properly installed on your target system.


include::../common/convention.txt[]


include::../common/acknowledgements.txt[]


[[whats-new]]
== What's new in LTTng {revision}?

Most of the changes in LTTng {revision} are bug fixes, making the toolchain
more stable than ever before. Still, LTTng {revision} adds some interesting
features to the project.

LTTng 2.5 already supported the instrumentation and tracing of
<<java-application,Java applications>> through `java.util.logging`
(JUL). LTTng {revision} goes one step further by supporting
https://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
The new log4j domain is selected using the `--log4j` option in various
commands of the `lttng` tool.

LTTng-modules has supported system call tracing for a long time,
but until now, it was only possible to record either all of them or
none of them. LTTng {revision} allows the user to record specific
system call events, for example:

[role="term"]
----
lttng enable-event --kernel --syscall open,fork,chdir,pipe
----

Finally, the `lttng` command line tool can now communicate not only
with humans, as it used to do, but also with machines, thanks to its new
<<mi,machine interface>> feature.
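
Scripts can consume this machine interface instead of scraping text meant
for humans. The following is a minimal sketch, in Python, of parsing such
XML output; note that the document structure and element names shown here
are illustrative assumptions, not the exact schema emitted by `lttng`:

```python
import xml.etree.ElementTree as ET

# Hypothetical machine interface output for a `list` command; the real
# XML schema emitted by `lttng` may differ from this illustration.
mi_output = """<?xml version="1.0" encoding="UTF-8"?>
<command>
  <name>list</name>
  <output>
    <sessions>
      <session>
        <name>my-session</name>
        <enabled>true</enabled>
      </session>
    </sessions>
  </output>
  <success>true</success>
</command>"""

root = ET.fromstring(mi_output)

# A script reads structured fields instead of scraping console text
success = root.findtext('success') == 'true'
session_names = [s.findtext('name') for s in root.iter('session')]

print(success, session_names)  # → True ['my-session']
```

Because the output is structured, a controlling program can reliably check
whether a command succeeded without depending on human-readable wording.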

To learn more about the new features of LTTng {revision}, see the
http://lttng.org/blog/2015/02/27/lttng-2.6-released/[release announcement].


[[nuts-and-bolts]]
== Nuts and bolts

What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
generation_ is a modern toolkit for tracing Linux systems and
applications. So your first question might rather be: **what is
tracing?**


[[what-is-tracing]]
=== What is tracing?

As the history of software engineering progressed and led to what
we now take for granted--complex, numerous and
interdependent software applications running in parallel on
sophisticated operating systems like Linux--the authors of such
components, or software developers, began feeling a natural
urge for tools to ensure the robustness and good performance
of their masterpieces.

One major achievement in this field is, inarguably, the
https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
which is an essential tool for developers to find and fix
bugs. But even the best debugger won't help make your software run
faster, and nowadays, faster software means either more work done by
the same hardware, or cheaper hardware for the same work.

A _profiler_ is often the tool of choice to identify performance
bottlenecks. Profiling is suitable to identify _where_ performance is
lost in a given piece of software; the profiler outputs a profile, a
statistical summary of observed events, which you may use to discover
which functions took the most time to execute. However, a profiler
won't report _why_ some identified functions are the bottleneck.
Bottlenecks might only occur when specific conditions are met, sometimes
almost impossible to capture by a statistical profiler, or impossible to
reproduce with an application altered by the overhead of an event-based
profiler. For a thorough investigation of software performance issues,
a history of execution, with the recorded values of chosen variables
and context, is essential. This is where tracing comes in handy.

_Tracing_ is a technique used to understand what goes on in a running
software system. The software used for tracing is called a _tracer_,
which is conceptually similar to a tape recorder. When recording,
specific probes placed in the software source code generate events
that are saved on a giant tape: a _trace_ file. Both user applications
and the operating system may be traced at the same time, opening the
possibility of resolving a wide range of problems that are otherwise
extremely challenging.

Tracing is often compared to _logging_. However, tracers and loggers
are two different tools, serving two different purposes. Tracers are
designed to record much lower-level events that occur much more
frequently than log messages, often in the thousands per second range,
with very little execution overhead. Logging is more appropriate for
very high-level analysis of less frequent events: user accesses,
exceptional conditions (errors and warnings, for example), database
transactions, instant messaging communications, and such. More formally,
logging is one of several use cases that can be accomplished with
tracing.

The list of recorded events inside a trace file may be read manually
like a log file for the maximum level of detail, but it is generally
much more interesting to perform application-specific analyses to
produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do this.

So, in the end, this is what LTTng is: a powerful, open source set of
tools to trace the Linux kernel and user applications at the same time.
LTTng is composed of several components actively maintained and
developed by its link:/community/#where[community].


[[lttng-alternatives]]
=== Alternatives to LTTng

Excluding proprietary solutions, a few competing software tracers
exist for Linux:

* https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
  is the de facto function tracer of the Linux kernel. Its user
  interface is a set of special files in debugfs.
* https://perf.wiki.kernel.org/[perf] is
  a performance analyzing tool for Linux which supports hardware
  performance counters, tracepoints, as well as other counters and
  types of probes. perf's controlling utility is the `perf` command
  line/curses tool.
* http://linux.die.net/man/1/strace[strace]
  is a command line utility which records system calls made by a
  user process, as well as signal deliveries and changes of process
  state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
  to fulfill its function.
* https://sourceware.org/systemtap/[SystemTap]
  is a Linux kernel and user space tracer which uses custom user scripts
  to produce plain text traces. Scripts are converted to the C language,
  then compiled as Linux kernel modules which are loaded to produce
  trace data. SystemTap's primary user interface is the `stap`
  command line tool.
* http://www.sysdig.org/[sysdig], like
  SystemTap, uses scripts to analyze Linux kernel events. Scripts,
  or _chisels_ in sysdig's jargon, are written in Lua and executed
  while the system is being traced, or afterwards. sysdig's interface
  is the `sysdig` command line tool as well as the curses-based
  `csysdig` tool.

The main distinctive feature of LTTng is that it produces correlated
kernel and user space traces, and does so with the lowest overhead
among the solutions above. It produces trace files in the
http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data. LTTng is the
result of close to 10 years of
active development by a community of passionate developers. LTTng {revision}
is currently available on some major desktop, server, and embedded Linux
distributions.

The main interface for tracing control is a single command line tool
named `lttng`. It can create several tracing sessions,
enable/disable events on the fly, filter them efficiently with custom
user expressions, start/stop tracing, and do much more. Traces can be
recorded on disk or sent over the network, kept totally or partially,
and viewed either once tracing becomes inactive or in real time.

<<installing-lttng,Install LTTng now>> and start tracing!


[[installing-lttng]]
== Installing LTTng

include::../common/warning-installation-outdated.txt[]

**LTTng** is a set of software components which interact to allow
instrumenting the Linux kernel and user applications as well as
controlling tracing sessions (starting/stopping tracing,
enabling/disabling events, and more). Those components are bundled into
the following packages:

LTTng-tools::
    Libraries and command line interface to control tracing sessions.

LTTng-modules::
    Linux kernel modules for tracing the kernel.

LTTng-UST::
    User space tracing library.

Most distributions mark the LTTng-modules and LTTng-UST packages as
optional. In the following sections, the steps to install all three are
always provided, but note that LTTng-modules is only required if
you intend to trace the Linux kernel, and LTTng-UST is only required if
you intend to trace user space applications.

This chapter shows how to install the above packages on a Linux system.
The easiest way is to use the package manager of the system's
distribution (<<desktop-distributions,desktop>> or
<<embedded-distributions,embedded>>). Support is also available for
<<enterprise-distributions,enterprise distributions>>, such as Red Hat
Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
Otherwise, you can
<<building-from-source,build the LTTng packages from source>>.


[[desktop-distributions]]
=== Desktop distributions

Official LTTng {revision} packages are available for
<<ubuntu,Ubuntu>>, <<fedora,Fedora>>, and
<<opensuse,openSUSE>> (and other RPM-based distributions).

More recent versions of LTTng are available for Debian and Arch Linux.

Should any issue arise when
following the procedures below, please inform the
link:/community[community] about it.


[[ubuntu]]
==== Ubuntu

LTTng {revision} is packaged in Ubuntu 15.10 _Wily Werewolf_. For other
releases of Ubuntu, you need to build and install LTTng {revision}
<<building-from-source,from source>>. Ubuntu 15.04 _Vivid Vervet_
ships with link:/docs/v2.5/[LTTng 2.5], whilst
Ubuntu 16.04 _Xenial Xerus_ ships with
link:/docs/v2.7/[LTTng 2.7].

To install LTTng {revision} from the official Ubuntu repositories,
simply use `apt-get`:

[role="term"]
----
sudo apt-get install lttng-tools
sudo apt-get install lttng-modules-dkms
sudo apt-get install liblttng-ust-dev
----

If you need to trace
<<java-application,Java applications>>,
you also need to install the LTTng-UST Java agent:

[role="term"]
----
sudo apt-get install liblttng-ust-agent-java
----


[[fedora]]
==== Fedora

Fedora 22 and Fedora 23 ship with official LTTng-tools {revision} and
LTTng-UST {revision} packages. Simply use `yum`:

[role="term"]
----
sudo yum install lttng-tools
sudo yum install lttng-ust
sudo yum install lttng-ust-devel
----

LTTng-modules {revision} still needs to be built and installed from
source. For that, make sure the `kernel-devel` package is
installed first:

[role="term"]
----
sudo yum install kernel-devel
----

Proceed to fetch
<<building-from-source,LTTng-modules {revision}'s source>>. Build and
install it as follows:

[role="term"]
----
KERNELDIR=/usr/src/kernels/$(uname -r) make
sudo make modules_install
----

NOTE: If you need to trace <<java-application,Java applications>> on
Fedora, you need to build and install LTTng-UST {revision}
<<building-from-source,from source>> and use the
`--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options.


[[opensuse]]
==== openSUSE/RPM

openSUSE 13.1 and openSUSE 13.2 have LTTng {revision} packages. To install
LTTng {revision}, you first need to add an entry to your repository
configuration. All LTTng repositories are available
http://download.opensuse.org/repositories/devel:/tools:/lttng/[here].
For example, the following command adds the LTTng repository for
openSUSE{nbsp}13.1:

[role="term"]
----
sudo zypper addrepo http://download.opensuse.org/repositories/devel:/tools:/lttng/openSUSE_13.1/devel:tools:lttng.repo
----

Then, refresh the package database:

[role="term"]
----
sudo zypper refresh
----

and install `lttng-tools`, `lttng-modules`, and `lttng-ust-devel`:

[role="term"]
----
sudo zypper install lttng-tools
sudo zypper install lttng-modules
sudo zypper install lttng-ust-devel
----

NOTE: If you need to trace <<java-application,Java applications>> on
openSUSE, you need to build and install LTTng-UST {revision}
<<building-from-source,from source>> and use the
`--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options.


[[embedded-distributions]]
=== Embedded distributions

LTTng is packaged by two popular
embedded Linux distributions: <<buildroot,Buildroot>> and
<<oe-yocto,OpenEmbedded/Yocto>>.


[[buildroot]]
==== Buildroot

LTTng {revision} is available in Buildroot since Buildroot 2015.05. The
LTTng packages are named `lttng-tools`, `lttng-modules`, and `lttng-libust`.

To enable them, start the Buildroot configuration menu as usual:

[role="term"]
----
make menuconfig
----

In:

* _Kernel_: make sure _Linux kernel_ is enabled
* _Toolchain_: make sure the following options are enabled:
** _Enable large file (files > 2GB) support_
** _Enable WCHAR support_

In _Target packages_/_Debugging, profiling and benchmark_, enable
_lttng-modules_ and _lttng-tools_. In
_Target packages_/_Libraries_/_Other_, enable _lttng-libust_.

NOTE: If you need to trace <<java-application,Java applications>> on
Buildroot, you need to build and install LTTng-UST {revision}
<<building-from-source,from source>> and use the
`--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options.


[[oe-yocto]]
==== OpenEmbedded/Yocto

LTTng {revision} recipes are available in the
http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
layer of OpenEmbedded since February 8th, 2015 under the following names:

* `lttng-tools`
* `lttng-modules`
* `lttng-ust`

Using BitBake, the simplest way to include LTTng recipes in your
target image is to add them to `IMAGE_INSTALL_append` in
path:{conf/local.conf}:

----
IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
----

If you're using Hob, click _Edit image recipe_ once you have selected
a machine and an image recipe. Then, under the _All recipes_ tab, search
for `lttng` and include the three LTTng recipes.

NOTE: If you need to trace <<java-application,Java applications>> on
OpenEmbedded/Yocto, you need to build and install LTTng-UST {revision}
<<building-from-source,from source>> and use the
`--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options.


[[enterprise-distributions]]
=== Enterprise distributions (RHEL, SLES)

To install LTTng on enterprise Linux distributions
(such as RHEL and SLES), please see
http://packages.efficios.com/[EfficiOS Enterprise Packages].


[[building-from-source]]
=== Building from source

As <<installing-lttng,previously stated>>, LTTng is shipped as
three packages: LTTng-tools, LTTng-modules, and LTTng-UST. LTTng-tools
contains everything needed to control tracing sessions, while
LTTng-modules is only needed for Linux kernel tracing and LTTng-UST is
only needed for user space tracing.

The tarballs are available in the
http://lttng.org/download#build-from-source[Download section]
of the LTTng website.

Please refer to the path:{README.md} files provided by each package to
properly build and install them.

TIP: The aforementioned path:{README.md} files
are rendered as rich text when https://github.com/lttng[viewed on GitHub].


[[getting-started]]
== Getting started with LTTng

This is a small guide to get started quickly with LTTng kernel and user
space tracing. For a more thorough understanding of LTTng, and for
intermediate to advanced use cases, see
<<understanding-lttng,Understanding LTTng>>
and <<using-lttng,Using LTTng>>.

Before reading this guide, make sure LTTng
<<installing-lttng,is installed>>. LTTng-tools is required. Also install
LTTng-modules for
<<tracing-the-linux-kernel,tracing the Linux kernel>> and LTTng-UST
for
<<tracing-your-own-user-application,tracing your own user space applications>>.
Once your traces are written and complete, the
<<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
section of this chapter helps you analyze the recorded
tracepoint events.


[[tracing-the-linux-kernel]]
=== Tracing the Linux kernel

Make sure the LTTng-tools and LTTng-modules packages
<<installing-lttng,are installed>>.

Since you're about to trace the Linux kernel itself, let's look at the
available kernel events using the `lttng` tool, which has a
Git-like command line structure:

[role="term"]
----
lttng list --kernel
----

Before tracing, you need to create a session:

[role="term"]
----
sudo lttng create
----

TIP: You can avoid using `sudo` in the previous and following commands
if your user is a member of the <<lttng-sessiond,tracing group>>.

Let's now enable some events for this session:

[role="term"]
----
sudo lttng enable-event --kernel sched_switch,sched_process_fork
----

Or you might want to simply enable all available kernel events (beware
that trace files grow rapidly when doing this):

[role="term"]
----
sudo lttng enable-event --kernel --all
----

Start tracing:

[role="term"]
----
sudo lttng start
----

By default, traces are saved in
+\~/lttng-traces/__name__-__date__-__time__+,
where +__name__+ is the session name.

When you're done tracing:

[role="term"]
----
sudo lttng stop
sudo lttng destroy
----

Although `destroy` looks scary here, it doesn't actually destroy the
written trace files: it only destroys the tracing session.

What's next? Have a look at
<<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
to view and analyze the trace you just recorded.


[[tracing-your-own-user-application]]
=== Tracing your own user application

The previous section helped you create a trace out of Linux kernel
events. This section steps you through a simple example showing you how
to trace a _Hello world_ program written in C.

Make sure the LTTng-tools and LTTng-UST packages
<<installing-lttng,are installed>>.

Tracing is just like having `printf()` calls at specific locations of
your source code, except that LTTng is much faster and more flexible
than `printf()`. In the LTTng realm, **`tracepoint()`** is analogous to
`printf()`.

Unlike `printf()`, though, `tracepoint()` does not use a format string to
know the types of its arguments: the formats of all tracepoints must be
defined before using them. So before even writing our _Hello world_ program,
we need to define the format of our tracepoint. This is done by creating a
**tracepoint provider**, which consists of a tracepoint provider header
(`.h` file) and a tracepoint provider definition (`.c` file).

The tracepoint provider header contains some boilerplate as well as a
list of tracepoint definitions and other optional definition entries
which we skip for this quickstart. Each tracepoint is defined using the
`TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:

* a **provider name**, which is the "scope" or namespace of this
  tracepoint (this usually includes the company and project names)
* a **tracepoint name**
* a **list of arguments** for the eventual `tracepoint()` call, each
  item being:
** the argument C type
** the argument name
* a **list of fields**, which correspond to the actual fields of the
  recorded events for this tracepoint

Here's an example of a simple tracepoint provider header with two
arguments: an integer and a string:

[source,c]
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----

The exact syntax is well explained in the
<<c-application,C application>> instrumentation guide of the
<<using-lttng,Using LTTng>> chapter, as well as in
man:lttng-ust(3).

Save the above snippet as path:{hello-tp.h}.

Write the tracepoint provider definition as path:{hello-tp.c}:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "hello-tp.h"
----

Create the tracepoint provider:

[role="term"]
----
gcc -c -I. hello-tp.c
----

Now, by including path:{hello-tp.h} in your own application, you may use the
tracepoint defined above by properly referring to it when calling
`tracepoint()`:

[source,c]
----
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    int x;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call is only placed here for the purpose
     * of this demonstration, for pausing the application in order for
     * you to have time to list its events. It's not needed otherwise.
     */
    getchar();

    /*
     * A tracepoint() call. Arguments, as defined in hello-tp.h:
     *
     *     1st: provider name (always)
     *     2nd: tracepoint name (always)
     *     3rd: my_integer_arg (first user-defined argument)
     *     4th: my_string_arg (second user-defined argument)
     *
     * Notice the provider and tracepoint names are NOT strings;
     * they are in fact parts of variables created by macros in
     * hello-tp.h.
     */
    tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (x = 0; x < argc; ++x) {
        tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
    }

    puts("Quitting now!");

    tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

    return 0;
}
----

Save this as path:{hello.c}, next to path:{hello-tp.c}.

Notice path:{hello-tp.h}, the tracepoint provider header, is included
by path:{hello.c}.

You are now ready to compile the application with LTTng-UST support:

[role="term"]
----
gcc -c hello.c
gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
----

Here's the whole build process:

[role="img-100"]
.User space tracing's build process.
image::ust-flow.png[]

If you followed the
<<tracing-the-linux-kernel,Tracing the Linux kernel>> tutorial, the
following steps should look familiar.

First, run the application with a few arguments:

[role="term"]
----
./hello world and beyond
----

You should see

----
Hello, World!
Press Enter to continue...
----

Use the `lttng` tool to list all available user space events:

[role="term"]
----
lttng list --userspace
----

You should see the `hello_world:my_first_tracepoint` tracepoint listed
under the `./hello` process.

Create a tracing session:

[role="term"]
----
lttng create
----

Enable the `hello_world:my_first_tracepoint` tracepoint:

[role="term"]
----
lttng enable-event --userspace hello_world:my_first_tracepoint
----

Start tracing:

[role="term"]
----
lttng start
----

Go back to the running `hello` application and press Enter. All `tracepoint()`
calls are executed and the program finally exits.

Stop tracing:

[role="term"]
----
lttng stop
----

Done! You may use `lttng view` to list the recorded events. This command
starts http://diamon.org/babeltrace[`babeltrace`]
in the background, if it's installed:

[role="term"]
----
lttng view
----

should output something like:

----
[18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
[18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
[18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
[18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
[18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
[18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }
----
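
For quick one-off scripts, text lines like the samples above can also be
dissected with ordinary string tools. Here's a small Python sketch that
parses one such line; the regular expressions are tailored to these sample
lines and are not a general parser for babeltrace's text format:

```python
import re

# One line of `babeltrace` text output, copied from the sample above
line = ('[18:10:27.684338440] (+0.000033944) hostname '
        'hello_world:my_first_tracepoint: { cpu_id = 0 }, '
        '{ my_string_field = "./hello", my_integer_field = 0 }')

# Timestamp, delta, hostname, and event name at the start of the line
header = re.compile(r'\[(?P<ts>[\d:.]+)\] \(\+(?P<delta>[\d.?]+)\) '
                    r'(?P<host>\S+) (?P<event>\S+):')
m = header.match(line)

# Field values: `name = value` pairs inside the braces
fields = dict(re.findall(r'(\w+) = ("[^"]*"|\d+)', line))

print(m.group('event'))  # → hello_world:my_first_tracepoint
print(fields['cpu_id'])  # → 0
```

This kind of ad hoc parsing quickly reaches its limits; the Babeltrace
Python bindings presented in the next section expose the same information
as typed event objects instead.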

When you're done, you may destroy the tracing session, which does _not_
destroy the generated trace files, leaving them available for further
analysis:

[role="term"]
----
lttng destroy
----

The next section presents other ways to view and analyze your
LTTng traces.


[[viewing-and-analyzing-your-traces]]
=== Viewing and analyzing your traces

This section describes how to visualize the data gathered after tracing
the Linux kernel or a user space application.

Many ways exist to read LTTng traces:

* **`babeltrace`** is a command line utility which converts trace formats;
  it supports CTF, the format used by LTTng, as well as a basic
  text output which may be ++grep++ed. The `babeltrace` command is
  part of the
  http://diamon.org/babeltrace[Babeltrace] project.
* Babeltrace also includes **Python bindings** so that you may
  easily open and read an LTTng trace with your own script, benefiting
  from the power of Python.
* **http://tracecompass.org/[Trace Compass]**
  is an Eclipse plugin used to visualize and analyze various types of
  traces, including LTTng's. It also comes as a standalone application.

LTTng trace files are usually recorded in the dir:{~/lttng-traces} directory.
Let's now view the trace and perform a basic analysis using
`babeltrace`.

The simplest way to list all the recorded events of a trace is to pass its
path to `babeltrace` with no options:

[role="term"]
----
babeltrace ~/lttng-traces/my-session
----

`babeltrace` finds all traces recursively within the given path and
prints all their events, merging them in order of time.

Listing all the system calls of a Linux kernel trace with their arguments is
easy with `babeltrace` and `grep`:

[role="term"]
----
babeltrace ~/lttng-traces/my-kernel-session | grep sys_
----

Counting events is also straightforward:

[role="term"]
----
babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines
----

The text output of `babeltrace` is useful for isolating events by simple
matching using `grep` and similar utilities. However, more elaborate filters,
such as keeping only events with a field value falling within a specific
range, are not trivial to write using a shell. Moreover, reductions and even
the most basic computations involving multiple events are virtually
impossible to implement.

902 Fortunately, Babeltrace ships with Python 3 bindings which makes it
903 really easy to read the events of an LTTng trace sequentially and compute
904 the desired information.
905
906 Here's a simple example using the Babeltrace Python bindings. The following
907 script accepts an LTTng Linux kernel trace path as its first argument and
908 prints the short names of the top 5 running processes on CPU 0 during the
909 whole trace:
910
[source,python]
----
import sys
from collections import Counter
import babeltrace


def top5proc():
    if len(sys.argv) != 2:
        msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
        raise ValueError(msg)

    # a trace collection holds one or more traces
    col = babeltrace.TraceCollection()

    # add the trace provided by the user
    # (LTTng traces always have the 'ctf' format)
    if col.add_trace(sys.argv[1], 'ctf') is None:
        raise RuntimeError('Cannot add trace')

    # this counter dict will hold execution times:
    #
    #   task command name -> total execution time (ns)
    exec_times = Counter()

    # this holds the last `sched_switch` timestamp
    last_ts = None

    # iterate events
    for event in col.events:
        # keep only `sched_switch` events
        if event.name != 'sched_switch':
            continue

        # keep only events which happened on CPU 0
        if event['cpu_id'] != 0:
            continue

        # event timestamp
        cur_ts = event.timestamp

        if last_ts is None:
            # we start here
            last_ts = cur_ts

        # previous task command (short) name
        prev_comm = event['prev_comm']

        # initialize entry in our dict if not yet done
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # compute previous command execution time
        diff = cur_ts - last_ts

        # update execution time of this command
        exec_times[prev_comm] += diff

        # update last timestamp
        last_ts = cur_ts

    # display top 5
    for name, ns in exec_times.most_common(5):
        s = ns / 1000000000
        print('{:20}{} s'.format(name, s))


if __name__ == '__main__':
    top5proc()
----

Save this script as path:{top5proc.py} and run it with Python 3, providing the
path to an LTTng Linux kernel trace as the first argument:

[role="term"]
----
python3 top5proc.py ~/lttng-traces/my-session-.../kernel
----

Make sure the path you provide is the directory containing actual trace
files (`channel0_0`, `metadata`, and the rest): the `babeltrace` utility
recurses directories, but the Python bindings do not.
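Since the bindings do not recurse, a small helper can locate the actual
trace directories under a session output directory, relying on the fact
that each CTF trace directory contains a `metadata` file. This is only a
sketch using the standard library, not part of Babeltrace:

[source,python]
----
import os


def find_ctf_traces(root):
    """Return the directories under `root` which contain a `metadata`
    file: each one is a CTF trace that can be passed to
    TraceCollection.add_trace()."""
    traces = []

    for dirpath, dirnames, filenames in os.walk(root):
        if 'metadata' in filenames:
            traces.append(dirpath)

    return sorted(traces)
----

Each returned path could then be fed to `col.add_trace(path, 'ctf')` in
the script above.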

Here's an example of output:

----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
----

Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
weren't using the CPU that much when tracing, its first position in the list
makes sense.


[[understanding-lttng]]
== Understanding LTTng

If you're going to use LTTng in any serious way, it is fundamental that
you become familiar with its core concepts. Technical terms like
_tracing sessions_, _domains_, _channels_, and _events_ are used over
and over in the <<using-lttng,Using LTTng>> chapter,
and it is assumed that you understand what they mean when reading it.

LTTng, as you already know, is a _toolkit_. It would be wrong
to call it a simple _tool_ since it is composed of multiple interacting
components. This chapter also describes those components, providing details
about their respective roles and how they connect together to form
the current LTTng ecosystem.


[[core-concepts]]
=== Core concepts

This section explains the various elementary concepts a user has to deal
with when using LTTng. They are:

* <<tracing-session,tracing session>>
* <<domain,domain>>
* <<channel,channel>>
* <<event,event>>


[[tracing-session]]
==== Tracing session

A _tracing session_ is--like any session--a container of
state. Anything that is done when tracing using LTTng happens within the
scope of a tracing session. In this regard, it is analogous to a bank
website's session: you can't interact online with your bank account
unless you are logged in to a session, except for reading a few static
webpages (LTTng, too, can report some static information that does not
need a created tracing session).

A tracing session holds the following attributes and objects (some of
which are described in the following sections):

* a name
* the tracing state (tracing started or stopped)
* the trace data output path/URL (local path or sent over the network)
* a mode (normal, snapshot, or live)
* the snapshot output paths/URLs (if applicable)
* for each <<domain,domain>>, a list of <<channel,channels>>
* for each channel:
** a name
** the channel state (enabled or disabled)
** its parameters (event loss mode, sub-buffer size and count,
timer periods, output type, trace file size and count, and the rest)
** a list of added context information
** a list of <<event,events>>
* for each event:
** its state (enabled or disabled)
** a list of instrumentation points (tracepoints, system calls,
dynamic probes, other types of probes)
** associated log levels
** a filter expression

All this information is completely isolated between tracing sessions.
As you can see in the list above, even the tracing state
is a per-tracing session attribute, so that you may trace your target
system/application in a given tracing session with a specific
configuration while another one stays inactive.

[role="img-100"]
.A _tracing session_ is a container of domains, channels, and events.
image::concepts.png[]

Conceptually, a tracing session is a per-user object; the
<<plumbing,Plumbing>> section shows how this is actually
implemented. Any user may create as many concurrent tracing sessions
as desired.

[role="img-100"]
.Each user may create as many tracing sessions as desired.
image::many-sessions.png[]

The trace data generated in a tracing session may be either saved
to disk, sent over the network, or not saved at all (in which case
snapshots may still be saved to disk or sent to a remote machine).
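To make this hierarchy concrete, here is a hypothetical sketch of these
attributes as plain Python data classes. The class and field names are
invented for illustration only; this is not how LTTng represents tracing
sessions internally:

[source,python]
----
from dataclasses import dataclass, field


@dataclass
class Event:
    name: str
    enabled: bool = True
    filter_expr: str = None      # optional filter expression


@dataclass
class Channel:
    name: str
    enabled: bool = True
    events: list = field(default_factory=list)


@dataclass
class TracingSession:
    name: str
    tracing_active: bool = False  # tracing started or stopped
    mode: str = 'normal'          # normal, snapshot, or live
    output: str = None            # local path or network URL
    channels: dict = field(default_factory=dict)  # domain -> [Channel]


# the tracing state is per-session: one session may be active
# while another one stays inactive
a = TracingSession('audit', output='~/lttng-traces/audit')
b = TracingSession('perf')
a.tracing_active = True
print(a.tracing_active, b.tracing_active)   # True False
----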


[[domain]]
==== Domain

A tracing _domain_ is the official term the LTTng project uses to
designate a tracer category.

There are currently four known domains:

* Linux kernel
* user space
* `java.util.logging` (JUL)
* log4j

Different tracers expose common features in their own interfaces, but,
from a user's perspective, you still need to target a specific type of
tracer to perform some actions. For example, since both the kernel and user
space tracers support named tracepoints (probes manually inserted in
source code), you need to specify which one is concerned when enabling
an event, because both domains could have existing events with the same
name.

Some features are not available in all domains. Filtering enabled
events using custom expressions, for example, is currently not
supported in the kernel domain, but support could be added in the
future.


[[channel]]
==== Channel

A _channel_ is a set of events with specific parameters and potential
added context information. Channels have unique names per domain within
a tracing session. A given event is always registered to at least one
channel; having the same event enabled in two channels makes
this event be recorded twice every time it occurs.

Channels may be individually enabled or disabled. Events occurring in
a disabled channel are never recorded.

The fundamental role of a channel is to keep a shared ring buffer, where
events are eventually recorded by the tracer and consumed by a consumer
daemon. This internal ring buffer is divided into many sub-buffers of
equal size.

Channels, when created, may be fine-tuned thanks to a few parameters,
many of them related to sub-buffers. The following subsections explain
what those parameters are and in which situations you should manually
adjust them.


[[channel-overwrite-mode-vs-discard-mode]]
===== Overwrite and discard event loss modes

As previously mentioned, a channel's ring buffer is divided into many
equally sized sub-buffers.

As events occur, they are serialized as trace data into a specific
sub-buffer (yellow arc in the following animation) until it is full:
when this happens, the sub-buffer is marked as consumable (red) and
another, _empty_ (white) sub-buffer starts receiving the following
events. The marked sub-buffer is eventually consumed by a consumer
daemon (returns to white).

[NOTE]
[role="docsvg-channel-subbuf-anim"]
====
{note-no-anim}
====

In an ideal world, sub-buffers are consumed faster than they are filled,
as is the case above. In the real world, however, all sub-buffers could
be full at some point, leaving no space to record the following events. By
design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer
exists, losing events is acceptable when the alternative would be to
cause substantial delays in the instrumented application's execution.
LTTng privileges performance over integrity, aiming at perturbing the
traced system as little as possible in order to make tracing of subtle
race conditions and rare interrupt cascades possible.

When it comes to losing events because no empty sub-buffer is available,
the channel's _event loss mode_ determines what to do amongst:

Discard::
Drop the newest events until a sub-buffer is released.

Overwrite::
Clear the sub-buffer containing the oldest recorded
events and start recording the newest events there. This mode is
sometimes called _flight recorder mode_ because it behaves like a
flight recorder: always keep a fixed amount of the latest data.

Which mechanism you should choose depends on your context: should the
newest or the oldest events in the ring buffer be prioritized?

Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon
as a new event doesn't find an empty sub-buffer, whereas in discard
mode, only the event that doesn't fit is discarded.

Also note that a count of lost events is incremented and saved in
the trace itself when an event is lost in discard mode, whereas no
information is kept when a sub-buffer gets overwritten before being
committed.

There are known ways to decrease your probability of losing events. The
next section shows how tuning the sub-buffer count and size can be
used to virtually stop losing events.
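The difference between the two modes can be illustrated with a toy
Python model. This is only a sketch under simplifying assumptions (no
consumer running, no packet headers, the `fill_ring` helper is invented
for illustration), not LTTng's actual ring buffer implementation:

[source,python]
----
from collections import deque


def fill_ring(events, subbuf_count, subbuf_size, mode):
    """Toy model of one channel ring buffer while no consumer runs.

    Each sub-buffer holds `subbuf_size` events; `subbuf_count` >= 2.
    Returns (kept_events, lost_event_count).
    """
    full = deque()  # completed sub-buffers, oldest first
    cur = []        # sub-buffer currently being written
    lost = 0

    for event in events:
        if len(cur) == subbuf_size:            # current sub-buffer full
            if len(full) == subbuf_count - 1:  # no empty sub-buffer left
                if mode == 'discard':
                    lost += 1                  # drop the newest event
                    continue
                # overwrite: clear the oldest sub-buffer and reuse it
                lost += len(full.popleft())
            full.append(cur)
            cur = []
        cur.append(event)

    kept = [ev for buf in full for ev in buf] + cur
    return kept, lost


# 8 events, ring of 2 sub-buffers x 2 events: both modes lose 4 events,
# but discard keeps the oldest ones and overwrite keeps the newest ones
print(fill_ring(list(range(8)), 2, 2, 'discard'))    # ([0, 1, 2, 3], 4)
print(fill_ring(list(range(8)), 2, 2, 'overwrite'))  # ([4, 5, 6, 7], 4)
----

The model also mirrors the asymmetry described above: discard counts
individual dropped events, while overwrite loses a whole sub-buffer at
once.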


[[channel-subbuf-size-vs-subbuf-count]]
===== Sub-buffer count and size

For each channel, an LTTng user may set its number of sub-buffers and
their size.

Note that the tracer introduces a noticeable CPU overhead when
switching sub-buffers (marking a full one as consumable and switching
to an empty one for the following events to be recorded). Knowing this,
the following list presents a few practical situations along with how
to configure sub-buffers for them:

High event throughput::
In general, prefer bigger sub-buffers to
lower the risk of losing events. Having bigger sub-buffers
also ensures a lower sub-buffer switching frequency. The number of
sub-buffers is only meaningful if the channel is enabled in
overwrite mode: in this case, if a sub-buffer overwrite happens, the
other sub-buffers are left unaltered.

Low event throughput::
In general, prefer smaller sub-buffers
since the risk of losing events is already low. Since events
occur less frequently, the sub-buffer switching frequency should
remain low and thus the tracer's overhead should not be a problem.

Low memory system::
If your target system has a low memory
limit, prefer fewer sub-buffers first, then smaller ones. Even if the
system is limited in memory, you want to keep the sub-buffers as
big as possible to avoid a high sub-buffer switching frequency.

You should know that LTTng uses CTF as its trace format, which means
event data is very compact. For example, the average LTTng Linux kernel
event weighs about 32{nbsp}bytes. A sub-buffer size of 1{nbsp}MiB is
thus considered big.

The previous situations highlight the major trade-off between a few big
sub-buffers and more, smaller sub-buffers: sub-buffer switching
frequency vs. how much data is lost in overwrite mode. Assuming a
constant event throughput and using the overwrite mode, the two
following configurations have the same ring buffer total size:

[NOTE]
[role="docsvg-channel-subbuf-size-vs-count-anim"]
====
{note-no-anim}
====

* **2 sub-buffers of 4{nbsp}MiB each** lead to a very low sub-buffer
switching frequency, but if a sub-buffer overwrite happens, half of
the recorded events so far (4{nbsp}MiB) are definitely lost.
* **8 sub-buffers of 1{nbsp}MiB each** lead to 4{nbsp}times the
switching overhead of the previous configuration, but if a sub-buffer
overwrite happens, only one eighth of the events recorded so far are
definitely lost.

In discard mode, the sub-buffer count parameter is pointless: use two
sub-buffers and set their size according to the requirements of your
situation.
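As a back-of-the-envelope check of these figures, using the ~32-byte
average kernel event size quoted above (the `overwrite_loss` helper is
invented for this arithmetic sketch):

[source,python]
----
EVENT_SIZE = 32        # bytes; average LTTng Linux kernel event (CTF)
MIB = 1024 * 1024


def overwrite_loss(subbuf_count, subbuf_size):
    """Events held per sub-buffer, and the fraction of the ring
    buffer lost when one sub-buffer is overwritten."""
    events_per_subbuf = subbuf_size // EVENT_SIZE
    lost_fraction = 1 / subbuf_count
    return events_per_subbuf, lost_fraction


# 2 x 4 MiB: very few switches, but one overwrite loses half the ring
print(overwrite_loss(2, 4 * MIB))  # (131072, 0.5)
# 8 x 1 MiB: 4x the switching overhead, one overwrite loses one eighth
print(overwrite_loss(8, 1 * MIB))  # (32768, 0.125)
----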


[[channel-switch-timer]]
===== Switch timer

The _switch timer_ period is another important configurable feature of
channels to ensure periodic sub-buffer flushing.

When the _switch timer_ fires, a sub-buffer switch happens. This timer
may be used to ensure that event data is consumed and committed to
trace files periodically in case of a low event throughput:

[NOTE]
[role="docsvg-channel-switch-timer"]
====
{note-no-anim}
====

It's also convenient when big sub-buffers are used to cope with
sporadic high event throughput, even if the throughput is normally
lower.


[[channel-buffering-schemes]]
===== Buffering schemes

In the user space tracing domain, two **buffering schemes** are
available when creating a channel:

Per-PID buffering::
Keep one ring buffer per process.

Per-UID buffering::
Keep one ring buffer for all processes of a single user.

The per-PID buffering scheme consumes more memory than the per-UID
option if more than one process is instrumented for LTTng-UST. However,
per-PID buffering ensures that one process having a high event
throughput won't fill all the shared sub-buffers, only its own.

The Linux kernel tracing domain only has one available buffering scheme,
which is to use a single ring buffer for the whole system.


[[event]]
==== Event

An _event_, in LTTng's realm, is a term often used metonymically,
having multiple definitions depending on the context:

. When tracing, an event is a _point in space-time_. Space, in a
tracing context, is the set of all executable positions of a
compiled application by a logical processor. When a program is
executed by a processor and some instrumentation point, or
_probe_, is encountered, an event occurs. This event is accompanied
by some contextual payload (values of specific variables at this
point of execution) which may or may not be recorded.
. In the context of a recorded trace file, the term _event_ implies
a _recorded event_.
. When configuring a tracing session, _enabled events_ refer to
specific rules which could lead to the transfer of actual
occurring events (1) to recorded events (2).

The whole <<core-concepts,Core concepts>> section focuses on the
third definition. An event is always registered to _one or more_
channels and may be enabled or disabled at will per channel. A disabled
event never leads to a recorded event, even if its channel is enabled.

An event (3) is enabled with a few conditions that must _all_ be met
when an event (1) happens in order to generate a recorded event (2):

. A _probe_ or group of probes in the traced application must be
executed.
. **Optionally**, the probe must have a log level matching a
log level range specified when enabling the event.
. **Optionally**, the occurring event must satisfy a custom
expression, or _filter_, specified when enabling the event.
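These three conditions can be modeled as predicates applied to each
occurring event. The following is a hypothetical sketch: the
`should_record` function, the rule dictionary, and its field names are
all invented for illustration and are not part of the LTTng API:

[source,python]
----
def should_record(occurring, rule):
    """Return True if the occurring event satisfies all three
    conditions of the (hypothetical) enabled event rule."""
    # 1. the occurring event's probe must be one the rule targets
    if occurring['name'] not in rule['probes']:
        return False

    # 2. optionally, the log level must fall within the rule's range
    lo, hi = rule.get('loglevel_range', (None, None))
    if lo is not None and not (lo <= occurring['loglevel'] <= hi):
        return False

    # 3. optionally, a filter expression over the event's fields
    flt = rule.get('filter')
    if flt is not None and not flt(occurring['fields']):
        return False

    return True


rule = {
    'probes': {'my_app:my_tracepoint'},
    'loglevel_range': (0, 6),
    'filter': lambda fields: fields.get('size', 0) > 1024,
}
ev = {'name': 'my_app:my_tracepoint', 'loglevel': 4,
      'fields': {'size': 4096}}
print(should_record(ev, rule))   # True
----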


[[plumbing]]
=== Plumbing

The previous section described the concepts at the heart of LTTng.
This section summarizes LTTng's implementation: how those objects are
managed by different applications and libraries working together to
form the toolkit.


[[plumbing-overview]]
==== Overview

As <<installing-lttng,mentioned previously>>, the whole LTTng suite
is made of the LTTng-tools, LTTng-UST, and
LTTng-modules packages. Together, they provide different daemons, libraries,
kernel modules, and command line interfaces. The following tree shows
which usable component belongs to which package:

* **LTTng-tools**:
** session daemon (`lttng-sessiond`)
** consumer daemon (`lttng-consumerd`)
** relay daemon (`lttng-relayd`)
** tracing control library (`liblttng-ctl`)
** tracing control command line tool (`lttng`)
* **LTTng-UST**:
** user space tracing library (`liblttng-ust`) and its headers
** preloadable user space tracing helpers
(`liblttng-ust-libc-wrapper`, `liblttng-ust-pthread-wrapper`,
`liblttng-ust-cyg-profile`, `liblttng-ust-cyg-profile-fast`
and `liblttng-ust-dl`)
** user space tracepoint code generator command line tool
(`lttng-gen-tp`)
** `java.util.logging`/log4j tracepoint providers
(`liblttng-ust-jul-jni` and `liblttng-ust-log4j-jni`) and JAR
file (path:{liblttng-ust-agent.jar})
* **LTTng-modules**:
** LTTng Linux kernel tracer module
** tracing ring buffer kernel modules
** many LTTng probe kernel modules

The following diagram shows how the most important LTTng components
interact. Plain purple arrows represent trace data paths while dashed
red arrows indicate control communications. The LTTng relay daemon is
shown running on a remote system, although it could as well run on the
target (monitored) system.

[role="img-100"]
.Control and data paths between LTTng components.
image::plumbing-26.png[]

Each component is described in the following subsections.


[[lttng-sessiond]]
==== Session daemon

At the heart of LTTng's plumbing is the _session daemon_, often called
by its command name, `lttng-sessiond`.

The session daemon is responsible for managing tracing sessions and
what they logically contain (channel properties, enabled/disabled
events, and the rest). By communicating locally with instrumented
applications (using LTTng-UST) and with the LTTng Linux kernel modules
(LTTng-modules), it oversees all tracing activities.

One of the many things that `lttng-sessiond` does is to keep
track of the available event types. User space applications and
libraries actively connect and register to the session daemon when they
start. By contrast, `lttng-sessiond` seeks out and loads the appropriate
LTTng kernel modules as part of its own initialization. Kernel event
types are thus _pulled_ by `lttng-sessiond`, whereas user space event
types are _pushed_ to it by the various user space tracepoint providers.

Using a specific inter-process communication protocol with Linux kernel
and user space tracers, the session daemon can send channel information
so that they are initialized, enable/disable specific probes based on
the events enabled/disabled by the user, send event filter information to
LTTng tracers so that filtering actually happens at the tracer site,
start/stop tracing a specific application or the Linux kernel, and more.

The session daemon is not useful without some user controlling it,
because it's only a sophisticated control interchange and thus
doesn't make any decisions on its own. `lttng-sessiond` opens a local
socket for controlling it, although the preferred way to control it is
by using `liblttng-ctl`, an installed C library hiding the communication
protocol behind an easy-to-use API. The `lttng` tool makes use of
`liblttng-ctl` to implement a user-friendly command line interface.

`lttng-sessiond` does not receive any trace data from instrumented
applications; the _consumer daemons_ are the programs responsible for
collecting trace data using shared ring buffers. However, the session
daemon is the one that must spawn a consumer daemon and establish
a control communication with it.

Session daemons run on a per-user basis. Knowing this, multiple
instances of `lttng-sessiond` may run simultaneously, each belonging
to a different user and each operating independently of the others.
Only `root`'s session daemon, however, may control the LTTng kernel modules
(that is, the kernel tracer). With that in mind, users without root
access on the target system cannot trace the system's kernel, but
they can still trace their own instrumented applications.

It has to be noted that, although only `root`'s session daemon may
control the kernel tracer, the `lttng-sessiond` command has a `--group`
option which may be used to specify the name of a special user group
allowed to communicate with `root`'s session daemon and thus record
kernel traces. By default, this group is named `tracing`.

If no session daemon is currently running, the `lttng` tool, by default,
automatically starts one. `lttng-sessiond` may also be started manually:

[role="term"]
----
lttng-sessiond
----

This starts the session daemon in the foreground. Use

[role="term"]
----
lttng-sessiond --daemonize
----

to start it as a true daemon.

To kill the current user's session daemon, `pkill` may be used:

[role="term"]
----
pkill lttng-sessiond
----

The default `SIGTERM` signal terminates it cleanly.

Several other options are available and described in
man:lttng-sessiond(8) or by running `lttng-sessiond --help`.


[[lttng-consumerd]]
==== Consumer daemon

The _consumer daemon_, or `lttng-consumerd`, is a program sharing
ring buffers with user applications or the LTTng kernel modules to
collect trace data and output it somewhere (to disk, or over
the network to an LTTng relay daemon).

Consumer daemons are created by a session daemon as soon as events are
enabled within a tracing session, well before tracing is activated
for the latter. Entirely managed by session daemons,
consumer daemons survive session destruction to be reused later,
should a new tracing session be created. Consumer daemons are always
owned by the same user as their session daemon. When its owner session
daemon is killed, the consumer daemon also exits, because
the consumer daemon is always the child process of a session daemon.
Consumer daemons should never be started manually. For this reason,
they are not installed in one of the usual locations listed in the
`PATH` environment variable. `lttng-sessiond` has, however, a
bunch of options (see man:lttng-sessiond(8)) to
specify custom consumer daemon paths if, for some reason, a consumer
daemon other than the default installed one is needed.

There are up to two running consumer daemons per user, whereas only one
session daemon may run per user. This is because each process has an
independent bitness: if the target system runs a mixture of 32-bit and
64-bit processes, it is more efficient to have separate corresponding
32-bit and 64-bit consumer daemons. The `root` user is an exception: it
may have up to _three_ running consumer daemons: 32-bit and 64-bit
instances for its user space applications and one more reserved for
collecting kernel trace data.

As new tracing domains are added to LTTng, the development community's
intent is to minimize the need for additional consumer daemon instances
dedicated to them. For instance, the `java.util.logging` (JUL) domain
events are in fact mapped to the user space domain, thus tracing this
particular domain is handled by existing user space domain consumer
daemons.


[[lttng-relayd]]
==== Relay daemon

When a tracing session is configured to send its trace data over the
network, an LTTng _relay daemon_ must be used at the other end to
receive trace packets and serialize them to trace files. This setup
makes it possible to trace a target system without ever committing trace
data to its local storage, a feature which is useful for embedded
systems, amongst others. The command implementing the relay daemon
is `lttng-relayd`.

The basic use case of `lttng-relayd` is to transfer trace data received
over the network to trace files on the local file system. The relay
daemon must listen on two TCP ports to achieve this: one control port,
used by the target session daemon, and one data port, used by the
target consumer daemon. The relay and session daemons agree on common
default ports when custom ones are not specified.

Since the communication transport protocol for both ports is standard
TCP, the relay daemon may be started either remotely or locally (on the
target system).

While two instances of consumer daemons (32-bit and 64-bit) may run
concurrently for a given user, `lttng-relayd` only needs to match the
bitness of its host operating system.

The other important feature of LTTng's relay daemon is its support of
_LTTng live_. LTTng live is an application protocol to view events as
they arrive. The relay daemon still records events in trace files,
but a _tee_ allows you to inspect incoming events.

[role="img-100"]
.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
image::lttng-live.png[]

Using LTTng live locally thus requires running a local relay daemon.


[[liblttng-ctl-lttng]]
==== [[lttng-cli]]Control library and command line interface

The LTTng control library, `liblttng-ctl`, can be used to communicate
with the session daemon using a C API that hides the underlying
protocol's details. `liblttng-ctl` is part of LTTng-tools.

`liblttng-ctl` may be used by including its "master" header:

[source,c]
----
#include <lttng/lttng.h>
----

Some objects are referred to by name (C string), such as tracing sessions,
but most of them require creating a handle first using
`lttng_create_handle()`. The best available developer documentation for
`liblttng-ctl` is, for the moment, its installed header files:
every function and structure is thoroughly documented there.

The `lttng` program is the _de facto_ standard user interface to
control LTTng tracing sessions. `lttng` uses `liblttng-ctl` to
communicate with session daemons behind the scenes.
Its man page, man:lttng(1), is exhaustive, as well as its command
line help (+lttng _cmd_ --help+, where +_cmd_+ is the command name).

The <<controlling-tracing,Controlling tracing>> section is a feature
tour of the `lttng` tool.


[[lttng-ust]]
==== User space tracing library

The user space tracing part of LTTng is possible thanks to the user
space tracing library, `liblttng-ust`, which is part of the LTTng-UST
package.

`liblttng-ust` provides header files containing macros used to define
tracepoints and create tracepoint providers, as well as a shared object
that must be linked to individual applications to connect to and
communicate with a session daemon and a consumer daemon as soon as the
application starts.

The exact mechanism by which an application is registered to the
session daemon is beyond the scope of this documentation. The only thing
you need to know is that, since the library constructor does this job
automatically, tracepoints may be safely inserted anywhere in the source
code without prior manual initialization of `liblttng-ust`.

The `liblttng-ust`-session daemon collaboration also provides an
interesting feature: user space events may be enabled _before_
applications actually start. By doing this and starting tracing before
launching the instrumented application, you make sure that even the
earliest occurring events can be recorded.

The <<c-application,C application>> instrumenting guide of the
<<using-lttng,Using LTTng>> chapter focuses on using `liblttng-ust`:
instrumenting, building/linking, and running a user application.


[[lttng-modules]]
==== LTTng kernel modules

The LTTng Linux kernel modules provide everything needed to trace the
Linux kernel: various probes, a ring buffer implementation for a
consumer daemon to read trace data, and the tracer itself.

Only in exceptional circumstances should you ever need to load the
LTTng kernel modules manually: it is normally the responsibility of
`root`'s session daemon to do so. Even if you were to develop your
own LTTng probe module--for tracing a custom kernel or some kernel
module (this topic is covered in the
<<instrumenting-linux-kernel,Linux kernel>> instrumenting guide of
the <<using-lttng,Using LTTng>> chapter)&#8212;you
should use the `--extra-kmod-probes` option of the session daemon to
append your probe to the default list. The session and consumer daemons
of regular users do not interact with the LTTng kernel modules at all.

LTTng kernel modules are installed, by default, in
+/usr/lib/modules/_release_/extra+, where +_release_+ is the
kernel release (see `uname --kernel-release`).
1639
1640
1641 [[using-lttng]]
1642 == Using LTTng
1643
1644 Using LTTng involves two main activities: **instrumenting** and
1645 **controlling tracing**.
1646
_<<instrumenting,Instrumenting>>_ is the process of inserting probes
into some source code. It can be done manually, by writing tracepoint
calls at specific locations in the source code of the program to trace,
or more automatically using dynamic probes (address in assembled code,
symbol name, function entry/return, and others).

Note that, as an LTTng user, you may not have to worry about the
instrumentation process at all. Indeed, you may want to trace a
program which is already instrumented. As an example, the Linux kernel is
thoroughly instrumented, which is why you can trace it without adding
probes yourself.

_<<controlling-tracing,Controlling tracing>>_ is everything
that can be done by the LTTng session daemon, which is controlled using
`liblttng-ctl` or its command line utility, `lttng`: creating tracing
sessions, listing tracing sessions and events, enabling/disabling
events, starting/stopping the tracers, taking snapshots, amongst many
other commands.

This chapter is a complete user guide of both activities,
with common use cases of LTTng exposed throughout the text. It is
assumed that you are familiar with LTTng's concepts (events, channels,
domains, tracing sessions) and that you understand the roles of its
components (daemons, libraries, command line tools); if not, we invite
you to read the <<understanding-lttng,Understanding LTTng>> chapter
before you begin reading this one.

If you're new to LTTng, we suggest that you start with the short
<<getting-started,Getting started>> guide first, then come
back here to broaden your knowledge.

If you're only interested in tracing the Linux kernel with its current
instrumentation, you may skip the
<<instrumenting,Instrumenting>> section.


[[instrumenting]]
=== Instrumenting

There are many examples of tracing and monitoring in our everyday life.
You have access to real-time and historical weather reports and forecasts
thanks to weather stations installed around the country. You know your
possibly hospitalized friends' and family's hearts are safe thanks to
electrocardiography. You make sure not to drive your car too fast
and have enough fuel to reach your destination thanks to gauges visible
on your dashboard.

All the previous examples have something in common: they rely on
**probes**. Without electrodes attached to the surface of a body's
skin, cardiac monitoring would be futile.

LTTng, as a tracer, is no different from the real life examples above.
If you're about to trace a software system or, in other words, record its
execution history, you had better have probes in the subject you're
tracing: the actual software. Various ways have been developed to do this.
The most straightforward one is to manually place probes, called
_tracepoints_, in the software's source code. The Linux kernel tracing
domain also allows probes to be added dynamically.

If you're only interested in tracing the Linux kernel, it may very well
be that your tracing needs are already appropriately covered by LTTng's
built-in Linux kernel tracepoints and other probes. Or you may be in
possession of a user space application which has already been
instrumented. In such cases, the work resides entirely in the design
and execution of tracing sessions, allowing you to jump to
<<controlling-tracing,Controlling tracing>> right now.

This chapter focuses on the following use cases of instrumentation:

* <<c-application,C>> and <<cxx-application,$$C++$$>> applications
* <<prebuilt-ust-helpers,prebuilt user space tracing helpers>>
* <<java-application,Java application>>
* <<instrumenting-linux-kernel,Linux kernel>> module or the
  kernel itself
* the <<proc-lttng-logger-abi,path:{/proc/lttng-logger} ABI>>

Some advanced techniques are also presented at the very end of this
chapter.


[[c-application]]
==== C application

Instrumenting a C (or $$C++$$) application, be it an executable program
or a library, implies using LTTng-UST, the
user space tracing component of LTTng. For C/$$C++$$ applications, the
LTTng-UST package includes a dynamically loaded library
(`liblttng-ust`), C headers and the `lttng-gen-tp` command line utility.

Since C and $$C++$$ are the base languages of virtually all other
programming languages
(Java virtual machine, Python, Perl, PHP and Node.js interpreters, to
name a few), implementing user space tracing for an unsupported language
is just a matter of using the LTTng-UST C API at the right places.

The usual work flow to instrument a user space C application with
LTTng-UST is:

. Define tracepoints (actual probes)
. Write tracepoint providers
. Insert tracepoints into target source code
. Package (build) tracepoint providers
. Build user application and link it with tracepoint providers

The steps above are discussed in greater detail in the following
subsections.


[[tracepoint-provider]]
===== Tracepoint provider

Before jumping into defining tracepoints and inserting
them into the application source code, you must understand what a
_tracepoint provider_ is.

For the sake of this guide, consider the following two files:

[source,c]
.path:{tp.h}
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_provider

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    my_provider,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

TRACEPOINT_EVENT(
    my_provider,
    my_other_tracepoint,
    TP_ARGS(
        int, my_int
    ),
    TP_FIELDS(
        ctf_integer(int, some_field, my_int)
    )
)

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>
----

[source,c]
.path:{tp.c}
----
#define TRACEPOINT_CREATE_PROBES

#include "tp.h"
----

The two files above define a _tracepoint provider_. A tracepoint
provider is a sort of namespace for _tracepoint definitions_. Tracepoint
definitions are written above with the `TRACEPOINT_EVENT()` macro, and allow
matching `tracepoint()` calls to be inserted
into the user application's C source code (we explore this in a
later section).

Many tracepoint definitions may be part of the same tracepoint provider
and many tracepoint providers may coexist in a user space application. A
tracepoint provider is packaged either:

* directly into an existing user application's C source file
* as an object file
* as a static library
* as a shared library

The two files above, path:{tp.h} and path:{tp.c}, show a typical template for
writing a tracepoint provider. LTTng-UST was designed so that two
tracepoint providers should not be defined in the same header file.

We will now go through the various parts of the above files and
give them a meaning. As you may have noticed, the LTTng-UST API for
C/$$C++$$ applications is some preprocessor sorcery. The LTTng-UST macros
used in your application and those in the LTTng-UST headers are
combined to produce the actual source code needed to make tracing possible
using LTTng.

Let's start with the header file, path:{tp.h}. It begins with

[source,c]
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_provider
----

`TRACEPOINT_PROVIDER` defines the name of the provider to which the
following tracepoint definitions belong. It is used internally by
LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
could have been defined by another header file also included by the same
C source file, the best practice is to undefine it first.

NOTE: Names in LTTng-UST follow the C
_identifier_ syntax (starting with a letter and containing either
letters, numbers or underscores); they are _not_ C strings
(not surrounded by double quotes). This is because LTTng-UST macros
use those identifier-like strings to create symbols (named types and
variables).

The tracepoint provider is a group of tracepoint definitions; its chosen
name should reflect this. A hierarchy like Java packages is recommended,
using underscores instead of dots, for example,
`org_company_project_component`.

Next is `TRACEPOINT_INCLUDE`:

[source,c]
----
#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"
----

This little bit of introspection is needed by LTTng-UST to include
your header at various predefined places.

The include guard follows:

[source,c]
----
#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H
----

Add these preprocessor conditionals to ensure that the tracepoint event
generation can include this file more than once.

The `TRACEPOINT_EVENT()` macro is defined in an LTTng-UST header file which
must be included:

[source,c]
----
#include <lttng/tracepoint.h>
----

This also allows the application to use the `tracepoint()` macro.

Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
actual tracepoint definitions. We skip this for the moment and
come back to how to use `TRACEPOINT_EVENT()`
<<defining-tracepoints,in a later section>>. Just pay attention to
the first argument: it's always the name of the tracepoint provider
being defined in this header file.

End of include guard:

[source,c]
----
#endif /* _TP_H */
----

Finally, include `<lttng/tracepoint-event.h>` to expand the macros:

[source,c]
----
#include <lttng/tracepoint-event.h>
----

That's it for path:{tp.h}. Of course, this is only a header file; it must be
included in some C source file to actually use it. This is the job of
path:{tp.c}:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES

#include "tp.h"
----

When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in path:{tp.h},
which is included just after, actually create the source code for
LTTng-UST probes (global data structures and functions) out of your
tracepoint definitions. How exactly this is done is out of this text's scope.
`TRACEPOINT_CREATE_PROBES` is discussed further in
<<building-tracepoint-providers-and-user-application,Building/linking
tracepoint providers and the user application>>.

You could include other header files like path:{tp.h} here to create the probes
of different tracepoint providers, for example:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES

#include "tp1.h"
#include "tp2.h"
----

The rule is: probes of a given tracepoint provider
must be created in exactly one source file. This source file could be one
of your project's; it doesn't have to be on its own like
path:{tp.c}, although
<<building-tracepoint-providers-and-user-application,a later section>>
shows that doing so allows packaging the tracepoint providers
independently and keeping them out of your application, also making it
possible to reuse them between projects.

The following sections explain how to define tracepoints, how to use the
`tracepoint()` macro to instrument your user space C application and how
to build/link tracepoint providers and your application with LTTng-UST
support.


[[lttng-gen-tp]]
===== Using `lttng-gen-tp`

LTTng-UST ships with `lttng-gen-tp`, a handy command line utility for
generating most of the boilerplate discussed above. It takes a _template file_,
with a name usually ending with the `.tp` extension, containing only
tracepoint definitions, and outputs a tracepoint provider (either a C
source file or a precompiled object file) with its header file.

`lttng-gen-tp` should suffice in <<static-linking,static linking>>
situations. When using it, write a template file containing a list of
`TRACEPOINT_EVENT()` macro calls. The tool finds the provider names
used and generates the appropriate files, which are going to look a lot
like path:{tp.h} and path:{tp.c} above.

Just call `lttng-gen-tp` like this:

[role="term"]
----
lttng-gen-tp my-template.tp
----

path:{my-template.c}, path:{my-template.o} and path:{my-template.h}
are created in the same directory.

You may specify custom C flags passed to the compiler invoked by
`lttng-gen-tp` using the `CFLAGS` environment variable:

[role="term"]
----
CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp
----
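
You may also select which files `lttng-gen-tp` generates, and their names,
by repeating its `-o` option (check man:lttng-gen-tp(1) for the exact
behavior of your version). For example, to generate only the header and
the C source file:

[role="term"]
----
lttng-gen-tp my-template.tp -o tp.h -o tp.c
----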

For more information on `lttng-gen-tp`, see man:lttng-gen-tp(1).


[[defining-tracepoints]]
===== Defining tracepoints

As written in <<tracepoint-provider,Tracepoint provider>>,
tracepoints are defined using the
`TRACEPOINT_EVENT()` macro. Each tracepoint, when called using the
`tracepoint()` macro in the actual application's source code, generates
a specific event type with its own fields.

Let's have another look at the example above, with a few added comments:

[source,c]
----
TRACEPOINT_EVENT(
    /* tracepoint provider name */
    my_provider,

    /* tracepoint/event name */
    my_first_tracepoint,

    /* list of tracepoint arguments */
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),

    /* list of fields of eventual event */
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)
----

The tracepoint provider name must match the name of the tracepoint
provider in which this tracepoint is defined
(see <<tracepoint-provider,Tracepoint provider>>). In other words,
always use the same string as the value of `TRACEPOINT_PROVIDER` above.

The tracepoint name becomes the event name once events are recorded
by the LTTng-UST tracer. It must follow the tracepoint provider name
syntax: start with a letter and contain either letters, numbers or
underscores. Two tracepoints under the same provider cannot have the
same name. In other words, you cannot overload a tracepoint like you
would overload functions and methods in $$C++$$/Java.

NOTE: The concatenation of the tracepoint
provider name and the tracepoint name cannot exceed 254 characters. If
it does, the instrumented application compiles and runs, but LTTng
issues multiple warnings and you could experience serious problems.

The list of tracepoint arguments gives this tracepoint its signature:
see it like the declaration of a C function. The format of `TP_ARGS()`
arguments is: C type, then argument name; repeat as needed, up to ten
times. For example, if we were to replicate the signature of the C standard
library's `fseek()`, the `TP_ARGS()` part would look like:

[source,c]
----
    TP_ARGS(
        FILE*, stream,
        long int, offset,
        int, origin
    ),
----

Of course, you need to include appropriate header files before
the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.

`TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
also accepted.
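
For example, a tracepoint which records no arguments at all (here named
`my_app_started`, a hypothetical event name) could be defined with empty
argument and field lists:

[source,c]
----
TRACEPOINT_EVENT(
    my_provider,
    my_app_started,
    TP_ARGS(),
    TP_FIELDS()
)
----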

The list of fields is where the fun really begins. The fields defined
in this list are the fields of the events generated by the execution
of this tracepoint. Each tracepoint field definition has a C
_argument expression_ which is evaluated when the execution reaches
the tracepoint. Tracepoint arguments _may be_ used freely in those
argument expressions, but they _don't_ have to.

There are several types of tracepoint fields available. The macros to
define them are given and explained in the
<<liblttng-ust-tp-fields,LTTng-UST library reference>> section.

Field names must follow the standard C identifier syntax: letter, then
optional sequence of letters, numbers or underscores. Each field must have
a different name.

Those `ctf_*()` macros are added to the `TP_FIELDS()` part of
`TRACEPOINT_EVENT()`. Note that they are not delimited by commas.
`TP_FIELDS()` may be empty, but the `TP_FIELDS(void)` form is _not_
accepted.

The following snippet shows how argument expressions may be used in
tracepoint fields and how they may refer freely to tracepoint arguments.

[source,c]
----
/* for struct stat */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/* for strlen() */
#include <string.h>

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, my_int_arg,
        char*, my_str_arg,
        struct stat*, st
    ),
    TP_FIELDS(
        /* simple integer field with constant value */
        ctf_integer(
            int,                /* field C type */
            my_constant_field,  /* field name */
            23 + 17             /* argument expression */
        )

        /* my_int_arg tracepoint argument */
        ctf_integer(
            int,
            my_int_arg_field,
            my_int_arg
        )

        /* my_int_arg squared */
        ctf_integer(
            int,
            my_int_arg_field2,
            my_int_arg * my_int_arg
        )

        /* sum of first 4 characters of my_str_arg */
        ctf_integer(
            int,
            sum4_field,
            my_str_arg[0] + my_str_arg[1] +
            my_str_arg[2] + my_str_arg[3]
        )

        /* my_str_arg as string field */
        ctf_string(
            my_str_arg_field,   /* field name */
            my_str_arg          /* argument expression */
        )

        /* st_size member of st tracepoint argument, hexadecimal */
        ctf_integer_hex(
            off_t,              /* field C type */
            size_field,         /* field name */
            st->st_size         /* argument expression */
        )

        /* st_size member of st tracepoint argument, as double */
        ctf_float(
            double,              /* field C type */
            size_dbl_field,      /* field name */
            (double) st->st_size /* argument expression */
        )

        /* half of my_str_arg string as text sequence */
        ctf_sequence_text(
            char,                   /* element C type */
            half_my_str_arg_field,  /* field name */
            my_str_arg,             /* argument expression */
            size_t,                 /* length expression C type */
            strlen(my_str_arg) / 2  /* length expression */
        )
    )
)
----

As you can see, having a custom argument expression for each field
makes tracepoints very flexible for tracing a user space C application.
This tracepoint definition is reused later in this guide, when
actually using tracepoints in a user space application.


[[using-tracepoint-classes]]
===== Using tracepoint classes

In LTTng-UST, a _tracepoint class_ is a class of tracepoints sharing the
same field types and names. A _tracepoint instance_ is one instance of
such a declared tracepoint class, with its own event name and tracepoint
provider name.

What is documented in <<defining-tracepoints,Defining tracepoints>>
is actually how to declare a _tracepoint class_ and define a
_tracepoint instance_ at the same time. Without revealing the internals
of LTTng-UST too much, note that one serialization
function is created for each tracepoint class. A serialization
function is responsible for serializing the fields of a tracepoint
into a sub-buffer when tracing. For various performance reasons, when
your situation requires multiple tracepoints with different names, but
with the same field layout, the best practice is to manually create
a tracepoint class and instantiate as many tracepoint instances as
needed. One positive effect of such a design, amongst other advantages,
is that all tracepoint instances of the same tracepoint class
reuse the same serialization function, thus reducing cache pollution.

As an example, here are three tracepoint definitions as we know them:

[source,c]
----
TRACEPOINT_EVENT(
    my_app,
    get_account,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

TRACEPOINT_EVENT(
    my_app,
    get_settings,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

TRACEPOINT_EVENT(
    my_app,
    get_transaction,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)
----

In this case, three tracepoint classes are created, with one tracepoint
instance for each of them: `get_account`, `get_settings` and
`get_transaction`. However, they all share the same field names and
types. Declaring one tracepoint class and three tracepoint instances of
the latter is a better design choice:

[source,c]
----
/* the tracepoint class */
TRACEPOINT_EVENT_CLASS(
    /* tracepoint provider name */
    my_app,

    /* tracepoint class name */
    my_class,

    /* arguments */
    TP_ARGS(
        int, userid,
        size_t, len
    ),

    /* fields */
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

/* the tracepoint instances */
TRACEPOINT_EVENT_INSTANCE(
    /* tracepoint provider name */
    my_app,

    /* tracepoint class name */
    my_class,

    /* tracepoint/event name */
    get_account,

    /* arguments */
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    get_settings,
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    get_transaction,
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
----

Of course, all those names and `TP_ARGS()` invocations are redundant,
but some C preprocessor magic can solve this:

[source,c]
----
#define MY_TRACEPOINT_ARGS \
    TP_ARGS( \
        int, userid, \
        size_t, len \
    )

TRACEPOINT_EVENT_CLASS(
    my_app,
    my_class,
    MY_TRACEPOINT_ARGS,
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

#define MY_APP_TRACEPOINT_INSTANCE(name) \
    TRACEPOINT_EVENT_INSTANCE( \
        my_app, \
        my_class, \
        name, \
        MY_TRACEPOINT_ARGS \
    )

MY_APP_TRACEPOINT_INSTANCE(get_account)
MY_APP_TRACEPOINT_INSTANCE(get_settings)
MY_APP_TRACEPOINT_INSTANCE(get_transaction)
----
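
At this point, assuming the three instances above are part of your
tracepoint provider, each instance is used exactly like any other
tracepoint; only the event name differs (the argument values below are
arbitrary examples):

[source,c]
----
tracepoint(my_app, get_account, 42, 16);
tracepoint(my_app, get_settings, 42, 32);
tracepoint(my_app, get_transaction, 42, 64);
----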


[[assigning-log-levels]]
===== Assigning log levels to tracepoints

Optionally, a log level can be assigned to a defined tracepoint.
Assigning different levels of importance to tracepoints can be useful;
when controlling tracing sessions,
<<controlling-tracing,you can choose>> to only enable tracepoints
falling into a specific log level range.

Log levels are assigned to defined tracepoints using the
`TRACEPOINT_LOGLEVEL()` macro. The latter must be used _after_ having
used `TRACEPOINT_EVENT()` for a given tracepoint. The
`TRACEPOINT_LOGLEVEL()` macro has the following construct:

[source,c]
----
TRACEPOINT_LOGLEVEL(PROVIDER_NAME, TRACEPOINT_NAME, LOG_LEVEL)
----

where the first two arguments are the same as the first two arguments
of `TRACEPOINT_EVENT()` and `LOG_LEVEL` is one
of the values given in the
<<liblttng-ust-tracepoint-loglevel,LTTng-UST library reference>>
section.

As an example, let's assign a `TRACE_DEBUG_UNIT` log level to our
previous tracepoint definition:

[source,c]
----
TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
----
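
Such an assignment pays off when controlling tracing. For example,
assuming a tracing session exists, events of the `my_provider` provider
with a log level at least as severe as `TRACE_DEBUG_UNIT` could be
enabled with a command along these lines (see
<<controlling-tracing,Controlling tracing>> for details):

[role="term"]
----
lttng enable-event --userspace 'my_provider:*' --loglevel=TRACE_DEBUG_UNIT
----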


[[probing-the-application-source-code]]
===== Probing the application's source code

Once tracepoints are properly defined within a tracepoint provider,
they may be inserted into the user application to be instrumented
using the `tracepoint()` macro. Its first argument is the tracepoint
provider name and its second is the tracepoint name. The next, optional
arguments are defined by the `TP_ARGS()` part of the definition of
the tracepoint to use.

As an example, let us again take the following tracepoint definition:

[source,c]
----
TRACEPOINT_EVENT(
    /* tracepoint provider name */
    my_provider,

    /* tracepoint/event name */
    my_first_tracepoint,

    /* list of tracepoint arguments */
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),

    /* list of fields of eventual event */
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)
----

Assuming this is part of a file named path:{tp.h} which defines the tracepoint
provider and which is included by path:{tp.c}, here's a complete C application
calling this tracepoint (multiple times):

[source,c]
----
#define TRACEPOINT_DEFINE
#include "tp.h"

int main(int argc, char* argv[])
{
    int i;

    tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");

    for (i = 0; i < argc; ++i) {
        tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
    }

    return 0;
}
----

For each tracepoint provider, `TRACEPOINT_DEFINE` must be defined in
exactly one translation unit (C source file) of the user application,
before including the tracepoint provider header file. In other words,
for a given tracepoint provider, you cannot define `TRACEPOINT_DEFINE`,
and then include its header file, in two separate C source files of
the same application. `TRACEPOINT_DEFINE` is discussed further in
<<building-tracepoint-providers-and-user-application,Building/linking
tracepoint providers and the user application>>.

As another example, remember this definition we wrote in a previous
section (comments are stripped):

[source,c]
----
/* for struct stat */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/* for strlen() */
#include <string.h>

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, my_int_arg,
        char*, my_str_arg,
        struct stat*, st
    ),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23 + 17)
        ctf_integer(int, my_int_arg_field, my_int_arg)
        ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
        ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
                    my_str_arg[2] + my_str_arg[3])
        ctf_string(my_str_arg_field, my_str_arg)
        ctf_integer_hex(off_t, size_field, st->st_size)
        ctf_float(double, size_dbl_field, (double) st->st_size)
        ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
                          size_t, strlen(my_str_arg) / 2)
    )
)
----

Here's an example of calling it:

[source,c]
----
#define TRACEPOINT_DEFINE
#include "tp.h"

int main(void)
{
    struct stat s;

    stat("/etc/fstab", &s);

    tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);

    return 0;
}
----

When viewing the trace, assuming the file size of path:{/etc/fstab} is
301{nbsp}bytes, the event generated by the execution of this tracepoint
should have the following fields, in this order:

----
my_constant_field        40
my_int_arg_field         23
my_int_arg_field2        529
sum4_field               389
my_str_arg_field         "Hello, World!"
size_field               0x12d
size_dbl_field           301.0
half_my_str_arg_field    "Hello,"
----


[[building-tracepoint-providers-and-user-application]]
===== Building/linking tracepoint providers and the user application

The final step of using LTTng-UST for tracing a user space C application
(besides running the application) is building and linking the tracepoint
providers and the application itself.

As discussed above, the macros used by the user-written tracepoint provider
header file are useless until actually used to create probe code
(global data structures and functions) in a translation unit (C source file).
This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
unit and then including the tracepoint provider header file.
When `TRACEPOINT_CREATE_PROBES` is defined, macros used and included by
the tracepoint provider header produce the actual source code needed by any
application using the defined tracepoints. Defining
`TRACEPOINT_CREATE_PROBES` produces code used when registering
tracepoint providers when the tracepoint provider package loads.

The other important definition is `TRACEPOINT_DEFINE`. This one creates
global, per-tracepoint structures referencing the tracepoint provider
data. Those structures are required by the actual functions inserted
where `tracepoint()` macros are placed and need to be defined by the
instrumented application.

Both `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` need to be defined
in specific places in order to trace a user space C application using LTTng.
Although explaining their exact mechanism is beyond the scope of this
document, the reason they both exist separately is to allow the tracepoint
providers to be packaged as a shared object (dynamically loaded library).

There are two ways to compile and link the tracepoint providers
with the application: _<<static-linking,statically>>_ or
_<<dynamic-linking,dynamically>>_. Both methods are covered in the
following subsections.


[[static-linking]]
===== Static linking the tracepoint providers to the application

With the static linking method, compiled tracepoint providers are copied
into the target application. There are three ways to do this:

. Use one of your **existing C source files** to create probes.
. Create probes in a separate C source file and build it as an
  **object file** to be linked with the application (more decoupled).
. Create probes in a separate C source file, build it as an
  object file and archive it to create a **static library**
  (more decoupled, more portable).

The first approach is to define `TRACEPOINT_CREATE_PROBES` and include
your tracepoint provider(s) header file(s) directly into an existing C
source file. Here's an example:

[source,c]
----
#include <stdlib.h>
#include <stdio.h>
/* ... */

#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"

/* ... */

int my_func(int a, const char* b)
{
    /* ... */

    tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt);

    /* ... */
}

/* ... */
----

Again, before including a given tracepoint provider header file,
`TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` must be defined in
one, **and only one**, translation unit. Other C source files of the
same application may include path:{tp.h} to use tracepoints with
the `tracepoint()` macro, but must not define
`TRACEPOINT_CREATE_PROBES`/`TRACEPOINT_DEFINE` again.

2601 This translation unit may be built as an object file by making sure to
2602 add `.` to the include path:
2603
2604 [role="term"]
2605 ----
2606 gcc -c -I. file.c
2607 ----
2608
The second approach is to isolate the tracepoint provider code into a
separate object file by using a dedicated C source file to create probes:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES

#include "tp.h"
----

`TRACEPOINT_DEFINE` must be defined by a translation unit of the
application. Since we're talking about static linking here, it could as
well be defined directly in the file above, before `#include "tp.h"`:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "tp.h"
----

This is actually what <<lttng-gen-tp,`lttng-gen-tp`>> does, and is
the recommended practice.

Build the tracepoint provider:

[role="term"]
----
gcc -c -I. tp.c
----

Finally, the resulting object file may be archived to create a
more portable tracepoint provider static library:

[role="term"]
----
ar rc tp.a tp.o
----

Using a static library does have the advantage of centralising the
tracepoint provider objects so they can be shared between multiple
applications. This way, when the tracepoint provider is modified, the
source code changes don't have to be patched into each application's
source tree. The applications need to be relinked after each change,
but need not be recompiled otherwise (unless the tracepoint provider's
API changes).

Regardless of which method you choose, you end up with an object file
(potentially archived) containing the tracepoint providers' compiled
code. To link this code with the rest of your application, you must
also link with `liblttng-ust` and `libdl`:

[role="term"]
----
gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl
----

or

[role="term"]
----
gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl
----

If you're using a BSD system, replace `-ldl` with `-lc`:

[role="term"]
----
gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc
----

The application can be started as usual, for example:

[role="term"]
----
./app
----

The `lttng` command line tool can be used to
<<controlling-tracing,control tracing>>.


[[dynamic-linking]]
===== Dynamic linking the tracepoint providers to the application

The second approach to packaging the tracepoint providers is to use
dynamic linking: the library and its member functions are explicitly
sought, loaded and unloaded at runtime using `libdl`.

It has to be noted that, for a variety of reasons, the created shared
library is meant to be dynamically _loaded_, as opposed to dynamically
_linked_. The tracepoint provider shared object is, however, linked
with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
as soon as the tracepoint provider is. If the tracepoint provider is
not loaded, since the application itself is not linked with
`liblttng-ust`, the latter is not loaded at all and the tracepoint calls
become inert.

The process to create the tracepoint provider shared object is pretty
much the same as the static library method, except that:

* since the tracepoint provider is not part of the application
anymore, `TRACEPOINT_DEFINE` _must_ be defined, for each tracepoint
provider, in exactly one translation unit (C source file) of the
_application_;
* `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` must be defined next to
`TRACEPOINT_DEFINE`.

Regarding `TRACEPOINT_DEFINE` and `TRACEPOINT_PROBE_DYNAMIC_LINKAGE`,
the recommended practice is to use a separate C source file in your
application to define them, then include the tracepoint provider
header files afterwards. For example:

[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE

/* include the header files of one or more tracepoint providers below */
#include "tp1.h"
#include "tp2.h"
#include "tp3.h"
----

`TRACEPOINT_PROBE_DYNAMIC_LINKAGE` makes the macros included afterwards
(by including the tracepoint provider header, which itself includes
LTTng-UST headers) aware that the tracepoint provider is to be loaded
dynamically and is not part of the application's executable.

The tracepoint provider object file used to create the shared library
is built just like in the static library method, only with the
`-fpic` option added:

[role="term"]
----
gcc -c -fpic -I. tp.c
----

It is then linked as a shared library like this:

[role="term"]
----
gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o
----

As previously stated, this tracepoint provider shared object isn't
linked with the user application: it's loaded manually. This is
why the application is built with no mention of this tracepoint
provider, but still needs `libdl`:

[role="term"]
----
gcc -o app other.o files.o of.o your.o app.o -ldl
----

Now, to make LTTng-UST tracing available to the application, the
`LD_PRELOAD` environment variable is used to preload the tracepoint
provider shared library _before_ the application actually starts:

[role="term"]
----
LD_PRELOAD=/path/to/tp.so ./app
----

[NOTE]
====
It is not safe to use `dlclose()` on a tracepoint provider shared
object that is being actively used for tracing, due to a lack of
reference counting from LTTng-UST to the shared object.

For example, statically linking a tracepoint provider to a
shared object which is to be dynamically loaded by an application
(a plugin, for example) is not safe: the shared object, which
contains the tracepoint provider, could be dynamically closed
(`dlclose()`) at any time by the application.

To instrument a shared object, either:

* Statically link the tracepoint provider to the _application_, or
* Build the tracepoint provider as a shared object (following
the procedure shown in this section), and preload it when
tracing is needed using the `LD_PRELOAD`
environment variable.
====

Your application will still work without this preloading, albeit without
LTTng-UST tracing support:

[role="term"]
----
./app
----


[[using-lttng-ust-with-daemons]]
===== Using LTTng-UST with daemons

Some extra care is needed when using `liblttng-ust` with daemon
applications that call `fork()`, `clone()` or BSD's `rfork()` without
a following `exec()` family system call. The `liblttng-ust-fork`
library must be preloaded for the application.

Example:

[role="term"]
----
LD_PRELOAD=liblttng-ust-fork.so ./app
----

Or, if you're using a tracepoint provider shared library:

[role="term"]
----
LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app
----


[[lttng-ust-pkg-config]]
===== Using pkg-config

On some distributions, LTTng-UST is shipped with a pkg-config metadata
file, so that you may use the `pkg-config` tool:

[role="term"]
----
pkg-config --libs lttng-ust
----

This prints `-llttng-ust -ldl` on Linux systems.

You may also check the LTTng-UST version using `pkg-config`:

[role="term"]
----
pkg-config --modversion lttng-ust
----

For more information about pkg-config, see
http://linux.die.net/man/1/pkg-config[its manpage].


[role="since-2.5"]
[[tracef]]
===== Using `tracef()`

`tracef()` is a small LTTng-UST API to avoid defining your own
tracepoints and tracepoint providers. The signature of `tracef()` is
the same as `printf()`'s.

The `tracef()` utility function was developed to make user space tracing
super simple, albeit with notable disadvantages compared to custom,
full-fledged tracepoint providers:

* All generated events have the same provider/event names, respectively
`lttng_ust_tracef` and `event`.
* There's no static type checking.
* The only event field you actually get, named `msg`, is a string
potentially containing the values you passed to the function
using your own format. This also means that you cannot filter events
using a custom expression at runtime because there are no isolated
fields.
* Since `tracef()` uses the C standard library's `vasprintf()` function
in the background to format the strings at runtime, its
expected performance is lower than using custom tracepoint providers
with typed fields, which do not require a conversion to a string.

Thus, `tracef()` is useful for quick prototyping and debugging, but
should not be considered for any permanent/serious application
instrumentation.

To use `tracef()`, first include `<lttng/tracef.h>` in the C source file
where you need to insert probes:

[source,c]
----
#include <lttng/tracef.h>
----

Use `tracef()` like you would use `printf()` in your source code, for
example:

[source,c]
----
/* ... */

tracef("my message, my integer: %d", my_integer);

/* ... */
----

Link your application with `liblttng-ust`:

[role="term"]
----
gcc -o app app.c -llttng-ust
----

Execute the application as usual:

[role="term"]
----
./app
----

Voilà! Use the `lttng` command line tool to
<<controlling-tracing,control tracing>>. You can enable `tracef()`
events like this:

[role="term"]
----
lttng enable-event --userspace 'lttng_ust_tracef:*'
----


[[lttng-ust-environment-variables-compiler-flags]]
===== LTTng-UST environment variables and special compilation flags

A few special environment variables and compile flags may affect the
behavior of LTTng-UST.

LTTng-UST's debugging can be activated by setting the environment
variable `LTTNG_UST_DEBUG` to `1` when launching the application. It
can also be enabled at compile time by defining `LTTNG_UST_DEBUG` when
compiling LTTng-UST (using the `-DLTTNG_UST_DEBUG` compiler option).

The environment variable `LTTNG_UST_REGISTER_TIMEOUT` can be used to
specify how long the application should wait for the
<<lttng-sessiond,session daemon>>'s _registration done_ command
before proceeding to execute the main program. The timeout value is
specified in milliseconds. 0 means _don't wait_. -1 means
_wait forever_. Setting this environment variable to 0 is recommended
for applications with time constraints on the process startup time.

The default value of `LTTNG_UST_REGISTER_TIMEOUT` (when not defined)
is **3000{nbsp}ms**.

The compilation definition `LTTNG_UST_DEBUG_VALGRIND` should be enabled
at build time (`-DLTTNG_UST_DEBUG_VALGRIND`) to allow `liblttng-ust`
to be used with http://valgrind.org/[Valgrind].
The side effect of defining `LTTNG_UST_DEBUG_VALGRIND` is that per-CPU
buffering is disabled.


[[cxx-application]]
==== $$C++$$ application

Because of $$C++$$'s cross-compatibility with the C language, $$C++$$
applications can be readily instrumented with the LTTng-UST C API.

Follow the <<c-application,C application>> user guide above. It
should be noted that, in this case, tracepoint providers should have
the typical `.cpp`, `.cxx` or `.cc` extension and be built with `g++`
instead of `gcc`. This is the easiest way of avoiding linking errors
due to symbol name mangling incompatibilities between the two languages.


[[prebuilt-ust-helpers]]
==== Prebuilt user space tracing helpers

The LTTng-UST package provides a few helpers that one may find
useful in some situations. They all work the same way: you must
preload the appropriate shared object before running the user
application (using the `LD_PRELOAD` environment variable).

The shared objects are normally found in dir:{/usr/lib}.

The currently installed helpers are:

path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}::
<<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
and POSIX threads tracing>>.

path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}::
<<liblttng-ust-cyg-profile,Function tracing>>.

path:{liblttng-ust-dl.so}::
<<liblttng-ust-dl,Dynamic linker tracing>>.

The following subsections document exactly what each helper instruments
and how to use it.


[role="since-2.3"]
[[liblttng-ust-libc-pthread-wrapper]]
===== C standard library and POSIX threads tracing

path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}
can add instrumentation to, respectively, some C standard library and
POSIX threads functions.

The following functions are traceable by path:{liblttng-ust-libc-wrapper.so}:

[role="growable"]
.Functions instrumented by path:{liblttng-ust-libc-wrapper.so}
|====
|TP provider name |TP name |Instrumented function

.6+|`ust_libc` |`malloc` |`malloc()`
|`calloc` |`calloc()`
|`realloc` |`realloc()`
|`free` |`free()`
|`memalign` |`memalign()`
|`posix_memalign` |`posix_memalign()`
|====

The following functions are traceable by
path:{liblttng-ust-pthread-wrapper.so}:

[role="growable"]
.Functions instrumented by path:{liblttng-ust-pthread-wrapper.so}
|====
|TP provider name |TP name |Instrumented function

.4+|`ust_pthread` |`pthread_mutex_lock_req` |`pthread_mutex_lock()` (request time)
|`pthread_mutex_lock_acq` |`pthread_mutex_lock()` (acquire time)
|`pthread_mutex_trylock` |`pthread_mutex_trylock()`
|`pthread_mutex_unlock` |`pthread_mutex_unlock()`
|====

All tracepoints have fields corresponding to the arguments of the
function they instrument.

To use one or the other with any user application, independently of
how the latter is built, do:

[role="term"]
----
LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
----

or

[role="term"]
----
LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app
----

To use both, do:

[role="term"]
----
LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app
----

When the shared object is preloaded, it effectively replaces the
functions listed in the above tables with wrappers which add tracepoints
and call the replaced functions.

Of course, like any other tracepoint, the ones above need to be enabled
in order for LTTng-UST to generate events. This is done using the
`lttng` command line tool
(see <<controlling-tracing,Controlling tracing>>).


[[liblttng-ust-cyg-profile]]
===== Function tracing

Function tracing is the recording of which functions are entered and
left during the execution of an application. Like with any LTTng event,
the precise time at which this happens is also kept.

GCC and clang have an option named
https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/Code-Gen-Options.html[`-finstrument-functions`]
which generates instrumentation calls for entry to and exit from
functions. The LTTng-UST function tracing helpers,
path:{liblttng-ust-cyg-profile.so} and
path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
to add instrumentation to the two generated functions (which contain
`cyg_profile` in their names, hence the shared objects' names).

In order to use LTTng-UST function tracing, the translation units to
instrument must be built using the `-finstrument-functions` compiler
flag.

LTTng-UST function tracing comes in two flavors, each providing
different trade-offs: path:{liblttng-ust-cyg-profile-fast.so} and
path:{liblttng-ust-cyg-profile.so}.

**path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that
should only be used where it can be _guaranteed_ that the complete event
stream is recorded without any missing events. Any kind of duplicate
information is left out. This version registers the following
tracepoints:

[role="growable",options="header,autowidth"]
.Tracepoints registered by path:{liblttng-ust-cyg-profile-fast.so}
|====
|TP provider name |TP name |Instrumented function

.2+|`lttng_ust_cyg_profile_fast`

|`func_entry`
a|Function entry

`addr`::
Address of called function.

|`func_exit`
|Function exit
|====

Assuming no event is lost, having only the function addresses on entry
is enough for creating a call graph (remember that a recorded event
always contains the ID of the CPU that generated it). A tool like
https://sourceware.org/binutils/docs/binutils/addr2line.html[`addr2line`]
may be used to convert function addresses back to source file names
and line numbers.

The other helper,
**path:{liblttng-ust-cyg-profile.so}**,
is a more robust variant which also works for use cases where
events might get discarded or not recorded from application startup.
In these cases, the trace analyzer needs extra information to be
able to reconstruct the program flow. This version registers the
following tracepoints:

[role="growable",options="header,autowidth"]
.Tracepoints registered by path:{liblttng-ust-cyg-profile.so}
|====
|TP provider name |TP name |Instrumented function

.2+|`lttng_ust_cyg_profile`

|`func_entry`
a|Function entry

`addr`::
Address of called function.

`call_site`::
Call site address.

|`func_exit`
a|Function exit

`addr`::
Address of called function.

`call_site`::
Call site address.
|====

To use one or the other variant with any user application, assuming at
least one translation unit of the latter is compiled with the
`-finstrument-functions` option, do:

[role="term"]
----
LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
----

or

[role="term"]
----
LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
----

It might be necessary to limit the number of source files where
`-finstrument-functions` is used to prevent an excessive amount of
trace data from being generated at runtime.

TIP: When using GCC, at least, you can use
the `-finstrument-functions-exclude-function-list`
option to avoid instrumenting entries and exits of specific
symbol names.

All events generated from LTTng-UST function tracing are provided on
log level `TRACE_DEBUG_FUNCTION`, which is useful to easily enable
function tracing events in your tracing session using the
`--loglevel-only` option of `lttng enable-event`
(see <<controlling-tracing,Controlling tracing>>).


[role="since-2.4"]
[[liblttng-ust-dl]]
===== Dynamic linker tracing

This LTTng-UST helper causes all calls to `dlopen()` and `dlclose()`
in the target application to be traced with LTTng.

The helper's shared object, path:{liblttng-ust-dl.so}, registers the
following tracepoints when preloaded:

[role="growable",options="header,autowidth"]
.Tracepoints registered by path:{liblttng-ust-dl.so}
|====
|TP provider name |TP name |Instrumented function

.2+|`ust_baddr`

|`push`
a|`dlopen()` call

`baddr`::
Memory base address (where the dynamic linker placed the shared
object).

`sopath`::
File system path to the loaded shared object.

`size`::
File size of the loaded shared object.

`mtime`::
Last modification time (seconds since the Epoch) of the loaded shared
object.

|`pop`
a|`dlclose()` call

`baddr`::
Memory base address (where the dynamic linker placed the shared
object).
|====

To use this LTTng-UST helper with any user application, independently of
how the latter is built, do:

[role="term"]
----
LD_PRELOAD=liblttng-ust-dl.so my-app
----

Of course, like any other tracepoint, the ones above need to be enabled
in order for LTTng-UST to generate events. This is done using the
`lttng` command line tool
(see <<controlling-tracing,Controlling tracing>>).


[role="since-2.4"]
[[java-application]]
==== Java application

LTTng-UST provides a _logging_ back-end for Java applications using either
http://docs.oracle.com/javase/7/docs/api/java/util/logging/Logger.html[`java.util.logging`]
(JUL) or
http://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
This back-end is called the _LTTng-UST Java agent_, and it is responsible
for the communications with an LTTng session daemon.

From the user's point of view, once the LTTng-UST Java agent has been
initialized, JUL and log4j loggers may be created and used as usual.
The agent adds its own handler to the _root logger_, so that all
loggers may generate LTTng events with no effort.

Common JUL/log4j features are supported using the `lttng` tool
(see <<controlling-tracing,Controlling tracing>>):

* listing all logger names
* enabling/disabling events per logger name
* JUL/log4j log levels


[role="since-2.1"]
[[jul]]
===== `java.util.logging`

Here's an example of tracing a Java application which is using
**`java.util.logging`**:

[source,java]
----
import java.util.logging.Logger;
import org.lttng.ust.agent.LTTngAgent;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // create a logger
        Logger logger = Logger.getLogger("jello");

        // call this as soon as possible (before logging)
        LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();

        // log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // not mandatory, but cleaner
        lttngAgent.dispose();
    }
}
----

The LTTng-UST Java agent is packaged in a JAR file named
`liblttng-ust-agent.jar`. It is typically located in
dir:{/usr/lib/lttng/java}. To compile the snippet above
(saved as `Test.java`), do:

[role="term"]
----
javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar Test.java
----

You can run the resulting compiled class like this:

[role="term"]
----
java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
----

NOTE: http://openjdk.java.net/[OpenJDK] 7 is used for development and
continuous integration, thus this version is directly supported.
However, the LTTng-UST Java agent has also been tested with OpenJDK 6.


[role="since-2.6"]
[[log4j]]
===== Apache log4j 1.2

LTTng features an Apache log4j 1.2 agent, which means your existing
Java applications using log4j 1.2 for logging can record events to
LTTng traces with just a minor source code modification.

NOTE: This version of LTTng does not support Log4j 2.

Here's an example:

[source,java]
----
import org.apache.log4j.Logger;
import org.apache.log4j.BasicConfigurator;
import org.lttng.ust.agent.LTTngAgent;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // create and configure a logger
        Logger logger = Logger.getLogger(Test.class);
        BasicConfigurator.configure();

        // call this as soon as possible (before logging)
        LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();

        // log at will!
        logger.info("some info");
        logger.warn("some warning");
        Thread.sleep(500);
        logger.debug("debug information; the answer is " + answer);
        Thread.sleep(123);
        logger.error("error!");
        logger.fatal("fatal error!");

        // not mandatory, but cleaner
        lttngAgent.dispose();
    }
}
----

To compile the snippet above, do:

[role="term"]
----
javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP Test.java
----

where `$LOG4JCP` is the log4j 1.2 JAR file's path.

You can run the resulting compiled class like this:

[role="term"]
----
java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
----


[[instrumenting-linux-kernel]]
==== Linux kernel

The Linux kernel can be instrumented for LTTng tracing, either in its core
source code or in a kernel module. It has to be noted that Linux is
readily traceable using LTTng since many parts of its source code are
already instrumented: this is the job of the upstream
http://git.lttng.org/?p=lttng-modules.git[LTTng-modules]
package. This section presents how to add LTTng instrumentation where it
does not currently exist and how to instrument custom kernel modules.

All LTTng instrumentation in the Linux kernel is based on an existing
infrastructure which bears the name of its main macro, `TRACE_EVENT()`.
This macro is used to define tracepoints,
each tracepoint having a name, usually with the
+__subsys__&#95;__name__+ format,
+_subsys_+ being the subsystem name and
+_name_+ the specific event name.

Tracepoints defined with `TRACE_EVENT()` may be inserted anywhere in
the Linux kernel source code, after which callbacks, called _probes_,
may be registered to execute some action when a tracepoint is
executed. This mechanism is directly used by ftrace and perf,
but cannot be used as is by LTTng: an adaptation layer is added to
satisfy LTTng's specific needs.

With that in mind, this documentation does not cover the `TRACE_EVENT()`
format and how to use it, but it is mandatory to understand it and use
it to instrument Linux for LTTng. A series of
LWN articles explain
`TRACE_EVENT()` in detail:
http://lwn.net/Articles/379903/[part 1],
http://lwn.net/Articles/381064/[part 2], and
http://lwn.net/Articles/383362/[part 3].
Once you master `TRACE_EVENT()` enough for your use case, continue
reading this section so that you can add the LTTng adaptation layer of
instrumentation.

This section first discusses the general method of instrumenting the
Linux kernel for LTTng. This method is then reused for the specific
case of instrumenting a kernel module.


[[instrumenting-linux-kernel-itself]]
===== Instrumenting the Linux kernel for LTTng

The following subsections explain strictly how to add custom LTTng
instrumentation to the Linux kernel. They do not explain how the
macros actually work and the internal mechanics of the tracer.

You should have a Linux kernel source code tree to work with.
Throughout this section, all file paths are relative to the root of
this tree unless otherwise stated.

You need a copy of the LTTng-modules Git repository:

[role="term"]
----
git clone git://git.lttng.org/lttng-modules.git
----

The steps to add custom LTTng instrumentation to a Linux kernel
involve defining and using the mainline `TRACE_EVENT()` tracepoints
first, then writing and using the LTTng adaptation layer.


[[mainline-trace-event]]
===== Defining/using tracepoints with mainline `TRACE_EVENT()` infrastructure

The first step is to define tracepoints using the mainline Linux
`TRACE_EVENT()` macro and insert tracepoints where you want them.
Your tracepoint definitions reside in a header file in
dir:{include/trace/events}. If you're adding tracepoints to an existing
subsystem, edit its appropriate header file.

As an example, the following header file (let's call it
path:{include/trace/events/hello.h}) defines one tracepoint using
`TRACE_EVENT()`:

[source,c]
----
/* subsystem name is "hello" */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM hello

#if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HELLO_H

#include <linux/tracepoint.h>

TRACE_EVENT(
    /* "hello" is the subsystem name, "world" is the event name */
    hello_world,

    /* tracepoint function prototype */
    TP_PROTO(int foo, const char* bar),

    /* arguments for this tracepoint */
    TP_ARGS(foo, bar),

    /* LTTng doesn't need those */
    TP_STRUCT__entry(),
    TP_fast_assign(),
    TP_printk("", 0)
);

#endif

/* this part must be outside protection */
#include <trace/define_trace.h>
----

Notice that we don't use any of the last three arguments: they
are left empty here because LTTng doesn't need them. You would only fill
`TP_STRUCT__entry()`, `TP_fast_assign()` and `TP_printk()` if you were
to also use this tracepoint for ftrace/perf.

Once this is done, you may place calls to `trace_hello_world()`
wherever you want in the Linux source code. As an example, let us place
such a tracepoint in the `usb_probe_device()` static function
(path:{drivers/usb/core/driver.c}):

[source,c]
----
/* called from driver core with dev locked */
static int usb_probe_device(struct device *dev)
{
    struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
    struct usb_device *udev = to_usb_device(dev);
    int error = 0;

    trace_hello_world(udev->devnum, udev->product);

    /* ... */
}
----

This tracepoint should fire every time a USB device is plugged in.

At the top of path:{driver.c}, we need to include our actual tracepoint
definition and, in this case (one place per subsystem), define
`CREATE_TRACE_POINTS`, which creates our tracepoint:

[source,c]
----
/* ... */

#include "usb.h"

#define CREATE_TRACE_POINTS
#include <trace/events/hello.h>

/* ... */
----

Build your custom Linux kernel. In order to use LTTng, make sure the
following kernel configuration options are enabled:

* `CONFIG_MODULES` (loadable module support)
* `CONFIG_KALLSYMS` (load all symbols for debugging/kksymoops)
* `CONFIG_HIGH_RES_TIMERS` (high resolution timer support)
* `CONFIG_TRACEPOINTS` (kernel tracepoint instrumentation)

Boot the custom kernel. The directory
dir:{/sys/kernel/debug/tracing/events/hello} should exist if everything
went right, with a dir:{hello_world} subdirectory.


3556 [[lttng-adaptation-layer]]
3557 ===== Adding the LTTng adaptation layer
3558
3559 The steps to write the LTTng adaptation layer are, in your
3560 LTTng-modules copy's source code tree:
3561
3562 . In dir:{instrumentation/events/lttng-module},
3563 add a header +__subsys__.h+ for your custom
3564 subsystem +__subsys__+ and write your
3565 tracepoint definitions using LTTng-modules macros in it.
3566 Those macros look like the mainline kernel equivalents,
3567 but they present subtle, yet important differences.
3568 . In dir:{probes}, create the C source file of the LTTng probe kernel
3569 module for your subsystem. It should be named
3570 +lttng-probe-__subsys__.c+.
3571 . Edit path:{probes/Makefile} so that the LTTng-modules project
3572 builds your custom LTTng probe kernel module.
3573 . Build and install LTTng kernel modules.
3574
3575 Following our `hello_world` event example, here's the content of
3576 path:{instrumentation/events/lttng-module/hello.h}:
3577
3578 [source,c]
3579 ----
3580 #undef TRACE_SYSTEM
3581 #define TRACE_SYSTEM hello
3582
3583 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3584 #define _TRACE_HELLO_H
3585
3586 #include "../../../probes/lttng-tracepoint-event.h"
3587 #include <linux/tracepoint.h>
3588
LTTNG_TRACEPOINT_EVENT(
	/* format identical to mainline version for those */
	hello_world,
	TP_PROTO(int foo, const char* bar),
	TP_ARGS(foo, bar),

	/* possible differences */
	TP_STRUCT__entry(
		__field(int, my_int)
		__field(char, char0)
		__field(char, char1)
		__string(product, bar)
	),

	/* notice the use of tp_assign()/tp_strcpy() and no semicolons */
	TP_fast_assign(
		tp_assign(my_int, foo)
		tp_assign(char0, bar[0])
		tp_assign(char1, bar[1])
		tp_strcpy(product, bar)
	),

	/* This one is actually not used by LTTng either, but must be
	 * present for the moment.
	 */
	TP_printk("", 0)

	/* no semicolon after this either */
)
3618
3619 #endif
3620
3621 /* other difference: do NOT include <trace/define_trace.h> */
3622 #include "../../../probes/define_trace.h"
3623 ----
3624
3625 Some possible entries for `TP_STRUCT__entry()` and `TP_fast_assign()`,
3626 in the case of LTTng-modules, are shown in the
3627 <<lttng-modules-ref,LTTng-modules reference>> section.
3628
3629 The best way to learn how to use the above macros is to inspect
3630 existing LTTng tracepoint definitions in
3631 dir:{instrumentation/events/lttng-module} header files. Compare
3632 them with the Linux kernel mainline versions in
3633 dir:{include/trace/events}.
3634
3635 The next step is writing the LTTng probe kernel module C source file.
3636 This one is named +lttng-probe-__subsys__.c+
3637 in dir:{probes}. You may always use the following template:
3638
3639 [source,c]
3640 ----
3641 #include <linux/module.h>
3642 #include "../lttng-tracer.h"
3643
/* Build time verification of mismatch between mainline TRACE_EVENT()
 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
 */
3647 #include <trace/events/hello.h>
3648
3649 /* create LTTng tracepoint probes */
3650 #define LTTNG_PACKAGE_BUILD
3651 #define CREATE_TRACE_POINTS
3652 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
3653
3654 #include "../instrumentation/events/lttng-module/hello.h"
3655
3656 MODULE_LICENSE("GPL and additional rights");
3657 MODULE_AUTHOR("Your name <your-email>");
3658 MODULE_DESCRIPTION("LTTng hello probes");
MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
	__stringify(LTTNG_MODULES_MINOR_VERSION) "."
	__stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
	LTTNG_MODULES_EXTRAVERSION);
3663 ----
3664
3665 Just replace `hello` with your subsystem name. In this example,
3666 `<trace/events/hello.h>`, which is the original mainline tracepoint
3667 definition header, is included for verification purposes: the
3668 LTTng-modules build system is able to emit an error at build time when
3669 the arguments of the mainline `TRACE_EVENT()` definitions do not match
3670 the ones of the LTTng-modules adaptation layer
3671 (`LTTNG_TRACEPOINT_EVENT()`).
3672
3673 Edit path:{probes/Makefile} and add your new kernel module object
3674 next to existing ones:
3675
3676 [source,make]
3677 ----
3678 # ...
3679
3680 obj-m += lttng-probe-module.o
3681 obj-m += lttng-probe-power.o
3682
3683 obj-m += lttng-probe-hello.o
3684
3685 # ...
3686 ----
3687
3688 Time to build! Point to your custom Linux kernel source tree using
3689 the `KERNELDIR` variable:
3690
3691 [role="term"]
3692 ----
3693 make KERNELDIR=/path/to/custom/linux
3694 ----
3695
3696 Finally, install modules:
3697
3698 [role="term"]
3699 ----
3700 sudo make modules_install
3701 ----
3702
3703
3704 [[instrumenting-linux-kernel-tracing]]
3705 ===== Tracing
3706
3707 The <<controlling-tracing,Controlling tracing>> section explains
3708 how to use the `lttng` tool to create and control tracing sessions.
3709 Although the `lttng` tool loads the appropriate _known_ LTTng kernel
3710 modules when needed (by launching `root`'s session daemon), it won't
3711 load your custom `lttng-probe-hello` module by default. You need to
3712 manually start an LTTng session daemon as `root` and use the
3713 `--extra-kmod-probes` option to append your custom probe module to the
3714 default list:
3715
3716 [role="term"]
3717 ----
3718 sudo pkill -u root lttng-sessiond
3719 sudo lttng-sessiond --extra-kmod-probes=hello
3720 ----
3721
3722 The first command makes sure any existing instance is killed. If
3723 you're not interested in using the default probes, or if you only
3724 want to use a few of them, you could use `--kmod-probes` instead,
3725 which specifies an absolute list:
3726
3727 [role="term"]
3728 ----
3729 sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched
3730 ----
3731
3732 Confirm the custom probe module is loaded:
3733
3734 [role="term"]
3735 ----
3736 lsmod | grep lttng_probe_hello
3737 ----
3738
3739 The `hello_world` event should appear in the list when doing
3740
3741 [role="term"]
3742 ----
3743 lttng list --kernel | grep hello
3744 ----
3745
3746 You may now create an LTTng tracing session, enable the `hello_world`
3747 kernel event (and others if you wish) and start tracing:
3748
3749 [role="term"]
3750 ----
3751 sudo lttng create my-session
3752 sudo lttng enable-event --kernel hello_world
3753 sudo lttng start
3754 ----
3755
3756 Plug a few USB devices, then stop tracing and inspect the trace (if
3757 http://diamon.org/babeltrace[Babeltrace]
3758 is installed):
3759
3760 [role="term"]
3761 ----
3762 sudo lttng stop
3763 sudo lttng view
3764 ----
3765
3766 Here's a sample output:
3767
3768 ----
3769 [15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3770 [15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
3771 [15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3772 ----
3773
3774 Two USB flash drives were used for this test.
3775
3776 You may change your LTTng custom probe, rebuild it and reload it at
3777 any time when not tracing. Make sure you remove the old module
3778 (either by killing the root LTTng session daemon which loaded the
3779 module in the first place, or by using `modprobe --remove` directly)
3780 before loading the updated one.
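For example, assuming the updated module was already rebuilt and reinstalled, a reload could look like this (a sketch; adjust it to the way the module was loaded in the first place):

[role="term"]
----
sudo pkill -u root lttng-sessiond         # session daemon unloads its modules
sudo modprobe --remove lttng-probe-hello  # in case it is still loaded
sudo lttng-sessiond --extra-kmod-probes=hello
----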
3781
3782
3783 [[instrumenting-out-of-tree-linux-kernel]]
3784 ===== Advanced: Instrumenting an out-of-tree Linux kernel module for LTTng
3785
3786 Instrumenting a custom Linux kernel module for LTTng follows the exact
3787 same steps as
3788 <<instrumenting-linux-kernel-itself,adding instrumentation
3789 to the Linux kernel itself>>,
3790 the only difference being that your mainline tracepoint definition
3791 header doesn't reside in the mainline source tree, but in your
3792 kernel module source tree.
3793
3794 The only reference to this mainline header is in the LTTng custom
3795 probe's source code (path:{probes/lttng-probe-hello.c} in our example),
3796 for build time verification:
3797
3798 [source,c]
3799 ----
3800 /* ... */
3801
/* Build time verification of mismatch between mainline TRACE_EVENT()
 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
 */
3805 #include <trace/events/hello.h>
3806
3807 /* ... */
3808 ----
3809
The preferred, flexible way to include your module's mainline
tracepoint definition header is to put it in a specific directory
under your module's root (`tracepoints`, for example) and to include
it with a path relative to your module's root directory in the LTTng
custom probe's source:
3815
3816 [source,c]
3817 ----
3818 #include <tracepoints/hello.h>
3819 ----
3820
3821 You may then build LTTng-modules by adding your module's root
3822 directory as an include path to the extra C flags:
3823
3824 [role="term"]
3825 ----
3826 make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux
3827 ----
3828
3829 Using `ccflags-y` allows you to move your kernel module to another
3830 directory and rebuild the LTTng-modules project with no change to
3831 source files.
3832
3833
3834 [role="since-2.5"]
3835 [[proc-lttng-logger-abi]]
3836 ==== LTTng logger ABI
3837
3838 The `lttng-tracer` Linux kernel module, installed by the LTTng-modules
3839 package, creates a special LTTng logger ABI file path:{/proc/lttng-logger}
3840 when loaded. Writing text data to this file generates an LTTng kernel
3841 domain event named `lttng_logger`.
3842
3843 Unlike other kernel domain events, `lttng_logger` may be enabled by
3844 any user, not only root users or members of the tracing group.
3845
3846 To use the LTTng logger ABI, simply write a string to
3847 path:{/proc/lttng-logger}:
3848
3849 [role="term"]
3850 ----
3851 echo -n 'Hello, World!' > /proc/lttng-logger
3852 ----
3853
3854 The `msg` field of the `lttng_logger` event contains the recorded
3855 message.
3856
NOTE: Messages are split into chunks of 1024{nbsp}bytes.
3858
The LTTng logger ABI is a quick and easy way to trace some events from
user space through the kernel tracer. However, it is much more basic
than LTTng-UST: it's slower (each message involves a system call
round-trip to the kernel) and it only supports logging strings. The
LTTng logger ABI is particularly useful for recording logs as LTTng
traces from shell scripts, potentially combining them with other Linux
kernel and user space events.
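As a sketch, a shell script could wrap the ABI in a small helper function (the fallback to standard error is purely illustrative, for hosts where the `lttng-tracer` module isn't loaded):

[source,bash]
----
#!/bin/sh

# log: emit one lttng_logger kernel domain event per call when the
# LTTng logger ABI is available; otherwise fall back to standard error.
log() {
    if [ -w /proc/lttng-logger ]; then
        printf '%s' "$1" > /proc/lttng-logger
    else
        printf '%s\n' "$1" >&2
    fi
}

log 'backup started'
# ... actual work here ...
log 'backup done'
----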
3866
3867
3868 [[instrumenting-32-bit-app-on-64-bit-system]]
3869 ==== Advanced: Instrumenting a 32-bit application on a 64-bit system
3870
3871 [[advanced-instrumenting-techniques]]In order to trace a 32-bit
3872 application running on a 64-bit system,
3873 LTTng must use a dedicated 32-bit
3874 <<lttng-consumerd,consumer daemon>>. This section discusses how to
3875 build that daemon (which is _not_ part of the default 64-bit LTTng
3876 build) and the LTTng 32-bit tracing libraries, and how to instrument
3877 a 32-bit application in that context.
3878
3879 Make sure you install all 32-bit versions of LTTng dependencies.
3880 Their names can be found in the `README.md` files of each LTTng package
3881 source. How to find and install them depends on your target's
3882 Linux distribution. `gcc-multilib` is a common package name for the
3883 multilib version of GCC, which you also need.
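On a Debian or Ubuntu host, for example, the installation could look like the following (the package names are illustrative and vary between distributions and releases; check each project's `README.md` for the authoritative dependency list):

[role="term"]
----
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install gcc-multilib g++-multilib \
                     libpopt-dev:i386 libxml2-dev:i386 uuid-dev:i386
----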
3884
3885 The following packages will be built for 32-bit support on a 64-bit
3886 system: http://urcu.so/[Userspace RCU],
3887 LTTng-UST and LTTng-tools.
3888
3889
3890 [[building-32-bit-userspace-rcu]]
3891 ===== Building 32-bit Userspace RCU
3892
3893 Follow this:
3894
3895 [role="term"]
3896 ----
3897 git clone git://git.urcu.so/urcu.git
3898 cd urcu
3899 ./bootstrap
3900 ./configure --libdir=/usr/lib32 CFLAGS=-m32
3901 make
3902 sudo make install
3903 sudo ldconfig
3904 ----
3905
3906 The `-m32` C compiler flag creates 32-bit object files and `--libdir`
3907 indicates where to install the resulting libraries.
3908
3909
3910 [[building-32-bit-lttng-ust]]
3911 ===== Building 32-bit LTTng-UST
3912
3913 Follow this:
3914
3915 [role="term"]
3916 ----
3917 git clone http://git.lttng.org/lttng-ust.git
3918 cd lttng-ust
3919 ./bootstrap
./configure --prefix=/usr \
            --libdir=/usr/lib32 \
            CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS=-L/usr/lib32
3924 make
3925 sudo make install
3926 sudo ldconfig
3927 ----
3928
3929 `-L/usr/lib32` is required for the build to find the 32-bit versions
3930 of Userspace RCU and other dependencies.
3931
3932 [NOTE]
3933 ====
3934 Depending on your Linux distribution,
3935 32-bit libraries could be installed at a different location than
3936 dir:{/usr/lib32}. For example, Debian is known to install
3937 some 32-bit libraries in dir:{/usr/lib/i386-linux-gnu}.
3938
3939 In this case, make sure to set `LDFLAGS` to all the
3940 relevant 32-bit library paths, for example,
3941 `LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"`.
3942 ====
3943
3944 NOTE: You may add options to path:{./configure} if you need them, e.g., for
3945 Java and SystemTap support. Look at `./configure --help` for more
3946 information.
3947
3948
3949 [[building-32-bit-lttng-tools]]
3950 ===== Building 32-bit LTTng-tools
3951
3952 Since the host is a 64-bit system, most 32-bit binaries and libraries of
3953 LTTng-tools are not needed; the host uses their 64-bit counterparts.
3954 The required step here is building and installing a 32-bit consumer
3955 daemon.
3956
3957 Follow this:
3958
3959 [role="term"]
3960 ----
3961 git clone http://git.lttng.org/lttng-tools.git
cd lttng-tools
3963 ./bootstrap
./configure --prefix=/usr \
            --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS=-L/usr/lib32
3967 make
3968 cd src/bin/lttng-consumerd
3969 sudo make install
3970 sudo ldconfig
3971 ----
3972
The above commands build the whole LTTng-tools project as 32-bit
applications, but only install the 32-bit consumer daemon.
3975
3976
3977 [[building-64-bit-lttng-tools]]
3978 ===== Building 64-bit LTTng-tools
3979
3980 Finally, you need to build a 64-bit version of LTTng-tools which is
3981 aware of the 32-bit consumer daemon previously built and installed:
3982
3983 [role="term"]
3984 ----
3985 make clean
3986 ./bootstrap
./configure --prefix=/usr \
            --with-consumerd32-libdir=/usr/lib32 \
            --with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
3990 make
3991 sudo make install
3992 sudo ldconfig
3993 ----
3994
3995 Henceforth, the 64-bit session daemon automatically finds the
3996 32-bit consumer daemon if required.
3997
3998
3999 [[building-instrumented-32-bit-c-application]]
4000 ===== Building an instrumented 32-bit C application
4001
4002 Let us reuse the _Hello world_ example of
4003 <<tracing-your-own-user-application,Tracing your own user application>>
4004 (<<getting-started,Getting started>> chapter).
4005
4006 The instrumentation process is unaltered.
4007
4008 First, a typical 64-bit build (assuming you're running a 64-bit system):
4009
4010 [role="term"]
4011 ----
4012 gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust
4013 ----
4014
4015 Now, a 32-bit build:
4016
4017 [role="term"]
4018 ----
gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
    -ldl -llttng-ust -Wl,-rpath,/usr/lib32
4021 ----
4022
4023 The `-rpath` option, passed to the linker, makes the dynamic loader
4024 check for libraries in dir:{/usr/lib32} before looking in its default paths,
4025 where it should find the 32-bit version of `liblttng-ust`.
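To double-check the result, `file` should report a 32-bit ELF executable for `hello32`, and `ldd` should resolve `liblttng-ust` under dir:{/usr/lib32}:

[role="term"]
----
file hello32
ldd hello32 | grep lttng-ust
----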
4026
4027
4028 [[running-32-bit-and-64-bit-c-applications]]
4029 ===== Running 32-bit and 64-bit versions of an instrumented C application
4030
4031 Now, both 32-bit and 64-bit versions of the _Hello world_ example above
4032 can be traced in the same tracing session. Use the `lttng` tool as usual
4033 to create a tracing session and start tracing:
4034
4035 [role="term"]
4036 ----
4037 lttng create session-3264
4038 lttng enable-event -u -a
4039 ./hello32
4040 ./hello64
4041 lttng stop
4042 ----
4043
4044 Use `lttng view` to verify both processes were
4045 successfully traced.
4046
4047
4048 [[controlling-tracing]]
4049 === Controlling tracing
4050
Once you're in possession of software that is properly
<<instrumenting,instrumented>> for LTTng tracing, be it thanks to
the built-in LTTng probes for the Linux kernel, a custom user
application or a custom Linux kernel, all that is left is actually
tracing it. As a user, you control LTTng tracing using a single
command-line interface: the `lttng` tool. This tool uses `liblttng-ctl`
behind the scenes to connect to and communicate with session daemons. LTTng
4058 session daemons may either be started manually (`lttng-sessiond`) or
4059 automatically by the `lttng` command when needed. Trace data may
4060 be forwarded to the network and used elsewhere using an LTTng relay
4061 daemon (`lttng-relayd`).
4062
The manpages of `lttng`, `lttng-sessiond` and `lttng-relayd` are fairly
complete, so this section does not reproduce them (we leave that
content for the
<<online-lttng-manpages,Online LTTng manpages>> section).
4067 This section is rather a tour of LTTng
4068 features through practical examples and tips.
4069
4070 If not already done, make sure you understand the core concepts
4071 and how LTTng components connect together by reading the
4072 <<understanding-lttng,Understanding LTTng>> chapter; this section
4073 assumes you are familiar with them.
4074
4075
4076 [[creating-destroying-tracing-sessions]]
4077 ==== Creating and destroying tracing sessions
4078
4079 Whatever you want to do with `lttng`, it has to happen inside a
4080 **tracing session**, created beforehand. A session, in general, is a
4081 per-user container of state. A tracing session is no different; it
4082 keeps a specific state of stuff like:
4083
4084 * session name
4085 * enabled/disabled channels with associated parameters
4086 * enabled/disabled events with associated log levels and filters
4087 * context information added to channels
4088 * tracing activity (started or stopped)
4089
4090 and more.
4091
4092 A single user may have many active tracing sessions. LTTng session
4093 daemons are the ultimate owners and managers of tracing sessions. For
4094 user space tracing, each user has its own session daemon. Since Linux
4095 kernel tracing requires root privileges, only `root`'s session daemon
4096 may enable and trace kernel events. However, `lttng` has a `--group`
4097 option (which is passed to `lttng-sessiond` when starting it) to
4098 specify the name of a _tracing group_ which selected users may be part
4099 of to be allowed to communicate with `root`'s session daemon. By
4100 default, the tracing group name is `tracing`.
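For example, to allow the (hypothetical) user `alice` to interact with `root`'s session daemon, you could add her to the `tracing` group; she needs to log out and back in for the change to take effect:

[role="term"]
----
sudo groupadd --force tracing
sudo usermod --append --groups tracing alice
----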
4101
4102 To create a tracing session, do:
4103
4104 [role="term"]
4105 ----
4106 lttng create my-session
4107 ----
4108
This creates a new tracing session named `my-session` and makes it
the current one. If you don't specify a name (running only
4111 `lttng create`), your tracing session is named `auto` followed by the
4112 current date and time. Traces
4113 are written in +\~/lttng-traces/__session__-+ followed
4114 by the tracing session's creation date/time by default, where
4115 +__session__+ is the tracing session name. To save them
4116 at a different location, use the `--output` option:
4117
4118 [role="term"]
4119 ----
4120 lttng create --output /tmp/some-directory my-session
4121 ----
4122
4123 You may create as many tracing sessions as you wish:
4124
4125 [role="term"]
4126 ----
4127 lttng create other-session
4128 lttng create yet-another-session
4129 ----
4130
4131 You may view all existing tracing sessions using the `list` command:
4132
4133 [role="term"]
4134 ----
4135 lttng list
4136 ----
4137
4138 The state of a _current tracing session_ is kept in path:{~/.lttngrc}. Each
4139 invocation of `lttng` reads this file to set its current tracing
4140 session name so that you don't have to specify a session name for each
4141 command. You could edit this file manually, but the preferred way to
4142 set the current tracing session is to use the `set-session` command:
4143
4144 [role="term"]
4145 ----
4146 lttng set-session other-session
4147 ----
4148
4149 Most `lttng` commands accept a `--session` option to specify the name
4150 of the target tracing session.
4151
4152 Any existing tracing session may be destroyed using the `destroy`
4153 command:
4154
4155 [role="term"]
4156 ----
4157 lttng destroy my-session
4158 ----
4159
Providing no argument to `lttng destroy` destroys the current
tracing session. Destroying a tracing session stops any tracing
running within it and frees resources acquired by the session daemon
and tracer side, making sure to flush all trace data.
4165
4166 You can't do much with LTTng using only the `create`, `set-session`
4167 and `destroy` commands of `lttng`, but it is essential to know them in
order to control LTTng tracing, which always happens within the scope of
4169 a tracing session.
4170
4171
4172 [[enabling-disabling-events]]
4173 ==== Enabling and disabling events
4174
4175 Inside a tracing session, individual events may be enabled or disabled
4176 so that tracing them may or may not generate trace data.
4177
4178 We sometimes use the term _event_ metonymically throughout this text to
4179 refer to a specific condition, or _rule_, that could lead, when
4180 satisfied, to an actual occurring event (a point at a specific position
4181 in source code/binary program, logical processor and time capturing
4182 some payload) being recorded as trace data. This specific condition is
4183 composed of:
4184
4185 . A **domain** (kernel, user space, `java.util.logging`, or log4j)
4186 (required).
4187 . One or many **instrumentation points** in source code or binary
4188 program (tracepoint name, address, symbol name, function name,
4189 logger name, amongst other types of probes) to be executed (required).
4190 . A **log level** (each instrumentation point declares its own log
4191 level) or log level range to match (optional; only valid for user
4192 space domain).
4193 . A **custom user expression**, or **filter**, that must evaluate to
4194 _true_ when a tracepoint is executed (optional; only valid for user
4195 space domain).
4196
4197 All conditions are specified using arguments passed to the
4198 `enable-event` command of the `lttng` tool.
4199
4200 Condition 1 is specified using either `--kernel`/`-k` (kernel),
4201 `--userspace`/`-u` (user space), `--jul`/`-j`
4202 (JUL), or `--log4j`/`-l` (log4j).
4203 Exactly one of those four arguments must be specified.
4204
4205 Condition 2 is specified using one of:
4206
4207 `--tracepoint`::
4208 Tracepoint.
4209
4210 `--probe`::
4211 Dynamic probe (address, symbol name or combination
4212 of both in binary program; only valid for kernel domain).
4213
4214 `--function`::
Function entry/exit (address, symbol name or
4216 combination of both in binary program; only valid for kernel domain).
4217
4218 `--syscall`::
4219 System call entry/exit (only valid for kernel domain).
4220
4221 When none of the above is specified, `enable-event` defaults to
4222 using `--tracepoint`.
4223
4224 Condition 3 is specified using one of:
4225
4226 `--loglevel`::
4227 Log level range from the specified level to the most severe
4228 level.
4229
4230 `--loglevel-only`::
4231 Specific log level.
4232
4233 See `lttng enable-event --help` for the complete list of log level
4234 names.
4235
4236 Condition 4 is specified using the `--filter` option. This filter is
4237 a C-like expression, potentially reading real-time values of event
4238 fields, that has to evaluate to _true_ for the condition to be satisfied.
4239 Event fields are read using plain identifiers while context fields
4240 must be prefixed with `$ctx.`. See `lttng enable-event --help` for
4241 all usage details.
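For example, the following rule (with hypothetical application, event and field names) only records `my_app:request` events when the `size` event field exceeds 4096 and the emitting thread's ID, read from the context, is 1234:

[role="term"]
----
lttng enable-event --userspace my_app:request \
    --filter 'size > 4096 && $ctx.vtid == 1234'
----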
4242
4243 The aforementioned arguments are combined to create and enable events.
4244 Each unique combination of arguments leads to a different
4245 _enabled event_. The log level and filter arguments are optional, their
4246 default values being respectively all log levels and a filter which
4247 always returns _true_.
4248
4249 Here are a few examples (you must
4250 <<creating-destroying-tracing-sessions,create a tracing session>>
4251 first):
4252
4253 [role="term"]
4254 ----
4255 lttng enable-event -u --tracepoint my_app:hello_world
4256 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
4257 lttng enable-event -u --tracepoint 'my_other_app:*'
lttng enable-event -u --tracepoint my_app:foo_bar \
    --filter 'some_field <= 23 && !other_field'
4260 lttng enable-event -k --tracepoint sched_switch
4261 lttng enable-event -k --tracepoint gpio_value
4262 lttng enable-event -k --function usb_probe_device usb_probe_device
4263 lttng enable-event -k --syscall --all
4264 ----
4265
4266 The wildcard symbol, `*`, matches _anything_ and may only be used at
4267 the end of the string when specifying a _tracepoint_. Make sure to
4268 use it between single quotes in your favorite shell to avoid
4269 undesired shell expansion.
4270
4271 System call events can be enabled individually, too:
4272
4273 [role="term"]
4274 ----
4275 lttng enable-event -k --syscall open
4276 lttng enable-event -k --syscall read
4277 lttng enable-event -k --syscall fork,chdir,pipe
4278 ----
4279
4280 The complete list of available system call events can be
4281 obtained using
4282
4283 [role="term"]
4284 ----
4285 lttng list --kernel --syscall
4286 ----
4287
4288 You can see a list of events (enabled or disabled) using
4289
4290 [role="term"]
4291 ----
4292 lttng list some-session
4293 ----
4294
4295 where `some-session` is the name of the desired tracing session.
4296
4297 What you're actually doing when enabling events with specific conditions
4298 is creating a **whitelist** of traceable events for a given channel.
4299 Thus, the following case presents redundancy:
4300
4301 [role="term"]
4302 ----
4303 lttng enable-event -u --tracepoint my_app:hello_you
4304 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG
4305 ----
4306
4307 The second command, matching a log level range, is useless since the first
4308 command enables all tracepoints matching the same name,
4309 `my_app:hello_you`.
4310
4311 Disabling an event is simpler: you only need to provide the event
4312 name to the `disable-event` command:
4313
4314 [role="term"]
4315 ----
4316 lttng disable-event --userspace my_app:hello_you
4317 ----
4318
4319 This name has to match a name previously given to `enable-event` (it
4320 has to be listed in the output of `lttng list some-session`).
4321 The `*` wildcard is supported, as long as you also used it in a
4322 previous `enable-event` invocation.
4323
4324 Disabling an event does not add it to some blacklist: it simply removes
4325 it from its channel's whitelist. This is why you cannot disable an event
4326 which wasn't previously enabled.
4327
4328 A disabled event doesn't generate any trace data, even if all its
4329 specified conditions are met.
4330
4331 Events may be enabled and disabled at will, either when LTTng tracers
4332 are active or not. Events may be enabled before a user space application
4333 is even started.
4334
4335
4336 [[basic-tracing-session-control]]
4337 ==== Basic tracing session control
4338
4339 Once you have
4340 <<creating-destroying-tracing-sessions,created a tracing session>>
4341 and <<enabling-disabling-events,enabled one or more events>>,
4342 you may activate the LTTng tracers for the current tracing session at
4343 any time:
4344
4345 [role="term"]
4346 ----
4347 lttng start
4348 ----
4349
4350 Subsequently, you may stop the tracers:
4351
4352 [role="term"]
4353 ----
4354 lttng stop
4355 ----
4356
4357 LTTng is very flexible: user space applications may be launched before
4358 or after the tracers are started. Events are only recorded if they
4359 are properly enabled and if they occur while tracers are active.
4360
4361 A tracing session name may be passed to both the `start` and `stop`
4362 commands to start/stop tracing a session other than the current one.
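For example, to control the session named `other-session` without making it the current one:

[role="term"]
----
lttng start other-session
lttng stop other-session
----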
4363
4364
4365 [[enabling-disabling-channels]]
4366 ==== Enabling and disabling channels
4367
4368 <<event,As mentioned>> in the
4369 <<understanding-lttng,Understanding LTTng>> chapter, enabled
4370 events are contained in a specific channel, itself contained in a
4371 specific tracing session. A channel is a group of events with
4372 tunable parameters (event loss mode, sub-buffer size, number of
4373 sub-buffers, trace file sizes and count, to name a few). A given channel
4374 may only be responsible for enabled events belonging to one domain:
4375 either kernel or user space.
4376
4377 If you only used the `create`, `enable-event` and `start`/`stop`
4378 commands of the `lttng` tool so far, one or two channels were
4379 automatically created for you (one for the kernel domain and/or one
4380 for the user space domain). The default channels are both named
4381 `channel0`; channels from different domains may have the same name.
4382
4383 The current channels of a given tracing session can be viewed with
4384
4385 [role="term"]
4386 ----
4387 lttng list some-session
4388 ----
4389
4390 where `some-session` is the name of the desired tracing session.
4391
4392 To create and enable a channel, use the `enable-channel` command:
4393
4394 [role="term"]
4395 ----
4396 lttng enable-channel --kernel my-channel
4397 ----
4398
4399 This creates a kernel domain channel named `my-channel` with
4400 default parameters in the current tracing session.
4401
4402 [NOTE]
4403 ====
Because of a current limitation, all
channels must be _created_ prior to beginning tracing in a
given tracing session, that is, before the first time you run
`lttng start`.
4408
4409 Since a channel is automatically created by
4410 `enable-event` only for the specified domain, you cannot,
4411 for example, enable a kernel domain event, start tracing and then
4412 enable a user space domain event because no user space channel
4413 exists yet and it's too late to create one.
4414
4415 For this reason, make sure to configure your channels properly
4416 before starting the tracers for the first time!
4417 ====
4418
4419 Here's another example:
4420
4421 [role="term"]
4422 ----
lttng enable-channel --userspace --session other-session --overwrite \
    --tracefile-size 1048576 1mib-channel
4425 ----
4426
This creates a user space domain channel named `1mib-channel` in
the tracing session named `other-session` that loses the oldest events
by overwriting them with new ones (instead of the default mode of
discarding the newest events) and saves trace files with a maximum
size of 1{nbsp}MiB each.

Note that channels may also be created using the `--channel` option of
the `enable-event` command when the provided channel name doesn't exist
for the specified domain:

[role="term"]
----
lttng enable-event --kernel --channel some-channel sched_switch
----

If no kernel domain channel named `some-channel` existed before calling
the above command, it would be created with default parameters.

You may enable the same event in two different channels:

[role="term"]
----
lttng enable-event --userspace --channel my-channel app:tp
lttng enable-event --userspace --channel other-channel app:tp
----

If both channels are enabled, each occurrence of the `app:tp` event
generates two recorded events, one for each channel.

Disabling a channel is done with the `disable-channel` command:

[role="term"]
----
lttng disable-channel --kernel some-channel
----

The state of a channel takes precedence over the individual states of
events within it: events belonging to a disabled channel, even if they
are enabled, won't be recorded.



[[fine-tuning-channels]]
===== Fine-tuning channels

There are various parameters that may be fine-tuned with the
`enable-channel` command. These parameters are well documented in
man:lttng(1) and in the <<channel,Channel>> section of the
<<understanding-lttng,Understanding LTTng>> chapter. For basic
tracing needs, their default values should be just fine, but here are a
few examples to break the ice.

As the frequency of recorded events increases (either because the
event throughput is actually higher or because you enabled more events
than usual), _event loss_ might be experienced. Since LTTng never
waits, by design, for sub-buffer space availability (non-blocking
tracer), when a sub-buffer is full and no empty sub-buffers are left,
there are two possible outcomes: either the new events that do not fit
are rejected, or they start replacing the oldest recorded events.
The choice of which algorithm to use is a per-channel parameter, the
default being discarding the newest events until there is some space
left. If your situation always needs the latest events at the expense
of writing over the oldest ones, create a channel with the `--overwrite`
option:

[role="term"]
----
lttng enable-channel --kernel --overwrite my-channel
----

When an event is lost, it means no space was available in any
sub-buffer to accommodate it. Thus, if you want to cope with sporadic
high event throughput situations and avoid losing events, you need to
allocate more room for storing them in memory. This can be done by
either increasing the size of sub-buffers or by adding sub-buffers.
The following example creates a user space domain channel with
16{nbsp}sub-buffers of 512{nbsp}kiB each:

[role="term"]
----
lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel
----

Both values need to be powers of two, otherwise they are rounded up
to the next one.

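For instance, values that are not powers of two are silently adjusted
(the channel name below is purely illustrative):

[role="term"]
----
lttng enable-channel --userspace --num-subbuf 10 --subbuf-size 100k rounded-channel
----

This channel actually gets 16{nbsp}sub-buffers of 128{nbsp}kiB each,
the next powers of two.
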
Two other interesting available parameters of `enable-channel` are
`--tracefile-size` and `--tracefile-count`, which respectively limit
the size of each trace file and their count for a given channel.
When the number of written trace files reaches its limit for a given
channel-CPU pair, the next trace file overwrites the very first
one. The following example creates a kernel domain channel with a
maximum of three trace files of 1{nbsp}MiB each:

[role="term"]
----
lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel
----

An efficient way to make sure lots of events are generated is enabling
all kernel events in this channel and starting the tracer:

[role="term"]
----
lttng enable-event --kernel --all --channel my-channel
lttng start
----

After a few seconds, look at trace files in your tracing session
output directory. For two CPUs, it should look like:

----
my-channel_0_0   my-channel_1_0
my-channel_0_1   my-channel_1_1
my-channel_0_2   my-channel_1_2
----

Amongst the files above, you might see one in each group with a size
lower than 1{nbsp}MiB: they are the files currently being written.

Since all those small files are valid LTTng trace files, LTTng trace
viewers may read them. It is the viewer's responsibility to properly
merge the streams so as to present an ordered list to the user.
http://diamon.org/babeltrace[Babeltrace]
merges LTTng trace files correctly and is fast at doing it.


[[adding-context]]
==== Adding some context to channels

If you read all the sections of
<<controlling-tracing,Controlling tracing>> so far, you should be
able to create tracing sessions, create and enable channels and events
within them and start/stop the LTTng tracers. Event fields recorded in
trace files provide important information about occurring events, but
sometimes external context may help you solve a problem faster. This
section discusses how to add context information to events of a
specific channel using the `lttng` tool.

There are various available context values which can accompany events
recorded by LTTng, for example:

* **process information**:
** identifier (PID)
** name
** priority
** scheduling priority (niceness)
** thread identifier (TID)
* the **hostname** of the system on which the event occurred
* plenty of **performance counters** using perf, for example:
** CPU cycles, stalled cycles, idle cycles, and the other cycle types
** cache misses
** branch instructions, misses, loads
** CPU faults

The full list is available in the output of `lttng add-context --help`.
Some of them are reserved for a specific domain (kernel or
user space) while others are available for both.

To add context information to one or all channels of a given tracing
session, use the `add-context` command:

[role="term"]
----
lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles
----

The above example adds the virtual process identifier and per-thread
CPU cycles count values to all recorded user space domain events of the
current tracing session. Use the `--channel` option to select a specific
channel:

[role="term"]
----
lttng add-context --kernel --channel my-channel --type tid
----

adds the thread identifier value to all recorded kernel domain events
in the channel `my-channel` of the current tracing session.

Beware that context information cannot be removed from channels once
it's added for a given tracing session.


[role="since-2.5"]
[[saving-loading-tracing-session]]
==== Saving and loading tracing session configurations

Configuring a tracing session may be long: creating and enabling
channels with specific parameters, enabling kernel and user space
domain events with specific log levels and filters, and adding context
to some channels are just a few of the many possible operations using
the `lttng` command line tool. If you're going to use LTTng to solve real
world problems, chances are you're going to have to record events using
the same tracing session setup over and over, modifying a few variables
each time in your instrumented program or environment. To avoid
constant tracing session reconfiguration, the `lttng` tool is able to
save and load tracing session configurations to/from XML files.

To save a given tracing session configuration, do:

[role="term"]
----
lttng save my-session
----

where `my-session` is the name of the tracing session to save. Tracing
session configurations are saved to dir:{~/.lttng/sessions} by default;
use the `--output-path` option to change this destination directory.

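For example, assuming a custom destination directory (the
dir:{~/my-lttng-configs} path below is purely illustrative):

[role="term"]
----
lttng save --output-path ~/my-lttng-configs my-session
----

The resulting file may later be loaded with the `--input-path` option
of the `load` command.
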
All configuration parameters are saved:

* tracing session name
* trace data output path
* channels with their state and all their parameters
* context information added to channels
* events with their state, log level and filter
* tracing activity (started or stopped)

To load a tracing session, simply do:

[role="term"]
----
lttng load my-session
----

or, if you used a custom path:

[role="term"]
----
lttng load --input-path /path/to/my-session.lttng
----

Your saved tracing session is restored as if you just configured
it manually.


[[sending-trace-data-over-the-network]]
==== Sending trace data over the network

The possibility of sending trace data over the network comes as a
built-in feature of LTTng-tools. For this to be possible, an LTTng
_relay daemon_ must be executed and listening on the machine where
trace data is to be received, and the user must create a tracing
session using appropriate options to forward trace data to the remote
relay daemon.

The relay daemon listens on two different TCP ports: one for control
information and the other for actual trace data.

Starting the relay daemon on the remote machine is easy:

[role="term"]
----
lttng-relayd
----

This makes it listen to its default ports: 5342 for control and
5343 for trace data. The `--control-port` and `--data-port` options may
be used to specify different ports.

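For example, should the default ports be unavailable, arbitrary ones
may be chosen (the port numbers below are illustrative):

[role="term"]
----
lttng-relayd --control-port 6000 --data-port 6001
----
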
Traces received by `lttng-relayd` are written to
+\~/lttng-traces/__hostname__/__session__+ by
default, where +__hostname__+ is the host name of the
traced (monitored) system and +__session__+ is the
tracing session name. Use the `--output` option to write trace data
outside dir:{~/lttng-traces}.

On the sending side, a tracing session must be created using the
`lttng` tool with the `--set-url` option to connect to the distant
relay daemon:

[role="term"]
----
lttng create my-session --set-url net://distant-host
----

The URL format is described in the output of `lttng create --help`.
The above example uses the default ports; the `--ctrl-url` and
`--data-url` options may be used to set the control and data URLs
individually.

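As a sketch, assuming the relay daemon listens on custom control and
data ports (the host name, URL scheme and port numbers below are
illustrative; see `lttng create --help` for the exact URL format):

[role="term"]
----
lttng create my-session --ctrl-url tcp://distant-host:6000 \
                        --data-url tcp://distant-host:6001
----
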
Once this basic setup is completed and the connection is established,
you may use the `lttng` tool on the target machine as usual; everything
you do is transparently forwarded to the remote machine if needed.
For example, a parameter changing the maximum size of trace files
only has an effect on the distant relay daemon actually writing
the trace.


[role="since-2.4"]
[[lttng-live]]
==== Viewing events as they arrive

We have seen how trace files may be produced by LTTng out of generated
application and Linux kernel events. We have seen that those trace files
may be either recorded locally by consumer daemons or remotely using
a relay daemon. And we have seen that the maximum size and count of
trace files is configurable for each channel. With all those features,
it's still not possible to read a trace file as it is being written
because it could be incomplete and appear corrupted to the viewer.
There is a way to view events as they arrive, however: using
_LTTng live_.

LTTng live is implemented, in LTTng, solely on the relay daemon side.
As trace data is sent over the network to a relay daemon by a (possibly
remote) consumer daemon, a _tee_ is created: trace data is recorded to
trace files _as well as_ being transmitted to a connected live viewer:

[role="img-90"]
.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
image::lttng-live.png[]

In order to use this feature, a tracing session must be created in live
mode on the target system:

[role="term"]
----
lttng create --live
----

An optional parameter may be passed to `--live` to set the period
(in microseconds) between flushes to the network
(1{nbsp}second is the default). With:

[role="term"]
----
lttng create --live 100000
----

the daemons flush their data every 100{nbsp}ms.

If no network output is specified to the `create` command, a local
relay daemon is spawned. In this very common case, viewing a live
trace is easy: enable events and start tracing as usual, then use
`lttng view` to start the default live viewer:

[role="term"]
----
lttng view
----

The correct arguments are passed to the live viewer so that it
may connect to the local relay daemon and start reading live events.

You may also wish to use a live viewer not running on the target
system. In this case, you should specify a network output when using
the `create` command (`--set-url` or `--ctrl-url`/`--data-url` options).
A distant LTTng relay daemon should also be started to receive control
and trace data. By default, `lttng-relayd` listens on 127.0.0.1:5344
for an LTTng live connection. The desired listening URL may be
specified using its `--live-port` option.

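For instance, the distant relay daemon could be made to listen for live
connections elsewhere (the URL below is illustrative; see the
`lttng-relayd` help for the exact `--live-port` argument format):

[role="term"]
----
lttng-relayd --live-port tcp://0.0.0.0:5600
----
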
The
http://diamon.org/babeltrace[`babeltrace`]
viewer supports LTTng live as one of its input formats. `babeltrace` is
the default viewer when using `lttng view`. To use it manually, first
list active tracing sessions by doing the following (assuming the relay
daemon to connect to runs on the same host):

[role="term"]
----
babeltrace --input-format lttng-live net://localhost
----

Then, choose a tracing session and start viewing events as they arrive
using LTTng live:

[role="term"]
----
babeltrace --input-format lttng-live net://localhost/host/hostname/my-session
----



[role="since-2.3"]
[[taking-a-snapshot]]
==== Taking a snapshot

The normal behavior of LTTng is to record trace data as trace files.
This is ideal for keeping a long history of events that occurred on
the target system and applications, but may be too much data in some
situations. For example, you may wish to trace your application
continuously until some critical situation happens, in which case you
would only need the latest few recorded events to perform the desired
analysis, not multi-gigabyte trace files.

LTTng has an interesting feature called _snapshots_. When creating
a tracing session in snapshot mode, no trace files are written; the
tracers' sub-buffers are constantly overwriting the oldest recorded
events with the newest. At any time, either when the tracers are started
or stopped, you may take a snapshot of those sub-buffers.

There is no difference between the format of a normal trace file and the
format of a snapshot: viewers of LTTng traces also support LTTng
snapshots. By default, snapshots are written to disk, but they may also
be sent over the network.

To create a tracing session in snapshot mode, do:

[role="term"]
----
lttng create --snapshot my-snapshot-session
----

Next, enable channels, events and add context to channels as usual.
Once a tracing session is created in snapshot mode, channels are
forced to use the
<<channel-overwrite-mode-vs-discard-mode,overwrite>> mode
(`--overwrite` option of the `enable-channel` command; also called
_flight recorder mode_) and have an `mmap()` channel type
(`--output mmap`).

Start tracing. When you're ready to take a snapshot, do:

[role="term"]
----
lttng snapshot record --name my-snapshot
----

This records a snapshot named `my-snapshot` of all channels of
all domains of the current tracing session. By default, snapshot files
are recorded in the path returned by `lttng snapshot list-output`. You
may change this path or decide to send snapshots over the network
using either:

. an output path/URL specified when creating the tracing session
(`lttng create`)
. an added snapshot output path/URL using
`lttng snapshot add-output`
. an output path/URL provided directly to the
`lttng snapshot record` command

Method 3 overrides method 2, which overrides method 1. When specifying
a URL, a relay daemon must be listening on some machine (see
<<sending-trace-data-over-the-network,Sending trace data over the network>>).

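As a sketch of method 2 (the host name below is illustrative):

[role="term"]
----
lttng snapshot add-output net://distant-host
lttng snapshot record --name my-snapshot
----

A path or URL may also be passed directly to `lttng snapshot record`
(method 3), overriding the added output for that snapshot only.
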
If you need to make absolutely sure that the output file won't be
larger than a certain limit, you can set a maximum snapshot size when
taking it with the `--max-size` option:

[role="term"]
----
lttng snapshot record --name my-snapshot --max-size 2M
----

Older recorded events are discarded in order to respect this
maximum size.


[role="since-2.6"]
[[mi]]
==== Machine interface

The `lttng` tool aims to provide command output that is as
human-readable as possible. While this output is easy for a human
being to parse, machines have a hard time.

This is why the `lttng` tool provides the general `--mi` option, which
takes a machine interface output format as its argument. As of the
latest LTTng stable release, only the `xml` format is supported. A
schema definition (XSD) is made
https://github.com/lttng/lttng-tools/blob/master/src/common/mi_lttng.xsd[available]
to ease the integration with external tools as much as possible.

The `--mi` option can be used in conjunction with all `lttng` commands.
Here are some examples:

[role="term"]
----
lttng --mi xml create some-session
lttng --mi xml list some-session
lttng --mi xml list --kernel
lttng --mi xml enable-event --kernel --syscall open
lttng --mi xml start
----


[[reference]]
== Reference

This chapter presents various references for LTTng packages such as links
to online manpages, tables needed by the rest of the text, descriptions
of library functions, and more.


[[online-lttng-manpages]]
=== Online LTTng manpages

LTTng packages currently install the following link:/man[man pages],
available online using the links below:

* **LTTng-tools**
** man:lttng(1)
** man:lttng-sessiond(8)
** man:lttng-relayd(8)
* **LTTng-UST**
** man:lttng-gen-tp(1)
** man:lttng-ust(3)
** man:lttng-ust-cyg-profile(3)
** man:lttng-ust-dl(3)


[[lttng-ust-ref]]
=== LTTng-UST

This section presents references of the LTTng-UST package.


[[liblttng-ust]]
==== LTTng-UST library (+liblttng&#8209;ust+)

The LTTng-UST library, or `liblttng-ust`, is the main shared object
against which user applications are linked to make LTTng user space
tracing possible.

The <<c-application,C application>> guide shows the complete
process to instrument, build and run a C/$$C++$$ application using
LTTng-UST, while this section contains a few important tables.


[[liblttng-ust-tp-fields]]
===== Tracepoint fields macros (for `TP_FIELDS()`)

The available macros to define tracepoint fields, which should be listed
within `TP_FIELDS()` in `TRACEPOINT_EVENT()`, are:

[role="growable func-desc",cols="asciidoc,asciidoc"]
.Available macros to define LTTng-UST tracepoint fields
|====
|Macro |Description and parameters

|
+ctf_integer(__t__, __n__, __e__)+

+ctf_integer_nowrite(__t__, __n__, __e__)+
|
Standard integer, displayed in base 10.

+__t__+::
Integer C type (`int`, `long`, `size_t`, ...).

+__n__+::
Field name.

+__e__+::
Argument expression.

|+ctf_integer_hex(__t__, __n__, __e__)+
|
Standard integer, displayed in base 16.

+__t__+::
Integer C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

|+ctf_integer_network(__t__, __n__, __e__)+
|
Integer in network byte order (big endian), displayed in base 10.

+__t__+::
Integer C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

|+ctf_integer_network_hex(__t__, __n__, __e__)+
|
Integer in network byte order, displayed in base 16.

+__t__+::
Integer C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

|
+ctf_float(__t__, __n__, __e__)+

+ctf_float_nowrite(__t__, __n__, __e__)+
|
Floating point number.

+__t__+::
Floating point number C type (`float` or `double`).

+__n__+::
Field name.

+__e__+::
Argument expression.

|
+ctf_string(__n__, __e__)+

+ctf_string_nowrite(__n__, __e__)+
|
Null-terminated string; undefined behavior if +__e__+ is `NULL`.

+__n__+::
Field name.

+__e__+::
Argument expression.

|
+ctf_array(__t__, __n__, __e__, __s__)+

+ctf_array_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of integers.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

+__s__+::
Number of elements.

|
+ctf_array_text(__t__, __n__, __e__, __s__)+

+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array, printed as text.

The string does not need to be null-terminated.

+__t__+::
Array element C type (always `char`).

+__n__+::
Field name.

+__e__+::
Argument expression.

+__s__+::
Number of elements.

|
+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers.

The type of +__E__+ needs to be unsigned.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

+__T__+::
Length expression C type.

+__E__+::
Length expression.

|
+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array, displayed as text.

The string does not need to be null-terminated.

The type of +__E__+ needs to be unsigned.

The behaviour is undefined if +__e__+ is `NULL`.

+__t__+::
Sequence element C type (always `char`).

+__n__+::
Field name.

+__e__+::
Argument expression.

+__T__+::
Length expression C type.

+__E__+::
Length expression.
|====

The `_nowrite` versions omit themselves from the session trace, but are
otherwise identical. This means the `_nowrite` fields won't be written
in the recorded trace. Their primary purpose is to make some
of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.

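For example, assuming a user space tracepoint with a `_nowrite` integer
field named `poll_count` (the provider, tracepoint and field names below
are purely illustrative), a filter may use the field even though it
never appears in the recorded trace:

[role="term"]
----
lttng enable-event --userspace my_app:my_tp --filter 'poll_count > 100'
----
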

[[liblttng-ust-tracepoint-loglevel]]
===== Tracepoint log levels (for `TRACEPOINT_LOGLEVEL()`)

The following list shows the available log level values for the
`TRACEPOINT_LOGLEVEL()` macro:

`TRACE_EMERG`::
System is unusable.

`TRACE_ALERT`::
Action must be taken immediately.

`TRACE_CRIT`::
Critical conditions.

`TRACE_ERR`::
Error conditions.

`TRACE_WARNING`::
Warning conditions.

`TRACE_NOTICE`::
Normal, but significant, condition.

`TRACE_INFO`::
Informational message.

`TRACE_DEBUG_SYSTEM`::
Debug information with system-level scope (set of programs).

`TRACE_DEBUG_PROGRAM`::
Debug information with program-level scope (set of processes).

`TRACE_DEBUG_PROCESS`::
Debug information with process-level scope (set of modules).

`TRACE_DEBUG_MODULE`::
Debug information with module (executable/library) scope (set of units).

`TRACE_DEBUG_UNIT`::
Debug information with compilation unit scope (set of functions).

`TRACE_DEBUG_FUNCTION`::
Debug information with function-level scope.

`TRACE_DEBUG_LINE`::
Debug information with line-level scope (TRACEPOINT_EVENT default).

`TRACE_DEBUG`::
Debug-level message.

Log levels `TRACE_EMERG` through `TRACE_INFO` and `TRACE_DEBUG` match
http://man7.org/linux/man-pages/man3/syslog.3.html[syslog]
level semantics. Log levels `TRACE_DEBUG_SYSTEM` through `TRACE_DEBUG`
offer more fine-grained selection of debug information.


[[lttng-modules-ref]]
=== LTTng-modules

This section presents references of the LTTng-modules package.


[[lttng-modules-tp-struct-entry]]
==== Tracepoint fields macros (for `TP_STRUCT__entry()`)

This table describes possible entries for the `TP_STRUCT__entry()` part
of `LTTNG_TRACEPOINT_EVENT()`:

[role="growable func-desc",cols="asciidoc,asciidoc"]
.Available entries for `TP_STRUCT__entry()` (in `LTTNG_TRACEPOINT_EVENT()`)
|====
|Macro |Description and parameters

|+\__field(__t__, __n__)+
|
Standard integer, displayed in base 10.

+__t__+::
Integer C type (`int`, `unsigned char`, `size_t`, ...).

+__n__+::
Field name.

|+\__field_hex(__t__, __n__)+
|
Standard integer, displayed in base 16.

+__t__+::
Integer C type.

+__n__+::
Field name.

|+\__field_oct(__t__, __n__)+
|
Standard integer, displayed in base 8.

+__t__+::
Integer C type.

+__n__+::
Field name.

|+\__field_network(__t__, __n__)+
|
Integer in network byte order (big endian), displayed in base 10.

+__t__+::
Integer C type.

+__n__+::
Field name.

|+\__field_network_hex(__t__, __n__)+
|
Integer in network byte order (big endian), displayed in base 16.

+__t__+::
Integer C type.

+__n__+::
Field name.

|+\__array(__t__, __n__, __s__)+
|
Statically-sized array, elements displayed in base 10.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__s__+::
Number of elements.

|+\__array_hex(__t__, __n__, __s__)+
|
Statically-sized array, elements displayed in base 16.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__s__+::
Number of elements.

|+\__array_text(__t__, __n__, __s__)+
|
Statically-sized array, displayed as text.

+__t__+::
Array element C type (always `char`).

+__n__+::
Field name.

+__s__+::
Number of elements.

|+\__dynamic_array(__t__, __n__, __s__)+
|
Dynamically-sized array, displayed in base 10.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__s__+::
Length C expression.

|+\__dynamic_array_hex(__t__, __n__, __s__)+
|
Dynamically-sized array, displayed in base 16.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__s__+::
Length C expression.

|+\__dynamic_array_text(__t__, __n__, __s__)+
|
Dynamically-sized array, displayed as text.

+__t__+::
Array element C type (always `char`).

+__n__+::
Field name.

+__s__+::
Length C expression.

|+\__string(__n__, __s__)+
|
Null-terminated string.

The behaviour is undefined if +__s__+ is `NULL`.

+__n__+::
Field name.

+__s__+::
String source (pointer).
|====

The above macros should cover the majority of cases. For advanced items,
see path:{probes/lttng-events.h}.


[[lttng-modules-tp-fast-assign]]
==== Tracepoint assignment macros (for `TP_fast_assign()`)

This table describes possible entries for the `TP_fast_assign()` part
of `LTTNG_TRACEPOINT_EVENT()`:

[role="growable func-desc",cols="asciidoc,asciidoc"]
.Available entries for `TP_fast_assign()` (in `LTTNG_TRACEPOINT_EVENT()`)
|====
|Macro |Description and parameters

|+tp_assign(__d__, __s__)+
|
Assignment of C expression +__s__+ to tracepoint field +__d__+.

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).

|+tp_memcpy(__d__, __s__, __l__)+
|
Memory copy of +__l__+ bytes from +__s__+ to tracepoint field
+__d__+ (use with array fields).

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).

+__l__+::
Number of bytes to copy.

|+tp_memcpy_from_user(__d__, __s__, __l__)+
|
Memory copy of +__l__+ bytes from user space +__s__+ to tracepoint
field +__d__+ (use with array fields).

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).

+__l__+::
Number of bytes to copy.

|+tp_memcpy_dyn(__d__, __s__)+
|
Memory copy of a dynamically-sized array from +__s__+ to tracepoint
field +__d__+.

The number of bytes to copy is known from the field's length expression
(use with dynamically-sized array fields).

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).

|+tp_strcpy(__d__, __s__)+
|
String copy of +__s__+ to tracepoint field +__d__+ (use with string
fields).

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).
|====