The LTTng Documentation
=======================
Philippe Proulx <pproulx@efficios.com>
v2.9, 3 October 2017


include::../common/copyright.txt[]


include::../common/welcome.txt[]


include::../common/audience.txt[]


[[chapters]]
=== What's in this documentation?

The LTTng Documentation is divided into the following sections:

* **<<nuts-and-bolts,Nuts and bolts>>** explains the
rudiments of software tracing and the rationale behind the
LTTng project.
+
You can skip this section if you're familiar with software tracing and
with the LTTng project.

* **<<installing-lttng,Installation>>** describes the steps to
install the LTTng packages on common Linux distributions and from
their sources.
+
You can skip this section if you already properly installed LTTng on
your target system.

* **<<getting-started,Quick start>>** is a concise guide to
getting started quickly with LTTng kernel and user space tracing.
+
We recommend this section if you're new to LTTng or to software tracing
in general.
+
You can skip this section if you're not new to LTTng.

* **<<core-concepts,Core concepts>>** explains the concepts at
the heart of LTTng.
+
It's a good idea to become familiar with the core concepts
before attempting to use the toolkit.

* **<<plumbing,Components of LTTng>>** describes the various components
of the LTTng machinery, like the daemons, the libraries, and the
command-line interface.
* **<<instrumenting,Instrumentation>>** shows different ways to
instrument user applications and the Linux kernel.
+
Instrumenting source code is essential to provide a meaningful
source of events.
+
You can skip this section if you do not have a programming background.

* **<<controlling-tracing,Tracing control>>** is divided into topics
which demonstrate how to use the vast array of features that
LTTng{nbsp}{revision} offers.
* **<<reference,Reference>>** contains reference tables.
* **<<glossary,Glossary>>** is a specialized dictionary of terms related
to LTTng or to the field of software tracing.


include::../common/convention.txt[]


include::../common/acknowledgements.txt[]


[[whats-new]]
== What's new in LTTng {revision}?

LTTng{nbsp}{revision} bears the name _Joannès_. A Berliner Weisse style
beer from the http://letreflenoir.com/[Trèfle Noir] microbrewery in
https://en.wikipedia.org/wiki/Rouyn-Noranda[Rouyn-Noranda], the
https://www.beeradvocate.com/beer/profile/20537/238967/[_**Joannès**_]
is a tangy beer with a distinct pink dress and intense fruit flavor,
thanks to the presence of fresh blackcurrant grown in Témiscamingue.

New features and changes in LTTng{nbsp}{revision}:

* **Tracing control**:
** You can override the name or the URL of a tracing session
configuration when you use man:lttng-load(1) thanks to the new
opt:lttng-load(1):--override-name and
opt:lttng-load(1):--override-url options.
** The new `lttng regenerate` command replaces the now deprecated
`lttng metadata` command of LTTng 2.8. man:lttng-regenerate(1) can
also <<regenerate-statedump,generate the state dump event records>>
of a given tracing session on demand, a handy feature when
<<taking-a-snapshot,taking a snapshot>>.
** You can add PMU counters by raw ID with man:lttng-add-context(1):
+
--
[role="term"]
----
$ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
----
--
+
The format of the raw ID is the same as used with man:perf-record(1).
See <<adding-context,Add context fields to a channel>> for more
examples.

** The LTTng <<lttng-relayd,relay daemon>> is now supported on
OS{nbsp}X and macOS for a smoother integration within a trace
analysis workflow, regardless of the platform used.

* **User space tracing**:
** Improved performance (tested on x86-64 and ARMv7-A
(https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
architectures).
** New helper library (`liblttng-ust-fd`) to help with
<<liblttng-ust-fd,applications which close file descriptors that
don't belong to them>>, for example, in a loop which closes file
descriptors after man:fork(2), or BSD's `closeall()`.
** More accurate <<liblttng-ust-dl,dynamic linker instrumentation>> and
state dump event records, especially when a dynamically loaded
library manually loads its own dependencies.
** New `ctf_*()` field definition macros (see man:lttng-ust(3)):
*** `ctf_array_hex()`
*** `ctf_array_network()`
*** `ctf_array_network_hex()`
*** `ctf_sequence_hex()`
*** `ctf_sequence_network()`
*** `ctf_sequence_network_hex()`
** New `lttng_ust_loaded` weak symbol defined by `liblttng-ust` for
an application to know if the LTTng-UST shared library is loaded
or not:
+
--
[source,c]
----
#include <stdio.h>

int lttng_ust_loaded __attribute__((weak));

int main(void)
{
    if (lttng_ust_loaded) {
        puts("LTTng-UST is loaded!");
    } else {
        puts("LTTng-UST is not loaded!");
    }

    return 0;
}
----
--
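+
A quick way to see the weak-symbol mechanism in action is to build the
snippet above and run it with and without preloading the library (a
sketch; the `app.c` file name is only an example, and the last command
assumes `liblttng-ust.so` is installed where the dynamic linker can
find it):
+
--
[role="term"]
----
$ gcc -o app app.c
$ ./app
LTTng-UST is not loaded!
$ LD_PRELOAD=liblttng-ust.so ./app
LTTng-UST is loaded!
----
--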

** LTTng-UST thread names have the `-ust` suffix.

* **Linux kernel tracing**:
** Improved performance (tested on x86-64 and ARMv7-A
(https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
architectures).
** New enumeration <<lttng-modules-tp-fields,field definition macros>>:
`ctf_enum()` and `ctf_user_enum()`.
** IPv4, IPv6, and TCP header data is recorded in the event records
produced by tracepoints starting with `net_`.
** Detailed system call event records: `select`, `pselect6`, `poll`,
`ppoll`, `epoll_wait`, `epoll_pwait`, and `epoll_ctl` on all
architectures supported by LTTng-modules, and `accept4` on x86-64.
** New I²C instrumentation: the `extract_sensitive_payload` parameter
of the new `lttng-probe-i2c` LTTng module controls whether or not
the payloads of I²C messages are recorded in I²C event records, since
they may contain sensitive data (for example, keystrokes).
** When the LTTng kernel modules are built into the Linux kernel image,
the `CONFIG_TRACEPOINTS` configuration option is automatically
selected.


[[nuts-and-bolts]]
== Nuts and bolts

What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
generation_ is a modern toolkit for tracing Linux systems and
applications. So your first question might be:
**what is tracing?**


[[what-is-tracing]]
=== What is tracing?

As the history of software engineering progressed and led to what
we now take for granted--complex, numerous and
interdependent software applications running in parallel on
sophisticated operating systems like Linux--the authors of such
components, software developers, began feeling a natural
urge to have tools that would ensure the robustness and good performance
of their masterpieces.

One major achievement in this field is, inarguably, the
https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
an essential tool for developers to find and fix bugs. But even the best
debugger won't help make your software run faster, and nowadays, faster
software means either more work done by the same hardware, or cheaper
hardware for the same work.

A _profiler_ is often the tool of choice to identify performance
bottlenecks. Profiling is suitable to identify _where_ performance is
lost in a given software. The profiler outputs a profile, a statistical
summary of observed events, which you may use to discover which
functions took the most time to execute. However, a profiler won't
report _why_ some identified functions are the bottleneck. Bottlenecks
might only occur when specific conditions are met, conditions that are
sometimes impossible to capture by a statistical profiler, or impossible
to reproduce with an application altered by the overhead of an
event-based profiler. For a thorough investigation of software
performance issues, a history of execution is essential, with the
recorded values of variables and context fields you choose, and
with as little influence as possible on the instrumented software. This
is where tracing comes in handy.

_Tracing_ is a technique used to understand what goes on in a running
software system. The software used for tracing is called a _tracer_,
which is conceptually similar to a tape recorder. When recording,
specific instrumentation points placed in the software source code
generate events that are saved on a giant tape: a _trace_ file. You
can trace user applications and the operating system at the same time,
opening the possibility of resolving a wide range of problems that would
otherwise be extremely challenging.

Tracing is often compared to _logging_. However, tracers and loggers are
two different tools, serving two different purposes. Tracers are
designed to record much lower-level events that occur much more
frequently than log messages, often in the range of thousands per
second, with very little execution overhead. Logging is more appropriate
for a very high-level analysis of less frequent events: user accesses,
exceptional conditions (errors and warnings, for example), database
transactions, instant messaging communications, and such. Simply put,
logging is one of the many use cases that can be satisfied with tracing.

The list of recorded events inside a trace file can be read manually
like a log file for the maximum level of detail, but it is generally
much more interesting to perform application-specific analyses to
produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do this.

In the end, this is what LTTng is: a powerful, open source set of
tools to trace the Linux kernel and user applications at the same time.
LTTng is composed of several components actively maintained and
developed by its link:/community/#where[community].


[[lttng-alternatives]]
=== Alternatives to noch:{LTTng}

Excluding proprietary solutions, a few competing software tracers
exist for Linux:

* https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
user scripts and is responsible for loading code into the
Linux kernel for further execution and collecting the outputted data.
* https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
subsystem in the Linux kernel in which a virtual machine can execute
programs passed from the user space to the kernel. You can attach
such programs to tracepoints and KProbes thanks to a system call, and
they can output data to the user space when executed thanks to
different mechanisms (pipe, VM register values, and eBPF maps, to name
a few).
* https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
is the de facto function tracer of the Linux kernel. Its user
interface is a set of special files in sysfs.
* https://perf.wiki.kernel.org/[perf] is
a performance analyzing tool for Linux which supports hardware
performance counters, tracepoints, as well as other counters and
types of probes. perf's controlling utility is the cmd:perf command
line/curses tool.
* http://linux.die.net/man/1/strace[strace]
is a command-line utility which records system calls made by a
user process, as well as signal deliveries and changes of process
state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
to fulfill its function.
* http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
analyze Linux kernel events. You write scripts, or _chisels_ in
sysdig's jargon, in Lua and sysdig executes them while the system is
being traced or afterwards. sysdig's interface is the cmd:sysdig
command-line tool as well as the curses-based cmd:csysdig tool.
* https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
user space tracer which uses custom user scripts to produce plain text
traces. SystemTap converts the scripts to the C language, and then
compiles them as Linux kernel modules which are loaded to produce
trace data. SystemTap's primary user interface is the cmd:stap
command-line tool.

The main distinctive features of LTTng are that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead among the solutions listed above. It produces trace files in
the http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.

LTTng is the result of more than 10 years of active open source
development by a community of passionate developers.
LTTng{nbsp}{revision} is currently available on major desktop and server
Linux distributions.

The main interface for tracing control is a single command-line tool
named cmd:lttng. With it, you can create several tracing sessions,
enable and disable events on the fly, filter events efficiently with
custom user expressions, start and stop tracing, and much more. LTTng
can record the traces on the file system or send them over the network,
and keep them in whole or in part. You can view the traces once tracing
becomes inactive, or in real time.

<<installing-lttng,Install LTTng now>> and
<<getting-started,start tracing>>!


[[installing-lttng]]
== Installation

**LTTng** is a set of software <<plumbing,components>> which interact to
<<instrumenting,instrument>> the Linux kernel and user applications, and
to <<controlling-tracing,control tracing>> (start and stop
tracing, enable and disable event rules, and the rest). Those
components are bundled into the following packages:

* **LTTng-tools**: Libraries and command-line interface to
control tracing.
* **LTTng-modules**: Linux kernel modules to instrument and
trace the kernel.
* **LTTng-UST**: Libraries and Java/Python packages to instrument and
trace user applications.

Most distributions mark the LTTng-modules and LTTng-UST packages as
optional when installing LTTng-tools (which is always required). In the
following sections, we always provide the steps to install all three,
but note that:

* You only need to install LTTng-modules if you intend to trace the
Linux kernel.
* You only need to install LTTng-UST if you intend to trace user
applications.

[role="growable"]
.Availability of LTTng{nbsp}{revision} for major Linux distributions as of 3 October 2017.
|====
|Distribution |Available in releases |Alternatives

|https://www.ubuntu.com/[Ubuntu]
|<<ubuntu,Ubuntu{nbsp}17.04 _Zesty Zapus_ and Ubuntu{nbsp}17.10 _Artful Aardvark_>>.

Ubuntu{nbsp}14.04 _Trusty Tahr_ and Ubuntu{nbsp}16.04 _Xenial Xerus_:
<<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
|<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other Ubuntu releases.

|https://getfedora.org/[Fedora]
|<<fedora,Fedora{nbsp}26>>.
|link:/docs/v2.10#doc-fedora[LTTng{nbsp}2.10 for Fedora 27].

<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other Fedora releases.

|https://www.debian.org/[Debian]
|xref:debian[Debian "stretch" (stable)].
|<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other Debian releases.

|https://www.archlinux.org/[Arch Linux]
|_Not available_
|link:/docs/v2.10#doc-arch-linux[LTTng{nbsp}2.10 for the current Arch Linux build].

<<building-from-source,Build LTTng{nbsp}{revision} from source>>.

|https://alpinelinux.org/[Alpine Linux]
|<<alpine-linux,Alpine Linux "edge">>.
|<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other Alpine Linux releases.

|https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
|See http://packages.efficios.com/[EfficiOS Enterprise Packages].
|

|https://buildroot.org/[Buildroot]
|xref:buildroot[Buildroot{nbsp}2017.02, Buildroot{nbsp}2017.05, and
Buildroot{nbsp}2017.08].
|link:/docs/v2.8#doc-buildroot[LTTng{nbsp}2.8 for Buildroot{nbsp}2016.11].

<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other Buildroot releases.

|http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
https://www.yoctoproject.org/[Yocto]
|<<oe-yocto,Yocto Project{nbsp}2.3 _Pyro_>> (`openembedded-core` layer).
|link:/docs/v2.8#doc-oe-yocto[LTTng{nbsp}2.8 for Yocto Project{nbsp}2.2 _Morty_]
(`openembedded-core` layer).

<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
other OpenEmbedded releases.
|====


[[ubuntu]]
=== [[ubuntu-official-repositories]]Ubuntu

LTTng{nbsp}{revision} is available on Ubuntu{nbsp}17.04 _Zesty Zapus_
and Ubuntu{nbsp}17.10 _Artful Aardvark_. For previous releases of
Ubuntu, <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.

To install LTTng{nbsp}{revision} on Ubuntu{nbsp}17.04 _Zesty Zapus_:

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt-get install lttng-tools
# apt-get install lttng-modules-dkms
# apt-get install liblttng-ust-dev
----
--

. **If you need to instrument and trace
<<java-application,Java applications>>**, install the LTTng-UST
Java agent:
+
--
[role="term"]
----
# apt-get install liblttng-ust-agent-java
----
--

. **If you need to instrument and trace
<<python-application,Python{nbsp}3 applications>>**, install the
LTTng-UST Python agent:
+
--
[role="term"]
----
# apt-get install python3-lttngust
----
--


[[ubuntu-ppa]]
==== noch:{LTTng} Stable {revision} PPA

The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
Stable{nbsp}{revision} PPA] offers the latest stable
LTTng{nbsp}{revision} packages for:

* Ubuntu{nbsp}14.04 _Trusty Tahr_
* Ubuntu{nbsp}16.04 _Xenial Xerus_

To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:

. Add the LTTng Stable{nbsp}{revision} PPA repository and update the
list of packages:
+
--
[role="term"]
----
# apt-add-repository ppa:lttng/stable-2.9
# apt-get update
----
--

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt-get install lttng-tools
# apt-get install lttng-modules-dkms
# apt-get install liblttng-ust-dev
----
--

. **If you need to instrument and trace
<<java-application,Java applications>>**, install the LTTng-UST
Java agent:
+
--
[role="term"]
----
# apt-get install liblttng-ust-agent-java
----
--

. **If you need to instrument and trace
<<python-application,Python{nbsp}3 applications>>**, install the
LTTng-UST Python agent:
+
--
[role="term"]
----
# apt-get install python3-lttngust
----
--


[[fedora]]
=== Fedora

To install LTTng{nbsp}{revision} on Fedora{nbsp}26:

. Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
packages:
+
--
[role="term"]
----
# yum install lttng-tools
# yum install lttng-ust
----
--

. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
sudo depmod -a
----
--

[IMPORTANT]
.Java and Python application instrumentation and tracing
====
If you need to instrument and trace <<java-application,Java
applications>> on Fedora, you need to build and install
LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options to the `configure` script, depending
on which Java logging framework you use.

If you need to instrument and trace <<python-application,Python
applications>> on Fedora, you need to build and install
LTTng-UST{nbsp}{revision} from source and pass the
`--enable-python-agent` option to the `configure` script.
====


[[debian]]
=== Debian

To install LTTng{nbsp}{revision} on Debian "stretch" (stable):

. Install the main LTTng{nbsp}{revision} packages:
+
--
[role="term"]
----
# apt-get install lttng-modules-dkms
# apt-get install liblttng-ust-dev
# apt-get install lttng-tools
----
--

. **If you need to instrument and trace <<java-application,Java
applications>>**, install the LTTng-UST Java agent:
+
--
[role="term"]
----
# apt-get install liblttng-ust-agent-java
----
--

. **If you need to instrument and trace <<python-application,Python
applications>>**, install the LTTng-UST Python agent:
+
--
[role="term"]
----
# apt-get install python3-lttngust
----
--


[[alpine-linux]]
=== Alpine Linux

To install LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} on
Alpine Linux "edge":

. Make sure your system is
https://wiki.alpinelinux.org/wiki/Edge[configured for "edge"].
. Enable the _testing_ repository by uncommenting the corresponding
line in path:{/etc/apk/repositories}.
. Add the LTTng packages:
+
--
[role="term"]
----
# apk add lttng-tools
# apk add lttng-ust-dev
----
--

To install LTTng-modules{nbsp}{revision} (Linux kernel tracing support)
on Alpine Linux "edge":

. Add the vanilla Linux kernel:
+
--
[role="term"]
----
# apk add linux-vanilla linux-vanilla-dev
----
--

. Reboot with the vanilla Linux kernel.
. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
sudo depmod -a
----
--


[[enterprise-distributions]]
=== RHEL, SUSE, and other enterprise distributions

To install LTTng on enterprise Linux distributions, such as Red Hat
Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
see http://packages.efficios.com/[EfficiOS Enterprise Packages].


[[buildroot]]
=== Buildroot

To install LTTng{nbsp}{revision} on Buildroot{nbsp}2017.02,
Buildroot{nbsp}2017.05, or Buildroot{nbsp}2017.08:

. Launch the Buildroot configuration tool:
+
--
[role="term"]
----
$ make menuconfig
----
--

. In **Kernel**, check **Linux kernel**.
. In **Toolchain**, check **Enable WCHAR support**.
. In **Target packages**{nbsp}&#8594; **Debugging, profiling and benchmark**,
check **lttng-modules** and **lttng-tools**.
. In **Target packages**{nbsp}&#8594; **Libraries**{nbsp}&#8594;
**Other**, check **lttng-libust**.


[[oe-yocto]]
=== OpenEmbedded and Yocto

LTTng{nbsp}{revision} recipes are available in the
http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
layer for Yocto Project{nbsp}2.3 _Pyro_ under the following names:

* `lttng-tools`
* `lttng-modules`
* `lttng-ust`

With BitBake, the simplest way to include LTTng recipes in your target
image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:

----
IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
----

If you use Hob:

. Select a machine and an image recipe.
. Click **Edit image recipe**.
. Under the **All recipes** tab, search for **lttng**.
. Check the desired LTTng recipes.

[IMPORTANT]
.Java and Python application instrumentation and tracing
====
If you need to instrument and trace <<java-application,Java
applications>> on Yocto/OpenEmbedded, you need to build and install
LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
`--enable-java-agent-all` options to the `configure` script, depending
on which Java logging framework you use.

If you need to instrument and trace <<python-application,Python
applications>> on Yocto/OpenEmbedded, you need to build and install
LTTng-UST{nbsp}{revision} from source and pass the
`--enable-python-agent` option to the `configure` script.
====


[[building-from-source]]
=== Build from source

To build and install LTTng{nbsp}{revision} from source:

. Using your distribution's package manager, or from source, install
the following dependencies of LTTng-tools and LTTng-UST:
+
--
* https://sourceforge.net/projects/libuuid/[libuuid]
* http://directory.fsf.org/wiki/Popt[popt]
* http://liburcu.org/[Userspace RCU]
* http://www.xmlsoft.org/[libxml2]
--

. Download, build, and install the latest LTTng-modules{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
sudo depmod -a
----
--

. Download, build, and install the latest LTTng-UST{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
tar -xf lttng-ust-latest-2.9.tar.bz2 &&
cd lttng-ust-2.9.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
--
+
--
[IMPORTANT]
.Java and Python application tracing
====
If you need to instrument and trace <<java-application,Java
applications>>, pass the `--enable-java-agent-jul`,
`--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
`configure` script, depending on which Java logging framework you use.

If you need to instrument and trace <<python-application,Python
applications>>, pass the `--enable-python-agent` option to the
`configure` script. You can set the `PYTHON` environment variable to the
path to the Python interpreter for which to install the LTTng-UST Python
agent package.
====
--
+
--
[NOTE]
====
By default, LTTng-UST libraries are installed to
dir:{/usr/local/lib}, which is the de facto directory in which to
keep self-compiled and third-party libraries.

When <<building-tracepoint-providers-and-user-application,linking an
instrumented user application with `liblttng-ust`>>:

* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
variable.
* Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
man:gcc(1), man:g++(1), or man:clang(1).
====
--

. Download, build, and install the latest LTTng-tools{nbsp}{revision}:
+
--
[role="term"]
----
$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
tar -xf lttng-tools-latest-2.9.tar.bz2 &&
cd lttng-tools-2.9.* &&
./configure &&
make &&
sudo make install &&
sudo ldconfig
----
--

TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
previous steps automatically for a given version of LTTng and confine
the installed files in a specific directory. This can be useful to test
LTTng without installing it on your system.


[[getting-started]]
== Quick start

This is a short guide to get started quickly with LTTng kernel and user
space tracing.

Before you follow this guide, make sure to <<installing-lttng,install>>
LTTng.

This tutorial walks you through the steps to:

. <<tracing-the-linux-kernel,Trace the Linux kernel>>.
. <<tracing-your-own-user-application,Trace a user application>> written
in C.
. <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>>.


[[tracing-the-linux-kernel]]
=== Trace the Linux kernel

The following command lines start with the `#` prompt because you need
root privileges to trace the Linux kernel. You can also trace the kernel
as a regular user if your Unix user is a member of the
<<tracing-group,tracing group>>.

. Create a <<tracing-session,tracing session>> which writes its traces
to dir:{/tmp/my-kernel-trace}:
+
--
[role="term"]
----
# lttng create my-kernel-session --output=/tmp/my-kernel-trace
----
--

. List the available kernel tracepoints and system calls:
+
--
[role="term"]
----
# lttng list --kernel
# lttng list --kernel --syscall
----
--

. Create <<event,event rules>> which match the desired instrumentation
point names, for example the `sched_switch` and `sched_process_fork`
tracepoints, and the man:open(2) and man:close(2) system calls:
+
--
[role="term"]
----
# lttng enable-event --kernel sched_switch,sched_process_fork
# lttng enable-event --kernel --syscall open,close
----
--
+
You can also create an event rule which matches _all_ the Linux kernel
tracepoints (this will generate a lot of data when tracing):
+
--
[role="term"]
----
# lttng enable-event --kernel --all
----
--

. <<basic-tracing-session-control,Start tracing>>:
+
--
[role="term"]
----
# lttng start
----
--

. Do some operation on your system for a few seconds. For example,
load a website, or list the files of a directory.
. <<basic-tracing-session-control,Stop tracing>> and destroy the
tracing session:
+
--
[role="term"]
----
# lttng stop
# lttng destroy
----
--
+
The man:lttng-destroy(1) command does not destroy the trace data; it
only destroys the state of the tracing session.

. For the sake of this example, make the recorded trace accessible to
the non-root users:
+
--
[role="term"]
----
# chown -R $(whoami) /tmp/my-kernel-trace
----
--

See <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>> to view the recorded events.
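
For a quick look at the raw event records before moving on, you can also
print them directly with Babeltrace (a sketch; it assumes the
`babeltrace` tool is installed and that the trace was written to
dir:{/tmp/my-kernel-trace} as above):

[role="term"]
----
$ babeltrace /tmp/my-kernel-trace
----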


[[tracing-your-own-user-application]]
=== Trace a user application

This section steps you through a simple example to trace a
_Hello world_ program written in C.

To create the traceable user application:

. Create the tracepoint provider header file, which defines the
tracepoints and the events they can generate:
+
--
[source,c]
.path:{hello-tp.h}
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
----
--

. Create the tracepoint provider package source file:
+
--
[source,c]
.path:{hello-tp.c}
----
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "hello-tp.h"
----
--

. Build the tracepoint provider package:
+
--
[role="term"]
----
$ gcc -c -I. hello-tp.c
----
--
983
984 . Create the _Hello World_ application source file:
985 +
986 --
987 [source,c]
988 .path:{hello.c}
989 ----
990 #include <stdio.h>
991 #include "hello-tp.h"
992
993 int main(int argc, char *argv[])
994 {
995 int x;
996
997 puts("Hello, World!\nPress Enter to continue...");
998
999 /*
1000 * The following getchar() call is only placed here for the purpose
1001 * of this demonstration, to pause the application in order for
1002 * you to have time to list its tracepoints. It is not
1003 * needed otherwise.
1004 */
1005 getchar();
1006
1007 /*
1008 * A tracepoint() call.
1009 *
1010 * Arguments, as defined in hello-tp.h:
1011 *
1012 * 1. Tracepoint provider name (required)
1013 * 2. Tracepoint name (required)
1014 * 3. my_integer_arg (first user-defined argument)
1015 * 4. my_string_arg (second user-defined argument)
1016 *
1017 * Notice the tracepoint provider and tracepoint names are
1018 * NOT strings: they are in fact parts of variables that the
1019 * macros in hello-tp.h create.
1020 */
1021 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
1022
1023 for (x = 0; x < argc; ++x) {
1024 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
1025 }
1026
1027 puts("Quitting now!");
1028 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
1029
1030 return 0;
1031 }
1032 ----
1033 --
1034
1035 . Build the application:
1036 +
1037 --
1038 [role="term"]
1039 ----
1040 $ gcc -c hello.c
1041 ----
1042 --
1043
1044 . Link the application with the tracepoint provider package,
1045 `liblttng-ust`, and `libdl`:
1046 +
1047 --
1048 [role="term"]
1049 ----
1050 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1051 ----
1052 --
1053
1054 Here's the whole build process:
1055
1056 [role="img-100"]
1057 .User space tracing tutorial's build steps.
1058 image::ust-flow.png[]
1059
1060 To trace the user application:
1061
1062 . Run the application with a few arguments:
1063 +
1064 --
1065 [role="term"]
1066 ----
1067 $ ./hello world and beyond
1068 ----
1069 --
1070 +
1071 You see:
1072 +
1073 --
1074 ----
1075 Hello, World!
1076 Press Enter to continue...
1077 ----
1078 --
1079
1080 . Start an LTTng <<lttng-sessiond,session daemon>>:
1081 +
1082 --
1083 [role="term"]
1084 ----
1085 $ lttng-sessiond --daemonize
1086 ----
1087 --
1088 +
1089 Note that a session daemon might already be running, for example as
1090 a service that the distribution's service manager started.
1091
1092 . List the available user space tracepoints:
1093 +
1094 --
1095 [role="term"]
1096 ----
1097 $ lttng list --userspace
1098 ----
1099 --
1100 +
1101 You see the `hello_world:my_first_tracepoint` tracepoint listed
1102 under the `./hello` process.
1103
1104 . Create a <<tracing-session,tracing session>>:
1105 +
1106 --
1107 [role="term"]
1108 ----
1109 $ lttng create my-user-space-session
1110 ----
1111 --
1112
1113 . Create an <<event,event rule>> which matches the
1114 `hello_world:my_first_tracepoint` event name:
1115 +
1116 --
1117 [role="term"]
1118 ----
1119 $ lttng enable-event --userspace hello_world:my_first_tracepoint
1120 ----
1121 --
1122
1123 . <<basic-tracing-session-control,Start tracing>>:
1124 +
1125 --
1126 [role="term"]
1127 ----
1128 $ lttng start
1129 ----
1130 --
1131
1132 . Go back to the running `hello` application and press Enter. The
1133 program executes all `tracepoint()` instrumentation points and exits.
1134 . <<basic-tracing-session-control,Stop tracing>> and destroy the
1135 tracing session:
1136 +
1137 --
1138 [role="term"]
1139 ----
1140 $ lttng stop
1141 $ lttng destroy
1142 ----
1143 --
1144 +
1145 The man:lttng-destroy(1) command does not destroy the trace data; it
1146 only destroys the state of the tracing session.
1147
1148 By default, LTTng saves the traces in
1149 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
1150 where +__name__+ is the tracing session name. The
1151 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
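
The default output path can be computed from the environment. Here is a
minimal Python sketch (the helper name is ours, for illustration; it is
not part of LTTng):

```python
import os

def default_trace_root(env=None):
    """Resolve the directory under which LTTng saves traces by default:
    $LTTNG_HOME/lttng-traces, where LTTNG_HOME falls back to $HOME
    when it is not set."""
    env = os.environ if env is None else env
    home = env.get('LTTNG_HOME') or env.get('HOME', '')
    return os.path.join(home, 'lttng-traces')
```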
1152
1153 See <<viewing-and-analyzing-your-traces,View and analyze the
1154 recorded events>> to view the recorded events.
1155
1156
1157 [[viewing-and-analyzing-your-traces]]
1158 === View and analyze the recorded events
1159
1160 Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
1161 kernel>> and <<tracing-your-own-user-application,Trace a user
1162 application>> tutorials, you can inspect the recorded events.
1163
1164 Many tools are available to read LTTng traces:
1165
1166 * **cmd:babeltrace** is a command-line utility which converts trace
1167 formats; it supports the format that LTTng produces, CTF, as well as a
1168 basic text output which can be ++grep++ed. The cmd:babeltrace command
1169 is part of the http://diamon.org/babeltrace[Babeltrace] project.
1170 * Babeltrace also includes
1171 **https://www.python.org/[Python] bindings** so
1172 that you can easily open and read an LTTng trace with your own script,
1173 benefiting from the power of Python.
1174 * http://tracecompass.org/[**Trace Compass**]
1175 is a graphical user interface for viewing and analyzing any type of
1176 logs or traces, including LTTng's.
1177 * https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
1178 project which includes many high-level analyses of LTTng kernel
1179 traces, like scheduling statistics, interrupt frequency distribution,
1180 top CPU usage, and more.
1181
1182 NOTE: This section assumes that the traces recorded during the previous
1183 tutorials were saved to their default location, in the
1184 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
1185 environment variable defaults to `$HOME` if not set.
1186
1187
1188 [[viewing-and-analyzing-your-traces-bt]]
1189 ==== Use the cmd:babeltrace command-line tool
1190
1191 The simplest way to list all the recorded events of a trace is to pass
1192 its path to cmd:babeltrace with no options:
1193
1194 [role="term"]
1195 ----
1196 $ babeltrace ~/lttng-traces/my-user-space-session*
1197 ----
1198
1199 cmd:babeltrace finds all traces recursively within the given path and
1200 prints all their events, merging them in chronological order.
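
The chronological merge that cmd:babeltrace performs can be sketched in a
few lines of Python: each trace contributes a stream of event records
already sorted by timestamp, and the reader interleaves them. This is a
toy model, not Babeltrace's implementation:

```python
import heapq

def merge_chronologically(*streams):
    """Interleave several per-trace event streams, each already sorted
    by timestamp, into one chronological stream."""
    return heapq.merge(*streams, key=lambda event: event[0])

# Each stream: sorted (timestamp_ns, event_name) pairs.
kernel = [(10, 'sched_switch'), (40, 'sched_process_fork')]
ust = [(25, 'hello_world:my_first_tracepoint')]
merged = list(merge_chronologically(kernel, ust))
```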
1201
1202 You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
1203 further filtering:
1204
1205 [role="term"]
1206 ----
1207 $ babeltrace /tmp/my-kernel-trace | grep _switch
1208 ----
1209
1210 You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
1211 count the recorded events:
1212
1213 [role="term"]
1214 ----
1215 $ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
1216 ----
1217
1218
1219 [[viewing-and-analyzing-your-traces-bt-python]]
1220 ==== Use the Babeltrace Python bindings
1221
1222 The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
1223 is useful to isolate events by simple matching using man:grep(1) and
1224 similar utilities. However, more elaborate filters, such as keeping only
1225 event records with a field value falling within a specific range, are
1226 not trivial to write using a shell. Moreover, reductions and even the
1227 most basic computations involving multiple event records are virtually
1228 impossible to implement.
1229
Fortunately, Babeltrace ships with Python 3 bindings which make it easy
1231 to read the event records of an LTTng trace sequentially and compute the
1232 desired information.
1233
1234 The following script accepts an LTTng Linux kernel trace path as its
1235 first argument and prints the short names of the top 5 running processes
1236 on CPU 0 during the whole trace:
1237
1238 [source,python]
1239 .path:{top5proc.py}
1240 ----
1241 from collections import Counter
1242 import babeltrace
1243 import sys
1244
1245
1246 def top5proc():
1247 if len(sys.argv) != 2:
1248 msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
1249 print(msg, file=sys.stderr)
1250 return False
1251
1252 # A trace collection contains one or more traces
1253 col = babeltrace.TraceCollection()
1254
1255 # Add the trace provided by the user (LTTng traces always have
1256 # the 'ctf' format)
1257 if col.add_trace(sys.argv[1], 'ctf') is None:
1258 raise RuntimeError('Cannot add trace')
1259
1260 # This counter dict contains execution times:
1261 #
1262 # task command name -> total execution time (ns)
1263 exec_times = Counter()
1264
1265 # This contains the last `sched_switch` timestamp
1266 last_ts = None
1267
1268 # Iterate on events
1269 for event in col.events:
1270 # Keep only `sched_switch` events
1271 if event.name != 'sched_switch':
1272 continue
1273
1274 # Keep only events which happened on CPU 0
1275 if event['cpu_id'] != 0:
1276 continue
1277
1278 # Event timestamp
1279 cur_ts = event.timestamp
1280
1281 if last_ts is None:
1282 # We start here
1283 last_ts = cur_ts
1284
1285 # Previous task command (short) name
1286 prev_comm = event['prev_comm']
1287
1288 # Initialize entry in our dict if not yet done
1289 if prev_comm not in exec_times:
1290 exec_times[prev_comm] = 0
1291
1292 # Compute previous command execution time
1293 diff = cur_ts - last_ts
1294
1295 # Update execution time of this command
1296 exec_times[prev_comm] += diff
1297
1298 # Update last timestamp
1299 last_ts = cur_ts
1300
1301 # Display top 5
1302 for name, ns in exec_times.most_common(5):
1303 s = ns / 1000000000
1304 print('{:20}{} s'.format(name, s))
1305
1306 return True
1307
1308
1309 if __name__ == '__main__':
1310 sys.exit(0 if top5proc() else 1)
1311 ----
1312
1313 Run this script:
1314
1315 [role="term"]
1316 ----
1317 $ python3 top5proc.py /tmp/my-kernel-trace/kernel
1318 ----
1319
1320 Output example:
1321
1322 ----
1323 swapper/0 48.607245889 s
1324 chromium 7.192738188 s
1325 pavucontrol 0.709894415 s
1326 Compositor 0.660867933 s
1327 Xorg.bin 0.616753786 s
1328 ----
1329
1330 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
1331 weren't using the CPU that much when tracing, its first position in the
1332 list makes sense.
1333
1334
1335 [[core-concepts]]
1336 == [[understanding-lttng]]Core concepts
1337
1338 From a user's perspective, the LTTng system is built on a few concepts,
1339 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1340 operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to mastering
the toolkit.
1343
1344 The core concepts are:
1345
1346 * <<tracing-session,Tracing session>>
1347 * <<domain,Tracing domain>>
1348 * <<channel,Channel and ring buffer>>
1349 * <<"event","Instrumentation point, event rule, event, and event record">>
1350
1351
1352 [[tracing-session]]
1353 === Tracing session
1354
1355 A _tracing session_ is a stateful dialogue between you and
1356 a <<lttng-sessiond,session daemon>>. You can
1357 <<creating-destroying-tracing-sessions,create a new tracing
1358 session>> with the `lttng create` command.
1359
1360 Anything that you do when you control LTTng tracers happens within a
1361 tracing session. In particular, a tracing session:
1362
1363 * Has its own name.
1364 * Has its own set of trace files.
1365 * Has its own state of activity (started or stopped).
1366 * Has its own <<tracing-session-mode,mode>> (local, network streaming,
1367 snapshot, or live).
1368 * Has its own <<channel,channels>> which have their own
1369 <<event,event rules>>.
1370
1371 [role="img-100"]
1372 .A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1373 image::concepts.png[]
1374
1375 Those attributes and objects are completely isolated between different
1376 tracing sessions.
1377
1378 A tracing session is analogous to a cash machine session:
1379 the operations you do on the banking system through the cash machine do
1380 not alter the data of other users of the same system. In the case of
1381 the cash machine, a session lasts as long as your bank card is inside.
1382 In the case of LTTng, a tracing session lasts from the `lttng create`
1383 command to the `lttng destroy` command.
1384
1385 [role="img-100"]
1386 .Each Unix user has its own set of tracing sessions.
1387 image::many-sessions.png[]
1388
1389
1390 [[tracing-session-mode]]
1391 ==== Tracing session mode
1392
1393 LTTng can send the generated trace data to different locations. The
1394 _tracing session mode_ dictates where to send it. The following modes
1395 are available in LTTng{nbsp}{revision}:
1396
1397 Local mode::
1398 LTTng writes the traces to the file system of the machine being traced
1399 (target system).
1400
1401 Network streaming mode::
1402 LTTng sends the traces over the network to a
1403 <<lttng-relayd,relay daemon>> running on a remote system.
1404
1405 Snapshot mode::
1406 LTTng does not write the traces by default. Instead, you can request
1407 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1408 current tracing buffers, and to write it to the target's file system
1409 or to send it over the network to a <<lttng-relayd,relay daemon>>
1410 running on a remote system.
1411
1412 Live mode::
  This mode is similar to the network streaming mode, but a live
  trace viewer can connect to the distant relay daemon to
  <<lttng-live,view event records as LTTng generates them>>.
1417
1418
1419 [[domain]]
1420 === Tracing domain
1421
1422 A _tracing domain_ is a namespace for event sources. A tracing domain
1423 has its own properties and features.
1424
1425 There are currently five available tracing domains:
1426
1427 * Linux kernel
1428 * User space
1429 * `java.util.logging` (JUL)
1430 * log4j
1431 * Python
1432
1433 You must specify a tracing domain when using some commands to avoid
1434 ambiguity. For example, since all the domains support named tracepoints
1435 as event sources (instrumentation points that you manually insert in the
1436 source code), you need to specify a tracing domain when
1437 <<enabling-disabling-events,creating an event rule>> because all the
1438 tracing domains could have tracepoints with the same names.
1439
1440 Some features are reserved to specific tracing domains. Dynamic function
1441 entry and return instrumentation points, for example, are currently only
1442 supported in the Linux kernel tracing domain, but support for other
1443 tracing domains could be added in the future.
1444
1445 You can create <<channel,channels>> in the Linux kernel and user space
1446 tracing domains. The other tracing domains have a single default
1447 channel.
1448
1449
1450 [[channel]]
1451 === Channel and ring buffer
1452
1453 A _channel_ is an object which is responsible for a set of ring buffers.
1454 Each ring buffer is divided into multiple sub-buffers. When an LTTng
1455 tracer emits an event, it can record it to one or more
1456 sub-buffers. The attributes of a channel determine what to do when
1457 there's no space left for a new event record because all sub-buffers
1458 are full, where to send a full sub-buffer, and other behaviours.
1459
1460 A channel is always associated to a <<domain,tracing domain>>. The
1461 `java.util.logging` (JUL), log4j, and Python tracing domains each have
1462 a default channel which you cannot configure.
1463
1464 A channel also owns <<event,event rules>>. When an LTTng tracer emits
1465 an event, it records it to the sub-buffers of all
1466 the enabled channels with a satisfied event rule, as long as those
1467 channels are part of active <<tracing-session,tracing sessions>>.
1468
1469
1470 [[channel-buffering-schemes]]
1471 ==== Per-user vs. per-process buffering schemes
1472
1473 A channel has at least one ring buffer _per CPU_. LTTng always
1474 records an event to the ring buffer associated to the CPU on which it
1475 occurred.
1476
1477 Two _buffering schemes_ are available when you
1478 <<enabling-disabling-channels,create a channel>> in the
1479 user space <<domain,tracing domain>>:
1480
1481 Per-user buffering::
1482 Allocate one set of ring buffers--one per CPU--shared by all the
1483 instrumented processes of each Unix user.
1484 +
1485 --
1486 [role="img-100"]
1487 .Per-user buffering scheme.
1488 image::per-user-buffering.png[]
1489 --
1490
1491 Per-process buffering::
1492 Allocate one set of ring buffers--one per CPU--for each
1493 instrumented process.
1494 +
1495 --
1496 [role="img-100"]
1497 .Per-process buffering scheme.
1498 image::per-process-buffering.png[]
1499 --
1500 +
1501 The per-process buffering scheme tends to consume more memory than the
1502 per-user option because systems generally have more instrumented
1503 processes than Unix users running instrumented processes. However, the
1504 per-process buffering scheme ensures that one process having a high
1505 event throughput won't fill all the shared sub-buffers of the same
1506 user, only its own.
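
The memory cost of each scheme follows from simple multiplication. The
following back-of-the-envelope estimate is a rough model of the trade-off
described above, not an accounting of LTTng's actual allocations:

```python
def ust_buffer_memory(cpus, subbuf_count, subbuf_size,
                      users=1, processes=1, per_process=False):
    """Rough ring buffer memory estimate for the user space tracing
    domain: one set of per-CPU ring buffers per Unix user (per-user
    scheme) or per instrumented process (per-process scheme)."""
    sets = processes if per_process else users
    return sets * cpus * subbuf_count * subbuf_size

MiB = 1 << 20
# 4 CPUs, 4 sub-buffers of 1 MiB; 2 users running 10 instrumented processes.
per_user = ust_buffer_memory(4, 4, MiB, users=2)
per_process = ust_buffer_memory(4, 4, MiB, processes=10, per_process=True)
```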
1507
1508 The Linux kernel tracing domain has only one available buffering scheme
1509 which is to allocate a single set of ring buffers for the whole system.
1510 This scheme is similar to the per-user option, but with a single, global
1511 user "running" the kernel.
1512
1513
1514 [[channel-overwrite-mode-vs-discard-mode]]
1515 ==== Overwrite vs. discard event loss modes
1516
1517 When an event occurs, LTTng records it to a specific sub-buffer (yellow
1518 arc in the following animation) of a specific channel's ring buffer.
1519 When there's no space left in a sub-buffer, the tracer marks it as
1520 consumable (red) and another, empty sub-buffer starts receiving the
1521 following event records. A <<lttng-consumerd,consumer daemon>>
1522 eventually consumes the marked sub-buffer (returns to white).
1523
1524 [NOTE]
1525 [role="docsvg-channel-subbuf-anim"]
1526 ====
1527 {note-no-anim}
1528 ====
1529
1530 In an ideal world, sub-buffers are consumed faster than they are filled,
1531 as is the case in the previous animation. In the real world,
1532 however, all sub-buffers can be full at some point, leaving no space to
1533 record the following events.
1534
1535 By design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer is
1536 available, it is acceptable to lose event records when the alternative
1537 would be to cause substantial delays in the instrumented application's
1538 execution. LTTng privileges performance over integrity; it aims at
1539 perturbing the traced system as little as possible in order to make
1540 tracing of subtle race conditions and rare interrupt cascades possible.
1541
1542 When it comes to losing event records because no empty sub-buffer is
1543 available, the channel's _event loss mode_ determines what to do. The
1544 available event loss modes are:
1545
1546 Discard mode::
  Drop the newest event records until the tracer
  releases a sub-buffer.
1549
1550 Overwrite mode::
1551 Clear the sub-buffer containing the oldest event records and start
1552 writing the newest event records there.
1553 +
1554 This mode is sometimes called _flight recorder mode_ because it's
1555 similar to a
1556 https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1557 always keep a fixed amount of the latest data.
1558
1559 Which mechanism you should choose depends on your context: prioritize
1560 the newest or the oldest event records in the ring buffer?
1561
Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
as soon as there's no space left for a new event record, whereas in
1564 discard mode, the tracer only discards the event record that doesn't
1565 fit.
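
The two event loss modes can be modelled in a few lines. This toy
simulation (ours, not LTTng code) shows the asymmetry: discard mode drops
only the record that doesn't fit, while overwrite mode abandons a whole
sub-buffer:

```python
from collections import deque

def record_event(ring, capacity, event, overwrite):
    """Toy ring buffer: `ring` is a deque of sub-buffers (lists of
    event records), each holding at most `capacity` records, with no
    consumer running. Returns True if the record was written."""
    if len(ring[-1]) < capacity:
        ring[-1].append(event)   # space left: record normally
        return True
    if overwrite:
        ring.popleft()           # abandon the whole oldest sub-buffer
        ring.append([event])     # start writing the newest records there
        return True
    return False                 # discard mode: drop only this record

full = deque([['a', 'b'], ['c', 'd']])                # both sub-buffers full
dropped = record_event(full, 2, 'e', overwrite=False)  # record is discarded
record_event(full, 2, 'e', overwrite=True)             # oldest sub-buffer cleared
```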
1566
1567 In discard mode, LTTng increments a count of lost event records when an
1568 event record is lost and saves this count to the trace. In overwrite
1569 mode, since LTTng 2.8, LTTng increments a count of lost sub-buffers when
1570 a sub-buffer is lost and saves this count to the trace. In this mode,
1571 the exact number of lost event records in those lost sub-buffers is not
1572 saved to the trace. Trace analyses can use the trace's saved discarded
1573 event record and sub-buffer counts to decide whether or not to perform
1574 the analyses even if trace data is known to be missing.
1575
1576 There are a few ways to decrease your probability of losing event
1577 records.
1578 <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
1580 virtually stop losing event records, though at the cost of greater
1581 memory usage.
1582
1583
1584 [[channel-subbuf-size-vs-subbuf-count]]
1585 ==== Sub-buffer count and size
1586
1587 When you <<enabling-disabling-channels,create a channel>>, you can
1588 set its number of sub-buffers and their size.
1589
1590 Note that there is noticeable CPU overhead introduced when
1591 switching sub-buffers (marking a full one as consumable and switching
1592 to an empty one for the following events to be recorded). Knowing this,
1593 the following list presents a few practical situations along with how
1594 to configure the sub-buffer count and size for them:
1595
1596 * **High event throughput**: In general, prefer bigger sub-buffers to
1597 lower the risk of losing event records.
1598 +
1599 Having bigger sub-buffers also ensures a lower
1600 <<channel-switch-timer,sub-buffer switching frequency>>.
1601 +
1602 The number of sub-buffers is only meaningful if you create the channel
1603 in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1604 other sub-buffers are left unaltered.
1605
1606 * **Low event throughput**: In general, prefer smaller sub-buffers
1607 since the risk of losing event records is low.
1608 +
1609 Because events occur less frequently, the sub-buffer switching frequency
1610 should remain low and thus the tracer's overhead should not be a
1611 problem.
1612
1613 * **Low memory system**: If your target system has a low memory
limit, prefer fewer, then smaller, sub-buffers.
1615 +
1616 Even if the system is limited in memory, you want to keep the
1617 sub-buffers as big as possible to avoid a high sub-buffer switching
1618 frequency.
1619
1620 Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1621 which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
1623 sub-buffer size of 1{nbsp}MiB is considered big.
1624
1625 The previous situations highlight the major trade-off between a few big
1626 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1627 frequency vs. how much data is lost in overwrite mode. Assuming a
1628 constant event throughput and using the overwrite mode, the two
1629 following configurations have the same ring buffer total size:
1630
1631 [NOTE]
1632 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1633 ====
1634 {note-no-anim}
1635 ====
1636
1637 * **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1638 switching frequency, but if a sub-buffer overwrite happens, half of
1639 the event records so far (4{nbsp}MiB) are definitely lost.
1640 * **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
1641 overhead as the previous configuration, but if a sub-buffer
overwrite happens, only an eighth of the event records so far are
1643 definitely lost.
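
The trade-off can be checked with simple arithmetic. Assuming one
sub-buffer switch per filled sub-buffer and overwrite mode, the following
hypothetical helper compares the two configurations above:

```python
def ring_behaviour(total_size, subbuf_count, throughput):
    """Compare configurations of the same total ring buffer size.
    Assumes the tracer switches once per filled sub-buffer and that an
    overwrite abandons exactly one sub-buffer."""
    subbuf_size = total_size // subbuf_count
    return {
        'switches_per_second': throughput / subbuf_size,
        'fraction_lost_per_overwrite': 1 / subbuf_count,
    }

MiB = 1 << 20
few_big = ring_behaviour(8 * MiB, 2, throughput=16 * MiB)     # 2 x 4 MiB
many_small = ring_behaviour(8 * MiB, 8, throughput=16 * MiB)  # 8 x 1 MiB
```

At the same 16{nbsp}MiB/s throughput, the second configuration switches
four times as often but loses only an eighth of the ring buffer per
overwrite.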
1644
In discard mode, the sub-buffer count parameter is pointless: use two
1646 sub-buffers and set their size according to the requirements of your
1647 situation.
1648
1649
1650 [[channel-switch-timer]]
1651 ==== Switch timer period
1652
1653 The _switch timer period_ is an important configurable attribute of
1654 a channel to ensure periodic sub-buffer flushing.
1655
1656 When the _switch timer_ expires, a sub-buffer switch happens. You can
1657 set the switch timer period attribute when you
1658 <<enabling-disabling-channels,create a channel>> to ensure that event
1659 data is consumed and committed to trace files or to a distant relay
1660 daemon periodically in case of a low event throughput.
1661
1662 [NOTE]
1663 [role="docsvg-channel-switch-timer"]
1664 ====
1665 {note-no-anim}
1666 ====
1667
1668 This attribute is also convenient when you use big sub-buffers to cope
1669 with a sporadic high event throughput, even if the throughput is
1670 normally low.
1671
1672
1673 [[channel-read-timer]]
1674 ==== Read timer period
1675
1676 By default, the LTTng tracers use a notification mechanism to signal a
1677 full sub-buffer so that a consumer daemon can consume it. When such
1678 notifications must be avoided, for example in real-time applications,
1679 you can use the channel's _read timer_ instead. When the read timer
1680 fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1681 consumable sub-buffers.
1682
1683
1684 [[tracefile-rotation]]
1685 ==== Trace file count and size
1686
1687 By default, trace files can grow as large as needed. You can set the
1688 maximum size of each trace file that a channel writes when you
1689 <<enabling-disabling-channels,create a channel>>. When the size of
1690 a trace file reaches the channel's fixed maximum size, LTTng creates
1691 another file to contain the next event records. LTTng appends a file
1692 count to each trace file name in this case.
1693
1694 If you set the trace file size attribute when you create a channel, the
1695 maximum number of trace files that LTTng creates is _unlimited_ by
1696 default. To limit them, you can also set a maximum number of trace
1697 files. When the number of trace files reaches the channel's fixed
1698 maximum count, the oldest trace file is overwritten. This mechanism is
1699 called _trace file rotation_.
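
The wrap-around behaviour can be sketched as a toy model (ours, for
illustration; LTTng's actual file naming and bookkeeping differ):

```python
class RotatingTraceFiles:
    """Toy model of trace file rotation: each trace file holds at most
    `max_size` bytes; with a nonzero `max_count`, once that many files
    exist, the oldest one is overwritten."""

    def __init__(self, max_size, max_count=0):
        self.max_size = max_size
        self.max_count = max_count  # 0 means unlimited files
        self.files = {}             # file index -> bytes used
        self.current = 0

    def write(self, nbytes):
        if self.files.get(self.current, 0) + nbytes > self.max_size:
            self.current += 1       # current file full: open the next one
            if self.max_count:
                self.current %= self.max_count  # ...overwriting the oldest
            self.files[self.current] = 0
        self.files[self.current] = self.files.get(self.current, 0) + nbytes

writer = RotatingTraceFiles(max_size=100, max_count=3)
for _ in range(5):
    writer.write(60)  # five 60-byte writes
# Only 3 files ever exist; the 4th write wrapped around to file 0.
```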
1700
1701
1702 [[event]]
1703 === Instrumentation point, event rule, event, and event record
1704
An _event rule_ is a set of conditions which must **all** be satisfied
for LTTng to record an occurring event.
1707
1708 You set the conditions when you <<enabling-disabling-events,create
1709 an event rule>>.
1710
You always attach an event rule to a <<channel,channel>> when you create
it.
1713
1714 When an event passes the conditions of an event rule, LTTng records it
1715 in one of the attached channel's sub-buffers.
1716
1717 The available conditions, as of LTTng{nbsp}{revision}, are:
1718
1719 * The event rule _is enabled_.
1720 * The instrumentation point's type _is{nbsp}T_.
1721 * The instrumentation point's name (sometimes called _event name_)
1722 _matches{nbsp}N_, but _is not{nbsp}E_.
1723 * The instrumentation point's log level _is as severe as{nbsp}L_, or
1724 _is exactly{nbsp}L_.
1725 * The fields of the event's payload _satisfy_ a filter
1726 expression{nbsp}__F__.
1727
1728 As you can see, all the conditions but the dynamic filter are related to
1729 the event rule's status or to the instrumentation point, not to the
1730 occurring events. This is why, without a filter, checking if an event
1731 passes an event rule is not a dynamic task: when you create or modify an
1732 event rule, all the tracers of its tracing domain enable or disable the
1733 instrumentation points themselves once. This is possible because the
1734 attributes of an instrumentation point (type, name, and log level) are
1735 defined statically. In other words, without a dynamic filter, the tracer
1736 _does not evaluate_ the arguments of an instrumentation point unless it
1737 matches an enabled event rule.
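
The split between static conditions and the dynamic filter can be
expressed as a predicate. This simplified Python model (field names and
structure are ours, not LTTng's API) shows that only the payload filter
needs the event's data:

```python
import fnmatch

def event_passes(rule, point, payload):
    """Check an event against an event rule. Everything except the
    payload filter depends only on the rule's status and on the
    instrumentation point's static attributes."""
    static_ok = (
        rule['enabled']
        and point['type'] == rule['type']
        and fnmatch.fnmatch(point['name'], rule['name_pattern'])
        and not any(fnmatch.fnmatch(point['name'], pat)
                    for pat in rule['exclusions'])
        # Lower numeric log levels are more severe in LTTng-UST.
        and point['loglevel'] <= rule['max_loglevel']
    )
    return static_ok and rule['payload_filter'](payload)

rule = {
    'enabled': True,
    'type': 'tracepoint',
    'name_pattern': 'hello_world:*',
    'exclusions': [],
    'max_loglevel': 13,
    'payload_filter': lambda fields: fields['my_integer_field'] > 10,
}
point = {'type': 'tracepoint',
         'name': 'hello_world:my_first_tracepoint',
         'loglevel': 6}
```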
1738
1739 Note that, for LTTng to record an event, the <<channel,channel>> to
1740 which a matching event rule is attached must also be enabled, and the
1741 tracing session owning this channel must be active.
1742
1743 [role="img-100"]
1744 .Logical path from an instrumentation point to an event record.
1745 image::event-rule.png[]
1746
1747 .Event, event record, or event rule?
1748 ****
1749 With so many similar terms, it's easy to get confused.
1750
1751 An **event** is the consequence of the execution of an _instrumentation
1752 point_, like a tracepoint that you manually place in some source code,
1753 or a Linux kernel KProbe. An event is said to _occur_ at a specific
1754 time. Different actions can be taken upon the occurrence of an event,
1755 like record the event's payload to a buffer.
1756
1757 An **event record** is the representation of an event in a sub-buffer. A
1758 tracer is responsible for capturing the payload of an event, current
1759 context variables, the event's ID, and the event's timestamp. LTTng
1760 can append this sub-buffer to a trace file.
1761
1762 An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
1764 satisfying event rules, but LTTng does not record them.
1765 ****
1766
1767
1768 [[plumbing]]
1769 == Components of noch:{LTTng}
1770
1771 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1772 to call LTTng a simple _tool_ since it is composed of multiple
1773 interacting components. This section describes those components,
1774 explains their respective roles, and shows how they connect together to
1775 form the LTTng ecosystem.
1776
1777 The following diagram shows how the most important components of LTTng
1778 interact with user applications, the Linux kernel, and you:
1779
1780 [role="img-100"]
1781 .Control and trace data paths between LTTng components.
1782 image::plumbing.png[]
1783
1784 The LTTng project incorporates:
1785
1786 * **LTTng-tools**: Libraries and command-line interface to
1787 control tracing sessions.
1788 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1789 ** <<lttng-consumerd,Consumer daemon>> (man:lttng-consumerd(8)).
1790 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1791 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1792 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1793 * **LTTng-UST**: Libraries and Java/Python packages to trace user
1794 applications.
1795 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1796 headers to instrument and trace any native user application.
1797 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1798 *** `liblttng-ust-libc-wrapper`
1799 *** `liblttng-ust-pthread-wrapper`
1800 *** `liblttng-ust-cyg-profile`
1801 *** `liblttng-ust-cyg-profile-fast`
1802 *** `liblttng-ust-dl`
1803 ** User space tracepoint provider source files generator command-line
1804 tool (man:lttng-gen-tp(1)).
1805 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1806 Java applications using `java.util.logging` or
1807 Apache log4j 1.2 logging.
1808 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1809 Python applications using the standard `logging` package.
1810 * **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1811 the kernel.
1812 ** LTTng kernel tracer module.
1813 ** Tracing ring buffer kernel modules.
1814 ** Probe kernel modules.
1815 ** LTTng logger kernel module.
1816
1817
1818 [[lttng-cli]]
1819 === Tracing control command-line interface
1820
1821 [role="img-100"]
1822 .The tracing control command-line interface.
1823 image::plumbing-lttng-cli.png[]
1824
1825 The _man:lttng(1) command-line tool_ is the standard user interface to
1826 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1827 is part of LTTng-tools.
1828
1829 The cmd:lttng tool is linked with
1830 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1831 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1832
1833 The cmd:lttng tool has a Git-like interface:
1834
1835 [role="term"]
1836 ----
1837 $ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1838 ----
1839
1840 The <<controlling-tracing,Tracing control>> section explores the
1841 available features of LTTng using the cmd:lttng tool.
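
For example, the following commands create a tracing session, enable
all the user space tracepoints of a hypothetical `my_provider`
tracepoint provider, and start tracing (the session and provider names
are placeholders):

[role="term"]
----
$ lttng create my-session
$ lttng enable-event --userspace 'my_provider:*'
$ lttng start
----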
1842
1843
1844 [[liblttng-ctl-lttng]]
1845 === Tracing control library
1846
1847 [role="img-100"]
1848 .The tracing control library.
1849 image::plumbing-liblttng-ctl.png[]
1850
1851 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1852 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1853 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1854
1855 The <<lttng-cli,cmd:lttng command-line tool>>
1856 is linked with `liblttng-ctl`.
1857
1858 You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1859 "master" header:
1860
1861 [source,c]
1862 ----
1863 #include <lttng/lttng.h>
1864 ----
1865
Some objects, such as tracing sessions, are referenced by name
(a C string), but most operations require that you first create a
handle with `lttng_create_handle()`.
1869
1870 The best available developer documentation for `liblttng-ctl` is, as of
1871 LTTng{nbsp}{revision}, its installed header files. Every function and
1872 structure is thoroughly documented.
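
As a minimal sketch of what a `liblttng-ctl` user program looks like,
the following example lists the available tracing sessions of your
session daemon. This is an illustration, not a complete reference;
link the program with `-llttng-ctl`.

[source,c]
----
#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    int count;
    int i;

    /* Ask the session daemon for the list of tracing sessions */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        fprintf(stderr, "Error: %s\n", lttng_strerror(count));
        return EXIT_FAILURE;
    }

    for (i = 0; i < count; i++) {
        printf("%s (%s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);
    return EXIT_SUCCESS;
}
----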
1873
1874
1875 [[lttng-ust]]
1876 === User space tracing library
1877
1878 [role="img-100"]
1879 .The user space tracing library.
1880 image::plumbing-liblttng-ust.png[]
1881
1882 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1883 is the LTTng user space tracer. It receives commands from a
1884 <<lttng-sessiond,session daemon>>, for example to
1885 enable and disable specific instrumentation points, and writes event
1886 records to ring buffers shared with a
1887 <<lttng-consumerd,consumer daemon>>.
1888 `liblttng-ust` is part of LTTng-UST.
1889
1890 Public C header files are installed beside `liblttng-ust` to
1891 instrument any <<c-application,C or $$C++$$ application>>.
1892
<<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
packages, use their own tracepoint provider library, which is linked
with `liblttng-ust`.
1896
1897 An application or library does not have to initialize `liblttng-ust`
1898 manually: its constructor does the necessary tasks to properly register
1899 to a session daemon. The initialization phase also enables the
1900 instrumentation points matching the <<event,event rules>> that you
1901 already created.
1902
1903
1904 [[lttng-ust-agents]]
1905 === User space tracing agents
1906
1907 [role="img-100"]
1908 .The user space tracing agents.
1909 image::plumbing-lttng-ust-agents.png[]
1910
1911 The _LTTng-UST Java and Python agents_ are regular Java and Python
1912 packages which add LTTng tracing capabilities to the
1913 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1914
1915 In the case of Java, the
1916 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1917 core logging facilities] and
1918 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1920
1921 In the case of Python, the standard
1922 https://docs.python.org/3/library/logging.html[`logging`] package
1923 is supported. Both Python 2 and Python 3 modules can import the
1924 LTTng-UST Python agent package.
1925
1926 The applications using the LTTng-UST agents are in the
1927 `java.util.logging` (JUL),
1928 log4j, and Python <<domain,tracing domains>>.
1929
1930 Both agents use the same mechanism to trace the log statements. When an
1931 agent is initialized, it creates a log handler that attaches to the root
1932 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1933 When the application executes a log statement, it is passed to the
1934 agent's log handler by the root logger. The agent's log handler calls a
1935 native function in a tracepoint provider package shared library linked
1936 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1937 other fields, like its logger name and its log level. This native
1938 function contains a user space instrumentation point, hence tracing the
1939 log statement.
1940
1941 The log level condition of an
1942 <<event,event rule>> is considered when tracing
1943 a Java or a Python application, and it's compatible with the standard
1944 JUL, log4j, and Python log levels.
1945
1946
1947 [[lttng-modules]]
1948 === LTTng kernel modules
1949
1950 [role="img-100"]
1951 .The LTTng kernel modules.
1952 image::plumbing-lttng-modules.png[]
1953
1954 The _LTTng kernel modules_ are a set of Linux kernel modules
1955 which implement the kernel tracer of the LTTng project. The LTTng
1956 kernel modules are part of LTTng-modules.
1957
1958 The LTTng kernel modules include:
1959
1960 * A set of _probe_ modules.
1961 +
1962 Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points. There
are
1964 also modules to attach to the entry and return points of the Linux
1965 system call functions.
1966
1967 * _Ring buffer_ modules.
1968 +
1969 A ring buffer implementation is provided as kernel modules. The LTTng
1970 kernel tracer writes to the ring buffer; a
1971 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1972
1973 * The _LTTng kernel tracer_ module.
1974 * The _LTTng logger_ module.
1975 +
1976 The LTTng logger module implements the special path:{/proc/lttng-logger}
1977 file so that any executable can generate LTTng events by opening and
1978 writing to this file.
1979 +
1980 See <<proc-lttng-logger-abi,LTTng logger>>.
1981
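For example, assuming the LTTng logger module is loaded and that you
have write access to the file, a shell command can generate an LTTng
event like this:

[role="term"]
----
$ echo -n 'Hello, World!' > /proc/lttng-logger
----
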
1982 Generally, you do not have to load the LTTng kernel modules manually
1983 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
daemon>> loads the necessary modules when starting. If you have extra
probe modules, you can tell the session daemon to load them using its
command-line options.
1987
1988 The LTTng kernel modules are installed in
1989 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1990 the kernel release (see `uname --kernel-release`).
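
For example, you can confirm that the LTTng kernel modules are
installed by listing this directory (the exact module list depends on
your LTTng-modules version):

[role="term"]
----
$ ls /usr/lib/modules/$(uname --kernel-release)/extra
----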
1991
1992
1993 [[lttng-sessiond]]
1994 === Session daemon
1995
1996 [role="img-100"]
1997 .The session daemon.
1998 image::plumbing-sessiond.png[]
1999
2000 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
2001 managing tracing sessions and for controlling the various components of
2002 LTTng. The session daemon is part of LTTng-tools.
2003
2004 The session daemon sends control requests to and receives control
2005 responses from:
2006
2007 * The <<lttng-ust,user space tracing library>>.
2008 +
2009 Any instance of the user space tracing library first registers to
2010 a session daemon. Then, the session daemon can send requests to
2011 this instance, such as:
2012 +
2013 --
2014 ** Get the list of tracepoints.
2015 ** Share an <<event,event rule>> so that the user space tracing library
2016 can enable or disable tracepoints. Amongst the possible conditions
2017 of an event rule is a filter expression which `liblttng-ust` evalutes
2018 when an event occurs.
2019 ** Share <<channel,channel>> attributes and ring buffer locations.
2020 --
2021 +
2022 The session daemon and the user space tracing library use a Unix
2023 domain socket for their communication.
2024
2025 * The <<lttng-ust-agents,user space tracing agents>>.
2026 +
2027 Any instance of a user space tracing agent first registers to
2028 a session daemon. Then, the session daemon can send requests to
2029 this instance, such as:
2030 +
2031 --
2032 ** Get the list of loggers.
2033 ** Enable or disable a specific logger.
2034 --
2035 +
2036 The session daemon and the user space tracing agent use a TCP connection
2037 for their communication.
2038
2039 * The <<lttng-modules,LTTng kernel tracer>>.
2040 * The <<lttng-consumerd,consumer daemon>>.
2041 +
2042 The session daemon sends requests to the consumer daemon to instruct
2043 it where to send the trace data streams, amongst other information.
2044
2045 * The <<lttng-relayd,relay daemon>>.
2046
2047 The session daemon receives commands from the
2048 <<liblttng-ctl-lttng,tracing control library>>.
2049
2050 The root session daemon loads the appropriate
2051 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2052 a <<lttng-consumerd,consumer daemon>> as soon as you create
2053 an <<event,event rule>>.
2054
2055 The session daemon does not send and receive trace data: this is the
2056 role of the <<lttng-consumerd,consumer daemon>> and
2057 <<lttng-relayd,relay daemon>>. It does, however, generate the
2058 http://diamon.org/ctf/[CTF] metadata stream.
2059
2060 Each Unix user can have its own session daemon instance. The
2061 tracing sessions managed by different session daemons are completely
2062 independent.
2063
2064 The root user's session daemon is the only one which is
2065 allowed to control the LTTng kernel tracer, and its spawned consumer
2066 daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user who is a member
2068 of the <<tracing-group,tracing group>> is allowed
2069 to create <<channel,channels>> in the
2070 Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
2071 kernel.
2072
2073 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
2074 session daemon when using its `create` command if none is currently
2075 running. You can also start the session daemon manually.
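
For example, to start a root session daemon manually and make it
detach from the console:

[role="term"]
----
# lttng-sessiond --daemonize
----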
2076
2077
2078 [[lttng-consumerd]]
2079 === Consumer daemon
2080
2081 [role="img-100"]
2082 .The consumer daemon.
2083 image::plumbing-consumerd.png[]
2084
2085 The _consumer daemon_, man:lttng-consumerd(8), is a daemon which shares
2086 ring buffers with user applications or with the LTTng kernel modules to
2087 collect trace data and send it to some location (on disk or to a
2088 <<lttng-relayd,relay daemon>> over the network). The consumer daemon
2089 is part of LTTng-tools.
2090
2091 You do not start a consumer daemon manually: a consumer daemon is always
2092 spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
2093 <<event,event rule>>, that is, before you start tracing. When you kill
2094 its owner session daemon, the consumer daemon also exits because it is
2095 the session daemon's child process. Command-line options of
2096 man:lttng-sessiond(8) target the consumer daemon process.
2097
2098 There are up to two running consumer daemons per Unix user, whereas only
2099 one session daemon can run per user. This is because each process can be
2100 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
2101 and 64-bit processes, it is more efficient to have separate
2102 corresponding 32-bit and 64-bit consumer daemons. The root user is an
2103 exception: it can have up to _three_ running consumer daemons: 32-bit
2104 and 64-bit instances for its user applications, and one more
2105 reserved for collecting kernel trace data.
2106
2107
2108 [[lttng-relayd]]
2109 === Relay daemon
2110
2111 [role="img-100"]
2112 .The relay daemon.
2113 image::plumbing-relayd.png[]
2114
2115 The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
2116 between remote session and consumer daemons, local trace files, and a
2117 remote live trace viewer. The relay daemon is part of LTTng-tools.
2118
2119 The main purpose of the relay daemon is to implement a receiver of
2120 <<sending-trace-data-over-the-network,trace data over the network>>.
2121 This is useful when the target system does not have much file system
2122 space to record trace files locally.
2123
2124 The relay daemon is also a server to which a
2125 <<lttng-live,live trace viewer>> can
2126 connect. The live trace viewer sends requests to the relay daemon to
2127 receive trace data as the target system emits events. The
2128 communication protocol is named _LTTng live_; it is used over TCP
2129 connections.
2130
2131 Note that you can start the relay daemon on the target system directly.
2132 This is the setup of choice when the use case is to view events as
2133 the target system emits them without the need of a remote system.
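
For example, assuming a relay daemon started with its default options
on a hypothetical remote system named `remote-host`, you can make the
trace data of a tracing session go to it by creating the session with
its URL:

[role="term"]
----
$ lttng create my-session --set-url=net://remote-host
----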
2134
2135
2136 [[instrumenting]]
2137 == [[using-lttng]]Instrumentation
2138
2139 There are many examples of tracing and monitoring in our everyday life:
2140
2141 * You have access to real-time and historical weather reports and
2142 forecasts thanks to weather stations installed around the country.
2143 * You know your heart is safe thanks to an electrocardiogram.
2144 * You make sure not to drive your car too fast and to have enough fuel
2145 to reach your destination thanks to gauges visible on your dashboard.
2146
2147 All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to the surface of your
skin, cardiac monitoring is futile.
2150
LTTng, as a tracer, is no different from those real-life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you'd better have **instrumentation points** in
the subject you're tracing, that is, the actual software.
2155
Various ways have been developed to instrument a piece of software for
LTTng
2157 tracing. The most straightforward one is to manually place
2158 instrumentation points, called _tracepoints_, in the software's source
2159 code. It is also possible to add instrumentation points dynamically in
2160 the Linux kernel <<domain,tracing domain>>.
2161
2162 If you're only interested in tracing the Linux kernel, your
2163 instrumentation needs are probably already covered by LTTng's built-in
2164 <<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
2165 user application which is already instrumented for LTTng tracing.
2166 In such cases, you can skip this whole section and read the topics of
2167 the <<controlling-tracing,Tracing control>> section.
2168
2169 Many methods are available to instrument a piece of software for LTTng
2170 tracing. They are:
2171
2172 * <<c-application,User space instrumentation for C and $$C++$$
2173 applications>>.
2174 * <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
2175 * <<java-application,User space Java agent>>.
2176 * <<python-application,User space Python agent>>.
2177 * <<proc-lttng-logger-abi,LTTng logger>>.
2178 * <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
2179
2180
2181 [[c-application]]
2182 === [[cxx-application]]User space instrumentation for C and $$C++$$ applications
2183
2184 The procedure to instrument a C or $$C++$$ user application with
2185 the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
2186
2187 . <<tracepoint-provider,Create the source files of a tracepoint provider
2188 package>>.
2189 . <<probing-the-application-source-code,Add tracepoints to
2190 the application's source code>>.
2191 . <<building-tracepoint-providers-and-user-application,Build and link
2192 a tracepoint provider package and the user application>>.
2193
2194 If you need quick, man:printf(3)-like instrumentation, you can skip
2195 those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
2196 instead.
2197
2198 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
2199 instrument a user application with `liblttng-ust`.
2200
2201
2202 [[tracepoint-provider]]
2203 ==== Create the source files of a tracepoint provider package
2204
2205 A _tracepoint provider_ is a set of compiled functions which provide
2206 **tracepoints** to an application, the type of instrumentation point
2207 supported by LTTng-UST. Those functions can emit events with
2208 user-defined fields and serialize those events as event records to one
2209 or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
2210 macro, which you <<probing-the-application-source-code,insert in a user
2211 application's source code>>, calls those functions.
2212
2213 A _tracepoint provider package_ is an object file (`.o`) or a shared
2214 library (`.so`) which contains one or more tracepoint providers.
2215 Its source files are:
2216
2217 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
2218 * A <<tpp-source,tracepoint provider package source>> (`.c`).
2219
2220 A tracepoint provider package is dynamically linked with `liblttng-ust`,
2221 the LTTng user space tracer, at run time.
2222
2223 [role="img-100"]
2224 .User application linked with `liblttng-ust` and containing a tracepoint provider.
2225 image::ust-app.png[]
2226
2227 NOTE: If you need quick, man:printf(3)-like instrumentation, you can
2228 skip creating and using a tracepoint provider and use
2229 <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
2230
2231
2232 [[tpp-header]]
2233 ===== Create a tracepoint provider header file template
2234
2235 A _tracepoint provider header file_ contains the tracepoint
2236 definitions of a tracepoint provider.
2237
2238 To create a tracepoint provider header file:
2239
2240 . Start from this template:
2241 +
2242 --
2243 [source,c]
2244 .Tracepoint provider header file template (`.h` file extension).
2245 ----
2246 #undef TRACEPOINT_PROVIDER
2247 #define TRACEPOINT_PROVIDER provider_name
2248
2249 #undef TRACEPOINT_INCLUDE
2250 #define TRACEPOINT_INCLUDE "./tp.h"
2251
2252 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
2253 #define _TP_H
2254
2255 #include <lttng/tracepoint.h>
2256
2257 /*
2258 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
2259 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
2260 */
2261
2262 #endif /* _TP_H */
2263
2264 #include <lttng/tracepoint-event.h>
2265 ----
2266 --
2267
2268 . Replace:
2269 +
2270 * `provider_name` with the name of your tracepoint provider.
* `"./tp.h"` with the name of your tracepoint provider header file.
2272
2273 . Below the `#include <lttng/tracepoint.h>` line, put your
2274 <<defining-tracepoints,tracepoint definitions>>.
2275
2276 Your tracepoint provider name must be unique amongst all the possible
tracepoint provider names used on the same target system. We suggest
that you include the name of your project or company in the name,
2279 for example, `org_lttng_my_project_tpp`.
2280
2281 TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
2282 this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
2283 write are the <<defining-tracepoints,tracepoint definitions>>.
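
For example, assuming a template file named path:{tp.tp} which contains
your tracepoint definitions, the following command generates the
corresponding tracepoint provider files:

[role="term"]
----
$ lttng-gen-tp tp.tp
----

This creates the path:{tp.h}, path:{tp.c}, and path:{tp.o} files in the
current directory.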
2284
2285
2286 [[defining-tracepoints]]
2287 ===== Create a tracepoint definition
2288
2289 A _tracepoint definition_ defines, for a given tracepoint:
2290
2291 * Its **input arguments**. They are the macro parameters that the
2292 `tracepoint()` macro accepts for this particular tracepoint
2293 in the user application's source code.
2294 * Its **output event fields**. They are the sources of event fields
2295 that form the payload of any event that the execution of the
2296 `tracepoint()` macro emits for this particular tracepoint.
2297
2298 You can create a tracepoint definition by using the
2299 `TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2300 line in the
2301 <<tpp-header,tracepoint provider header file template>>.
2302
2303 The syntax of the `TRACEPOINT_EVENT()` macro is:
2304
2305 [source,c]
2306 .`TRACEPOINT_EVENT()` macro syntax.
2307 ----
2308 TRACEPOINT_EVENT(
2309 /* Tracepoint provider name */
2310 provider_name,
2311
2312 /* Tracepoint name */
2313 tracepoint_name,
2314
2315 /* Input arguments */
2316 TP_ARGS(
2317 arguments
2318 ),
2319
2320 /* Output event fields */
2321 TP_FIELDS(
2322 fields
2323 )
2324 )
2325 ----
2326
2327 Replace:
2328
2329 * `provider_name` with your tracepoint provider name.
2330 * `tracepoint_name` with your tracepoint name.
2331 * `arguments` with the <<tpp-def-input-args,input arguments>>.
2332 * `fields` with the <<tpp-def-output-fields,output event field>>
2333 definitions.
2334
2335 This tracepoint emits events named `provider_name:tracepoint_name`.
2336
2337 [IMPORTANT]
2338 .Event name's length limitation
2339 ====
2340 The concatenation of the tracepoint provider name and the
2341 tracepoint name must not exceed **254 characters**. If it does, the
2342 instrumented application compiles and runs, but LTTng throws multiple
2343 warnings and you could experience serious issues.
2344 ====
2345
2346 [[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
2347
2348 [source,c]
2349 .`TP_ARGS()` macro syntax.
2350 ----
2351 TP_ARGS(
2352 type, arg_name
2353 )
2354 ----
2355
2356 Replace:
2357
2358 * `type` with the C type of the argument.
2359 * `arg_name` with the argument name.
2360
2361 You can repeat `type` and `arg_name` up to 10 times to have
2362 more than one argument.
2363
2364 .`TP_ARGS()` usage with three arguments.
2365 ====
2366 [source,c]
2367 ----
2368 TP_ARGS(
2369 int, count,
2370 float, ratio,
2371 const char*, query
2372 )
2373 ----
2374 ====
2375
2376 The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2377 tracepoint definition with no input arguments.
2378
2379 [[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2380 `ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2381 man:lttng-ust(3) for a complete description of the available `ctf_*()`
2382 macros. A `ctf_*()` macro specifies the type, size, and byte order of
2383 one event field.
2384
2385 Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
C expression that the tracer evaluates at the `tracepoint()` macro site
2387 in the application's source code. This expression provides a field's
2388 source of data. The argument expression can include input argument names
2389 listed in the `TP_ARGS()` macro.
2390
2391 Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2392 must be unique within a given tracepoint definition.
2393
2394 Here's a complete tracepoint definition example:
2395
2396 .Tracepoint definition.
2397 ====
2398 The following tracepoint definition defines a tracepoint which takes
2399 three input arguments and has four output event fields.
2400
2401 [source,c]
2402 ----
2403 #include "my-custom-structure.h"
2404
2405 TRACEPOINT_EVENT(
2406 my_provider,
2407 my_tracepoint,
2408 TP_ARGS(
2409 const struct my_custom_structure*, my_custom_structure,
2410 float, ratio,
2411 const char*, query
2412 ),
2413 TP_FIELDS(
2414 ctf_string(query_field, query)
2415 ctf_float(double, ratio_field, ratio)
2416 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2417 ctf_integer(int, send_size, my_custom_structure->send_size)
2418 )
2419 )
2420 ----
2421
2422 You can refer to this tracepoint definition with the `tracepoint()`
2423 macro in your application's source code like this:
2424
2425 [source,c]
2426 ----
2427 tracepoint(my_provider, my_tracepoint,
2428 my_structure, some_ratio, the_query);
2429 ----
2430 ====
2431
2432 NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2433 if they satisfy an enabled <<event,event rule>>.
2434
2435
2436 [[using-tracepoint-classes]]
2437 ===== Use a tracepoint class
2438
2439 A _tracepoint class_ is a class of tracepoints which share the same
2440 output event field definitions. A _tracepoint instance_ is one
2441 instance of such a defined tracepoint class, with its own tracepoint
2442 name.
2443
2444 The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2445 shorthand which defines both a tracepoint class and a tracepoint
2446 instance at the same time.
2447
2448 When you build a tracepoint provider package, the C or $$C++$$ compiler
2449 creates one serialization function for each **tracepoint class**. A
2450 serialization function is responsible for serializing the event fields
2451 of a tracepoint to a sub-buffer when tracing.
2452
2453 For various performance reasons, when your situation requires multiple
2454 tracepoint definitions with different names, but with the same event
2455 fields, we recommend that you manually create a tracepoint class
2456 and instantiate as many tracepoint instances as needed. One positive
2457 effect of such a design, amongst other advantages, is that all
2458 tracepoint instances of the same tracepoint class reuse the same
2459 serialization function, thus reducing
2460 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2461
2462 .Use a tracepoint class and tracepoint instances.
2463 ====
2464 Consider the following three tracepoint definitions:
2465
2466 [source,c]
2467 ----
2468 TRACEPOINT_EVENT(
2469 my_app,
2470 get_account,
2471 TP_ARGS(
2472 int, userid,
2473 size_t, len
2474 ),
2475 TP_FIELDS(
2476 ctf_integer(int, userid, userid)
2477 ctf_integer(size_t, len, len)
2478 )
2479 )
2480
2481 TRACEPOINT_EVENT(
2482 my_app,
2483 get_settings,
2484 TP_ARGS(
2485 int, userid,
2486 size_t, len
2487 ),
2488 TP_FIELDS(
2489 ctf_integer(int, userid, userid)
2490 ctf_integer(size_t, len, len)
2491 )
2492 )
2493
2494 TRACEPOINT_EVENT(
2495 my_app,
2496 get_transaction,
2497 TP_ARGS(
2498 int, userid,
2499 size_t, len
2500 ),
2501 TP_FIELDS(
2502 ctf_integer(int, userid, userid)
2503 ctf_integer(size_t, len, len)
2504 )
2505 )
2506 ----
2507
2508 In this case, we create three tracepoint classes, with one implicit
2509 tracepoint instance for each of them: `get_account`, `get_settings`, and
2510 `get_transaction`. However, they all share the same event field names
2511 and types. Hence three identical, yet independent serialization
2512 functions are created when you build the tracepoint provider package.
2513
2514 A better design choice is to define a single tracepoint class and three
2515 tracepoint instances:
2516
2517 [source,c]
2518 ----
2519 /* The tracepoint class */
2520 TRACEPOINT_EVENT_CLASS(
2521 /* Tracepoint provider name */
2522 my_app,
2523
2524 /* Tracepoint class name */
2525 my_class,
2526
2527 /* Input arguments */
2528 TP_ARGS(
2529 int, userid,
2530 size_t, len
2531 ),
2532
2533 /* Output event fields */
2534 TP_FIELDS(
2535 ctf_integer(int, userid, userid)
2536 ctf_integer(size_t, len, len)
2537 )
2538 )
2539
2540 /* The tracepoint instances */
2541 TRACEPOINT_EVENT_INSTANCE(
2542 /* Tracepoint provider name */
2543 my_app,
2544
2545 /* Tracepoint class name */
2546 my_class,
2547
2548 /* Tracepoint name */
2549 get_account,
2550
2551 /* Input arguments */
2552 TP_ARGS(
2553 int, userid,
2554 size_t, len
2555 )
2556 )
2557 TRACEPOINT_EVENT_INSTANCE(
2558 my_app,
2559 my_class,
2560 get_settings,
2561 TP_ARGS(
2562 int, userid,
2563 size_t, len
2564 )
2565 )
2566 TRACEPOINT_EVENT_INSTANCE(
2567 my_app,
2568 my_class,
2569 get_transaction,
2570 TP_ARGS(
2571 int, userid,
2572 size_t, len
2573 )
2574 )
2575 ----
2576 ====
2577
2578
2579 [[assigning-log-levels]]
2580 ===== Assign a log level to a tracepoint definition
2581
2582 You can assign an optional _log level_ to a
2583 <<defining-tracepoints,tracepoint definition>>.
2584
2585 Assigning different levels of severity to tracepoint definitions can
2586 be useful: when you <<enabling-disabling-events,create an event rule>>,
2587 you can target tracepoints having a log level as severe as a specific
2588 value.
2589
2590 The concept of LTTng-UST log levels is similar to the levels found
2591 in typical logging frameworks:
2592
2593 * In a logging framework, the log level is given by the function
2594 or method name you use at the log statement site: `debug()`,
2595 `info()`, `warn()`, `error()`, and so on.
2596 * In LTTng-UST, you statically assign the log level to a tracepoint
2597 definition; any `tracepoint()` macro invocation which refers to
2598 this definition has this log level.
2599
2600 You can assign a log level to a tracepoint definition with the
2601 `TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2602 <<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a
given
2604 tracepoint.
2605
2606 The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2607
2608 [source,c]
2609 .`TRACEPOINT_LOGLEVEL()` macro syntax.
2610 ----
2611 TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2612 ----
2613
2614 Replace:
2615
2616 * `provider_name` with the tracepoint provider name.
2617 * `tracepoint_name` with the tracepoint name.
2618 * `log_level` with the log level to assign to the tracepoint
2619 definition named `tracepoint_name` in the `provider_name`
2620 tracepoint provider.
2621 +
2622 See man:lttng-ust(3) for a list of available log level names.
2623
2624 .Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2625 ====
2626 [source,c]
2627 ----
2628 /* Tracepoint definition */
2629 TRACEPOINT_EVENT(
2630 my_app,
2631 get_transaction,
2632 TP_ARGS(
2633 int, userid,
2634 size_t, len
2635 ),
2636 TP_FIELDS(
2637 ctf_integer(int, userid, userid)
2638 ctf_integer(size_t, len, len)
2639 )
2640 )
2641
2642 /* Log level assignment */
2643 TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2644 ----
2645 ====
2646
2647
2648 [[tpp-source]]
2649 ===== Create a tracepoint provider package source file
2650
2651 A _tracepoint provider package source file_ is a C source file which
2652 includes a <<tpp-header,tracepoint provider header file>> to expand its
2653 macros into event serialization and other functions.
2654
2655 You can always use the following tracepoint provider package source
2656 file template:
2657
2658 [source,c]
2659 .Tracepoint provider package source file template.
2660 ----
2661 #define TRACEPOINT_CREATE_PROBES
2662
2663 #include "tp.h"
2664 ----
2665
Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You can also include more than one tracepoint provider
header file here to create a tracepoint provider package holding more
than one tracepoint provider.
2670
2671
2672 [[probing-the-application-source-code]]
2673 ==== Add tracepoints to an application's source code
2674
2675 Once you <<tpp-header,create a tracepoint provider header file>>, you
2676 can use the `tracepoint()` macro in your application's
2677 source code to insert the tracepoints that this header
2678 <<defining-tracepoints,defines>>.
2679
2680 The `tracepoint()` macro takes at least two parameters: the tracepoint
2681 provider name and the tracepoint name. The corresponding tracepoint
2682 definition defines the other parameters.
2683
2684 .`tracepoint()` usage.
2685 ====
2686 The following <<defining-tracepoints,tracepoint definition>> defines a
2687 tracepoint which takes two input arguments and has two output event
2688 fields.
2689
2690 [source,c]
2691 .Tracepoint provider header file.
2692 ----
2693 #include "my-custom-structure.h"
2694
2695 TRACEPOINT_EVENT(
2696 my_provider,
2697 my_tracepoint,
2698 TP_ARGS(
2699 int, argc,
2700 const char*, cmd_name
2701 ),
2702 TP_FIELDS(
2703 ctf_string(cmd_name, cmd_name)
2704 ctf_integer(int, number_of_args, argc)
2705 )
2706 )
2707 ----
2708
2709 You can refer to this tracepoint definition with the `tracepoint()`
2710 macro in your application's source code like this:
2711
2712 [source,c]
2713 .Application's source file.
2714 ----
#define TRACEPOINT_DEFINE
#include "tp.h"
2716
2717 int main(int argc, char* argv[])
2718 {
2719 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2720
2721 return 0;
2722 }
2723 ----
2724
2725 Note how the application's source code includes
2726 the tracepoint provider header file containing the tracepoint
2727 definitions to use, path:{tp.h}.
2728 ====
2729
2730 .`tracepoint()` usage with a complex tracepoint definition.
2731 ====
2732 Consider this complex tracepoint definition, where multiple event
2733 fields refer to the same input arguments in their argument expression
2734 parameter:
2735
2736 [source,c]
2737 .Tracepoint provider header file.
2738 ----
/* For `struct stat` */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/* For strlen() */
#include <string.h>

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, my_int_arg,
        char*, my_str_arg,
        struct stat*, st
    ),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23 + 17)
        ctf_integer(int, my_int_arg_field, my_int_arg)
        ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
        ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
                                     my_str_arg[2] + my_str_arg[3])
        ctf_string(my_str_arg_field, my_str_arg)
        ctf_integer_hex(off_t, size_field, st->st_size)
        ctf_float(double, size_dbl_field, (double) st->st_size)
        ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
                          size_t, strlen(my_str_arg) / 2)
    )
)
2765 ----
2766
2767 You can refer to this tracepoint definition with the `tracepoint()`
2768 macro in your application's source code like this:
2769
2770 [source,c]
2771 .Application's source file.
2772 ----
2773 #define TRACEPOINT_DEFINE
2774 #include "tp.h"
2775
int main(void)
{
    struct stat s;

    stat("/etc/fstab", &s);
    tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);

    return 0;
}
2785 ----
2786
2787 If you look at the event record that LTTng writes when tracing this
2788 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2789 it should look like this:
2790
2791 .Event record fields
2792 |====
2793 |Field's name |Field's value
2794 |`my_constant_field` |40
2795 |`my_int_arg_field` |23
2796 |`my_int_arg_field2` |529
2797 |`sum4_field` |389
2798 |`my_str_arg_field` |`Hello, World!`
2799 |`size_field` |0x12d
2800 |`size_dbl_field` |301.0
2801 |`half_my_str_arg_field` |`Hello,`
2802 |====
2803 ====
2804
2805 Sometimes, the arguments you pass to `tracepoint()` are expensive to
2806 compute--they use the call stack, for example. To avoid this
2807 computation when the tracepoint is disabled, you can use the
2808 `tracepoint_enabled()` and `do_tracepoint()` macros.
2809
2810 The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2811 is:
2812
2813 [source,c]
2814 .`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2815 ----
2816 tracepoint_enabled(provider_name, tracepoint_name)
2817 do_tracepoint(provider_name, tracepoint_name, ...)
2818 ----
2819
2820 Replace:
2821
2822 * `provider_name` with the tracepoint provider name.
2823 * `tracepoint_name` with the tracepoint name.
2824
2825 `tracepoint_enabled()` returns a non-zero value if the tracepoint named
2826 `tracepoint_name` from the provider named `provider_name` is enabled
2827 **at run time**.
2828
`do_tracepoint()` is like `tracepoint()`, except that it doesn't check
whether the tracepoint is enabled. Combining `tracepoint()` with
`tracepoint_enabled()` is dangerous, because `tracepoint()` also
contains its own `tracepoint_enabled()` check: a race condition is
possible in this situation:
2834
2835 [source,c]
2836 .Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2837 ----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
}

tracepoint(my_provider, my_tracepoint, stuff);
2843 ----
2844
If the tracepoint becomes enabled after the `tracepoint_enabled()`
check, `stuff` is not prepared: the emitted event either contains
wrong data, or the whole application could crash (with a segmentation
fault, for example).
2848
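To avoid this race, perform the expensive computation and emit the
event within the same conditional block, using `do_tracepoint()`:

[source,c]
.Race-free usage of `tracepoint_enabled()` with `do_tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----
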
NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` has an
`STAP_PROBEV()` call. If you need it, you must emit
this call yourself.
2852
2853
2854 [[building-tracepoint-providers-and-user-application]]
2855 ==== Build and link a tracepoint provider package and an application
2856
2857 Once you have one or more <<tpp-header,tracepoint provider header
2858 files>> and a <<tpp-source,tracepoint provider package source file>>,
2859 you can create the tracepoint provider package by compiling its source
2860 file. From here, multiple build and run scenarios are possible. The
2861 following table shows common application and library configurations
2862 along with the required command lines to achieve them.
2863
In the diagrams below, we use these file names:
2865
2866 `app`::
2867 Executable application.
2868
2869 `app.o`::
2870 Application's object file.
2871
2872 `tpp.o`::
2873 Tracepoint provider package object file.
2874
2875 `tpp.a`::
2876 Tracepoint provider package archive file.
2877
2878 `libtpp.so`::
2879 Tracepoint provider package shared object file.
2880
2881 `emon.o`::
2882 User library object file.
2883
2884 `libemon.so`::
2885 User library shared object file.
2886
We use the following symbols in the diagrams of the table below:
2888
2889 [role="img-100"]
2890 .Symbols used in the build scenario diagrams.
2891 image::ust-sit-symbols.png[]
2892
2893 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2894 variable in the following instructions.
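
If it is not, you can prepend the current directory to
env:LD_LIBRARY_PATH before you run the commands below:

[role="term"]
----
$ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
----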
2895
2896 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2897 .Common tracepoint provider package scenarios.
2898 |====
2899 |Scenario |Instructions
2900
2901 |
2902 The instrumented application is statically linked with
2903 the tracepoint provider package object.
2904
2905 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2906
2907 |
2908 include::../common/ust-sit-step-tp-o.txt[]
2909
2910 To build the instrumented application:
2911
2912 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2913 +
2914 --
2915 [source,c]
2916 ----
2917 #define TRACEPOINT_DEFINE
2918 ----
2919 --
2920
2921 . Compile the application source file:
2922 +
2923 --
2924 [role="term"]
2925 ----
2926 $ gcc -c app.c
2927 ----
2928 --
2929
2930 . Build the application:
2931 +
2932 --
2933 [role="term"]
2934 ----
2935 $ gcc -o app app.o tpp.o -llttng-ust -ldl
2936 ----
2937 --
2938
2939 To run the instrumented application:
2940
2941 * Start the application:
2942 +
2943 --
2944 [role="term"]
2945 ----
2946 $ ./app
2947 ----
2948 --
2949
2950 |
2951 The instrumented application is statically linked with the
2952 tracepoint provider package archive file.
2953
2954 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2955
2956 |
2957 To create the tracepoint provider package archive file:
2958
2959 . Compile the <<tpp-source,tracepoint provider package source file>>:
2960 +
2961 --
2962 [role="term"]
2963 ----
2964 $ gcc -I. -c tpp.c
2965 ----
2966 --
2967
2968 . Create the tracepoint provider package archive file:
2969 +
2970 --
2971 [role="term"]
2972 ----
2973 $ ar rcs tpp.a tpp.o
2974 ----
2975 --
2976
2977 To build the instrumented application:
2978
2979 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2980 +
2981 --
2982 [source,c]
2983 ----
2984 #define TRACEPOINT_DEFINE
2985 ----
2986 --
2987
2988 . Compile the application source file:
2989 +
2990 --
2991 [role="term"]
2992 ----
2993 $ gcc -c app.c
2994 ----
2995 --
2996
2997 . Build the application:
2998 +
2999 --
3000 [role="term"]
3001 ----
3002 $ gcc -o app app.o tpp.a -llttng-ust -ldl
3003 ----
3004 --
3005
3006 To run the instrumented application:
3007
3008 * Start the application:
3009 +
3010 --
3011 [role="term"]
3012 ----
3013 $ ./app
3014 ----
3015 --
3016
3017 |
3018 The instrumented application is linked with the tracepoint provider
3019 package shared object.
3020
3021 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3022
3023 |
3024 include::../common/ust-sit-step-tp-so.txt[]
3025
3026 To build the instrumented application:
3027
3028 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3029 +
3030 --
3031 [source,c]
3032 ----
3033 #define TRACEPOINT_DEFINE
3034 ----
3035 --
3036
3037 . Compile the application source file:
3038 +
3039 --
3040 [role="term"]
3041 ----
3042 $ gcc -c app.c
3043 ----
3044 --
3045
3046 . Build the application:
3047 +
3048 --
3049 [role="term"]
3050 ----
3051 $ gcc -o app app.o -ldl -L. -ltpp
3052 ----
3053 --
3054
3055 To run the instrumented application:
3056
3057 * Start the application:
3058 +
3059 --
3060 [role="term"]
3061 ----
3062 $ ./app
3063 ----
3064 --
3065
3066 |
3067 The tracepoint provider package shared object is preloaded before the
3068 instrumented application starts.
3069
3070 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3071
3072 |
3073 include::../common/ust-sit-step-tp-so.txt[]
3074
3075 To build the instrumented application:
3076
3077 . In path:{app.c}, before including path:{tpp.h}, add the
3078 following lines:
3079 +
3080 --
3081 [source,c]
3082 ----
3083 #define TRACEPOINT_DEFINE
3084 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3085 ----
3086 --
3087
3088 . Compile the application source file:
3089 +
3090 --
3091 [role="term"]
3092 ----
3093 $ gcc -c app.c
3094 ----
3095 --
3096
3097 . Build the application:
3098 +
3099 --
3100 [role="term"]
3101 ----
3102 $ gcc -o app app.o -ldl
3103 ----
3104 --
3105
3106 To run the instrumented application with tracing support:
3107
3108 * Preload the tracepoint provider package shared object and
3109 start the application:
3110 +
3111 --
3112 [role="term"]
3113 ----
3114 $ LD_PRELOAD=./libtpp.so ./app
3115 ----
3116 --
3117
3118 To run the instrumented application without tracing support:
3119
3120 * Start the application:
3121 +
3122 --
3123 [role="term"]
3124 ----
3125 $ ./app
3126 ----
3127 --
3128
3129 |
3130 The instrumented application dynamically loads the tracepoint provider
3131 package shared object.
3132
3133 See the <<dlclose-warning,warning about `dlclose()`>>.
3134
3135 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3136
3137 |
3138 include::../common/ust-sit-step-tp-so.txt[]
3139
3140 To build the instrumented application:
3141
3142 . In path:{app.c}, before including path:{tpp.h}, add the
3143 following lines:
3144 +
3145 --
3146 [source,c]
3147 ----
3148 #define TRACEPOINT_DEFINE
3149 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3150 ----
3151 --
3152
3153 . Compile the application source file:
3154 +
3155 --
3156 [role="term"]
3157 ----
3158 $ gcc -c app.c
3159 ----
3160 --
3161
3162 . Build the application:
3163 +
3164 --
3165 [role="term"]
3166 ----
3167 $ gcc -o app app.o -ldl
3168 ----
3169 --
3170
3171 To run the instrumented application:
3172
3173 * Start the application:
3174 +
3175 --
3176 [role="term"]
3177 ----
3178 $ ./app
3179 ----
3180 --
3181
3182 |
3183 The application is linked with the instrumented user library.
3184
3185 The instrumented user library is statically linked with the tracepoint
3186 provider package object file.
3187
3188 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3189
3190 |
3191 include::../common/ust-sit-step-tp-o-fpic.txt[]
3192
3193 To build the instrumented user library:
3194
3195 . In path:{emon.c}, before including path:{tpp.h}, add the
3196 following line:
3197 +
3198 --
3199 [source,c]
3200 ----
3201 #define TRACEPOINT_DEFINE
3202 ----
3203 --
3204
3205 . Compile the user library source file:
3206 +
3207 --
3208 [role="term"]
3209 ----
3210 $ gcc -I. -fpic -c emon.c
3211 ----
3212 --
3213
3214 . Build the user library shared object:
3215 +
3216 --
3217 [role="term"]
3218 ----
3219 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3220 ----
3221 --
3222
3223 To build the application:
3224
3225 . Compile the application source file:
3226 +
3227 --
3228 [role="term"]
3229 ----
3230 $ gcc -c app.c
3231 ----
3232 --
3233
3234 . Build the application:
3235 +
3236 --
3237 [role="term"]
3238 ----
3239 $ gcc -o app app.o -L. -lemon
3240 ----
3241 --
3242
3243 To run the application:
3244
3245 * Start the application:
3246 +
3247 --
3248 [role="term"]
3249 ----
3250 $ ./app
3251 ----
3252 --
3253
3254 |
3255 The application is linked with the instrumented user library.
3256
3257 The instrumented user library is linked with the tracepoint provider
3258 package shared object.
3259
3260 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3261
3262 |
3263 include::../common/ust-sit-step-tp-so.txt[]
3264
3265 To build the instrumented user library:
3266
3267 . In path:{emon.c}, before including path:{tpp.h}, add the
3268 following line:
3269 +
3270 --
3271 [source,c]
3272 ----
3273 #define TRACEPOINT_DEFINE
3274 ----
3275 --
3276
3277 . Compile the user library source file:
3278 +
3279 --
3280 [role="term"]
3281 ----
3282 $ gcc -I. -fpic -c emon.c
3283 ----
3284 --
3285
3286 . Build the user library shared object:
3287 +
3288 --
3289 [role="term"]
3290 ----
3291 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3292 ----
3293 --
3294
3295 To build the application:
3296
3297 . Compile the application source file:
3298 +
3299 --
3300 [role="term"]
3301 ----
3302 $ gcc -c app.c
3303 ----
3304 --
3305
3306 . Build the application:
3307 +
3308 --
3309 [role="term"]
3310 ----
3311 $ gcc -o app app.o -L. -lemon
3312 ----
3313 --
3314
3315 To run the application:
3316
3317 * Start the application:
3318 +
3319 --
3320 [role="term"]
3321 ----
3322 $ ./app
3323 ----
3324 --
3325
3326 |
3327 The tracepoint provider package shared object is preloaded before the
3328 application starts.
3329
3330 The application is linked with the instrumented user library.
3331
3332 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3333
3334 |
3335 include::../common/ust-sit-step-tp-so.txt[]
3336
3337 To build the instrumented user library:
3338
3339 . In path:{emon.c}, before including path:{tpp.h}, add the
3340 following lines:
3341 +
3342 --
3343 [source,c]
3344 ----
3345 #define TRACEPOINT_DEFINE
3346 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3347 ----
3348 --
3349
3350 . Compile the user library source file:
3351 +
3352 --
3353 [role="term"]
3354 ----
3355 $ gcc -I. -fpic -c emon.c
3356 ----
3357 --
3358
3359 . Build the user library shared object:
3360 +
3361 --
3362 [role="term"]
3363 ----
3364 $ gcc -shared -o libemon.so emon.o -ldl
3365 ----
3366 --
3367
3368 To build the application:
3369
3370 . Compile the application source file:
3371 +
3372 --
3373 [role="term"]
3374 ----
3375 $ gcc -c app.c
3376 ----
3377 --
3378
3379 . Build the application:
3380 +
3381 --
3382 [role="term"]
3383 ----
3384 $ gcc -o app app.o -L. -lemon
3385 ----
3386 --
3387
3388 To run the application with tracing support:
3389
3390 * Preload the tracepoint provider package shared object and
3391 start the application:
3392 +
3393 --
3394 [role="term"]
3395 ----
3396 $ LD_PRELOAD=./libtpp.so ./app
3397 ----
3398 --
3399
3400 To run the application without tracing support:
3401
3402 * Start the application:
3403 +
3404 --
3405 [role="term"]
3406 ----
3407 $ ./app
3408 ----
3409 --
3410
3411 |
3412 The application is linked with the instrumented user library.
3413
3414 The instrumented user library dynamically loads the tracepoint provider
3415 package shared object.
3416
3417 See the <<dlclose-warning,warning about `dlclose()`>>.
3418
3419 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3420
3421 |
3422 include::../common/ust-sit-step-tp-so.txt[]
3423
3424 To build the instrumented user library:
3425
3426 . In path:{emon.c}, before including path:{tpp.h}, add the
3427 following lines:
3428 +
3429 --
3430 [source,c]
3431 ----
3432 #define TRACEPOINT_DEFINE
3433 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3434 ----
3435 --
3436
3437 . Compile the user library source file:
3438 +
3439 --
3440 [role="term"]
3441 ----
3442 $ gcc -I. -fpic -c emon.c
3443 ----
3444 --
3445
3446 . Build the user library shared object:
3447 +
3448 --
3449 [role="term"]
3450 ----
3451 $ gcc -shared -o libemon.so emon.o -ldl
3452 ----
3453 --
3454
3455 To build the application:
3456
3457 . Compile the application source file:
3458 +
3459 --
3460 [role="term"]
3461 ----
3462 $ gcc -c app.c
3463 ----
3464 --
3465
3466 . Build the application:
3467 +
3468 --
3469 [role="term"]
3470 ----
3471 $ gcc -o app app.o -L. -lemon
3472 ----
3473 --
3474
3475 To run the application:
3476
3477 * Start the application:
3478 +
3479 --
3480 [role="term"]
3481 ----
3482 $ ./app
3483 ----
3484 --
3485
3486 |
3487 The application dynamically loads the instrumented user library.
3488
3489 The instrumented user library is linked with the tracepoint provider
3490 package shared object.
3491
3492 See the <<dlclose-warning,warning about `dlclose()`>>.
3493
3494 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3495
3496 |
3497 include::../common/ust-sit-step-tp-so.txt[]
3498
3499 To build the instrumented user library:
3500
3501 . In path:{emon.c}, before including path:{tpp.h}, add the
3502 following line:
3503 +
3504 --
3505 [source,c]
3506 ----
3507 #define TRACEPOINT_DEFINE
3508 ----
3509 --
3510
3511 . Compile the user library source file:
3512 +
3513 --
3514 [role="term"]
3515 ----
3516 $ gcc -I. -fpic -c emon.c
3517 ----
3518 --
3519
3520 . Build the user library shared object:
3521 +
3522 --
3523 [role="term"]
3524 ----
3525 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3526 ----
3527 --
3528
3529 To build the application:
3530
3531 . Compile the application source file:
3532 +
3533 --
3534 [role="term"]
3535 ----
3536 $ gcc -c app.c
3537 ----
3538 --
3539
3540 . Build the application:
3541 +
3542 --
3543 [role="term"]
3544 ----
3545 $ gcc -o app app.o -ldl -L. -lemon
3546 ----
3547 --
3548
3549 To run the application:
3550
3551 * Start the application:
3552 +
3553 --
3554 [role="term"]
3555 ----
3556 $ ./app
3557 ----
3558 --
3559
3560 |
3561 The application dynamically loads the instrumented user library.
3562
3563 The instrumented user library dynamically loads the tracepoint provider
3564 package shared object.
3565
3566 See the <<dlclose-warning,warning about `dlclose()`>>.
3567
3568 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3569
3570 |
3571 include::../common/ust-sit-step-tp-so.txt[]
3572
3573 To build the instrumented user library:
3574
3575 . In path:{emon.c}, before including path:{tpp.h}, add the
3576 following lines:
3577 +
3578 --
3579 [source,c]
3580 ----
3581 #define TRACEPOINT_DEFINE
3582 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3583 ----
3584 --
3585
3586 . Compile the user library source file:
3587 +
3588 --
3589 [role="term"]
3590 ----
3591 $ gcc -I. -fpic -c emon.c
3592 ----
3593 --
3594
3595 . Build the user library shared object:
3596 +
3597 --
3598 [role="term"]
3599 ----
3600 $ gcc -shared -o libemon.so emon.o -ldl
3601 ----
3602 --
3603
3604 To build the application:
3605
3606 . Compile the application source file:
3607 +
3608 --
3609 [role="term"]
3610 ----
3611 $ gcc -c app.c
3612 ----
3613 --
3614
3615 . Build the application:
3616 +
3617 --
3618 [role="term"]
3619 ----
3620 $ gcc -o app app.o -ldl -L. -lemon
3621 ----
3622 --
3623
3624 To run the application:
3625
3626 * Start the application:
3627 +
3628 --
3629 [role="term"]
3630 ----
3631 $ ./app
3632 ----
3633 --
3634
3635 |
3636 The tracepoint provider package shared object is preloaded before the
3637 application starts.
3638
3639 The application dynamically loads the instrumented user library.
3640
3641 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3642
3643 |
3644 include::../common/ust-sit-step-tp-so.txt[]
3645
3646 To build the instrumented user library:
3647
3648 . In path:{emon.c}, before including path:{tpp.h}, add the
3649 following lines:
3650 +
3651 --
3652 [source,c]
3653 ----
3654 #define TRACEPOINT_DEFINE
3655 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3656 ----
3657 --
3658
3659 . Compile the user library source file:
3660 +
3661 --
3662 [role="term"]
3663 ----
3664 $ gcc -I. -fpic -c emon.c
3665 ----
3666 --
3667
3668 . Build the user library shared object:
3669 +
3670 --
3671 [role="term"]
3672 ----
3673 $ gcc -shared -o libemon.so emon.o -ldl
3674 ----
3675 --
3676
3677 To build the application:
3678
3679 . Compile the application source file:
3680 +
3681 --
3682 [role="term"]
3683 ----
3684 $ gcc -c app.c
3685 ----
3686 --
3687
3688 . Build the application:
3689 +
3690 --
3691 [role="term"]
3692 ----
3693 $ gcc -o app app.o -L. -lemon
3694 ----
3695 --
3696
3697 To run the application with tracing support:
3698
3699 * Preload the tracepoint provider package shared object and
3700 start the application:
3701 +
3702 --
3703 [role="term"]
3704 ----
3705 $ LD_PRELOAD=./libtpp.so ./app
3706 ----
3707 --
3708
3709 To run the application without tracing support:
3710
3711 * Start the application:
3712 +
3713 --
3714 [role="term"]
3715 ----
3716 $ ./app
3717 ----
3718 --
3719
3720 |
3721 The application is statically linked with the tracepoint provider
3722 package object file.
3723
3724 The application is linked with the instrumented user library.
3725
3726 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3727
3728 |
3729 include::../common/ust-sit-step-tp-o.txt[]
3730
3731 To build the instrumented user library:
3732
3733 . In path:{emon.c}, before including path:{tpp.h}, add the
3734 following line:
3735 +
3736 --
3737 [source,c]
3738 ----
3739 #define TRACEPOINT_DEFINE
3740 ----
3741 --
3742
3743 . Compile the user library source file:
3744 +
3745 --
3746 [role="term"]
3747 ----
3748 $ gcc -I. -fpic -c emon.c
3749 ----
3750 --
3751
3752 . Build the user library shared object:
3753 +
3754 --
3755 [role="term"]
3756 ----
3757 $ gcc -shared -o libemon.so emon.o
3758 ----
3759 --
3760
3761 To build the application:
3762
3763 . Compile the application source file:
3764 +
3765 --
3766 [role="term"]
3767 ----
3768 $ gcc -c app.c
3769 ----
3770 --
3771
3772 . Build the application:
3773 +
3774 --
3775 [role="term"]
3776 ----
3777 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3778 ----
3779 --
3780
3781 To run the instrumented application:
3782
3783 * Start the application:
3784 +
3785 --
3786 [role="term"]
3787 ----
3788 $ ./app
3789 ----
3790 --
3791
3792 |
3793 The application is statically linked with the tracepoint provider
3794 package object file.
3795
3796 The application dynamically loads the instrumented user library.
3797
3798 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3799
3800 |
3801 include::../common/ust-sit-step-tp-o.txt[]
3802
3803 To build the application:
3804
3805 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3806 +
3807 --
3808 [source,c]
3809 ----
3810 #define TRACEPOINT_DEFINE
3811 ----
3812 --
3813
3814 . Compile the application source file:
3815 +
3816 --
3817 [role="term"]
3818 ----
3819 $ gcc -c app.c
3820 ----
3821 --
3822
3823 . Build the application:
3824 +
3825 --
3826 [role="term"]
3827 ----
$ gcc -Wl,--export-dynamic -o app app.o tpp.o \
      -llttng-ust -ldl
3830 ----
3831 --
3832 +
3833 The `--export-dynamic` option passed to the linker is necessary for the
3834 dynamically loaded library to ``see'' the tracepoint symbols defined in
3835 the application.
3836
3837 To build the instrumented user library:
3838
3839 . Compile the user library source file:
3840 +
3841 --
3842 [role="term"]
3843 ----
3844 $ gcc -I. -fpic -c emon.c
3845 ----
3846 --
3847
3848 . Build the user library shared object:
3849 +
3850 --
3851 [role="term"]
3852 ----
3853 $ gcc -shared -o libemon.so emon.o
3854 ----
3855 --
3856
3857 To run the application:
3858
3859 * Start the application:
3860 +
3861 --
3862 [role="term"]
3863 ----
3864 $ ./app
3865 ----
3866 --
3867 |====
3868
3869 [[dlclose-warning]]
3870 [IMPORTANT]
3871 .Do not use man:dlclose(3) on a tracepoint provider package
3872 ====
3873 Never use man:dlclose(3) on any shared object which:
3874
* Is statically or dynamically linked with a tracepoint provider
package.
3877 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3878 package shared object.
3879
3880 This is currently considered **unsafe** due to a lack of reference
3881 counting from LTTng-UST to the shared object.
3882
3883 A known workaround (available since glibc 2.2) is to use the
3884 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3885 effect of not unloading the loaded shared object, even if man:dlclose(3)
3886 is called.
3887
3888 You can also preload the tracepoint provider package shared object with
3889 the env:LD_PRELOAD environment variable to overcome this limitation.
3890 ====
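
As a sketch of the `RTLD_NODELETE` workaround, the following function
opens a tracepoint provider package shared object so that a later
man:dlclose(3) cannot unload it (path:{libtpp.so} is the hypothetical
package name used in the scenarios above):

[source,c]
----
#include <dlfcn.h>
#include <stdio.h>

/* Open `path`, keeping it resident even after dlclose(). */
static void *open_tp_package(const char *path)
{
    return dlopen(path, RTLD_NOW | RTLD_NODELETE);
}

int main(void)
{
    void *handle = open_tp_package("libtpp.so");

    if (!handle) {
        fprintf(stderr, "Cannot load libtpp.so: %s\n", dlerror());
        return 1;
    }

    /* ... instrumented code runs here ... */

    /* Safe: the tracepoint provider package stays loaded. */
    dlclose(handle);
    return 0;
}
----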
3891
3892
3893 [[using-lttng-ust-with-daemons]]
3894 ===== Use noch:{LTTng-UST} with daemons
3895
3896 If your instrumented application calls man:fork(2), man:clone(2),
3897 or BSD's man:rfork(2), without a following man:exec(3)-family
3898 system call, you must preload the path:{liblttng-ust-fork.so} shared
3899 object when you start the application.
3900
3901 [role="term"]
3902 ----
3903 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3904 ----
3905
3906 If your tracepoint provider package is
3907 a shared library which you also preload, you must put both
3908 shared objects in env:LD_PRELOAD:
3909
3910 [role="term"]
3911 ----
3912 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3913 ----
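
For reference, the pattern which requires path:{liblttng-ust-fork.so}
is a process which forks without calling an man:exec(3)-family function
afterwards, as in this self-contained sketch (the `tracepoint()` calls
an instrumented daemon would make are left out):

[source,c]
----
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork without a following exec(): the child keeps running this
 * program's code. */
static int fork_without_exec(void)
{
    int status;
    pid_t pid = fork();

    if (pid < 0) {
        return -1;
    }

    if (pid == 0) {
        /* Child: tracepoint() calls keep working here only with
         * liblttng-ust-fork.so preloaded. */
        _exit(0);
    }

    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void)
{
    return fork_without_exec();
}
----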
3914
3915
3916 [role="since-2.9"]
3917 [[liblttng-ust-fd]]
3918 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3919
3920 If your instrumented application closes one or more file descriptors
3921 which it did not open itself, you must preload the
3922 path:{liblttng-ust-fd.so} shared object when you start the application:
3923
3924 [role="term"]
3925 ----
3926 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3927 ----
3928
Typical use cases include closing all the open file descriptors after
man:fork(2) or man:rfork(2), and buggy applications which perform
``double closes''.
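
A ``double close'' looks like the following self-contained sketch: the
second man:close(2) call targets a stale file descriptor number which,
in a real application, could meanwhile have been reassigned to a file
descriptor that LTTng-UST owns.

[source,c]
----
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if the second close() fails with EBADF. */
static int double_close(const char *path)
{
    int fd = open(path, O_RDONLY);

    if (fd < 0 || close(fd) != 0) {
        return 0;
    }

    /* Bug: `fd` is already closed at this point. */
    return close(fd) == -1 && errno == EBADF;
}

int main(void)
{
    return double_close("/dev/null") ? 0 : 1;
}
----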
3932
3933
3934 [[lttng-ust-pkg-config]]
3935 ===== Use noch:{pkg-config}
3936
On some distributions, LTTng-UST ships with a
https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
metadata file. If this is the case, you can use cmd:pkg-config to
build an application on the command line:
3941
3942 [role="term"]
3943 ----
3944 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3945 ----
3946
3947
3948 [[instrumenting-32-bit-app-on-64-bit-system]]
3949 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3950
To trace a 32-bit application running on a 64-bit system, LTTng
must use a dedicated 32-bit
<<lttng-consumerd,consumer daemon>>.
3954
3955 The following steps show how to build and install a 32-bit consumer
3956 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3957 build and install the 32-bit LTTng-UST libraries, and how to build and
3958 link an instrumented 32-bit application in that context.
3959
3960 To build a 32-bit instrumented application for a 64-bit target system,
3961 assuming you have a fresh target system with no installed Userspace RCU
3962 or LTTng packages:
3963
3964 . Download, build, and install a 32-bit version of Userspace RCU:
3965 +
3966 --
3967 [role="term"]
3968 ----
$ cd $(mktemp -d) &&
    wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
    tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
    cd userspace-rcu-0.9.* &&
    ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
    make &&
    sudo make install &&
    sudo ldconfig
3977 ----
3978 --
3979
. Using your distribution's package manager, or from source, install
32-bit versions of the following LTTng-tools and LTTng-UST
dependencies:
3983 +
3984 --
3985 * https://sourceforge.net/projects/libuuid/[libuuid]
3986 * http://directory.fsf.org/wiki/Popt[popt]
3987 * http://www.xmlsoft.org/[libxml2]
3988 --
3989
3990 . Download, build, and install a 32-bit version of the latest
3991 LTTng-UST{nbsp}{revision}:
3992 +
3993 --
3994 [role="term"]
3995 ----
$ cd $(mktemp -d) &&
    wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
    tar -xf lttng-ust-latest-2.9.tar.bz2 &&
    cd lttng-ust-2.9.* &&
    ./configure --libdir=/usr/local/lib32 \
                CFLAGS=-m32 CXXFLAGS=-m32 \
                LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
    make &&
    sudo make install &&
    sudo ldconfig
4006 ----
4007 --
4008 +
4009 [NOTE]
4010 ====
4011 Depending on your distribution,
4012 32-bit libraries could be installed at a different location than
4013 `/usr/lib32`. For example, Debian is known to install
4014 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4015
4016 In this case, make sure to set `LDFLAGS` to all the
4017 relevant 32-bit library paths, for example:
4018
4019 [role="term"]
4020 ----
4021 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4022 ----
4023 ====
4024
4025 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4026 the 32-bit consumer daemon:
4027 +
4028 --
4029 [role="term"]
4030 ----
$ cd $(mktemp -d) &&
    wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
    tar -xf lttng-tools-latest-2.9.tar.bz2 &&
    cd lttng-tools-2.9.* &&
    ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
                LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
                --disable-bin-lttng --disable-bin-lttng-crash \
                --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
    make &&
    cd src/bin/lttng-consumerd &&
    sudo make install &&
    sudo ldconfig
4043 ----
4044 --
4045
4046 . From your distribution or from source,
4047 <<installing-lttng,install>> the 64-bit versions of
4048 LTTng-UST and Userspace RCU.
4049 . Download, build, and install the 64-bit version of the
4050 latest LTTng-tools{nbsp}{revision}:
4051 +
4052 --
4053 [role="term"]
4054 ----
$ cd $(mktemp -d) &&
    wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
    tar -xf lttng-tools-latest-2.9.tar.bz2 &&
    cd lttng-tools-2.9.* &&
    ./configure --with-consumerd32-libdir=/usr/local/lib32 \
                --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
    make &&
    sudo make install &&
    sudo ldconfig
4064 ----
4065 --
4066
4067 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4068 when linking your 32-bit application:
4069 +
4070 ----
4071 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4072 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4073 ----
4074 +
4075 For example, let's rebuild the quick start example in
4076 <<tracing-your-own-user-application,Trace a user application>> as an
4077 instrumented 32-bit application:
4078 +
4079 --
4080 [role="term"]
4081 ----
4082 $ gcc -m32 -c -I. hello-tp.c
4083 $ gcc -m32 -c hello.c
$ gcc -m32 -o hello hello.o hello-tp.o \
      -L/usr/lib32 -L/usr/local/lib32 \
      -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
      -llttng-ust -ldl
4088 ----
4089 --
4090
4091 No special action is required to execute the 32-bit application and
4092 to trace it: use the command-line man:lttng(1) tool as usual.


[role="since-2.5"]
[[tracef]]
==== Use `tracef()`

man:tracef(3) is a small LTTng-UST API designed for quick,
man:printf(3)-like instrumentation without the burden of
<<tracepoint-provider,creating>> and
<<building-tracepoint-providers-and-user-application,building>>
a tracepoint provider package.

To use `tracef()` in your application:

. In the C or C++ source files where you need to use `tracef()`,
include `<lttng/tracef.h>`:
+
--
[source,c]
----
#include <lttng/tracef.h>
----
--

. In the application's source code, use `tracef()` like you would use
man:printf(3):
+
--
[source,c]
----
/* ... */

tracef("my message: %d (%s)", my_integer, my_string);

/* ... */
----
--

. Link your application with `liblttng-ust`:
+
--
[role="term"]
----
$ gcc -o app app.c -llttng-ust
----
--

To trace the events that `tracef()` calls emit:

* <<enabling-disabling-events,Create an event rule>> which matches the
`lttng_ust_tracef:*` event name:
+
--
[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_tracef:*'
----
--

[IMPORTANT]
.Limitations of `tracef()`
====
The `tracef()` utility function was developed to make user space tracing
super simple, albeit with notable disadvantages compared to
<<defining-tracepoints,user-defined tracepoints>>:

* All the emitted events have the same tracepoint provider and
tracepoint names, respectively `lttng_ust_tracef` and `event`.
* There is no static type checking.
* The only event record field you actually get, named `msg`, is a string
potentially containing the values you passed to `tracef()`
using your own format string. This also means that you cannot filter
events with a custom expression at run time because there are no
isolated fields.
* Since `tracef()` uses the C standard library's man:vasprintf(3)
function behind the scenes to format the strings at run time, its
expected performance is lower than with user-defined tracepoints,
which do not require a conversion to a string.

Taking this into consideration, `tracef()` is useful for some quick
prototyping and debugging, but you should not consider it for any
permanent and serious applicative instrumentation.
====


[role="since-2.7"]
[[tracelog]]
==== Use `tracelog()`

The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
the difference that it accepts an additional log level parameter.

The goal of `tracelog()` is to ease the migration from logging to
tracing.

To use `tracelog()` in your application:

. In the C or C++ source files where you need to use `tracelog()`,
include `<lttng/tracelog.h>`:
+
--
[source,c]
----
#include <lttng/tracelog.h>
----
--

. In the application's source code, use `tracelog()` like you would use
man:printf(3), except for the first parameter which is the log
level:
+
--
[source,c]
----
/* ... */

tracelog(TRACE_WARNING, "my message: %d (%s)",
         my_integer, my_string);

/* ... */
----
--
+
See man:lttng-ust(3) for a list of available log level names.

. Link your application with `liblttng-ust`:
+
--
[role="term"]
----
$ gcc -o app app.c -llttng-ust
----
--

To trace the events that `tracelog()` calls emit with a log level
_as severe as_ a specific log level:

* <<enabling-disabling-events,Create an event rule>> which matches the
`lttng_ust_tracelog:*` event name and a minimum level
of severity:
+
--
[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
        --loglevel=TRACE_WARNING
----
--

To trace the events that `tracelog()` calls emit with a
_specific log level_:

* Create an event rule which matches the `lttng_ust_tracelog:*`
event name and a specific log level:
+
--
[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
        --loglevel-only=TRACE_INFO
----
--


[[prebuilt-ust-helpers]]
=== Prebuilt user space tracing helpers

The LTTng-UST package provides a few helpers in the form of preloadable
shared objects which automatically instrument system functions and
calls.

The helper shared objects are normally found in dir:{/usr/lib}. If you
built LTTng-UST <<building-from-source,from source>>, they are probably
located in dir:{/usr/local/lib}.

The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
are:

path:{liblttng-ust-libc-wrapper.so}::
path:{liblttng-ust-pthread-wrapper.so}::
<<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
memory and POSIX threads function tracing>>.

path:{liblttng-ust-cyg-profile.so}::
path:{liblttng-ust-cyg-profile-fast.so}::
<<liblttng-ust-cyg-profile,Function entry and exit tracing>>.

path:{liblttng-ust-dl.so}::
<<liblttng-ust-dl,Dynamic linker tracing>>.

To use a user space tracing helper with any user application:

* Preload the helper shared object when you start the application:
+
--
[role="term"]
----
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
----
--
+
You can preload more than one helper:
+
--
[role="term"]
----
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
----
--


[role="since-2.3"]
[[liblttng-ust-libc-pthread-wrapper]]
==== Instrument C standard library memory and POSIX threads functions

The path:{liblttng-ust-libc-wrapper.so} and
path:{liblttng-ust-pthread-wrapper.so} helpers
add instrumentation to some C standard library and POSIX
threads functions.

[role="growable"]
.Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
|====
|TP provider name |TP name |Instrumented function

.6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
|`calloc` |man:calloc(3)
|`realloc` |man:realloc(3)
|`free` |man:free(3)
|`memalign` |man:memalign(3)
|`posix_memalign` |man:posix_memalign(3)
|====

[role="growable"]
.Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
|====
|TP provider name |TP name |Instrumented function

.4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
|`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
|`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
|`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
|====

When you preload the shared object, it replaces the functions listed
in the previous tables with wrappers which contain tracepoints and call
the replaced functions.
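For example, recording the memory allocation events of a hypothetical
`my-app` application could look like this (a sketch; you might need the
helper's full path if it's not in the dynamic linker's search path):

[role="term"]
----
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_libc:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
$ lttng stop
$ lttng view
----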


[[liblttng-ust-cyg-profile]]
==== Instrument function entry and exit

The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
to the entry and exit points of functions.

man:gcc(1) and man:clang(1) have an option named
https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
which generates instrumentation calls for entry and exit to functions.
The LTTng-UST function tracing helpers,
path:{liblttng-ust-cyg-profile.so} and
path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
to add tracepoints to the two generated functions (which contain
`cyg_profile` in their names, hence the helper's name).

To use the LTTng-UST function tracing helper, the source files to
instrument must be built using the `-finstrument-functions` compiler
flag.

There are two versions of the LTTng-UST function tracing helper:

* **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
that you should only use when it can be _guaranteed_ that the
complete event stream is recorded without any lost event record.
Any kind of duplicate information is left out.
+
Assuming no event record is lost, having only the function addresses on
entry is enough to create a call graph, since an event record always
contains the ID of the CPU that generated it.
+
You can use a tool like man:addr2line(1) to convert function addresses
back to source file names and line numbers.

* **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
which also works in use cases where event records might get discarded or
not recorded from application startup.
In these cases, the trace analyzer needs more information to be
able to reconstruct the program flow.

See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
points of this helper.

All the tracepoints that this helper provides have the
log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).

TIP: It's sometimes a good idea to limit the number of source files that
you compile with the `-finstrument-functions` option to prevent LTTng
from writing an excessive amount of trace data at run time. When using
man:gcc(1), you can use the
`-finstrument-functions-exclude-function-list` option to avoid
instrumenting the entry and exit points of specific functions.
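Putting it together, a function tracing session could look like the
following sketch, which assumes a single hypothetical source file,
path:{app.c} (see man:lttng-ust-cyg-profile(3) for the exact event
names):

[role="term"]
----
$ gcc -finstrument-functions -o app app.c
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_cyg_profile*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-cyg-profile.so ./app
$ lttng stop
$ lttng view
----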


[role="since-2.4"]
[[liblttng-ust-dl]]
==== Instrument the dynamic linker

The path:{liblttng-ust-dl.so} helper adds instrumentation to the
man:dlopen(3) and man:dlclose(3) function calls.

See man:lttng-ust-dl(3) to learn more about the instrumentation points
of this helper.
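For example, recording the dynamic loading events of a hypothetical
`my-app` application could look like this (a sketch):

[role="term"]
----
$ lttng create
$ lttng enable-event --userspace 'lttng_ust_dl:*'
$ lttng start
$ LD_PRELOAD=liblttng-ust-dl.so my-app
$ lttng stop
$ lttng view
----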


[role="since-2.4"]
[[java-application]]
=== User space Java agent

You can instrument any Java application which uses one of the following
logging frameworks:

* The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
(JUL) core logging facilities.
* http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.

[role="img-100"]
.LTTng-UST Java agent imported by a Java application.
image::java-app.png[]

Note that the methods described below are new in LTTng{nbsp}{revision}.
Previous LTTng versions use another technique.

NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
and https://ci.lttng.org/[continuous integration], thus this version is
directly supported. However, the LTTng-UST Java agent is also tested
with OpenJDK{nbsp}7.


[role="since-2.8"]
[[jul]]
==== Use the LTTng-UST Java agent for `java.util.logging`

To use the LTTng-UST Java agent in a Java application which uses
`java.util.logging` (JUL):

. In the Java application's source code, import the LTTng-UST
log handler package for `java.util.logging`:
+
--
[source,java]
----
import org.lttng.ust.agent.jul.LttngLogHandler;
----
--

. Create an LTTng-UST JUL log handler:
+
--
[source,java]
----
Handler lttngUstLogHandler = new LttngLogHandler();
----
--

. Add this handler to the JUL loggers which should emit LTTng events:
+
--
[source,java]
----
Logger myLogger = Logger.getLogger("some-logger");

myLogger.addHandler(lttngUstLogHandler);
----
--

. Use `java.util.logging` log statements and configuration as usual.
The loggers with an attached LTTng-UST log handler can emit
LTTng events.

. Before exiting the application, remove the LTTng-UST log handler from
the loggers attached to it and call its `close()` method:
+
--
[source,java]
----
myLogger.removeHandler(lttngUstLogHandler);
lttngUstLogHandler.close();
----
--
+
This is not strictly necessary, but it is recommended for a clean
disposal of the handler's resources.

. Include the LTTng-UST Java agent's common and JUL-specific JAR files,
path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
in the
https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
path] when you build the Java application.
+
The JAR files are typically located in dir:{/usr/share/java}.
+
IMPORTANT: The LTTng-UST Java agent must be
<<installing-lttng,installed>> for the logging framework your
application uses.

.Use the LTTng-UST Java agent for `java.util.logging`.
====
[source,java]
.path:{Test.java}
----
import java.io.IOException;
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
    }
}
----

Build this example:

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
----

<<creating-destroying-tracing-sessions,Create a tracing session>>,
<<enabling-disabling-events,create an event rule>> matching the
`jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --jul jello
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
----

<<basic-tracing-session-control,Stop tracing>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

In the resulting trace, an <<event,event record>> generated by a Java
application using `java.util.logging` is named `lttng_jul:event` and
has the following fields:

`msg`::
Log record's message.

`logger_name`::
Logger name.

`class_name`::
Name of the class in which the log statement was executed.

`method_name`::
Name of the method in which the log statement was executed.

`long_millis`::
Logging time (timestamp in milliseconds).

`int_loglevel`::
Log level integer value.

`int_threadid`::
ID of the thread in which the log statement was executed.

You can use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of JUL log levels
or a specific JUL log level.
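For example, to match only the events that the `jello` logger emits at
the `WARNING` level or at a more severe level (a sketch; see
man:lttng-enable-event(1) for the complete list of JUL log level names):

[role="term"]
----
$ lttng enable-event --jul jello --loglevel=JUL_WARNING
----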


[role="since-2.8"]
[[log4j]]
==== Use the LTTng-UST Java agent for Apache log4j

To use the LTTng-UST Java agent in a Java application which uses
Apache log4j 1.2:

. In the Java application's source code, import the LTTng-UST
log appender package for Apache log4j:
+
--
[source,java]
----
import org.lttng.ust.agent.log4j.LttngLogAppender;
----
--

. Create an LTTng-UST log4j log appender:
+
--
[source,java]
----
Appender lttngUstLogAppender = new LttngLogAppender();
----
--

. Add this appender to the log4j loggers which should emit LTTng events:
+
--
[source,java]
----
Logger myLogger = Logger.getLogger("some-logger");

myLogger.addAppender(lttngUstLogAppender);
----
--

. Use Apache log4j log statements and configuration as usual. The
loggers with an attached LTTng-UST log appender can emit LTTng events.

. Before exiting the application, remove the LTTng-UST log appender from
the loggers attached to it and call its `close()` method:
+
--
[source,java]
----
myLogger.removeAppender(lttngUstLogAppender);
lttngUstLogAppender.close();
----
--
+
This is not strictly necessary, but it is recommended for a clean
disposal of the appender's resources.

. Include the LTTng-UST Java agent's common and log4j-specific JAR
files, path:{lttng-ust-agent-common.jar} and
path:{lttng-ust-agent-log4j.jar}, in the
https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
path] when you build the Java application.
+
The JAR files are typically located in dir:{/usr/share/java}.
+
IMPORTANT: The LTTng-UST Java agent must be
<<installing-lttng,installed>> for the logging framework your
application uses.

.Use the LTTng-UST Java agent for Apache log4j.
====
[source,java]
.path:{Test.java}
----
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.lttng.ust.agent.log4j.LttngLogAppender;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log appender
        Appender lttngUstLogAppender = new LttngLogAppender();

        // Add the LTTng-UST log appender to our logger
        logger.addAppender(lttngUstLogAppender);

        // Log at will!
        logger.info("some info");
        logger.warn("some warning");
        Thread.sleep(500);
        logger.debug("debug information; the answer is " + answer);
        Thread.sleep(123);
        logger.fatal("error!");

        // Not mandatory, but cleaner
        logger.removeAppender(lttngUstLogAppender);
        lttngUstLogAppender.close();
    }
}
----

Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
file):

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
----

<<creating-destroying-tracing-sessions,Create a tracing session>>,
<<enabling-disabling-events,create an event rule>> matching the
`jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --log4j jello
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
----

<<basic-tracing-session-control,Stop tracing>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

In the resulting trace, an <<event,event record>> generated by a Java
application using log4j is named `lttng_log4j:event` and
has the following fields:

`msg`::
Log record's message.

`logger_name`::
Logger name.

`class_name`::
Name of the class in which the log statement was executed.

`method_name`::
Name of the method in which the log statement was executed.

`filename`::
Name of the file in which the executed log statement is located.

`line_number`::
Line number at which the log statement was executed.

`timestamp`::
Logging timestamp.

`int_loglevel`::
Log level integer value.

`thread_name`::
Name of the Java thread in which the log statement was executed.

You can use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of Apache log4j log levels
or a specific log4j log level.
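For example, to match only the events that the `jello` logger emits at
the `WARN` level or at a more severe level (a sketch; see
man:lttng-enable-event(1) for the complete list of log4j log level
names):

[role="term"]
----
$ lttng enable-event --log4j jello --loglevel=LOG4J_WARN
----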


[role="since-2.8"]
[[java-application-context]]
==== Provide application-specific context fields in a Java application

A Java application-specific context field is a piece of state provided
by the application which <<adding-context,you can add>>, using the
man:lttng-add-context(1) command, to each <<event,event record>>
produced by the log statements of this application.

For example, a given object might have a current request ID variable.
You can create a context information retriever for this object and
assign a name to this current request ID. You can then, using the
man:lttng-add-context(1) command, add this context field by name to
the JUL or log4j <<channel,channel>>.

To provide application-specific context fields in a Java application:

. In the Java application's source code, import the LTTng-UST
Java agent context classes and interfaces:
+
--
[source,java]
----
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;
----
--

. Create a context information retriever class, that is, a class which
implements the `IContextInfoRetriever` interface:
+
--
[source,java]
----
class MyContextInfoRetriever implements IContextInfoRetriever
{
    @Override
    public Object retrieveContextInfo(String key)
    {
        if (key.equals("intCtx")) {
            return (short) 17;
        } else if (key.equals("strContext")) {
            return "context value!";
        } else {
            return null;
        }
    }
}
----
--
+
This `retrieveContextInfo()` method is the only member of the
`IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields and which state variables they return depend on your
specific scenario.
+
All primitive types and objects are supported as context fields.
When `retrieveContextInfo()` returns an object, the context field
serializer calls its `toString()` method to add a string field to
event records. The method can also return `null`, which means that
no context field is available for the required name.

. Register an instance of your context information retriever class to
the context information manager singleton:
+
--
[source,java]
----
IContextInfoRetriever cir = new MyContextInfoRetriever();
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.registerContextInfoRetriever("retrieverName", cir);
----
--

. Before exiting the application, remove your context information
retriever from the context information manager singleton:
+
--
[source,java]
----
ContextInfoManager cim = ContextInfoManager.getInstance();
cim.unregisterContextInfoRetriever("retrieverName");
----
--
+
This is not strictly necessary, but it is recommended for a clean
disposal of the manager's resources.

. Build your Java application with LTTng-UST Java agent support as
usual, following the procedure for either the <<jul,JUL>> or
<<log4j,Apache log4j>> framework.


.Provide application-specific context fields in a Java application.
====
[source,java]
.path:{Test.java}
----
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;

public class Test
{
    // Our context information retriever class
    private static class MyContextInfoRetriever
    implements IContextInfoRetriever
    {
        @Override
        public Object retrieveContextInfo(String key) {
            if (key.equals("intCtx")) {
                return (short) 17;
            } else if (key.equals("strContext")) {
                return "context value!";
            } else {
                return null;
            }
        }
    }

    private static final int answer = 42;

    public static void main(String args[]) throws Exception
    {
        // Get the context information manager instance
        ContextInfoManager cim = ContextInfoManager.getInstance();

        // Create and register our context information retriever
        IContextInfoRetriever cir = new MyContextInfoRetriever();
        cim.registerContextInfoRetriever("myRetriever", cir);

        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
        cim.unregisterContextInfoRetriever("myRetriever");
    }
}
----

Build this example:

[role="term"]
----
$ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
----

<<creating-destroying-tracing-sessions,Create a tracing session>>
and <<enabling-disabling-events,create an event rule>> matching the
`jello` JUL logger:

[role="term"]
----
$ lttng create
$ lttng enable-event --jul jello
----

<<adding-context,Add the application-specific context fields>> to the
JUL channel:

[role="term"]
----
$ lttng add-context --jul --type='$app.myRetriever:intCtx'
$ lttng add-context --jul --type='$app.myRetriever:strContext'
----

<<basic-tracing-session-control,Start tracing>>:

[role="term"]
----
$ lttng start
----

Run the compiled class:

[role="term"]
----
$ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
----

<<basic-tracing-session-control,Stop tracing>> and inspect the
recorded events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====


[role="since-2.7"]
[[python-application]]
=== User space Python agent

You can instrument a Python 2 or Python 3 application which uses the
standard https://docs.python.org/3/library/logging.html[`logging`]
package.

Each log statement emits an LTTng event once the
application module imports the
<<lttng-ust-agents,LTTng-UST Python agent>> package.

[role="img-100"]
.A Python application importing the LTTng-UST Python agent.
image::python-app.png[]

To use the LTTng-UST Python agent:

. In the Python application's source code, import the LTTng-UST Python
agent:
+
--
[source,python]
----
import lttngust
----
--
+
The LTTng-UST Python agent automatically adds its logging handler to the
root logger at import time.
+
Any log statement that the application executes before this import does
not emit an LTTng event.
+
IMPORTANT: The LTTng-UST Python agent must be
<<installing-lttng,installed>>.

. Use log statements and logging configuration as usual.
Since the LTTng-UST Python agent adds a handler to the _root_
logger, you can trace any log statement from any logger.

.Use the LTTng-UST Python agent.
====
[source,python]
.path:{test.py}
----
import lttngust
import logging
import time


def example():
    logging.basicConfig()
    logger = logging.getLogger('my-logger')

    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warn('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(1)


if __name__ == '__main__':
    example()
----

NOTE: `logging.basicConfig()`, which adds to the root logger a basic
logging handler which prints to the standard error stream, is not
strictly required for LTTng-UST tracing to work, but in versions of
Python preceding 3.2, you could see a warning message which indicates
that no handler exists for the logger `my-logger`.

<<creating-destroying-tracing-sessions,Create a tracing session>>,
<<enabling-disabling-events,create an event rule>> matching the
`my-logger` Python logger, and <<basic-tracing-session-control,start
tracing>>:

[role="term"]
----
$ lttng create
$ lttng enable-event --python my-logger
$ lttng start
----

Run the Python script:

[role="term"]
----
$ python test.py
----

<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
events:

[role="term"]
----
$ lttng stop
$ lttng view
----
====

In the resulting trace, an <<event,event record>> generated by a Python
application is named `lttng_python:event` and has the following fields:

`asctime`::
Logging time (string).

`msg`::
Log record's message.

`logger_name`::
Logger name.

`funcName`::
Name of the function in which the log statement was executed.

`lineno`::
Line number at which the log statement was executed.

`int_loglevel`::
Log level integer value.

`thread`::
ID of the Python thread in which the log statement was executed.

`threadName`::
Name of the Python thread in which the log statement was executed.

You can use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of Python log levels
or a specific Python log level.
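For example, to match only the events that the `my-logger` logger emits
at the `WARNING` level or at a more severe level (a sketch; see
man:lttng-enable-event(1) for the complete list of Python log level
names):

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----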

When an application imports the LTTng-UST Python agent, the agent tries
to register to a <<lttng-sessiond,session daemon>>. Note that you must
<<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
to it for 5{nbsp}seconds, after which the application continues
without LTTng tracing support. You can override this timeout value with
the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
(milliseconds).

If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent retries to connect and to
register to a session daemon every 3{nbsp}seconds. You can override this
delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
variable.
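For example, to wait up to 10{nbsp}seconds (10,000{nbsp}milliseconds)
for a session daemon at import time (a sketch):

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=10000 python test.py
----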
5142
5143
5144 [role="since-2.5"]
5145 [[proc-lttng-logger-abi]]
5146 === LTTng logger
5147
5148 The `lttng-tracer` Linux kernel module, part of
5149 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
5150 path:{/proc/lttng-logger} when it's loaded. Any application can write
5151 text data to this file to emit an LTTng event.
5152
5153 [role="img-100"]
5154 .An application writes to the LTTng logger file to emit an LTTng event.
5155 image::lttng-logger.png[]
5156
5157 The LTTng logger is the quickest method--not the most efficient,
5158 however--to add instrumentation to an application. It is designed
5159 mostly to instrument shell scripts:
5160
5161 [role="term"]
5162 ----
5163 $ echo "Some message, some $variable" > /proc/lttng-logger
5164 ----
5165
5166 Any event that the LTTng logger emits is named `lttng_logger` and
5167 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5168 other instrumentation points in the kernel tracing domain, **any Unix
5169 user** can <<enabling-disabling-events,create an event rule>> which
5170 matches its event name, not only the root user or users in the
5171 <<tracing-group,tracing group>>.
5172
5173 To use the LTTng logger:
5174
5175 * From any application, write text data to the path:{/proc/lttng-logger}
5176 file.
5177
5178 The `msg` field of `lttng_logger` event records contains the
5179 recorded message.
5180
5181 NOTE: The maximum message length of an LTTng logger event is
5182 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5183 than one event to contain the remaining data.
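
If it matters to you that each message maps to exactly one event record, a script can split its messages into 1024-byte chunks itself. The following Bash sketch is our own illustration, not an LTTng feature: the `LTTNG_LOGGER_FILE` override and the temporary file fallback only exist so you can try the script without the `lttng-tracer` module loaded.

```shell
# Target the real LTTng logger file by default; allow an override
# (our own convention) for trying this out without LTTng.
logger_file=${LTTNG_LOGGER_FILE:-/proc/lttng-logger}

if [ ! -e "$logger_file" ]; then
    # Fallback for systems without the lttng-tracer module.
    logger_file=$(mktemp)
fi

# Write a message in chunks of at most 1024 characters (bytes,
# assuming a single-byte encoding) so that each write maps to a
# single lttng_logger event record.
lttng_log() {
    local msg=$1

    while [ -n "$msg" ]; do
        printf '%s' "${msg:0:1024}" >> "$logger_file"
        msg=${msg:1024}
    done
}

lttng_log 'Hello, World!'
```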
5184
5185 You should not use the LTTng logger to trace a user application which
5186 can be instrumented in a more efficient way, namely:
5187
5188 * <<c-application,C and $$C++$$ applications>>.
5189 * <<java-application,Java applications>>.
5190 * <<python-application,Python applications>>.
5191
5192 .Use the LTTng logger.
5193 ====
5194 [source,bash]
5195 .path:{test.bash}
5196 ----
5197 echo 'Hello, World!' > /proc/lttng-logger
5198 sleep 2
5199 df --human-readable --print-type / > /proc/lttng-logger
5200 ----
5201
5202 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5203 <<enabling-disabling-events,create an event rule>> matching the
5204 `lttng_logger` Linux kernel tracepoint, and
5205 <<basic-tracing-session-control,start tracing>>:
5206
5207 [role="term"]
5208 ----
5209 $ lttng create
5210 $ lttng enable-event --kernel lttng_logger
5211 $ lttng start
5212 ----
5213
5214 Run the Bash script:
5215
5216 [role="term"]
5217 ----
5218 $ bash test.bash
5219 ----
5220
5221 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5222 events:
5223
5224 [role="term"]
5225 ----
5226 $ lttng stop
5227 $ lttng view
5228 ----
5229 ====
5230
5231
5232 [[instrumenting-linux-kernel]]
5233 === LTTng kernel tracepoints
5234
5235 NOTE: This section shows how to _add_ instrumentation points to the
5236 Linux kernel. The kernel's subsystems are already thoroughly
5237 instrumented at strategic places for LTTng when you
5238 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5239 package.
5240
5241 ////
5242 There are two methods to instrument the Linux kernel:
5243
5244 . <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
5245 tracepoint which uses the `TRACE_EVENT()` API.
5246 +
5247 Choose this if you want to instrument a Linux kernel tree with an
5248 instrumentation point compatible with ftrace, perf, and SystemTap.
5249
5250 . Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
5251 instrument an out-of-tree kernel module.
5252 +
5253 Choose this if you don't need ftrace, perf, or SystemTap support.
5254 ////
5255
5256
5257 [[linux-add-lttng-layer]]
5258 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5259
5260 This section shows how to add an LTTng layer to existing ftrace
5261 instrumentation using the `TRACE_EVENT()` API.
5262
5263 This section does not document the `TRACE_EVENT()` macro. You can
5264 read the following articles to learn more about this API:
5265
5266 * http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
5267 * http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
5268 * http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
5269
5270 The following procedure assumes that your ftrace tracepoints are
5271 correctly defined in their own header and that they are created in
5272 one source file using the `CREATE_TRACE_POINTS` definition.
5273
5274 To add an LTTng layer over an existing ftrace tracepoint:
5275
5276 . Make sure the following kernel configuration options are
5277 enabled:
5278 +
5279 --
5280 * `CONFIG_MODULES`
5281 * `CONFIG_KALLSYMS`
5282 * `CONFIG_HIGH_RES_TIMERS`
5283 * `CONFIG_TRACEPOINTS`
5284 --
5285
5286 . Build the Linux source tree with your custom ftrace tracepoints.
5287 . Boot the resulting Linux image on your target system.
5288 +
5289 Confirm that the tracepoints exist by looking for their names in the
5290 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5291 is your subsystem's name.
5292
5293 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5294 +
5295 --
5296 [role="term"]
5297 ----
5298 $ cd $(mktemp -d) &&
5299 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
5300 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
5301 cd lttng-modules-2.9.*
5302 ----
5303 --
5304
5305 . In dir:{instrumentation/events/lttng-module}, relative to the root
5306 of the LTTng-modules source tree, create a header file named
5307 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5308 LTTng-modules tracepoint definitions using the LTTng-modules
5309 macros in it.
5310 +
5311 Start with this template:
5312 +
5313 --
5314 [source,c]
5315 .path:{instrumentation/events/lttng-module/my_subsys.h}
5316 ----
5317 #undef TRACE_SYSTEM
5318 #define TRACE_SYSTEM my_subsys
5319
5320 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5321 #define _LTTNG_MY_SUBSYS_H
5322
5323 #include "../../../probes/lttng-tracepoint-event.h"
5324 #include <linux/tracepoint.h>
5325
5326 LTTNG_TRACEPOINT_EVENT(
5327 /*
5328 * Format is identical to TRACE_EVENT()'s version for the three
5329 * following macro parameters:
5330 */
5331 my_subsys_my_event,
5332 TP_PROTO(int my_int, const char *my_string),
5333 TP_ARGS(my_int, my_string),
5334
5335 /* LTTng-modules specific macros */
5336 TP_FIELDS(
5337 ctf_integer(int, my_int_field, my_int)
5338         ctf_string(my_string_field, my_string)
5339 )
5340 )
5341
5342 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5343
5344 #include "../../../probes/define_trace.h"
5345 ----
5346 --
5347 +
5348 The entries in the `TP_FIELDS()` section are the list of fields for the
5349 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5350 ftrace's `TRACE_EVENT()` macro.
5351 +
5352 See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
5353 complete description of the available `ctf_*()` macros.
5354
5355 . Create the LTTng-modules probe's kernel module C source file,
5356 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5357 subsystem name:
5358 +
5359 --
5360 [source,c]
5361 .path:{probes/lttng-probe-my-subsys.c}
5362 ----
5363 #include <linux/module.h>
5364 #include "../lttng-tracer.h"
5365
5366 /*
5367 * Build-time verification of mismatch between mainline
5368 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5369 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5370 */
5371 #include <trace/events/my_subsys.h>
5372
5373 /* Create LTTng tracepoint probes */
5374 #define LTTNG_PACKAGE_BUILD
5375 #define CREATE_TRACE_POINTS
5376 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5377
5378 #include "../instrumentation/events/lttng-module/my_subsys.h"
5379
5380 MODULE_LICENSE("GPL and additional rights");
5381 MODULE_AUTHOR("Your name <your-email>");
5382 MODULE_DESCRIPTION("LTTng my_subsys probes");
5383 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5384 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5385 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5386 LTTNG_MODULES_EXTRAVERSION);
5387 ----
5388 --
5389
5390 . Edit path:{probes/KBuild} and add your new kernel module object
5391 next to the existing ones:
5392 +
5393 --
5394 [source,make]
5395 .path:{probes/KBuild}
5396 ----
5397 # ...
5398
5399 obj-m += lttng-probe-module.o
5400 obj-m += lttng-probe-power.o
5401
5402 obj-m += lttng-probe-my-subsys.o
5403
5404 # ...
5405 ----
5406 --
5407
5408 . Build and install the LTTng kernel modules:
5409 +
5410 --
5411 [role="term"]
5412 ----
5413 $ make KERNELDIR=/path/to/linux
5414 # make modules_install && depmod -a
5415 ----
5416 --
5417 +
5418 Replace `/path/to/linux` with the path to the Linux source tree where
5419 you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5420
5421 Note that you can also use the
5422 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5423 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5424 C code that need to be executed before the event fields are recorded.
5425
5426 The best way to learn how to use the previous LTTng-modules macros is to
5427 inspect the existing LTTng-modules tracepoint definitions in the
5428 dir:{instrumentation/events/lttng-module} header files. Compare them
5429 with the Linux kernel mainline versions in the
5430 dir:{include/trace/events} directory of the Linux source tree.
5431
5432
5433 [role="since-2.7"]
5434 [[lttng-tracepoint-event-code]]
5435 ===== Use custom C code to access the data for tracepoint fields
5436
5437 Although we recommend that you always use the
5438 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5439 the arguments and fields of an LTTng-modules tracepoint when possible,
5440 sometimes you need a more complex process to access the data that the
5441 tracer records as event record fields. In other words, you need local
5442 variables and multiple C{nbsp}statements instead of simple
5443 argument-based expressions that you pass to the
5444 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5445
5446 You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5447 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5448 a block of C{nbsp}code to be executed before LTTng records the fields.
5449 The structure of this macro is:
5450
5451 [source,c]
5452 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5453 ----
5454 LTTNG_TRACEPOINT_EVENT_CODE(
5455 /*
5456 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5457 * version for the following three macro parameters:
5458 */
5459 my_subsys_my_event,
5460 TP_PROTO(int my_int, const char *my_string),
5461 TP_ARGS(my_int, my_string),
5462
5463 /* Declarations of custom local variables */
5464 TP_locvar(
5465 int a = 0;
5466 unsigned long b = 0;
5467 const char *name = "(undefined)";
5468 struct my_struct *my_struct;
5469 ),
5470
5471 /*
5472 * Custom code which uses both tracepoint arguments
5473 * (in TP_ARGS()) and local variables (in TP_locvar()).
5474 *
5475 * Local variables are actually members of a structure pointed
5476 * to by the special variable tp_locvar.
5477 */
5478 TP_code(
5479 if (my_int) {
5480 tp_locvar->a = my_int + 17;
5481 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5482 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5483 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5484 put_my_struct(tp_locvar->my_struct);
5485
5486 if (tp_locvar->b) {
5487 tp_locvar->a = 1;
5488 }
5489 }
5490 ),
5491
5492 /*
5493 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5494 * version for this, except that tp_locvar members can be
5495 * used in the argument expression parameters of
5496 * the ctf_*() macros.
5497 */
5498 TP_FIELDS(
5499 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5500 ctf_integer(int, my_struct_a, tp_locvar->a)
5501 ctf_string(my_string_field, my_string)
5502 ctf_string(my_struct_name, tp_locvar->name)
5503 )
5504 )
5505 ----
5506
5507 IMPORTANT: The C code defined in `TP_code()` must not have any side
5508 effects when executed. In particular, the code must not allocate
5509 memory or get resources without deallocating this memory or putting
5510 those resources afterwards.
5511
5512
5513 [[instrumenting-linux-kernel-tracing]]
5514 ==== Load and unload a custom probe kernel module
5515
5516 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5517 kernel module>> in the kernel before it can emit LTTng events.
5518
5519 To load the default probe kernel modules and a custom probe kernel
5520 module:
5521
5522 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5523 probe modules to load when starting a root <<lttng-sessiond,session
5524 daemon>>:
5525 +
5526 --
5527 .Load the `my_subsys`, `usb`, and the default probe modules.
5528 ====
5529 [role="term"]
5530 ----
5531 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
5532 ----
5533 ====
5534 --
5535 +
5536 You only need to pass the subsystem name, not the whole kernel module
5537 name.
5538
5539 To load _only_ a given custom probe kernel module:
5540
5541 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5542 modules to load when starting a root session daemon:
5543 +
5544 --
5545 .Load only the `my_subsys` and `usb` probe modules.
5546 ====
5547 [role="term"]
5548 ----
5549 # lttng-sessiond --kmod-probes=my_subsys,usb
5550 ----
5551 ====
5552 --
5553
5554 To confirm that a probe module is loaded:
5555
5556 * Use man:lsmod(8):
5557 +
5558 --
5559 [role="term"]
5560 ----
5561 $ lsmod | grep lttng_probe_usb
5562 ----
5563 --
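
Note that man:lsmod(8) shows the loaded module name with underscores: when the kernel loads a module, it normalizes the hyphens of the module file name (for example, `lttng-probe-usb.ko`). A quick shell illustration of this mapping (our own sketch):

```shell
# The probe module for the "usb" subsystem is built as
# lttng-probe-usb.ko, but the kernel normalizes the hyphens to
# underscores in the loaded module name shown by lsmod.
subsys=usb
module_file="lttng-probe-${subsys}"
loaded_name=${module_file//-/_}
echo "$loaded_name"
```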
5564
5565 To unload the loaded probe modules:
5566
5567 * Kill the session daemon with `SIGTERM`:
5568 +
5569 --
5570 [role="term"]
5571 ----
5572 # pkill lttng-sessiond
5573 ----
5574 --
5575 +
5576 You can also use man:modprobe(8)'s `--remove` option if the session
5577 daemon terminates abnormally.
5578
5579
5580 [[controlling-tracing]]
5581 == Tracing control
5582
5583 Once an application or a Linux kernel is
5584 <<instrumenting,instrumented>> for LTTng tracing,
5585 you can _trace_ it.
5586
5587 This section is divided into topics on how to use the various
5588 <<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5589 command-line tool>>, to _control_ the LTTng daemons and tracers.
5590
5591 NOTE: In the following subsections, we refer to a man:lttng(1) command
5592 using its man page name. For example, instead of _Run the `create`
5593 command to..._, we use _Run the man:lttng-create(1) command to..._.
5594
5595
5596 [[start-sessiond]]
5597 === Start a session daemon
5598
5599 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5600 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5601 command-line tool.
5602
5603 You will see the following error when you run a command while no session
5604 daemon is running:
5605
5606 ----
5607 Error: No session daemon is available
5608 ----
5609
5610 The only command that automatically runs a session daemon is
5611 man:lttng-create(1), which you use to
5612 <<creating-destroying-tracing-sessions,create a tracing session>>. While
5613 this is usually the first operation you perform, sometimes it's not.
5614 Some examples are:
5615
5616 * <<list-instrumentation-points,List the available instrumentation points>>.
5617 * <<saving-loading-tracing-session,Load a tracing session configuration>>.
5618
5619 [[tracing-group]] Each Unix user must have its own running session
5620 daemon to trace user applications. The session daemon that the root user
5621 starts is the only one allowed to control the LTTng kernel tracer. Users
5622 that are part of the _tracing group_ can control the root session
5623 daemon. The default tracing group name is `tracing`; you can set it to
5624 something else with the opt:lttng-sessiond(8):--group option when you
5625 start the root session daemon.
5626
5627 To start a user session daemon:
5628
5629 * Run man:lttng-sessiond(8):
5630 +
5631 --
5632 [role="term"]
5633 ----
5634 $ lttng-sessiond --daemonize
5635 ----
5636 --
5637
5638 To start the root session daemon:
5639
5640 * Run man:lttng-sessiond(8) as the root user:
5641 +
5642 --
5643 [role="term"]
5644 ----
5645 # lttng-sessiond --daemonize
5646 ----
5647 --
5648
5649 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5650 start the session daemon in the foreground.
5651
5652 To stop a session daemon, use man:kill(1) on its process ID (standard
5653 `TERM` signal).
5654
5655 Note that some Linux distributions could manage the LTTng session daemon
5656 as a service. In this case, you should use the service manager to
5657 start, restart, and stop session daemons.
5658
5659
5660 [[creating-destroying-tracing-sessions]]
5661 === Create and destroy a tracing session
5662
5663 Almost all the LTTng control operations happen in the scope of
5664 a <<tracing-session,tracing session>>, which is the dialogue between the
5665 <<lttng-sessiond,session daemon>> and you.
5666
5667 To create a tracing session with a generated name:
5668
5669 * Use the man:lttng-create(1) command:
5670 +
5671 --
5672 [role="term"]
5673 ----
5674 $ lttng create
5675 ----
5676 --
5677
5678 The created tracing session's name is `auto` followed by the
5679 creation date.
5680
5681 To create a tracing session with a specific name:
5682
5683 * Use the optional argument of the man:lttng-create(1) command:
5684 +
5685 --
5686 [role="term"]
5687 ----
5688 $ lttng create my-session
5689 ----
5690 --
5691 +
5692 Replace `my-session` with the specific tracing session name.
5693
5694 LTTng appends the creation date to the created tracing session's name.
5695
5696 LTTng writes the traces of a tracing session in
5697 +$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
5698 name of the tracing session. Note that the env:LTTNG_HOME environment
5699 variable defaults to `$HOME` if not set.
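
In a script, you can reconstruct this default base location like so (a sketch; the session name is illustrative, and the actual directory name also carries the creation date suffix mentioned above):

```shell
# Default base directory for LTTng traces: $LTTNG_HOME, falling
# back to $HOME when LTTNG_HOME is not set.
session=my-session
trace_base=${LTTNG_HOME:-$HOME}
trace_dir="$trace_base/lttng-traces/$session"
echo "$trace_dir"
```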
5700
5701 To output LTTng traces to a non-default location:
5702
5703 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5704 +
5705 --
5706 [role="term"]
5707 ----
5708 $ lttng create my-session --output=/tmp/some-directory
5709 ----
5710 --
5711
5712 You may create as many tracing sessions as you wish.
5713
5714 To list all the existing tracing sessions for your Unix user:
5715
5716 * Use the man:lttng-list(1) command:
5717 +
5718 --
5719 [role="term"]
5720 ----
5721 $ lttng list
5722 ----
5723 --
5724
5725 When you create a tracing session, it is set as the _current tracing
5726 session_. The following man:lttng(1) commands operate on the current
5727 tracing session when you don't specify one:
5728
5729 [role="list-3-cols"]
5730 * `add-context`
5731 * `destroy`
5732 * `disable-channel`
5733 * `disable-event`
5734 * `enable-channel`
5735 * `enable-event`
5736 * `load`
5737 * `regenerate`
5738 * `save`
5739 * `snapshot`
5740 * `start`
5741 * `stop`
5742 * `track`
5743 * `untrack`
5744 * `view`
5745
5746 To change the current tracing session:
5747
5748 * Use the man:lttng-set-session(1) command:
5749 +
5750 --
5751 [role="term"]
5752 ----
5753 $ lttng set-session new-session
5754 ----
5755 --
5756 +
5757 Replace `new-session` with the name of the new current tracing session.
5758
5759 When you are done tracing in a given tracing session, you can destroy
5760 it. This operation frees the resources taken by the tracing session;
5761 it does not delete the trace data that LTTng wrote for this
5762 tracing session.
5763
5764 To destroy the current tracing session:
5765
5766 * Use the man:lttng-destroy(1) command:
5767 +
5768 --
5769 [role="term"]
5770 ----
5771 $ lttng destroy
5772 ----
5773 --
5774
5775
5776 [[list-instrumentation-points]]
5777 === List the available instrumentation points
5778
5779 The <<lttng-sessiond,session daemon>> can query the running instrumented
5780 user applications and the Linux kernel to get a list of available
5781 instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5782 they are tracepoints and system calls. For the user space tracing
5783 domain, they are tracepoints. For the other tracing domains, they are
5784 logger names.
5785
5786 To list the available instrumentation points:
5787
5788 * Use the man:lttng-list(1) command with the requested tracing domain's
5789 option amongst:
5790 +
5791 --
5792 * opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5793 must be a root user, or it must be a member of the
5794 <<tracing-group,tracing group>>).
5795 * opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5796 kernel system calls (your Unix user must be a root user, or it must be
5797 a member of the tracing group).
5798 * opt:lttng-list(1):--userspace: user space tracepoints.
5799 * opt:lttng-list(1):--jul: `java.util.logging` loggers.
5800 * opt:lttng-list(1):--log4j: Apache log4j loggers.
5801 * opt:lttng-list(1):--python: Python loggers.
5802 --
5803
5804 .List the available user space tracepoints.
5805 ====
5806 [role="term"]
5807 ----
5808 $ lttng list --userspace
5809 ----
5810 ====
5811
5812 .List the available Linux kernel system call tracepoints.
5813 ====
5814 [role="term"]
5815 ----
5816 $ lttng list --kernel --syscall
5817 ----
5818 ====
5819
5820
5821 [[enabling-disabling-events]]
5822 === Create and enable an event rule
5823
5824 Once you <<creating-destroying-tracing-sessions,create a tracing
5825 session>>, you can create <<event,event rules>> with the
5826 man:lttng-enable-event(1) command.
5827
5828 You specify each condition with a command-line option. The available
5829 condition options are shown in the following table.
5830
5831 [role="growable",cols="asciidoc,asciidoc,default"]
5832 .Condition command-line options for the man:lttng-enable-event(1) command.
5833 |====
5834 |Option |Description |Applicable tracing domains
5835
5836 |
5837 One of:
5838
5839 . `--syscall`
5840 . +--probe=__ADDR__+
5841 . +--function=__ADDR__+
5842
5843 |
5844 Instead of using the default _tracepoint_ instrumentation type, use:
5845
5846 . A Linux system call.
5847 . A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5848 . The entry and return points of a Linux function (symbol or address).
5849
5850 |Linux kernel.
5851
5852 |First positional argument.
5853
5854 |
5855 Tracepoint or system call name. In the case of a Linux KProbe or
5856 function, this is a custom name given to the event rule. With the
5857 JUL, log4j, and Python domains, this is a logger name.
5858
5859 With a tracepoint, logger, or system call name, the last character
5860 can be `*` to match anything that remains.
5861
5862 |All.
5863
5864 |
5865 One of:
5866
5867 . +--loglevel=__LEVEL__+
5868 . +--loglevel-only=__LEVEL__+
5869
5870 |
5871 . Match only tracepoints or log statements with a logging level at
5872 least as severe as +__LEVEL__+.
5873 . Match only tracepoints or log statements with a logging level
5874 equal to +__LEVEL__+.
5875
5876 See man:lttng-enable-event(1) for the list of available logging level
5877 names.
5878
5879 |User space, JUL, log4j, and Python.
5880
5881 |+--exclude=__EXCLUSIONS__+
5882
5883 |
5884 When you use a `*` character at the end of the tracepoint or logger
5885 name (first positional argument), exclude the specific names in the
5886 comma-delimited list +__EXCLUSIONS__+.
5887
5888 |
5889 User space, JUL, log4j, and Python.
5890
5891 |+--filter=__EXPR__+
5892
5893 |
5894 Match only events which satisfy the expression +__EXPR__+.
5895
5896 See man:lttng-enable-event(1) to learn more about the syntax of a
5897 filter expression.
5898
5899 |All.
5900
5901 |====
5902
5903 You attach an event rule to a <<channel,channel>> on creation. If you do
5904 not specify the channel with the opt:lttng-enable-event(1):--channel
5905 option, and if the event rule to create is the first in its
5906 <<domain,tracing domain>> for a given tracing session, then LTTng
5907 creates a _default channel_ for you. This default channel is reused in
5908 subsequent invocations of the man:lttng-enable-event(1) command for the
5909 same tracing domain.
5910
5911 An event rule is always enabled at creation time.
5912
5913 The following examples show how you can combine the previous
5914 command-line options to create simple to more complex event rules.
5915
5916 .Create an event rule targeting a Linux kernel tracepoint (default channel).
5917 ====
5918 [role="term"]
5919 ----
5920 $ lttng enable-event --kernel sched_switch
5921 ----
5922 ====
5923
5924 .Create an event rule matching four Linux kernel system calls (default channel).
5925 ====
5926 [role="term"]
5927 ----
5928 $ lttng enable-event --kernel --syscall open,write,read,close
5929 ----
5930 ====
5931
5932 .Create event rules matching tracepoints with filter expressions (default channel).
5933 ====
5934 [role="term"]
5935 ----
5936 $ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5937 ----
5938
5939 [role="term"]
5940 ----
5941 $ lttng enable-event --kernel --all \
5942 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5943 ----
5944
5945 [role="term"]
5946 ----
5947 $ lttng enable-event --jul my_logger \
5948 --filter='$app.retriever:cur_msg_id > 3'
5949 ----
5950
5951 IMPORTANT: Make sure to always quote the filter string when you
5952 use man:lttng(1) from a shell.
5953 ====
5954
5955 .Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5956 ====
5957 [role="term"]
5958 ----
5959 $ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5960 ----
5961
5962 IMPORTANT: Make sure to always quote the wildcard character when you
5963 use man:lttng(1) from a shell.
5964 ====
5965
5966 .Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5967 ====
5968 [role="term"]
5969 ----
5970 $ lttng enable-event --python my-app.'*' \
5971 --exclude='my-app.module,my-app.hello'
5972 ----
5973 ====
5974
5975 .Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5976 ====
5977 [role="term"]
5978 ----
5979 $ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5980 ----
5981 ====
5982
5983 .Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5984 ====
5985 [role="term"]
5986 ----
5987 $ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5988 ----
5989 ====
5990
5991 The event rules of a given channel form a whitelist: as soon as an
5992 emitted event passes one of them, LTTng can record the event. For
5993 example, an event named `my_app:my_tracepoint` emitted from a user space
5994 tracepoint with a `TRACE_ERROR` log level passes both of the following
5995 rules:
5996
5997 [role="term"]
5998 ----
5999 $ lttng enable-event --userspace my_app:my_tracepoint
6000 $ lttng enable-event --userspace my_app:my_tracepoint \
6001 --loglevel=TRACE_INFO
6002 ----
6003
6004 The second event rule is redundant: the first one includes
6005 the second one.
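
The trailing `*` wildcard and the opt:lttng-enable-event(1):--exclude list described earlier behave much like shell prefix patterns. This standalone sketch (our own illustration, not LTTng code) mimics the matching logic of a rule named `my-app.*` with the exclusions `my-app.module` and `my-app.hello`:

```shell
# Illustration only: emulate how the event rule "my-app.*" with
# --exclude='my-app.module,my-app.hello' matches logger names.
matches() {
    case $1 in
        my-app.module|my-app.hello) return 1 ;;  # excluded names
        my-app.*)                   return 0 ;;  # wildcard match
        *)                          return 1 ;;  # no match
    esac
}

matches my-app.network && echo 'my-app.network: match'
matches my-app.hello   || echo 'my-app.hello: excluded'
```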
6006
6007
6008 [[disable-event-rule]]
6009 === Disable an event rule
6010
6011 To disable an event rule that you <<enabling-disabling-events,created>>
6012 previously, use the man:lttng-disable-event(1) command. This command
6013 disables _all_ the event rules (of a given tracing domain and channel)
6014 which match an instrumentation point. The other conditions are not
6015 supported as of LTTng{nbsp}{revision}.
6016
6017 The LTTng tracer does not record an emitted event which passes
6018 a _disabled_ event rule.
6019
6020 .Disable an event rule matching a Python logger (default channel).
6021 ====
6022 [role="term"]
6023 ----
6024 $ lttng disable-event --python my-logger
6025 ----
6026 ====
6027
6028 .Disable an event rule matching all `java.util.logging` loggers (default channel).
6029 ====
6030 [role="term"]
6031 ----
6032 $ lttng disable-event --jul '*'
6033 ----
6034 ====
6035
6036 .Disable _all_ the event rules of the default channel.
6037 ====
6038 Unlike the opt:lttng-enable-event(1):--all option of
6039 man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
6040 option is not the equivalent of the event name `*` (wildcard): it
6041 disables _all_ the event rules of a given channel.
6042
6043 [role="term"]
6044 ----
6045 $ lttng disable-event --jul --all-events
6046 ----
6047 ====
6048
6049 NOTE: You cannot delete an event rule once you create it.
6050
6051
6052 [[status]]
6053 === Get the status of a tracing session
6054
6055 To get the status of the current tracing session, that is, its
6056 parameters, its channels, event rules, and their attributes:
6057
6058 * Use the man:lttng-status(1) command:
6059 +
6060 --
6061 [role="term"]
6062 ----
6063 $ lttng status
6064 ----
6065 --
6067
6068 To get the status of any tracing session:
6069
6070 * Use the man:lttng-list(1) command with the tracing session's name:
6071 +
6072 --
6073 [role="term"]
6074 ----
6075 $ lttng list my-session
6076 ----
6077 --
6078 +
6079 Replace `my-session` with the desired tracing session's name.
6080
6081
6082 [[basic-tracing-session-control]]
6083 === Start and stop a tracing session
6084
6085 Once you <<creating-destroying-tracing-sessions,create a tracing
6086 session>> and
6087 <<enabling-disabling-events,create one or more event rules>>,
6088 you can start and stop the tracers for this tracing session.
6089
6090 To start tracing in the current tracing session:
6091
6092 * Use the man:lttng-start(1) command:
6093 +
6094 --
6095 [role="term"]
6096 ----
6097 $ lttng start
6098 ----
6099 --
6100
6101 LTTng is very flexible: you can launch user applications before
6102 or after you start the tracers. The tracers only record the events
6103 if they pass enabled event rules and if they occur while the tracers are
6104 started.
6105
6106 To stop tracing in the current tracing session:
6107
6108 * Use the man:lttng-stop(1) command:
6109 +
6110 --
6111 [role="term"]
6112 ----
6113 $ lttng stop
6114 ----
6115 --
6116 +
6117 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6118 records>> or lost sub-buffers since the last time you ran
6119 man:lttng-start(1), warnings are printed when you run the
6120 man:lttng-stop(1) command.
6121
6122
6123 [[enabling-disabling-channels]]
6124 === Create a channel
6125
6126 Once you create a tracing session, you can create a <<channel,channel>>
6127 with the man:lttng-enable-channel(1) command.
6128
6129 Note that LTTng automatically creates a default channel when, for a
6130 given <<domain,tracing domain>>, no channels exist and you
6131 <<enabling-disabling-events,create>> the first event rule. This default
6132 channel is named `channel0` and its attributes are set to reasonable
6133 values. Therefore, you only need to create a channel when you need
6134 non-default attributes.
6135
6136 You specify each non-default channel attribute with a command-line
6137 option when you use the man:lttng-enable-channel(1) command. The
6138 available command-line options are:
6139
[role="growable",cols="asciidoc,asciidoc"]
.Command-line options for the man:lttng-enable-channel(1) command.
|====
|Option |Description

|`--overwrite`

|
Use the _overwrite_
<<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
the default _discard_ mode.

|`--buffers-pid` (user space tracing domain only)

|
Use the per-process <<channel-buffering-schemes,buffering scheme>>
instead of the default per-user buffering scheme.

|+--subbuf-size=__SIZE__+

|
Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
either for each Unix user (default), or for each instrumented process.

See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.

|+--num-subbuf=__COUNT__+

|
Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
for each Unix user (default), or for each instrumented process.

See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.

|+--tracefile-size=__SIZE__+

|
Set the maximum size of each trace file that this channel writes within
a stream to +__SIZE__+ bytes instead of no maximum.

See <<tracefile-rotation,Trace file count and size>>.

|+--tracefile-count=__COUNT__+

|
Limit the number of trace files that this channel creates to
+__COUNT__+ files instead of no limit.

See <<tracefile-rotation,Trace file count and size>>.

|+--switch-timer=__PERIODUS__+

|
Set the <<channel-switch-timer,switch timer period>>
to +__PERIODUS__+{nbsp}µs.

|+--read-timer=__PERIODUS__+

|
Set the <<channel-read-timer,read timer period>>
to +__PERIODUS__+{nbsp}µs.

|+--output=__TYPE__+ (Linux kernel tracing domain only)

|
Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.

|====

You can only create a channel in the Linux kernel and user space
<<domain,tracing domains>>: other tracing domains have their own channel
created on the fly when <<enabling-disabling-events,creating event
rules>>.

[IMPORTANT]
====
Because of a current LTTng limitation, you must create all channels
_before_ you <<basic-tracing-session-control,start tracing>> in a given
tracing session, that is, before the first time you run
man:lttng-start(1).

Since LTTng automatically creates a default channel when you use the
man:lttng-enable-event(1) command with a specific tracing domain, you
cannot, for example, create a Linux kernel event rule, start tracing,
and then create a user space event rule, because no user space channel
exists yet and it's too late to create one.

For this reason, make sure to configure your channels properly
before starting the tracers for the first time!
====
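
For example, the following sequence (session, channel, and user space
tracepoint names are hypothetical) creates the channels of both tracing
domains before the first man:lttng-start(1):

[role="term"]
----
$ lttng create my-session
$ lttng enable-channel --kernel my-kernel-channel
$ lttng enable-channel --userspace my-ust-channel
$ lttng enable-event --kernel --channel=my-kernel-channel sched_switch
$ lttng enable-event --userspace --channel=my-ust-channel 'my_app:*'
$ lttng start
----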

The following examples show how you can combine the previous
command-line options to create simple to more complex channels.

.Create a Linux kernel channel with default attributes.
====
[role="term"]
----
$ lttng enable-channel --kernel my-channel
----
====

.Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
====
[role="term"]
----
$ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
      --buffers-pid my-channel
----
====

.Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
====
[role="term"]
----
$ lttng enable-channel --kernel --tracefile-count=8 \
      --tracefile-size=4194304 my-channel
----
====

.Create a user space channel in overwrite (or _flight recorder_) mode.
====
[role="term"]
----
$ lttng enable-channel --userspace --overwrite my-channel
----
====

You can <<enabling-disabling-events,create>> the same event rule in
two different channels:

[role="term"]
----
$ lttng enable-event --userspace --channel=my-channel app:tp
$ lttng enable-event --userspace --channel=other-channel app:tp
----

If both channels are enabled, when a tracepoint named `app:tp` is
reached, LTTng records two events, one for each channel.


[[disable-channel]]
=== Disable a channel

To disable a specific channel that you <<enabling-disabling-channels,created>>
previously, use the man:lttng-disable-channel(1) command.

.Disable a specific Linux kernel channel.
====
[role="term"]
----
$ lttng disable-channel --kernel my-channel
----
====

The state of a channel takes precedence over the individual states of
the event rules attached to it: event rules which belong to a disabled
channel, even if they are enabled, are considered disabled.


[[adding-context]]
=== Add context fields to a channel

Event record fields in trace files provide important information about
events that occurred previously, but sometimes some external context may
help you solve a problem faster. Examples of context fields are:

* The **process ID**, **thread ID**, **process name**, and
  **process priority** of the thread in which the event occurs.
* The **hostname** of the system on which the event occurs.
* The current values of many possible **performance counters** using
  perf, for example:
** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
** Cache misses.
** Branch instructions, misses, and loads.
** CPU faults.
* Any context defined at the application level (supported for the
  JUL and log4j <<domain,tracing domains>>).

To get the full list of available context fields, see
`lttng add-context --list`. Some context fields are reserved for a
specific <<domain,tracing domain>> (Linux kernel or user space).

You add context fields to <<channel,channels>>. All the events
that a channel with added context fields records contain those fields.

To add context fields to one or all the channels of a given tracing
session:

* Use the man:lttng-add-context(1) command.

.Add context fields to all the channels of the current tracing session.
====
The following command line adds the virtual process identifier and
the per-thread CPU cycles count fields to all the user space channels
of the current tracing session.

[role="term"]
----
$ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
----
====

.Add performance counter context fields by raw ID.
====
See man:lttng-add-context(1) for the exact format of the context field
type, which is partly compatible with the format used in
man:perf-record(1).

[role="term"]
----
$ lttng add-context --userspace --type=perf:thread:raw:r0110:test
$ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
----
====

.Add a context field to a specific channel.
====
The following command line adds the thread identifier context field
to the Linux kernel channel named `my-channel` in the current
tracing session.

[role="term"]
----
$ lttng add-context --kernel --channel=my-channel --type=tid
----
====

.Add an application-specific context field to a specific channel.
====
The following command line adds the `cur_msg_id` context field of the
`retriever` context retriever for all the instrumented
<<java-application,Java applications>> recording <<event,event records>>
in the channel named `my-channel`:

[role="term"]
----
$ lttng add-context --jul --channel=my-channel \
      --type='$app.retriever:cur_msg_id'
----

IMPORTANT: Make sure to always quote the `$` character when you
use man:lttng-add-context(1) from a shell.
====

NOTE: You cannot remove context fields from a channel once you add
them.


[role="since-2.7"]
[[pid-tracking]]
=== Track process IDs

It's often useful to allow only specific process IDs (PIDs) to emit
events. For example, you may wish to record all the system calls made by
a given process (à la http://linux.die.net/man/1/strace[strace]).

The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
purpose. Both commands operate on a whitelist of process IDs. You _add_
entries to this whitelist with the man:lttng-track(1) command and remove
entries with the man:lttng-untrack(1) command. Any process which has one
of the PIDs in the whitelist is allowed to emit LTTng events which pass
an enabled <<event,event rule>>.

NOTE: The PID tracker tracks the _numeric process IDs_. Should a
process with a given tracked ID exit and another process be given this
ID, then the latter would also be allowed to emit events.

.Track and untrack process IDs.
====
For the sake of the following example, assume the target system has 16
possible PIDs.

When you
<<creating-destroying-tracing-sessions,create a tracing session>>,
the whitelist contains all the possible PIDs:

[role="img-100"]
.All PIDs are tracked.
image::track-all.png[]

When the whitelist is full and you use the man:lttng-track(1) command to
specify some PIDs to track, LTTng first clears the whitelist, then it
tracks the specific PIDs. After:

[role="term"]
----
$ lttng track --pid=3,4,7,10,13
----

the whitelist is:

[role="img-100"]
.PIDs 3, 4, 7, 10, and 13 are tracked.
image::track-3-4-7-10-13.png[]

You can add more PIDs to the whitelist afterwards:

[role="term"]
----
$ lttng track --pid=1,15,16
----

The result is:

[role="img-100"]
.PIDs 1, 15, and 16 are added to the whitelist.
image::track-1-3-4-7-10-13-15-16.png[]

The man:lttng-untrack(1) command removes entries from the PID tracker's
whitelist. Given the previous example, the following command:

[role="term"]
----
$ lttng untrack --pid=3,7,10,13
----

leads to this whitelist:

[role="img-100"]
.PIDs 3, 7, 10, and 13 are removed from the whitelist.
image::track-1-4-15-16.png[]

LTTng can track all possible PIDs again using the
opt:lttng-track(1):--all option:

[role="term"]
----
$ lttng track --pid --all
----

The result is, again:

[role="img-100"]
.All PIDs are tracked.
image::track-all.png[]
====

.Track only specific PIDs.
====
A very typical use case with PID tracking is to start with an empty
whitelist, then <<basic-tracing-session-control,start the tracers>>, and
then add PIDs manually while tracers are active. You can accomplish this
by using the opt:lttng-untrack(1):--all option of the
man:lttng-untrack(1) command to clear the whitelist after you
<<creating-destroying-tracing-sessions,create a tracing session>>:

[role="term"]
----
$ lttng untrack --pid --all
----

gives:

[role="img-100"]
.No PIDs are tracked.
image::untrack-all.png[]

If you trace with this whitelist configuration, the tracer records no
events for this <<domain,tracing domain>> because no processes are
tracked. You can use the man:lttng-track(1) command as usual to track
specific PIDs, for example:

[role="term"]
----
$ lttng track --pid=6,11
----

Result:

[role="img-100"]
.PIDs 6 and 11 are tracked.
image::track-6-11.png[]
====


[role="since-2.5"]
[[saving-loading-tracing-session]]
=== Save and load tracing session configurations

Configuring a <<tracing-session,tracing session>> can be long. Some of
the tasks involved are:

* <<enabling-disabling-channels,Create channels>> with
  specific attributes.
* <<adding-context,Add context fields>> to specific channels.
* <<enabling-disabling-events,Create event rules>> with specific log
  level and filter conditions.

If you use LTTng to solve real world problems, chances are you have to
record events using the same tracing session setup over and over,
modifying a few variables each time in your instrumented program
or environment. To avoid constant tracing session reconfiguration,
the man:lttng(1) command-line tool can save and load tracing session
configurations to/from XML files.

To save a given tracing session configuration:

* Use the man:lttng-save(1) command:
+
--
[role="term"]
----
$ lttng save my-session
----
--
+
Replace `my-session` with the name of the tracing session to save.

LTTng saves tracing session configurations to
dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
the opt:lttng-save(1):--output-path option to change this destination
directory.

LTTng saves all configuration parameters, for example:

* The tracing session name.
* The trace data output path.
* The channels with their state and all their attributes.
* The context fields you added to channels.
* The event rules with their state, log level and filter conditions.

To load a tracing session:

* Use the man:lttng-load(1) command:
+
--
[role="term"]
----
$ lttng load my-session
----
--
+
Replace `my-session` with the name of the tracing session to load.

When LTTng loads a configuration, it restores your saved tracing session
as if you just configured it manually.

See man:lttng(1) for the complete list of command-line options. You
can also save and load many sessions at a time, and decide in which
directory to output the XML files.
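
For example, assuming a hypothetical directory path, you can choose
where the XML file is written, then load the configuration back from
there:

[role="term"]
----
$ lttng save --output-path=/path/to/sessions my-session
$ lttng load --input-path=/path/to/sessions my-session
----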


[[sending-trace-data-over-the-network]]
=== Send trace data over the network

LTTng can send the recorded trace data to a remote system over the
network instead of writing it to the local file system.

To send the trace data over the network:

. On the _remote_ system (which can also be the target system),
  start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
+
--
[role="term"]
----
$ lttng-relayd
----
--

. On the _target_ system, create a tracing session configured to
  send trace data over the network:
+
--
[role="term"]
----
$ lttng create my-session --set-url=net://remote-system
----
--
+
Replace `remote-system` by the host name or IP address of the
remote system. See man:lttng-create(1) for the exact URL format.

. On the target system, use the man:lttng(1) command-line tool as usual.
  When tracing is active, the target's consumer daemon sends sub-buffers
  to the relay daemon running on the remote system instead of flushing
  them to the local file system. The relay daemon writes the received
  packets to the local file system.

The relay daemon writes trace files to
+$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
+__hostname__+ is the host name of the target system and +__session__+
is the tracing session name. Note that the env:LTTNG_HOME environment
variable defaults to `$HOME` if not set. Use the
opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
trace files to another base directory.
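
For example, to make the relay daemon write the received trace files
under a hypothetical base directory:

[role="term"]
----
$ lttng-relayd --output=/path/to/traces
----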


[role="since-2.4"]
[[lttng-live]]
=== View events as LTTng emits them (noch:{LTTng} live)

LTTng live is a network protocol implemented by the <<lttng-relayd,relay
daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
display events as LTTng emits them on the target system while tracing is
active.

The relay daemon creates a _tee_: it forwards the trace data to both
the local file system and to connected live viewers:

[role="img-90"]
.The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
image::live.png[]

To use LTTng live:

. On the _target system_, create a <<tracing-session,tracing session>>
  in _live mode_:
+
--
[role="term"]
----
$ lttng create my-session --live
----
--
+
This spawns a local relay daemon.

. Start the live viewer and configure it to connect to the relay
  daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
+
--
[role="term"]
----
$ babeltrace --input-format=lttng-live \
             net://localhost/host/hostname/my-session
----
--
+
Replace:
+
--
* `hostname` with the host name of the target system.
* `my-session` with the name of the tracing session to view.
--

. Configure the tracing session as usual with the man:lttng(1)
  command-line tool, and <<basic-tracing-session-control,start tracing>>.

You can list the available live tracing sessions with Babeltrace:

[role="term"]
----
$ babeltrace --input-format=lttng-live net://localhost
----

You can start the relay daemon on another system. In this case, you need
to specify the relay daemon's URL when you create the tracing session
with the opt:lttng-create(1):--set-url option. You also need to replace
`localhost` in the procedure above with the host name of the system on
which the relay daemon is running.

See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
command-line options.


[role="since-2.3"]
[[taking-a-snapshot]]
=== Take a snapshot of the current sub-buffers of a tracing session

The normal behavior of LTTng is to append full sub-buffers to growing
trace data files. This is ideal to keep a full history of the events
that occurred on the target system, but it can
represent too much data in some situations. For example, you may wish
to trace your application continuously until some critical situation
happens, in which case you only need the latest few recorded
events to perform the desired analysis, not multi-gigabyte trace files.

With the man:lttng-snapshot(1) command, you can take a snapshot of the
current sub-buffers of a given <<tracing-session,tracing session>>.
LTTng can write the snapshot to the local file system or send it over
the network.

To take a snapshot:

. Create a tracing session in _snapshot mode_:
+
--
[role="term"]
----
$ lttng create my-session --snapshot
----
--
+
The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
<<channel,channels>> created in this mode is automatically set to
_overwrite_ (flight recorder mode).

. Configure the tracing session as usual with the man:lttng(1)
  command-line tool, and <<basic-tracing-session-control,start tracing>>.

. **Optional**: When you need to take a snapshot,
  <<basic-tracing-session-control,stop tracing>>.
+
You can take a snapshot when the tracers are active, but if you stop
them first, you are sure that the data in the sub-buffers does not
change before you actually take the snapshot.

. Take a snapshot:
+
--
[role="term"]
----
$ lttng snapshot record --name=my-first-snapshot
----
--
+
LTTng writes the current sub-buffers of all the current tracing
session's channels to trace files on the local file system. Those trace
files have `my-first-snapshot` in their name.

There is no difference between the format of a normal trace file and the
format of a snapshot: viewers of LTTng traces also support LTTng
snapshots.

By default, LTTng writes snapshot files to the path shown by
`lttng snapshot list-output`. You can change this path or decide to send
snapshots over the network using either:

. An output path or URL that you specify when you create the
  tracing session.
. A snapshot output path or URL that you add using
  `lttng snapshot add-output`.
. An output path or URL that you provide directly to the
  `lttng snapshot record` command.

Method 3 overrides method 2, which overrides method 1. When you
specify a URL, a relay daemon must listen on a remote system (see
<<sending-trace-data-over-the-network,Send trace data over the network>>).
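
For example, assuming a relay daemon listens on a (hypothetical) remote
system named `remote-system`, you can add a network snapshot output
once, then take named snapshots as needed:

[role="term"]
----
$ lttng snapshot add-output net://remote-system
$ lttng snapshot record --name=my-snapshot
----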


[role="since-2.6"]
[[mi]]
=== Use the machine interface

With any command of the man:lttng(1) command-line tool, you can set the
opt:lttng(1):--mi option to `xml` (before the command name) to get an
XML machine interface output, for example:

[role="term"]
----
$ lttng --mi=xml enable-event --kernel --syscall open
----

A schema definition (XSD) is
https://github.com/lttng/lttng-tools/blob/stable-2.9/src/common/mi-lttng-3.0.xsd[available]
to ease the integration with external tools as much as possible.


[role="since-2.8"]
[[metadata-regenerate]]
=== Regenerate the metadata of an LTTng trace

An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
data stream files and a metadata file. This metadata file contains,
amongst other things, information about the offset of the clock sources
used to timestamp <<event,event records>> when tracing.

If, once a <<tracing-session,tracing session>> is
<<basic-tracing-session-control,started>>, a major
https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
happens, the trace's clock offset also needs to be updated. You
can use the `metadata` item of the man:lttng-regenerate(1) command
to do so.

The main use case of this command is to allow a system to boot with
an incorrect wall time and trace it with LTTng before its wall time
is corrected. Once the system is known to be in a state where its
wall time is correct, it can run `lttng regenerate metadata`.

To regenerate the metadata of an LTTng trace:

* Use the `metadata` item of the man:lttng-regenerate(1) command:
+
--
[role="term"]
----
$ lttng regenerate metadata
----
--

[IMPORTANT]
====
`lttng regenerate metadata` has the following limitations:

* The tracing session must be
  <<creating-destroying-tracing-sessions,created>> in non-live mode.
* User space <<channel,channels>>, if any, must use
  <<channel-buffering-schemes,per-user buffering>>.
====


[role="since-2.9"]
[[regenerate-statedump]]
=== Regenerate the state dump of a tracing session

The LTTng kernel and user space tracers generate state dump
<<event,event records>> when the application starts or when you
<<basic-tracing-session-control,start a tracing session>>. An analysis
can use the state dump event records to set an initial state before it
builds the rest of the state from the following event records.
http://tracecompass.org/[Trace Compass] is a notable example of an
application which uses the state dump of an LTTng trace.

When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
state dump event records are not included in the snapshot because they
were recorded to a sub-buffer that has been consumed or overwritten
already.

You can use the `lttng regenerate statedump` command to emit the state
dump event records again.

To regenerate the state dump of the current tracing session, provided
you created it in snapshot mode, before you take a snapshot:

. Use the `statedump` item of the man:lttng-regenerate(1) command:
+
--
[role="term"]
----
$ lttng regenerate statedump
----
--

. <<basic-tracing-session-control,Stop the tracing session>>:
+
--
[role="term"]
----
$ lttng stop
----
--

. <<taking-a-snapshot,Take a snapshot>>:
+
--
[role="term"]
----
$ lttng snapshot record --name=my-snapshot
----
--

Depending on the event throughput, you should run steps 1 and 2 as
closely together as possible.

NOTE: To record the state dump events, you need to
<<enabling-disabling-events,create event rules>> which enable them.
LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
LTTng-modules state dump tracepoints start with `lttng_statedump_`.
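
For example, the following event rules enable all the available state
dump tracepoints of both tracing domains:

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_statedump:*'
$ lttng enable-event --kernel 'lttng_statedump_*'
----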


[role="since-2.7"]
[[persistent-memory-file-systems]]
=== Record trace data on persistent memory file systems

https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
(NVRAM) is random-access memory that retains its information when power
is turned off (non-volatile). Systems with such memory can store data
structures in RAM and retrieve them after a reboot, without flushing
to typical _storage_.

Linux supports NVRAM file systems thanks to either
http://pramfs.sourceforge.net/[PRAMFS] or
https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
(requires Linux 4.1+).

This section does not describe how to operate such file systems;
we assume that you have a working persistent memory file system.

When you create a <<tracing-session,tracing session>>, you can specify
the path of the shared memory holding the sub-buffers. If you specify a
location on an NVRAM file system, then you can retrieve the latest
recorded trace data when the system reboots after a crash.

To record trace data on a persistent memory file system and retrieve the
trace data after a system crash:

. Create a tracing session with a sub-buffer shared memory path located
  on an NVRAM file system:
+
--
[role="term"]
----
$ lttng create my-session --shm-path=/path/to/shm
----
--

. Configure the tracing session as usual with the man:lttng(1)
  command-line tool, and <<basic-tracing-session-control,start tracing>>.

. After a system crash, use the man:lttng-crash(1) command-line tool to
  view the trace data recorded on the NVRAM file system:
+
--
[role="term"]
----
$ lttng-crash /path/to/shm
----
--

The binary layout of the ring buffer files is not exactly the same as
the trace files layout. This is why you need to use man:lttng-crash(1)
instead of your preferred trace viewer directly.

To convert the ring buffer files to LTTng trace files:

* Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
+
--
[role="term"]
----
$ lttng-crash --extract=/path/to/trace /path/to/shm
----
--


[[reference]]
== Reference

[[lttng-modules-ref]]
=== noch:{LTTng-modules}


[role="since-2.9"]
[[lttng-tracepoint-enum]]
==== `LTTNG_TRACEPOINT_ENUM()` usage

Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:

[source,c]
----
LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
----

Replace:

* `name` with the name of the enumeration (C identifier, unique
  amongst all the defined enumerations).
* `entries` with a list of enumeration entries.

The available enumeration entry macros are:

+ctf_enum_value(__name__, __value__)+::
  Entry named +__name__+ mapped to the integral value +__value__+.

+ctf_enum_range(__name__, __begin__, __end__)+::
  Entry named +__name__+ mapped to the range of integral values between
  +__begin__+ (included) and +__end__+ (included).

+ctf_enum_auto(__name__)+::
  Entry named +__name__+ mapped to the integral value following the
  last mapping's value.
+
The last value of a `ctf_enum_value()` entry is its +__value__+
parameter.
+
The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
+
If `ctf_enum_auto()` is the first entry in the list, its integral
value is 0.

Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
to use a defined enumeration as a tracepoint field.

.Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
====
[source,c]
----
LTTNG_TRACEPOINT_ENUM(
    my_enum,
    TP_ENUM_VALUES(
        ctf_enum_auto("AUTO: EXPECT 0")
        ctf_enum_value("VALUE: 23", 23)
        ctf_enum_value("VALUE: 27", 27)
        ctf_enum_auto("AUTO: EXPECT 28")
        ctf_enum_range("RANGE: 101 TO 303", 101, 303)
        ctf_enum_auto("AUTO: EXPECT 304")
    )
)
----
====
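
As an illustration, a (hypothetical) tracepoint definition can then use
the `ctf_enum()` field macro to record one of its arguments as an entry
of the `my_enum` enumeration defined above; the tracepoint, argument,
and field names here are placeholders:

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    /* Hypothetical tracepoint name */
    my_tracepoint,

    /* Input arguments of the tracepoint */
    TP_PROTO(int state),
    TP_ARGS(state),

    /* Output fields: record state as an entry of my_enum */
    TP_FIELDS(
        ctf_enum(my_enum, int, state_field, state)
    )
)
----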


[role="since-2.7"]
[[lttng-modules-tp-fields]]
==== Tracepoint fields macros (for `TP_FIELDS()`)

[[tp-fast-assign]][[tp-struct-entry]]The available macros to define
tracepoint fields, which must be listed within `TP_FIELDS()` in
`LTTNG_TRACEPOINT_EVENT()`, are:

[role="func-desc growable",cols="asciidoc,asciidoc"]
.Available macros to define LTTng-modules tracepoint fields
|====
|Macro |Description and parameters

|
+ctf_integer(__t__, __n__, __e__)+

+ctf_integer_nowrite(__t__, __n__, __e__)+

+ctf_user_integer(__t__, __n__, __e__)+

+ctf_user_integer_nowrite(__t__, __n__, __e__)+
|
Standard integer, displayed in base 10.

+__t__+::
  Integer C type (`int`, `long`, `size_t`, ...).

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_integer_hex(__t__, __n__, __e__)+

+ctf_user_integer_hex(__t__, __n__, __e__)+
|
Standard integer, displayed in base 16.

+__t__+::
  Integer C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|+ctf_integer_oct(__t__, __n__, __e__)+
|
Standard integer, displayed in base 8.

+__t__+::
  Integer C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_integer_network(__t__, __n__, __e__)+

+ctf_user_integer_network(__t__, __n__, __e__)+
|
Integer in network byte order (big-endian), displayed in base 10.

+__t__+::
  Integer C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_integer_network_hex(__t__, __n__, __e__)+

+ctf_user_integer_network_hex(__t__, __n__, __e__)+
|
Integer in network byte order, displayed in base 16.

+__t__+::
  Integer C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_enum(__N__, __t__, __n__, __e__)+

+ctf_enum_nowrite(__N__, __t__, __n__, __e__)+

+ctf_user_enum(__N__, __t__, __n__, __e__)+

+ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
|
Enumeration.

+__N__+::
  Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.

+__t__+::
  Integer C type (`int`, `long`, `size_t`, ...).

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_string(__n__, __e__)+

+ctf_string_nowrite(__n__, __e__)+

+ctf_user_string(__n__, __e__)+

+ctf_user_string_nowrite(__n__, __e__)+
|
Null-terminated string; undefined behavior if +__e__+ is `NULL`.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

|
+ctf_array(__t__, __n__, __e__, __s__)+

+ctf_array_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array(__t__, __n__, __e__, __s__)+

+ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of integers.

+__t__+::
  Array element C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

+__s__+::
  Number of elements.

|
+ctf_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+

+ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array of bits.

The type of +__e__+ must be an integer type. +__s__+ is the number
of elements of such type in +__e__+, not the number of bits.

+__t__+::
  Array element C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

+__s__+::
  Number of elements.

|
+ctf_array_text(__t__, __n__, __e__, __s__)+

+ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+

+ctf_user_array_text(__t__, __n__, __e__, __s__)+

+ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
|
Statically-sized array, printed as text.

The string does not need to be null-terminated.

+__t__+::
  Array element C type (always `char`).

+__n__+::
  Field name.

+__e__+::
  Argument expression.

+__s__+::
  Number of elements.

|
+ctf_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array of integers.

The type of +__E__+ must be unsigned.

+__t__+::
  Array element C type.

+__n__+::
  Field name.

+__e__+::
  Argument expression.

+__T__+::
  Length expression C type.
7256
7257 +__E__+::
7258 Length expression.
7259
7260 |
7261 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7262
7263 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7264 |
7265 Dynamically-sized array of integers, displayed in base 16.
7266
7267 The type of +__E__+ must be unsigned.
7268
7269 +__t__+::
7270 Array element C type.
7271
7272 +__n__+::
7273 Field name.
7274
7275 +__e__+::
7276 Argument expression.
7277
7278 +__T__+::
7279 Length expression C type.
7280
7281 +__E__+::
7282 Length expression.
7283
7284 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7285 |
7286 Dynamically-sized array of integers in network byte order (big-endian),
7287 displayed in base 10.
7288
7289 The type of +__E__+ must be unsigned.
7290
7291 +__t__+::
7292 Array element C type.
7293
7294 +__n__+::
7295 Field name.
7296
7297 +__e__+::
7298 Argument expression.
7299
7300 +__T__+::
7301 Length expression C type.
7302
7303 +__E__+::
7304 Length expression.
7305
7306 |
7307 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7308
7309 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7310
7311 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7312
7313 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7314 |
7315 Dynamically-sized array of bits.
7316
The type of +__e__+ must be an integer type. +__E__+ is the number
of elements of such type in +__e__+, not the number of bits.

The type of +__E__+ must be unsigned.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__e__+::
Argument expression.

+__T__+::
Length expression C type.

+__E__+::
Length expression.

|
+ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+

+ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
|
Dynamically-sized array, displayed as text.

The string does not need to be null-terminated.

The type of +__E__+ must be unsigned.

The behavior is undefined if +__e__+ is `NULL`.

+__t__+::
Sequence element C type (always `char`).

+__n__+::
Field name.

+__e__+::
Argument expression.

+__T__+::
Length expression C type.

+__E__+::
Length expression.
|====
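As an example, here is how a few of the macros above could be combined in
a single tracepoint definition. This is only a sketch: the provider name,
tracepoint name, and field names below are hypothetical.

[source,c]
----
TRACEPOINT_EVENT(
    /* Tracepoint provider name (hypothetical) */
    my_provider,

    /* Tracepoint name (hypothetical) */
    my_tracepoint,

    /* Input arguments */
    TP_ARGS(
        int, status_arg,
        const char *, msg_arg,
        uint8_t *, blob_arg,
        size_t, blob_len_arg
    ),

    /* Output event fields */
    TP_FIELDS(
        /* Standard integer, displayed in base 10 */
        ctf_integer(int, status, status_arg)

        /* Same value, displayed in base 16 */
        ctf_integer_hex(int, status_hex, status_arg)

        /* Null-terminated string (msg_arg must not be NULL) */
        ctf_string(msg, msg_arg)

        /*
         * Dynamically-sized array of integers; the length
         * expression C type, size_t, is unsigned as required.
         */
        ctf_sequence(uint8_t, blob, blob_arg, size_t, blob_len_arg)
    )
)
----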

Use the `_user` versions when the argument expression, +__e__+, is
a user space address. In the cases of `ctf_user_integer*()` and
`ctf_user_float*()`, +&__e__+ must be a user space address, thus
+__e__+ must be addressable.

The `_nowrite` versions are otherwise identical, but the fields they
define are not written to the recorded trace. Their primary purpose is
to make some of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.
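For example, the following tracepoint definition (with hypothetical
provider, tracepoint, and field names) makes a +my_id+ field available
to filter expressions without ever recording it:

[source,c]
----
TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(int, id_arg, const char *, msg_arg),
    TP_FIELDS(
        /* Usable in a filter expression, but never recorded */
        ctf_integer_nowrite(int, my_id, id_arg)

        /* Recorded normally */
        ctf_string(msg, msg_arg)
    )
)
----

You could then pass a filter expression such as `my_id < 30` to
`lttng enable-event` so that only the matching event records, which do
not contain a `my_id` field, are committed to the sub-buffers.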


[[glossary]]
== Glossary

Terms related to LTTng and to tracing in general:

Babeltrace::
The http://diamon.org/babeltrace[Babeltrace] project, which includes
the cmd:babeltrace command, some libraries, and Python bindings.

<<channel-buffering-schemes,buffering scheme>>::
A layout of sub-buffers applied to a given channel.

<<channel,channel>>::
An entity which is responsible for a set of ring buffers.
+
<<event,Event rules>> are always attached to a specific channel.

clock::
A reference of time for a tracer.

<<lttng-consumerd,consumer daemon>>::
A process which is responsible for consuming the full sub-buffers
and writing them to a file system or sending them over the network.

<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
The event loss mode in which the tracer _discards_ new event records
when there's no sub-buffer space left to store them.

event::
The consequence of the execution of an instrumentation
point, like a tracepoint that you manually place in some source code,
or a Linux kernel kprobe.
+
An event is said to _occur_ at a specific time. Different actions can
be taken upon the occurrence of an event, like recording the event's
payload to a sub-buffer.

<<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
The mechanism by which event records of a given channel are lost
(not recorded) when there is no sub-buffer space left to store them.

[[def-event-name]]event name::
The name of an event, which is also the name of the event record.
This is also called the _instrumentation point name_.

event record::
A record, in a trace, of the payload of an event which occurred.

<<event,event rule>>::
Set of conditions which must be satisfied for one or more occurring
events to be recorded.

`java.util.logging`::
Java platform's
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
The use of LTTng probes to make a piece of software traceable.

instrumentation point::
A point in the execution path of a piece of software that, when
reached by this execution, can emit an event.

instrumentation point name::
See _<<def-event-name,event name>>_.

log4j::
A http://logging.apache.org/log4j/1.2/[logging library] for Java
developed by the Apache Software Foundation.

log level::
Level of severity of a log statement or user space
instrumentation point.

LTTng::
The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
A command-line tool provided by the LTTng-tools project which you
can use to send and receive control messages to and from a
session daemon.

LTTng analyses::
The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
a set of analysis programs which you can use to obtain a
higher-level view of an LTTng trace.

cmd:lttng-consumerd::
The name of the consumer daemon program.

cmd:lttng-crash::
A utility provided by the LTTng-tools project which can convert
ring buffer files (usually
<<persistent-memory-file-systems,saved on a persistent memory file system>>)
to trace files.

LTTng Documentation::
This document.

<<lttng-live,LTTng live>>::
A communication protocol between the relay daemon and live viewers
which makes it possible to see events "live", as they are received by
the relay daemon.

<<lttng-modules,LTTng-modules>>::
The https://github.com/lttng/lttng-modules[LTTng-modules] project,
which contains the Linux kernel modules to make the Linux kernel
instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
The name of the relay daemon program.

cmd:lttng-sessiond::
The name of the session daemon program.

LTTng-tools::
The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
contains the various programs and libraries used to
<<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
A Java package provided by the LTTng-UST project to allow the
LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
A Python package provided by the LTTng-UST project to allow the
LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
The event loss mode in which new event records overwrite older
event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
A buffering scheme in which each instrumented process has its own
sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
A buffering scheme in which all the processes of a Unix user share the
same sub-buffers for a given user space channel.

<<lttng-relayd,relay daemon>>::
A process which is responsible for receiving the trace data sent by
a distant consumer daemon.

ring buffer::
A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
A process which receives control commands from you and orchestrates
the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
A copy of the current data of all the sub-buffers of a given tracing
session, saved as trace files.

sub-buffer::
One part of an LTTng ring buffer which contains event records.

timestamp::
The time information attached to an event when it is emitted.

trace (_noun_)::
A set of files which are the concatenations of one or more
flushed sub-buffers.

trace (_verb_)::
The action of recording the events emitted by an application
or by a system, or of initiating such a recording by controlling
a tracer.

Trace Compass::
The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
An instrumentation point using the tracepoint mechanism of the Linux
kernel or of LTTng-UST.

tracepoint definition::
The definition of a single tracepoint.

tracepoint name::
The name of a tracepoint.

tracepoint provider::
A set of functions providing tracepoints to an instrumented user
application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
One or more tracepoint providers compiled as an object file or as
a shared library.

tracer::
Software which records emitted events.

<<domain,tracing domain>>::
A namespace for event sources.

<<tracing-group,tracing group>>::
The Unix group that a Unix user must be a member of to be allowed to
trace the Linux kernel.

<<tracing-session,tracing session>>::
A stateful dialogue between you and a <<lttng-sessiond,session
daemon>>.

user application::
An application running in user space, as opposed to a Linux kernel
module, for example.