1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.9, 22 January 2018
5
6
7 include::../common/copyright.txt[]
8
9
10 include::../common/warning-not-maintained.txt[]
11
12
13 include::../common/welcome.txt[]
14
15
16 include::../common/audience.txt[]
17
18
19 [[chapters]]
20 === What's in this documentation?
21
22 The LTTng Documentation is divided into the following sections:
23
24 * **<<nuts-and-bolts,Nuts and bolts>>** explains the
25 rudiments of software tracing and the rationale behind the
26 LTTng project.
27 +
You can skip this section if you're familiar with software tracing and
29 with the LTTng project.
30
31 * **<<installing-lttng,Installation>>** describes the steps to
32 install the LTTng packages on common Linux distributions and from
33 their sources.
34 +
35 You can skip this section if you already properly installed LTTng on
36 your target system.
37
38 * **<<getting-started,Quick start>>** is a concise guide to
39 getting started quickly with LTTng kernel and user space tracing.
40 +
41 We recommend this section if you're new to LTTng or to software tracing
42 in general.
43 +
44 You can skip this section if you're not new to LTTng.
45
46 * **<<core-concepts,Core concepts>>** explains the concepts at
47 the heart of LTTng.
48 +
49 It's a good idea to become familiar with the core concepts
50 before attempting to use the toolkit.
51
52 * **<<plumbing,Components of LTTng>>** describes the various components
53 of the LTTng machinery, like the daemons, the libraries, and the
54 command-line interface.
55 * **<<instrumenting,Instrumentation>>** shows different ways to
56 instrument user applications and the Linux kernel.
57 +
58 Instrumenting source code is essential to provide a meaningful
59 source of events.
60 +
61 You can skip this section if you do not have a programming background.
62
63 * **<<controlling-tracing,Tracing control>>** is divided into topics
64 which demonstrate how to use the vast array of features that
65 LTTng{nbsp}{revision} offers.
66 * **<<reference,Reference>>** contains reference tables.
67 * **<<glossary,Glossary>>** is a specialized dictionary of terms related
68 to LTTng or to the field of software tracing.
69
70
71 include::../common/convention.txt[]
72
73
74 include::../common/acknowledgements.txt[]
75
76
77 [[whats-new]]
78 == What's new in LTTng {revision}?
79
80 LTTng{nbsp}{revision} bears the name _Joannès_. A Berliner Weisse style
81 beer from the http://letreflenoir.com/[Trèfle Noir] microbrewery in
82 https://en.wikipedia.org/wiki/Rouyn-Noranda[Rouyn-Noranda], the
83 https://www.beeradvocate.com/beer/profile/20537/238967/[_**Joannès**_]
is a tangy beer with a distinct pink hue and an intense fruit flavor,
85 thanks to the presence of fresh blackcurrant grown in Témiscamingue.
86
87 New features and changes in LTTng{nbsp}{revision}:
88
89 * **Tracing control**:
90 ** You can override the name or the URL of a tracing session
91 configuration when you use man:lttng-load(1) thanks to the new
92 opt:lttng-load(1):--override-name and
93 opt:lttng-load(1):--override-url options.
94 ** The new `lttng regenerate` command replaces the now deprecated
95 `lttng metadata` command of LTTng 2.8. man:lttng-regenerate(1) can
96 also <<regenerate-statedump,generate the state dump event records>>
97 of a given tracing session on demand, a handy feature when
98 <<taking-a-snapshot,taking a snapshot>>.
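+
For example, assuming an existing current tracing session, the
following command should regenerate its state dump event records:
+
--
[role="term"]
----
$ lttng regenerate statedump
----
--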
99 ** You can add PMU counters by raw ID with man:lttng-add-context(1):
100 +
101 --
102 [role="term"]
103 ----
104 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
105 ----
106 --
107 +
108 The format of the raw ID is the same as used with man:perf-record(1).
109 See <<adding-context,Add context fields to a channel>> for more
110 examples.
111
112 ** The LTTng <<lttng-relayd,relay daemon>> is now supported on
113 OS{nbsp}X and macOS for a smoother integration within a trace
114 analysis workflow, regardless of the platform used.
115
116 * **User space tracing**:
117 ** Improved performance (tested on x86-64 and ARMv7-A
118 (https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
119 architectures).
120 ** New helper library (`liblttng-ust-fd`) to help with
121 <<liblttng-ust-fd,applications which close file descriptors that
122 don't belong to them>>, for example, in a loop which closes file
123 descriptors after man:fork(2), or BSD's `closeall()`.
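+
For example, assuming a hypothetical instrumented application named
`my-app`, one way to use the helper library is to preload it when you
launch the application:
+
--
[role="term"]
----
$ LD_PRELOAD=liblttng-ust-fd.so ./my-app
----
--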
124 ** More accurate <<liblttng-ust-dl,dynamic linker instrumentation>> and
125 state dump event records, especially when a dynamically loaded
126 library manually loads its own dependencies.
127 ** New `ctf_*()` field definition macros (see man:lttng-ust(3)):
128 *** `ctf_array_hex()`
129 *** `ctf_array_network()`
130 *** `ctf_array_network_hex()`
131 *** `ctf_sequence_hex()`
132 *** `ctf_sequence_network()`
133 *** `ctf_sequence_network_hex()`
134 ** New `lttng_ust_loaded` weak symbol defined by `liblttng-ust` for
135 an application to know if the LTTng-UST shared library is loaded
136 or not:
137 +
138 --
139 [source,c]
140 ----
141 #include <stdio.h>
142
143 int lttng_ust_loaded __attribute__((weak));
144
145 int main(void)
146 {
147 if (lttng_ust_loaded) {
148 puts("LTTng-UST is loaded!");
149 } else {
150 puts("LTTng-UST is not loaded!");
151 }
152
153 return 0;
154 }
155 ----
156 --
157
158 ** LTTng-UST thread names have the `-ust` suffix.
159
160 * **Linux kernel tracing**:
161 ** Improved performance (tested on x86-64 and ARMv7-A
162 (https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
163 architectures).
164 ** New enumeration <<lttng-modules-tp-fields,field definition macros>>:
165 `ctf_enum()` and `ctf_user_enum()`.
166 ** IPv4, IPv6, and TCP header data is recorded in the event records
167 produced by tracepoints starting with `net_`.
168 ** Detailed system call event records: `select`, `pselect6`, `poll`,
169 `ppoll`, `epoll_wait`, `epoll_pwait`, and `epoll_ctl` on all
170 architectures supported by LTTng-modules, and `accept4` on x86-64.
171 ** New I²C instrumentation: the `extract_sensitive_payload` parameter
172 of the new `lttng-probe-i2c` LTTng module controls whether or not
173 the payloads of I²C messages are recorded in I²C event records, since
174 they may contain sensitive data (for example, keystrokes).
175 ** When the LTTng kernel modules are built into the Linux kernel image,
176 the `CONFIG_TRACEPOINTS` configuration option is automatically
177 selected.
178
179
180 [[nuts-and-bolts]]
181 == Nuts and bolts
182
183 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
184 generation_ is a modern toolkit for tracing Linux systems and
185 applications. So your first question might be:
186 **what is tracing?**
187
188
189 [[what-is-tracing]]
190 === What is tracing?
191
192 As the history of software engineering progressed and led to what
193 we now take for granted--complex, numerous and
194 interdependent software applications running in parallel on
195 sophisticated operating systems like Linux--the authors of such
196 components, software developers, began feeling a natural
197 urge to have tools that would ensure the robustness and good performance
198 of their masterpieces.
199
200 One major achievement in this field is, inarguably, the
201 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
202 an essential tool for developers to find and fix bugs. But even the best
203 debugger won't help make your software run faster, and nowadays, faster
204 software means either more work done by the same hardware, or cheaper
205 hardware for the same work.
206
207 A _profiler_ is often the tool of choice to identify performance
208 bottlenecks. Profiling is suitable to identify _where_ performance is
209 lost in a given software. The profiler outputs a profile, a statistical
210 summary of observed events, which you may use to discover which
211 functions took the most time to execute. However, a profiler won't
212 report _why_ some identified functions are the bottleneck. Bottlenecks
213 might only occur when specific conditions are met, conditions that are
214 sometimes impossible to capture by a statistical profiler, or impossible
215 to reproduce with an application altered by the overhead of an
216 event-based profiler. For a thorough investigation of software
217 performance issues, a history of execution is essential, with the
218 recorded values of variables and context fields you choose, and
219 with as little influence as possible on the instrumented software. This
220 is where tracing comes in handy.
221
222 _Tracing_ is a technique used to understand what goes on in a running
223 software system. The software used for tracing is called a _tracer_,
224 which is conceptually similar to a tape recorder. When recording,
225 specific instrumentation points placed in the software source code
226 generate events that are saved on a giant tape: a _trace_ file. You
227 can trace user applications and the operating system at the same time,
228 opening the possibility of resolving a wide range of problems that would
229 otherwise be extremely challenging.
230
231 Tracing is often compared to _logging_. However, tracers and loggers are
232 two different tools, serving two different purposes. Tracers are
233 designed to record much lower-level events that occur much more
234 frequently than log messages, often in the range of thousands per
235 second, with very little execution overhead. Logging is more appropriate
236 for a very high-level analysis of less frequent events: user accesses,
237 exceptional conditions (errors and warnings, for example), database
238 transactions, instant messaging communications, and such. Simply put,
239 logging is one of the many use cases that can be satisfied with tracing.
240
241 The list of recorded events inside a trace file can be read manually
242 like a log file for the maximum level of detail, but it is generally
243 much more interesting to perform application-specific analyses to
244 produce reduced statistics and graphs that are useful to resolve a
245 given problem. Trace viewers and analyzers are specialized tools
246 designed to do this.
247
248 In the end, this is what LTTng is: a powerful, open source set of
249 tools to trace the Linux kernel and user applications at the same time.
250 LTTng is composed of several components actively maintained and
251 developed by its link:/community/#where[community].
252
253
254 [[lttng-alternatives]]
255 === Alternatives to noch:{LTTng}
256
257 Excluding proprietary solutions, a few competing software tracers
258 exist for Linux:
259
260 * https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
261 Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
262 user scripts and is responsible for loading code into the
263 Linux kernel for further execution and collecting the outputted data.
264 * https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
265 subsystem in the Linux kernel in which a virtual machine can execute
266 programs passed from the user space to the kernel. You can attach
267 such programs to tracepoints and KProbes thanks to a system call, and
268 they can output data to the user space when executed thanks to
269 different mechanisms (pipe, VM register values, and eBPF maps, to name
270 a few).
271 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
272 is the de facto function tracer of the Linux kernel. Its user
273 interface is a set of special files in sysfs.
274 * https://perf.wiki.kernel.org/[perf] is
275 a performance analyzing tool for Linux which supports hardware
276 performance counters, tracepoints, as well as other counters and
277 types of probes. perf's controlling utility is the cmd:perf command
278 line/curses tool.
279 * http://linux.die.net/man/1/strace[strace]
280 is a command-line utility which records system calls made by a
281 user process, as well as signal deliveries and changes of process
282 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
283 to fulfill its function.
284 * http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
285 analyze Linux kernel events. You write scripts, or _chisels_ in
286 sysdig's jargon, in Lua and sysdig executes them while the system is
287 being traced or afterwards. sysdig's interface is the cmd:sysdig
288 command-line tool as well as the curses-based cmd:csysdig tool.
289 * https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
290 user space tracer which uses custom user scripts to produce plain text
291 traces. SystemTap converts the scripts to the C language, and then
292 compiles them as Linux kernel modules which are loaded to produce
293 trace data. SystemTap's primary user interface is the cmd:stap
294 command-line tool.
295
The main distinctive features of LTTng are that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead of the solutions mentioned above. It produces trace files in
the http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.
301
302 LTTng is the result of more than 10 years of active open source
303 development by a community of passionate developers.
304 LTTng{nbsp}{revision} is currently available on major desktop and server
305 Linux distributions.
306
307 The main interface for tracing control is a single command-line tool
308 named cmd:lttng. The latter can create several tracing sessions, enable
309 and disable events on the fly, filter events efficiently with custom
310 user expressions, start and stop tracing, and much more. LTTng can
311 record the traces on the file system or send them over the network, and
312 keep them totally or partially. You can view the traces once tracing
313 becomes inactive or in real-time.
314
315 <<installing-lttng,Install LTTng now>> and
316 <<getting-started,start tracing>>!
317
318
319 [[installing-lttng]]
320 == Installation
321
322 **LTTng** is a set of software <<plumbing,components>> which interact to
323 <<instrumenting,instrument>> the Linux kernel and user applications, and
324 to <<controlling-tracing,control tracing>> (start and stop
325 tracing, enable and disable event rules, and the rest). Those
326 components are bundled into the following packages:
327
328 * **LTTng-tools**: Libraries and command-line interface to
329 control tracing.
330 * **LTTng-modules**: Linux kernel modules to instrument and
331 trace the kernel.
332 * **LTTng-UST**: Libraries and Java/Python packages to instrument and
333 trace user applications.
334
335 Most distributions mark the LTTng-modules and LTTng-UST packages as
336 optional when installing LTTng-tools (which is always required). In the
337 following sections, we always provide the steps to install all three,
338 but note that:
339
340 * You only need to install LTTng-modules if you intend to trace the
341 Linux kernel.
342 * You only need to install LTTng-UST if you intend to trace user
343 applications.
344
345 [role="growable"]
346 .Availability of LTTng{nbsp}{revision} for major Linux distributions as of 22 January 2018.
347 |====
348 |Distribution |Available in releases |Alternatives
349
350 |https://www.ubuntu.com/[Ubuntu]
351 |<<ubuntu,Ubuntu{nbsp}17.04 _Zesty Zapus_ and Ubuntu{nbsp}17.10 _Artful Aardvark_>>.
352
353 Ubuntu{nbsp}14.04 _Trusty Tahr_ and Ubuntu{nbsp}16.04 _Xenial Xerus_:
354 <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
355 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
356 other Ubuntu releases.
357
358 |https://getfedora.org/[Fedora]
359 |<<fedora,Fedora{nbsp}26>>.
360 |link:/docs/v2.10#doc-fedora[LTTng{nbsp}2.10 for Fedora{nbsp}27].
361
362 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
363 other Fedora releases.
364
365 |https://www.debian.org/[Debian]
366 |<<debian,Debian "stretch" (stable)>>.
367 |link:/docs/v2.10#doc-debian[LTTng{nbsp}2.10 for Debian "buster" (testing)
368 and Debian "sid" (unstable)].
369
370
371 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
372 other Debian releases.
373
374 |https://www.archlinux.org/[Arch Linux]
375 |_Not available_
376 |link:/docs/v2.10#doc-arch-linux[LTTng{nbsp}2.10 for the current Arch Linux build].
377
378 <<building-from-source,Build LTTng{nbsp}{revision} from source>>.
379
380 |https://alpinelinux.org/[Alpine Linux]
381 |_Not available_
382 |link:/docs/v2.10#doc-alpine-linux[LTTng{nbsp}2.10 for Alpine Linux{nbsp}3.7
383 and Alpine Linux{nbsp}"edge"].
384
385 <<building-from-source,Build LTTng{nbsp}{revision} from source>>.
386
387 |https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
388 |See http://packages.efficios.com/[EfficiOS Enterprise Packages].
389 |
390
391 |https://buildroot.org/[Buildroot]
392 |<<"buildroot", "Buildroot{nbsp}2017.02, Buildroot{nbsp}2017.05, Buildroot{nbsp}2017.08, and Buildroot{nbsp}2017.11">>.
393 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
394 other Buildroot releases.
395
396 |http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
397 https://www.yoctoproject.org/[Yocto]
398 |<<oe-yocto,Yocto Project{nbsp}2.3 _Pyro_ and Yocto Project{nbsp}2.4 _Rocko_>>
399 (`openembedded-core` layer).
400 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
401 other Yocto/OpenEmbedded releases.
402 |====
403
404
405 [[ubuntu]]
406 === [[ubuntu-official-repositories]]Ubuntu
407
408 LTTng{nbsp}{revision} is available on Ubuntu{nbsp}17.04 _Zesty Zapus_
409 and Ubuntu{nbsp}17.10 _Artful Aardvark_. For previous releases of
410 Ubuntu, <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
411
To install LTTng{nbsp}{revision} on Ubuntu{nbsp}17.04 _Zesty Zapus_ or
Ubuntu{nbsp}17.10 _Artful Aardvark_:
413
414 . Install the main LTTng{nbsp}{revision} packages:
415 +
416 --
417 [role="term"]
418 ----
419 # apt-get install lttng-tools
420 # apt-get install lttng-modules-dkms
421 # apt-get install liblttng-ust-dev
422 ----
423 --
424
425 . **If you need to instrument and trace
426 <<java-application,Java applications>>**, install the LTTng-UST
427 Java agent:
428 +
429 --
430 [role="term"]
431 ----
432 # apt-get install liblttng-ust-agent-java
433 ----
434 --
435
436 . **If you need to instrument and trace
437 <<python-application,Python{nbsp}3 applications>>**, install the
438 LTTng-UST Python agent:
439 +
440 --
441 [role="term"]
442 ----
443 # apt-get install python3-lttngust
444 ----
445 --
446
447
448 [[ubuntu-ppa]]
449 ==== noch:{LTTng} Stable {revision} PPA
450
451 The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
452 Stable{nbsp}{revision} PPA] offers the latest stable
453 LTTng{nbsp}{revision} packages for:
454
455 * Ubuntu{nbsp}14.04 _Trusty Tahr_
456 * Ubuntu{nbsp}16.04 _Xenial Xerus_
457
458 To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:
459
460 . Add the LTTng Stable{nbsp}{revision} PPA repository and update the
461 list of packages:
462 +
463 --
464 [role="term"]
465 ----
466 # apt-add-repository ppa:lttng/stable-2.9
467 # apt-get update
468 ----
469 --
470
471 . Install the main LTTng{nbsp}{revision} packages:
472 +
473 --
474 [role="term"]
475 ----
476 # apt-get install lttng-tools
477 # apt-get install lttng-modules-dkms
478 # apt-get install liblttng-ust-dev
479 ----
480 --
481
482 . **If you need to instrument and trace
483 <<java-application,Java applications>>**, install the LTTng-UST
484 Java agent:
485 +
486 --
487 [role="term"]
488 ----
489 # apt-get install liblttng-ust-agent-java
490 ----
491 --
492
493 . **If you need to instrument and trace
494 <<python-application,Python{nbsp}3 applications>>**, install the
495 LTTng-UST Python agent:
496 +
497 --
498 [role="term"]
499 ----
500 # apt-get install python3-lttngust
501 ----
502 --
503
504
505 [[fedora]]
506 === Fedora
507
508 To install LTTng{nbsp}{revision} on Fedora{nbsp}26:
509
510 . Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
511 packages:
512 +
513 --
514 [role="term"]
515 ----
516 # yum install lttng-tools
517 # yum install lttng-ust
518 ----
519 --
520
521 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
522 +
523 --
524 [role="term"]
525 ----
526 $ cd $(mktemp -d) &&
527 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
528 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
529 cd lttng-modules-2.9.* &&
530 make &&
531 sudo make modules_install &&
532 sudo depmod -a
533 ----
534 --
535
536 [IMPORTANT]
537 .Java and Python application instrumentation and tracing
538 ====
539 If you need to instrument and trace <<java-application,Java
540 applications>> on Fedora, you need to build and install
541 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
542 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
543 `--enable-java-agent-all` options to the `configure` script, depending
544 on which Java logging framework you use.
545
546 If you need to instrument and trace <<python-application,Python
547 applications>> on Fedora, you need to build and install
548 LTTng-UST{nbsp}{revision} from source and pass the
549 `--enable-python-agent` option to the `configure` script.
550 ====
551
552
553 [[debian]]
554 === Debian
555
556 To install LTTng{nbsp}{revision} on Debian "stretch" (stable):
557
558 . Install the main LTTng{nbsp}{revision} packages:
559 +
560 --
561 [role="term"]
562 ----
563 # apt-get install lttng-modules-dkms
564 # apt-get install liblttng-ust-dev
565 # apt-get install lttng-tools
566 ----
567 --
568
569 . **If you need to instrument and trace <<java-application,Java
570 applications>>**, install the LTTng-UST Java agent:
571 +
572 --
573 [role="term"]
574 ----
575 # apt-get install liblttng-ust-agent-java
576 ----
577 --
578
579 . **If you need to instrument and trace <<python-application,Python
580 applications>>**, install the LTTng-UST Python agent:
581 +
582 --
583 [role="term"]
584 ----
585 # apt-get install python3-lttngust
586 ----
587 --
588
589
590 [[enterprise-distributions]]
591 === RHEL, SUSE, and other enterprise distributions
592
593 To install LTTng on enterprise Linux distributions, such as Red Hat
594 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
595 see http://packages.efficios.com/[EfficiOS Enterprise Packages].
596
597
598 [[buildroot]]
599 === Buildroot
600
601 To install LTTng{nbsp}{revision} on Buildroot{nbsp}2017.02,
602 Buildroot{nbsp}2017.05, Buildroot{nbsp}2017.08, or
603 Buildroot{nbsp}2017.11:
604
605 . Launch the Buildroot configuration tool:
606 +
607 --
608 [role="term"]
609 ----
610 $ make menuconfig
611 ----
612 --
613
614 . In **Kernel**, check **Linux kernel**.
615 . In **Toolchain**, check **Enable WCHAR support**.
616 . In **Target packages**{nbsp}&#8594; **Debugging, profiling and benchmark**,
617 check **lttng-modules** and **lttng-tools**.
618 . In **Target packages**{nbsp}&#8594; **Libraries**{nbsp}&#8594;
619 **Other**, check **lttng-libust**.
620
621
622 [[oe-yocto]]
623 === OpenEmbedded and Yocto
624
625 LTTng{nbsp}{revision} recipes are available in the
626 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
627 layer for Yocto Project{nbsp}2.3 _Pyro_ and Yocto Project{nbsp}2.4 _Rocko_
628 under the following names:
629
630 * `lttng-tools`
631 * `lttng-modules`
632 * `lttng-ust`
633
634 With BitBake, the simplest way to include LTTng recipes in your target
635 image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:
636
637 ----
638 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
639 ----
640
641 If you use Hob:
642
643 . Select a machine and an image recipe.
644 . Click **Edit image recipe**.
645 . Under the **All recipes** tab, search for **lttng**.
646 . Check the desired LTTng recipes.
647
648 [IMPORTANT]
649 .Java and Python application instrumentation and tracing
650 ====
651 If you need to instrument and trace <<java-application,Java
652 applications>> on Yocto/OpenEmbedded, you need to build and install
653 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
654 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
655 `--enable-java-agent-all` options to the `configure` script, depending
656 on which Java logging framework you use.
657
658 If you need to instrument and trace <<python-application,Python
659 applications>> on Yocto/OpenEmbedded, you need to build and install
660 LTTng-UST{nbsp}{revision} from source and pass the
661 `--enable-python-agent` option to the `configure` script.
662 ====
663
664
665 [[building-from-source]]
666 === Build from source
667
668 To build and install LTTng{nbsp}{revision} from source:
669
670 . Using your distribution's package manager, or from source, install
671 the following dependencies of LTTng-tools and LTTng-UST:
672 +
673 --
674 * https://sourceforge.net/projects/libuuid/[libuuid]
675 * http://directory.fsf.org/wiki/Popt[popt]
676 * http://liburcu.org/[Userspace RCU]
677 * http://www.xmlsoft.org/[libxml2]
678 --
679
680 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
681 +
682 --
683 [role="term"]
684 ----
685 $ cd $(mktemp -d) &&
686 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
687 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
688 cd lttng-modules-2.9.* &&
689 make &&
690 sudo make modules_install &&
691 sudo depmod -a
692 ----
693 --
694
695 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
696 +
697 --
698 [role="term"]
699 ----
700 $ cd $(mktemp -d) &&
701 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
702 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
703 cd lttng-ust-2.9.* &&
704 ./configure &&
705 make &&
706 sudo make install &&
707 sudo ldconfig
708 ----
709 --
710 +
711 --
712 [IMPORTANT]
713 .Java and Python application tracing
714 ====
715 If you need to instrument and trace <<java-application,Java
716 applications>>, pass the `--enable-java-agent-jul`,
717 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
718 `configure` script, depending on which Java logging framework you use.
719
720 If you need to instrument and trace <<python-application,Python
721 applications>>, pass the `--enable-python-agent` option to the
722 `configure` script. You can set the `PYTHON` environment variable to the
723 path to the Python interpreter for which to install the LTTng-UST Python
724 agent package.
725 ====
726 --
727 +
728 --
729 [NOTE]
730 ====
731 By default, LTTng-UST libraries are installed to
732 dir:{/usr/local/lib}, which is the de facto directory in which to
733 keep self-compiled and third-party libraries.
734
735 When <<building-tracepoint-providers-and-user-application,linking an
736 instrumented user application with `liblttng-ust`>>:
737
738 * Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
739 variable.
740 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
741 man:gcc(1), man:g++(1), or man:clang(1).
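
For example, assuming hypothetical object files path:{app.o} and
path:{tp.o}, a link command using the compiler options above could look
like this:

[role="term"]
----
$ gcc -o app app.o tp.o -llttng-ust -ldl -L/usr/local/lib -Wl,-rpath,/usr/local/lib
----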
742 ====
743 --
744
745 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
746 +
747 --
748 [role="term"]
749 ----
750 $ cd $(mktemp -d) &&
751 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
752 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
753 cd lttng-tools-2.9.* &&
754 ./configure &&
755 make &&
756 sudo make install &&
757 sudo ldconfig
758 ----
759 --
760
761 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
762 previous steps automatically for a given version of LTTng and confine
763 the installed files in a specific directory. This can be useful to test
764 LTTng without installing it on your system.
765
766
767 [[getting-started]]
768 == Quick start
769
770 This is a short guide to get started quickly with LTTng kernel and user
771 space tracing.
772
773 Before you follow this guide, make sure to <<installing-lttng,install>>
774 LTTng.
775
776 This tutorial walks you through the steps to:
777
778 . <<tracing-the-linux-kernel,Trace the Linux kernel>>.
779 . <<tracing-your-own-user-application,Trace a user application>> written
780 in C.
781 . <<viewing-and-analyzing-your-traces,View and analyze the
782 recorded events>>.
783
784
785 [[tracing-the-linux-kernel]]
786 === Trace the Linux kernel
787
788 The following command lines start with the `#` prompt because you need
789 root privileges to trace the Linux kernel. You can also trace the kernel
790 as a regular user if your Unix user is a member of the
791 <<tracing-group,tracing group>>.
792
793 . Create a <<tracing-session,tracing session>> which writes its traces
794 to dir:{/tmp/my-kernel-trace}:
795 +
796 --
797 [role="term"]
798 ----
799 # lttng create my-kernel-session --output=/tmp/my-kernel-trace
800 ----
801 --
802
803 . List the available kernel tracepoints and system calls:
804 +
805 --
806 [role="term"]
807 ----
808 # lttng list --kernel
809 # lttng list --kernel --syscall
810 ----
811 --
812
813 . Create <<event,event rules>> which match the desired instrumentation
814 point names, for example the `sched_switch` and `sched_process_fork`
815 tracepoints, and the man:open(2) and man:close(2) system calls:
816 +
817 --
818 [role="term"]
819 ----
820 # lttng enable-event --kernel sched_switch,sched_process_fork
821 # lttng enable-event --kernel --syscall open,close
822 ----
823 --
824 +
825 You can also create an event rule which matches _all_ the Linux kernel
826 tracepoints (this will generate a lot of data when tracing):
827 +
828 --
829 [role="term"]
830 ----
831 # lttng enable-event --kernel --all
832 ----
833 --
834
835 . <<basic-tracing-session-control,Start tracing>>:
836 +
837 --
838 [role="term"]
839 ----
840 # lttng start
841 ----
842 --
843
. Perform some operations on your system for a few seconds. For example,
845 load a website, or list the files of a directory.
846 . <<basic-tracing-session-control,Stop tracing>> and destroy the
847 tracing session:
848 +
849 --
850 [role="term"]
851 ----
852 # lttng stop
853 # lttng destroy
854 ----
855 --
856 +
857 The man:lttng-destroy(1) command does not destroy the trace data; it
858 only destroys the state of the tracing session.
859
860 . For the sake of this example, make the recorded trace accessible to
861 the non-root users:
862 +
863 --
864 [role="term"]
865 ----
866 # chown -R $(whoami) /tmp/my-kernel-trace
867 ----
868 --
869
870 See <<viewing-and-analyzing-your-traces,View and analyze the
871 recorded events>> to view the recorded events.
872
873
874 [[tracing-your-own-user-application]]
875 === Trace a user application
876
877 This section steps you through a simple example to trace a
878 _Hello world_ program written in C.
879
880 To create the traceable user application:
881
882 . Create the tracepoint provider header file, which defines the
883 tracepoints and the events they can generate:
884 +
885 --
886 [source,c]
887 .path:{hello-tp.h}
888 ----
889 #undef TRACEPOINT_PROVIDER
890 #define TRACEPOINT_PROVIDER hello_world
891
892 #undef TRACEPOINT_INCLUDE
893 #define TRACEPOINT_INCLUDE "./hello-tp.h"
894
895 #if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
896 #define _HELLO_TP_H
897
898 #include <lttng/tracepoint.h>
899
900 TRACEPOINT_EVENT(
901 hello_world,
902 my_first_tracepoint,
903 TP_ARGS(
904 int, my_integer_arg,
905 char*, my_string_arg
906 ),
907 TP_FIELDS(
908 ctf_string(my_string_field, my_string_arg)
909 ctf_integer(int, my_integer_field, my_integer_arg)
910 )
911 )
912
913 #endif /* _HELLO_TP_H */
914
915 #include <lttng/tracepoint-event.h>
916 ----
917 --
918
919 . Create the tracepoint provider package source file:
920 +
921 --
922 [source,c]
923 .path:{hello-tp.c}
924 ----
925 #define TRACEPOINT_CREATE_PROBES
926 #define TRACEPOINT_DEFINE
927
928 #include "hello-tp.h"
929 ----
930 --
931
932 . Build the tracepoint provider package:
933 +
934 --
935 [role="term"]
936 ----
937 $ gcc -c -I. hello-tp.c
938 ----
939 --
940
941 . Create the _Hello World_ application source file:
942 +
943 --
944 [source,c]
945 .path:{hello.c}
946 ----
947 #include <stdio.h>
948 #include "hello-tp.h"
949
950 int main(int argc, char *argv[])
951 {
952 int x;
953
954 puts("Hello, World!\nPress Enter to continue...");
955
956 /*
957 * The following getchar() call is only placed here for the purpose
958 * of this demonstration, to pause the application in order for
959 * you to have time to list its tracepoints. It is not
960 * needed otherwise.
961 */
962 getchar();
963
964 /*
965 * A tracepoint() call.
966 *
967 * Arguments, as defined in hello-tp.h:
968 *
969 * 1. Tracepoint provider name (required)
970 * 2. Tracepoint name (required)
971 * 3. my_integer_arg (first user-defined argument)
972 * 4. my_string_arg (second user-defined argument)
973 *
974 * Notice the tracepoint provider and tracepoint names are
975 * NOT strings: they are in fact parts of variables that the
976 * macros in hello-tp.h create.
977 */
978 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
979
980 for (x = 0; x < argc; ++x) {
981 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
982 }
983
984 puts("Quitting now!");
985 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
986
987 return 0;
988 }
989 ----
990 --
991
992 . Build the application:
993 +
994 --
995 [role="term"]
996 ----
997 $ gcc -c hello.c
998 ----
999 --
1000
1001 . Link the application with the tracepoint provider package,
1002 `liblttng-ust`, and `libdl`:
1003 +
1004 --
1005 [role="term"]
1006 ----
1007 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1008 ----
1009 --
1010
1011 Here's the whole build process:
1012
1013 [role="img-100"]
1014 .User space tracing tutorial's build steps.
1015 image::ust-flow.png[]
1016
1017 To trace the user application:
1018
1019 . Run the application with a few arguments:
1020 +
1021 --
1022 [role="term"]
1023 ----
1024 $ ./hello world and beyond
1025 ----
1026 --
1027 +
1028 You see:
1029 +
1030 --
1031 ----
1032 Hello, World!
1033 Press Enter to continue...
1034 ----
1035 --
1036
1037 . Start an LTTng <<lttng-sessiond,session daemon>>:
1038 +
1039 --
1040 [role="term"]
1041 ----
1042 $ lttng-sessiond --daemonize
1043 ----
1044 --
1045 +
1046 Note that a session daemon might already be running, for example as
1047 a service that the distribution's service manager started.
1048
1049 . List the available user space tracepoints:
1050 +
1051 --
1052 [role="term"]
1053 ----
1054 $ lttng list --userspace
1055 ----
1056 --
1057 +
1058 You see the `hello_world:my_first_tracepoint` tracepoint listed
1059 under the `./hello` process.
1060
1061 . Create a <<tracing-session,tracing session>>:
1062 +
1063 --
1064 [role="term"]
1065 ----
1066 $ lttng create my-user-space-session
1067 ----
1068 --
1069
1070 . Create an <<event,event rule>> which matches the
1071 `hello_world:my_first_tracepoint` event name:
1072 +
1073 --
1074 [role="term"]
1075 ----
1076 $ lttng enable-event --userspace hello_world:my_first_tracepoint
1077 ----
1078 --
1079
1080 . <<basic-tracing-session-control,Start tracing>>:
1081 +
1082 --
1083 [role="term"]
1084 ----
1085 $ lttng start
1086 ----
1087 --
1088
1089 . Go back to the running `hello` application and press Enter. The
1090 program executes all `tracepoint()` instrumentation points and exits.
1091 . <<basic-tracing-session-control,Stop tracing>> and destroy the
1092 tracing session:
1093 +
1094 --
1095 [role="term"]
1096 ----
1097 $ lttng stop
1098 $ lttng destroy
1099 ----
1100 --
1101 +
1102 The man:lttng-destroy(1) command does not destroy the trace data; it
1103 only destroys the state of the tracing session.
1104
1105 By default, LTTng saves the traces in
1106 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
1107 where +__name__+ is the tracing session name. The
1108 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
1109
1110 See <<viewing-and-analyzing-your-traces,View and analyze the
1111 recorded events>> to view the recorded events.
1112
1113
1114 [[viewing-and-analyzing-your-traces]]
1115 === View and analyze the recorded events
1116
1117 Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
1118 kernel>> and <<tracing-your-own-user-application,Trace a user
1119 application>> tutorials, you can inspect the recorded events.
1120
1121 Many tools are available to read LTTng traces:
1122
1123 * **cmd:babeltrace** is a command-line utility which converts trace
1124 formats; it supports the format that LTTng produces, CTF, as well as a
1125 basic text output which can be ++grep++ed. The cmd:babeltrace command
1126 is part of the http://diamon.org/babeltrace[Babeltrace] project.
1127 * Babeltrace also includes
1128 **https://www.python.org/[Python] bindings** so
1129 that you can easily open and read an LTTng trace with your own script,
1130 benefiting from the power of Python.
1131 * http://tracecompass.org/[**Trace Compass**]
1132 is a graphical user interface for viewing and analyzing any type of
1133 logs or traces, including LTTng's.
1134 * https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
1135 project which includes many high-level analyses of LTTng kernel
1136 traces, like scheduling statistics, interrupt frequency distribution,
1137 top CPU usage, and more.
1138
1139 NOTE: This section assumes that the traces recorded during the previous
1140 tutorials were saved to their default location, in the
1141 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
1142 environment variable defaults to `$HOME` if not set.
1143
1144
1145 [[viewing-and-analyzing-your-traces-bt]]
1146 ==== Use the cmd:babeltrace command-line tool
1147
1148 The simplest way to list all the recorded events of a trace is to pass
1149 its path to cmd:babeltrace with no options:
1150
1151 [role="term"]
1152 ----
1153 $ babeltrace ~/lttng-traces/my-user-space-session*
1154 ----
1155
1156 cmd:babeltrace finds all traces recursively within the given path and
1157 prints all their events, merging them in chronological order.
1158
1159 You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
1160 further filtering:
1161
1162 [role="term"]
1163 ----
1164 $ babeltrace /tmp/my-kernel-trace | grep _switch
1165 ----
1166
1167 You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
1168 count the recorded events:
1169
1170 [role="term"]
1171 ----
1172 $ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
1173 ----
1174
1175
1176 [[viewing-and-analyzing-your-traces-bt-python]]
1177 ==== Use the Babeltrace Python bindings
1178
1179 The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
1180 is useful to isolate events by simple matching using man:grep(1) and
1181 similar utilities. However, more elaborate filters, such as keeping only
1182 event records with a field value falling within a specific range, are
1183 not trivial to write using a shell. Moreover, reductions and even the
1184 most basic computations involving multiple event records are virtually
1185 impossible to implement.
1186
1187 Fortunately, Babeltrace ships with Python 3 bindings which makes it easy
1188 to read the event records of an LTTng trace sequentially and compute the
1189 desired information.
1190
1191 The following script accepts an LTTng Linux kernel trace path as its
1192 first argument and prints the short names of the top 5 running processes
1193 on CPU 0 during the whole trace:
1194
1195 [source,python]
1196 .path:{top5proc.py}
1197 ----
1198 from collections import Counter
1199 import babeltrace
1200 import sys
1201
1202
1203 def top5proc():
1204 if len(sys.argv) != 2:
1205 msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
1206 print(msg, file=sys.stderr)
1207 return False
1208
1209 # A trace collection contains one or more traces
1210 col = babeltrace.TraceCollection()
1211
1212 # Add the trace provided by the user (LTTng traces always have
1213 # the 'ctf' format)
1214 if col.add_trace(sys.argv[1], 'ctf') is None:
1215 raise RuntimeError('Cannot add trace')
1216
1217 # This counter dict contains execution times:
1218 #
1219 # task command name -> total execution time (ns)
1220 exec_times = Counter()
1221
1222 # This contains the last `sched_switch` timestamp
1223 last_ts = None
1224
1225 # Iterate on events
1226 for event in col.events:
1227 # Keep only `sched_switch` events
1228 if event.name != 'sched_switch':
1229 continue
1230
1231 # Keep only events which happened on CPU 0
1232 if event['cpu_id'] != 0:
1233 continue
1234
1235 # Event timestamp
1236 cur_ts = event.timestamp
1237
1238 if last_ts is None:
1239 # We start here
1240 last_ts = cur_ts
1241
1242 # Previous task command (short) name
1243 prev_comm = event['prev_comm']
1244
1245 # Initialize entry in our dict if not yet done
1246 if prev_comm not in exec_times:
1247 exec_times[prev_comm] = 0
1248
1249 # Compute previous command execution time
1250 diff = cur_ts - last_ts
1251
1252 # Update execution time of this command
1253 exec_times[prev_comm] += diff
1254
1255 # Update last timestamp
1256 last_ts = cur_ts
1257
1258 # Display top 5
1259 for name, ns in exec_times.most_common(5):
1260 s = ns / 1000000000
1261 print('{:20}{} s'.format(name, s))
1262
1263 return True
1264
1265
1266 if __name__ == '__main__':
1267 sys.exit(0 if top5proc() else 1)
1268 ----
1269
1270 Run this script:
1271
1272 [role="term"]
1273 ----
1274 $ python3 top5proc.py /tmp/my-kernel-trace/kernel
1275 ----
1276
1277 Output example:
1278
1279 ----
1280 swapper/0 48.607245889 s
1281 chromium 7.192738188 s
1282 pavucontrol 0.709894415 s
1283 Compositor 0.660867933 s
1284 Xorg.bin 0.616753786 s
1285 ----
1286
1287 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
1288 weren't using the CPU that much when tracing, its first position in the
1289 list makes sense.
1290
1291
1292 [[core-concepts]]
1293 == [[understanding-lttng]]Core concepts
1294
1295 From a user's perspective, the LTTng system is built on a few concepts,
1296 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1297 operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to
mastering the toolkit.
1300
1301 The core concepts are:
1302
1303 * <<tracing-session,Tracing session>>
1304 * <<domain,Tracing domain>>
1305 * <<channel,Channel and ring buffer>>
1306 * <<"event","Instrumentation point, event rule, event, and event record">>
1307
1308
1309 [[tracing-session]]
1310 === Tracing session
1311
1312 A _tracing session_ is a stateful dialogue between you and
1313 a <<lttng-sessiond,session daemon>>. You can
1314 <<creating-destroying-tracing-sessions,create a new tracing
1315 session>> with the `lttng create` command.
1316
1317 Anything that you do when you control LTTng tracers happens within a
1318 tracing session. In particular, a tracing session:
1319
1320 * Has its own name.
1321 * Has its own set of trace files.
1322 * Has its own state of activity (started or stopped).
1323 * Has its own <<tracing-session-mode,mode>> (local, network streaming,
1324 snapshot, or live).
1325 * Has its own <<channel,channels>> which have their own
1326 <<event,event rules>>.
1327
1328 [role="img-100"]
1329 .A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1330 image::concepts.png[]
1331
1332 Those attributes and objects are completely isolated between different
1333 tracing sessions.
1334
1335 A tracing session is analogous to a cash machine session:
1336 the operations you do on the banking system through the cash machine do
1337 not alter the data of other users of the same system. In the case of
1338 the cash machine, a session lasts as long as your bank card is inside.
1339 In the case of LTTng, a tracing session lasts from the `lttng create`
1340 command to the `lttng destroy` command.
1341
1342 [role="img-100"]
1343 .Each Unix user has its own set of tracing sessions.
1344 image::many-sessions.png[]
1345
1346
1347 [[tracing-session-mode]]
1348 ==== Tracing session mode
1349
1350 LTTng can send the generated trace data to different locations. The
1351 _tracing session mode_ dictates where to send it. The following modes
1352 are available in LTTng{nbsp}{revision}:
1353
1354 Local mode::
1355 LTTng writes the traces to the file system of the machine being traced
1356 (target system).
1357
1358 Network streaming mode::
1359 LTTng sends the traces over the network to a
1360 <<lttng-relayd,relay daemon>> running on a remote system.
1361
1362 Snapshot mode::
1363 LTTng does not write the traces by default. Instead, you can request
1364 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1365 current tracing buffers, and to write it to the target's file system
1366 or to send it over the network to a <<lttng-relayd,relay daemon>>
1367 running on a remote system.
1368
1369 Live mode::
1370 This mode is similar to the network streaming mode, but a live
1371 trace viewer can connect to the distant relay daemon to
1372 <<lttng-live,view event records as LTTng generates them>> by
1373 the tracers.
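
For example, assuming a hypothetical tracing session name and a
hypothetical relay daemon host, you could select each mode with one of
the following man:lttng-create(1) invocations:

[role="term"]
----
$ lttng create my-session --output=/tmp/my-session
$ lttng create my-session --set-url=net://remote-host
$ lttng create my-session --snapshot
$ lttng create my-session --live
----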
1374
1375
1376 [[domain]]
1377 === Tracing domain
1378
1379 A _tracing domain_ is a namespace for event sources. A tracing domain
1380 has its own properties and features.
1381
1382 There are currently five available tracing domains:
1383
1384 * Linux kernel
1385 * User space
1386 * `java.util.logging` (JUL)
1387 * log4j
1388 * Python
1389
1390 You must specify a tracing domain when using some commands to avoid
1391 ambiguity. For example, since all the domains support named tracepoints
1392 as event sources (instrumentation points that you manually insert in the
1393 source code), you need to specify a tracing domain when
1394 <<enabling-disabling-events,creating an event rule>> because all the
1395 tracing domains could have tracepoints with the same names.
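
For example, the following commands create event rules in the Linux
kernel, user space, and `java.util.logging` tracing domains (the user
space tracepoint and JUL logger names are hypothetical):

[role="term"]
----
$ lttng enable-event --kernel sched_switch
$ lttng enable-event --userspace my_provider:my_tracepoint
$ lttng enable-event --jul my_java_logger
----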
1396
1397 Some features are reserved to specific tracing domains. Dynamic function
1398 entry and return instrumentation points, for example, are currently only
1399 supported in the Linux kernel tracing domain, but support for other
1400 tracing domains could be added in the future.
1401
1402 You can create <<channel,channels>> in the Linux kernel and user space
1403 tracing domains. The other tracing domains have a single default
1404 channel.
1405
1406
1407 [[channel]]
1408 === Channel and ring buffer
1409
1410 A _channel_ is an object which is responsible for a set of ring buffers.
1411 Each ring buffer is divided into multiple sub-buffers. When an LTTng
1412 tracer emits an event, it can record it to one or more
1413 sub-buffers. The attributes of a channel determine what to do when
1414 there's no space left for a new event record because all sub-buffers
1415 are full, where to send a full sub-buffer, and other behaviours.
1416
1417 A channel is always associated to a <<domain,tracing domain>>. The
1418 `java.util.logging` (JUL), log4j, and Python tracing domains each have
1419 a default channel which you cannot configure.
1420
1421 A channel also owns <<event,event rules>>. When an LTTng tracer emits
1422 an event, it records it to the sub-buffers of all
1423 the enabled channels with a satisfied event rule, as long as those
1424 channels are part of active <<tracing-session,tracing sessions>>.
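
For example, assuming a hypothetical channel name and tracepoint name,
you could create a user space channel and attach an event rule to it
with the `--channel` option of man:lttng-enable-event(1):

[role="term"]
----
$ lttng enable-channel --userspace my-channel
$ lttng enable-event --userspace --channel=my-channel my_provider:my_tracepoint
----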
1425
1426
1427 [[channel-buffering-schemes]]
1428 ==== Per-user vs. per-process buffering schemes
1429
1430 A channel has at least one ring buffer _per CPU_. LTTng always
1431 records an event to the ring buffer associated to the CPU on which it
1432 occurred.
1433
1434 Two _buffering schemes_ are available when you
1435 <<enabling-disabling-channels,create a channel>> in the
1436 user space <<domain,tracing domain>>:
1437
1438 Per-user buffering::
1439 Allocate one set of ring buffers--one per CPU--shared by all the
1440 instrumented processes of each Unix user.
1441 +
1442 --
1443 [role="img-100"]
1444 .Per-user buffering scheme.
1445 image::per-user-buffering.png[]
1446 --
1447
1448 Per-process buffering::
1449 Allocate one set of ring buffers--one per CPU--for each
1450 instrumented process.
1451 +
1452 --
1453 [role="img-100"]
1454 .Per-process buffering scheme.
1455 image::per-process-buffering.png[]
1456 --
1457 +
1458 The per-process buffering scheme tends to consume more memory than the
1459 per-user option because systems generally have more instrumented
1460 processes than Unix users running instrumented processes. However, the
1461 per-process buffering scheme ensures that one process having a high
1462 event throughput won't fill all the shared sub-buffers of the same
1463 user, only its own.
1464
1465 The Linux kernel tracing domain has only one available buffering scheme
1466 which is to allocate a single set of ring buffers for the whole system.
1467 This scheme is similar to the per-user option, but with a single, global
1468 user "running" the kernel.
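
For example, assuming hypothetical channel names, you could select the
buffering scheme of a user space channel when you create it:

[role="term"]
----
$ lttng enable-channel --userspace --buffers-uid my-per-user-channel
$ lttng enable-channel --userspace --buffers-pid my-per-process-channel
----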
1469
1470
1471 [[channel-overwrite-mode-vs-discard-mode]]
1472 ==== Overwrite vs. discard event loss modes
1473
1474 When an event occurs, LTTng records it to a specific sub-buffer (yellow
1475 arc in the following animation) of a specific channel's ring buffer.
1476 When there's no space left in a sub-buffer, the tracer marks it as
1477 consumable (red) and another, empty sub-buffer starts receiving the
1478 following event records. A <<lttng-consumerd,consumer daemon>>
1479 eventually consumes the marked sub-buffer (returns to white).
1480
1481 [NOTE]
1482 [role="docsvg-channel-subbuf-anim"]
1483 ====
1484 {note-no-anim}
1485 ====
1486
1487 In an ideal world, sub-buffers are consumed faster than they are filled,
1488 as is the case in the previous animation. In the real world,
1489 however, all sub-buffers can be full at some point, leaving no space to
1490 record the following events.
1491
1492 By design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer is
1493 available, it is acceptable to lose event records when the alternative
1494 would be to cause substantial delays in the instrumented application's
1495 execution. LTTng privileges performance over integrity; it aims at
1496 perturbing the traced system as little as possible in order to make
1497 tracing of subtle race conditions and rare interrupt cascades possible.
1498
1499 When it comes to losing event records because no empty sub-buffer is
1500 available, the channel's _event loss mode_ determines what to do. The
1501 available event loss modes are:
1502
1503 Discard mode::
  Drop the newest event records until the tracer
1505 releases a sub-buffer.
1506
1507 Overwrite mode::
1508 Clear the sub-buffer containing the oldest event records and start
1509 writing the newest event records there.
1510 +
1511 This mode is sometimes called _flight recorder mode_ because it's
1512 similar to a
1513 https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1514 always keep a fixed amount of the latest data.
1515
1516 Which mechanism you should choose depends on your context: prioritize
1517 the newest or the oldest event records in the ring buffer?
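
For example, assuming a hypothetical channel name, you could select the
overwrite mode when you create a Linux kernel channel (the discard mode
is the default):

[role="term"]
----
$ lttng enable-channel --kernel --overwrite my-channel
----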
1518
1519 Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
as soon as there's no space left for a new event record, whereas in
1521 discard mode, the tracer only discards the event record that doesn't
1522 fit.
1523
1524 In discard mode, LTTng increments a count of lost event records when an
1525 event record is lost and saves this count to the trace. In overwrite
1526 mode, since LTTng 2.8, LTTng increments a count of lost sub-buffers when
1527 a sub-buffer is lost and saves this count to the trace. In this mode,
1528 the exact number of lost event records in those lost sub-buffers is not
1529 saved to the trace. Trace analyses can use the trace's saved discarded
1530 event record and sub-buffer counts to decide whether or not to perform
1531 the analyses even if trace data is known to be missing.
1532
1533 There are a few ways to decrease your probability of losing event
1534 records.
1535 <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
1537 virtually stop losing event records, though at the cost of greater
1538 memory usage.
1539
1540
1541 [[channel-subbuf-size-vs-subbuf-count]]
1542 ==== Sub-buffer count and size
1543
1544 When you <<enabling-disabling-channels,create a channel>>, you can
1545 set its number of sub-buffers and their size.
1546
1547 Note that there is noticeable CPU overhead introduced when
1548 switching sub-buffers (marking a full one as consumable and switching
1549 to an empty one for the following events to be recorded). Knowing this,
1550 the following list presents a few practical situations along with how
1551 to configure the sub-buffer count and size for them:
1552
1553 * **High event throughput**: In general, prefer bigger sub-buffers to
1554 lower the risk of losing event records.
1555 +
1556 Having bigger sub-buffers also ensures a lower
1557 <<channel-switch-timer,sub-buffer switching frequency>>.
1558 +
1559 The number of sub-buffers is only meaningful if you create the channel
1560 in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1561 other sub-buffers are left unaltered.
1562
1563 * **Low event throughput**: In general, prefer smaller sub-buffers
1564 since the risk of losing event records is low.
1565 +
1566 Because events occur less frequently, the sub-buffer switching frequency
1567 should remain low and thus the tracer's overhead should not be a
1568 problem.
1569
* **Low memory system**: If your target system has a low memory
  limit, prefer fewer sub-buffers first, then smaller ones.
1572 +
1573 Even if the system is limited in memory, you want to keep the
1574 sub-buffers as big as possible to avoid a high sub-buffer switching
1575 frequency.
1576
1577 Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1578 which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
1580 sub-buffer size of 1{nbsp}MiB is considered big.
1581
1582 The previous situations highlight the major trade-off between a few big
1583 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1584 frequency vs. how much data is lost in overwrite mode. Assuming a
1585 constant event throughput and using the overwrite mode, the two
1586 following configurations have the same ring buffer total size:
1587
1588 [NOTE]
1589 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1590 ====
1591 {note-no-anim}
1592 ====
1593
1594 * **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1595 switching frequency, but if a sub-buffer overwrite happens, half of
1596 the event records so far (4{nbsp}MiB) are definitely lost.
1597 * **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
1598 overhead as the previous configuration, but if a sub-buffer
overwrite happens, only an eighth of the event records so far are
definitely lost.
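
For example, assuming a hypothetical channel name, the following
command creates a Linux kernel channel with eight sub-buffers of
1{nbsp}MiB each:

[role="term"]
----
$ lttng enable-channel --kernel --num-subbuf=8 --subbuf-size=1M my-channel
----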
1601
In discard mode, the sub-buffer count parameter is pointless: use two
1603 sub-buffers and set their size according to the requirements of your
1604 situation.
1605
1606
1607 [[channel-switch-timer]]
1608 ==== Switch timer period
1609
1610 The _switch timer period_ is an important configurable attribute of
1611 a channel to ensure periodic sub-buffer flushing.
1612
1613 When the _switch timer_ expires, a sub-buffer switch happens. You can
1614 set the switch timer period attribute when you
1615 <<enabling-disabling-channels,create a channel>> to ensure that event
1616 data is consumed and committed to trace files or to a distant relay
1617 daemon periodically in case of a low event throughput.
1618
1619 [NOTE]
1620 [role="docsvg-channel-switch-timer"]
1621 ====
1622 {note-no-anim}
1623 ====
1624
1625 This attribute is also convenient when you use big sub-buffers to cope
1626 with a sporadic high event throughput, even if the throughput is
1627 normally low.
1628
1629
1630 [[channel-read-timer]]
1631 ==== Read timer period
1632
1633 By default, the LTTng tracers use a notification mechanism to signal a
1634 full sub-buffer so that a consumer daemon can consume it. When such
1635 notifications must be avoided, for example in real-time applications,
1636 you can use the channel's _read timer_ instead. When the read timer
1637 fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1638 consumable sub-buffers.
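
For example, assuming a hypothetical channel name and timer periods
expressed in microseconds, you could set the switch timer and read
timer periods of a user space channel when you create it:

[role="term"]
----
$ lttng enable-channel --userspace --switch-timer=2000000 --read-timer=500000 my-channel
----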
1639
1640
1641 [[tracefile-rotation]]
1642 ==== Trace file count and size
1643
1644 By default, trace files can grow as large as needed. You can set the
1645 maximum size of each trace file that a channel writes when you
1646 <<enabling-disabling-channels,create a channel>>. When the size of
1647 a trace file reaches the channel's fixed maximum size, LTTng creates
1648 another file to contain the next event records. LTTng appends a file
1649 count to each trace file name in this case.
1650
1651 If you set the trace file size attribute when you create a channel, the
1652 maximum number of trace files that LTTng creates is _unlimited_ by
1653 default. To limit them, you can also set a maximum number of trace
1654 files. When the number of trace files reaches the channel's fixed
1655 maximum count, the oldest trace file is overwritten. This mechanism is
1656 called _trace file rotation_.
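
As an illustrative example, the following command line creates a Linux
kernel channel which keeps at most 16 trace files of at most
64{nbsp}MiB each, overwriting the oldest one once the limit is reached
(the channel name and the values are arbitrary; see
man:lttng-enable-channel(1) for the accepted size formats):

[role="term"]
----
$ lttng enable-channel --kernel --tracefile-size=64M --tracefile-count=16 my-channel
----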
1657
1658
1659 [[event]]
1660 === Instrumentation point, event rule, event, and event record
1661
1662 An _event rule_ is a set of conditions which must **all** be satisfied
1663 for LTTng to record an occurring event.
1664
1665 You set the conditions when you <<enabling-disabling-events,create
1666 an event rule>>.
1667
1668 You always attach an event rule to a <<channel,channel>> when you
1669 create it.
1670
1671 When an event passes the conditions of an event rule, LTTng records it
1672 in one of the attached channel's sub-buffers.
1673
1674 The available conditions, as of LTTng{nbsp}{revision}, are:
1675
1676 * The event rule _is enabled_.
1677 * The instrumentation point's type _is{nbsp}T_.
1678 * The instrumentation point's name (sometimes called _event name_)
1679 _matches{nbsp}N_, but _is not{nbsp}E_.
1680 * The instrumentation point's log level _is as severe as{nbsp}L_, or
1681 _is exactly{nbsp}L_.
1682 * The fields of the event's payload _satisfy_ a filter
1683 expression{nbsp}__F__.
1684
1685 As you can see, all the conditions but the dynamic filter are related to
1686 the event rule's status or to the instrumentation point, not to the
1687 occurring events. This is why, without a filter, checking if an event
1688 passes an event rule is not a dynamic task: when you create or modify an
1689 event rule, all the tracers of its tracing domain enable or disable the
1690 instrumentation points themselves once. This is possible because the
1691 attributes of an instrumentation point (type, name, and log level) are
1692 defined statically. In other words, without a dynamic filter, the tracer
1693 _does not evaluate_ the arguments of an instrumentation point unless it
1694 matches an enabled event rule.
1695
1696 Note that, for LTTng to record an event, the <<channel,channel>> to
1697 which a matching event rule is attached must also be enabled, and the
1698 tracing session owning this channel must be active.
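
For example, the following hypothetical command line creates an event
rule in the user space tracing domain which combines several of those
conditions: the instrumentation point name must match `my_provider:*`,
its log level must be at least as severe as `TRACE_INFO`, and the
`len` payload field (a made-up field name here) must be greater
than{nbsp}1024. The event rule is attached to an assumed `my-channel`
channel; see man:lttng-enable-event(1) for the exact syntax:

[role="term"]
----
$ lttng enable-event --userspace 'my_provider:*' --channel=my-channel \
        --loglevel=TRACE_INFO --filter='len > 1024'
----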
1699
1700 [role="img-100"]
1701 .Logical path from an instrumentation point to an event record.
1702 image::event-rule.png[]
1703
1704 .Event, event record, or event rule?
1705 ****
1706 With so many similar terms, it's easy to get confused.
1707
1708 An **event** is the consequence of the execution of an _instrumentation
1709 point_, like a tracepoint that you manually place in some source code,
1710 or a Linux kernel KProbe. An event is said to _occur_ at a specific
1711 time. Different actions can be taken upon the occurrence of an event,
1712 like record the event's payload to a buffer.
1713
1714 An **event record** is the representation of an event in a sub-buffer. A
1715 tracer is responsible for capturing the payload of an event, current
1716 context variables, the event's ID, and the event's timestamp. LTTng
1717 can append this sub-buffer to a trace file.
1718
1719 An **event rule** is a set of conditions which must all be satisfied for
1720 LTTng to record an occurring event. Events still occur without
1721 satisfying event rules, but LTTng does not record them.
1722 ****
1723
1724
1725 [[plumbing]]
1726 == Components of noch:{LTTng}
1727
1728 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1729 to call LTTng a simple _tool_ since it is composed of multiple
1730 interacting components. This section describes those components,
1731 explains their respective roles, and shows how they connect together to
1732 form the LTTng ecosystem.
1733
1734 The following diagram shows how the most important components of LTTng
1735 interact with user applications, the Linux kernel, and you:
1736
1737 [role="img-100"]
1738 .Control and trace data paths between LTTng components.
1739 image::plumbing.png[]
1740
1741 The LTTng project incorporates:
1742
1743 * **LTTng-tools**: Libraries and command-line interface to
1744 control tracing sessions.
1745 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1746 ** <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1747 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1748 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1749 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1750 * **LTTng-UST**: Libraries and Java/Python packages to trace user
1751 applications.
1752 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1753 headers to instrument and trace any native user application.
1754 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1755 *** `liblttng-ust-libc-wrapper`
1756 *** `liblttng-ust-pthread-wrapper`
1757 *** `liblttng-ust-cyg-profile`
1758 *** `liblttng-ust-cyg-profile-fast`
1759 *** `liblttng-ust-dl`
1760 ** User space tracepoint provider source files generator command-line
1761 tool (man:lttng-gen-tp(1)).
1762 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1763 Java applications using `java.util.logging` or
1764 Apache log4j 1.2 logging.
1765 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1766 Python applications using the standard `logging` package.
1767 * **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1768 the kernel.
1769 ** LTTng kernel tracer module.
1770 ** Tracing ring buffer kernel modules.
1771 ** Probe kernel modules.
1772 ** LTTng logger kernel module.
1773
1774
1775 [[lttng-cli]]
1776 === Tracing control command-line interface
1777
1778 [role="img-100"]
1779 .The tracing control command-line interface.
1780 image::plumbing-lttng-cli.png[]
1781
1782 The _man:lttng(1) command-line tool_ is the standard user interface to
1783 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1784 is part of LTTng-tools.
1785
1786 The cmd:lttng tool is linked with
1787 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1788 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1789
1790 The cmd:lttng tool has a Git-like interface:
1791
1792 [role="term"]
1793 ----
1794 $ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1795 ----
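
For example, a minimal (and purely illustrative) kernel tracing session
could look like this, where `my-session` is an arbitrary session name,
`sched_switch` is an existing Linux kernel tracepoint, and the commands
run as root or as a member of the <<tracing-group,tracing group>>:

[role="term"]
----
$ lttng create my-session
$ lttng enable-event --kernel sched_switch
$ lttng start
$ lttng stop
$ lttng destroy
----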
1796
1797 The <<controlling-tracing,Tracing control>> section explores the
1798 available features of LTTng using the cmd:lttng tool.
1799
1800
1801 [[liblttng-ctl-lttng]]
1802 === Tracing control library
1803
1804 [role="img-100"]
1805 .The tracing control library.
1806 image::plumbing-liblttng-ctl.png[]
1807
1808 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1809 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1810 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1811
1812 The <<lttng-cli,cmd:lttng command-line tool>>
1813 is linked with `liblttng-ctl`.
1814
1815 You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1816 "master" header:
1817
1818 [source,c]
1819 ----
1820 #include <lttng/lttng.h>
1821 ----
1822
1823 Some objects are referenced by name (C string), such as tracing
1824 sessions, but most of them require you to create a handle first with
1825 `lttng_create_handle()`.
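
Here's a minimal, illustrative sketch which lists the existing tracing
sessions and then creates a handle for an assumed `my-session` tracing
session in the user space tracing domain (error handling is mostly
omitted; refer to the installed headers for the exact structure
members and return values):

[source,c]
----
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    struct lttng_domain domain;
    struct lttng_handle *handle;
    int count, i;

    /* Get the array of existing tracing sessions. */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        return EXIT_FAILURE;
    }

    for (i = 0; i < count; i++) {
        printf("Tracing session: %s\n", sessions[i].name);
    }

    free(sessions);

    /*
     * Create a handle for the `my-session` tracing session in the
     * user space tracing domain.
     */
    memset(&domain, 0, sizeof(domain));
    domain.type = LTTNG_DOMAIN_UST;
    handle = lttng_create_handle("my-session", &domain);

    /* ... pass `handle` to other liblttng-ctl functions here ... */

    lttng_destroy_handle(handle);
    return EXIT_SUCCESS;
}
----

To build such a program, link it with `liblttng-ctl`, for example:

[role="term"]
----
$ gcc -o ctl-example ctl-example.c -llttng-ctl
----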
1826
1827 The best available developer documentation for `liblttng-ctl` is, as of
1828 LTTng{nbsp}{revision}, its installed header files. Every function and
1829 structure is thoroughly documented.
1830
1831
1832 [[lttng-ust]]
1833 === User space tracing library
1834
1835 [role="img-100"]
1836 .The user space tracing library.
1837 image::plumbing-liblttng-ust.png[]
1838
1839 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1840 is the LTTng user space tracer. It receives commands from a
1841 <<lttng-sessiond,session daemon>>, for example to
1842 enable and disable specific instrumentation points, and writes event
1843 records to ring buffers shared with a
1844 <<lttng-consumerd,consumer daemon>>.
1845 `liblttng-ust` is part of LTTng-UST.
1846
1847 Public C header files are installed beside `liblttng-ust` to
1848 instrument any <<c-application,C or $$C++$$ application>>.
1849
1850 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1851 packages, use their own tracepoint provider library, which is
1852 linked with `liblttng-ust`.
1853
1854 An application or library does not have to initialize `liblttng-ust`
1855 manually: its constructor does the necessary tasks to properly register
1856 to a session daemon. The initialization phase also enables the
1857 instrumentation points matching the <<event,event rules>> that you
1858 already created.
1859
1860
1861 [[lttng-ust-agents]]
1862 === User space tracing agents
1863
1864 [role="img-100"]
1865 .The user space tracing agents.
1866 image::plumbing-lttng-ust-agents.png[]
1867
1868 The _LTTng-UST Java and Python agents_ are regular Java and Python
1869 packages which add LTTng tracing capabilities to the
1870 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1871
1872 In the case of Java, the
1873 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1874 core logging facilities] and
1875 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
1876 Note that Apache Log4j{nbsp}2 is not supported.
1877
1878 In the case of Python, the standard
1879 https://docs.python.org/3/library/logging.html[`logging`] package
1880 is supported. Both Python 2 and Python 3 modules can import the
1881 LTTng-UST Python agent package.
1882
1883 The applications using the LTTng-UST agents are in the
1884 `java.util.logging` (JUL),
1885 log4j, and Python <<domain,tracing domains>>.
1886
1887 Both agents use the same mechanism to trace the log statements. When an
1888 agent is initialized, it creates a log handler that attaches to the root
1889 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1890 When the application executes a log statement, it is passed to the
1891 agent's log handler by the root logger. The agent's log handler calls a
1892 native function in a tracepoint provider package shared library linked
1893 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1894 other fields, like its logger name and its log level. This native
1895 function contains a user space instrumentation point, hence tracing the
1896 log statement.
1897
1898 The log level condition of an
1899 <<event,event rule>> is considered when tracing
1900 a Java or a Python application, and it's compatible with the standard
1901 JUL, log4j, and Python log levels.
1902
1903
1904 [[lttng-modules]]
1905 === LTTng kernel modules
1906
1907 [role="img-100"]
1908 .The LTTng kernel modules.
1909 image::plumbing-lttng-modules.png[]
1910
1911 The _LTTng kernel modules_ are a set of Linux kernel modules
1912 which implement the kernel tracer of the LTTng project. The LTTng
1913 kernel modules are part of LTTng-modules.
1914
1915 The LTTng kernel modules include:
1916
1917 * A set of _probe_ modules.
1918 +
1919 Each module attaches to a specific subsystem
1920 of the Linux kernel using its tracepoint instrumentation points. There are
1921 also modules to attach to the entry and return points of the Linux
1922 system call functions.
1923
1924 * _Ring buffer_ modules.
1925 +
1926 A ring buffer implementation is provided as kernel modules. The LTTng
1927 kernel tracer writes to the ring buffer; a
1928 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1929
1930 * The _LTTng kernel tracer_ module.
1931 * The _LTTng logger_ module.
1932 +
1933 The LTTng logger module implements the special path:{/proc/lttng-logger}
1934 file so that any executable can generate LTTng events by opening and
1935 writing to this file.
1936 +
1937 See <<proc-lttng-logger-abi,LTTng logger>>.
1938
1939 Generally, you do not have to load the LTTng kernel modules manually
1940 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
1941 daemon>> loads the necessary modules when it starts. If you have extra
1942 probe modules, you can ask the session daemon to load them with its
1943 command-line options.
1944
1945 The LTTng kernel modules are installed in
1946 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1947 the kernel release (see `uname --kernel-release`).
1948
1949
1950 [[lttng-sessiond]]
1951 === Session daemon
1952
1953 [role="img-100"]
1954 .The session daemon.
1955 image::plumbing-sessiond.png[]
1956
1957 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
1958 managing tracing sessions and for controlling the various components of
1959 LTTng. The session daemon is part of LTTng-tools.
1960
1961 The session daemon sends control requests to and receives control
1962 responses from:
1963
1964 * The <<lttng-ust,user space tracing library>>.
1965 +
1966 Any instance of the user space tracing library first registers to
1967 a session daemon. Then, the session daemon can send requests to
1968 this instance, such as:
1969 +
1970 --
1971 ** Get the list of tracepoints.
1972 ** Share an <<event,event rule>> so that the user space tracing library
1973 can enable or disable tracepoints. Amongst the possible conditions
1974 of an event rule is a filter expression which `liblttng-ust` evaluates
1975 when an event occurs.
1976 ** Share <<channel,channel>> attributes and ring buffer locations.
1977 --
1978 +
1979 The session daemon and the user space tracing library use a Unix
1980 domain socket for their communication.
1981
1982 * The <<lttng-ust-agents,user space tracing agents>>.
1983 +
1984 Any instance of a user space tracing agent first registers to
1985 a session daemon. Then, the session daemon can send requests to
1986 this instance, such as:
1987 +
1988 --
1989 ** Get the list of loggers.
1990 ** Enable or disable a specific logger.
1991 --
1992 +
1993 The session daemon and the user space tracing agent use a TCP connection
1994 for their communication.
1995
1996 * The <<lttng-modules,LTTng kernel tracer>>.
1997 * The <<lttng-consumerd,consumer daemon>>.
1998 +
1999 The session daemon sends requests to the consumer daemon to instruct
2000 it where to send the trace data streams, amongst other information.
2001
2002 * The <<lttng-relayd,relay daemon>>.
2003
2004 The session daemon receives commands from the
2005 <<liblttng-ctl-lttng,tracing control library>>.
2006
2007 The root session daemon loads the appropriate
2008 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2009 a <<lttng-consumerd,consumer daemon>> as soon as you create
2010 an <<event,event rule>>.
2011
2012 The session daemon does not send and receive trace data: this is the
2013 role of the <<lttng-consumerd,consumer daemon>> and
2014 <<lttng-relayd,relay daemon>>. It does, however, generate the
2015 http://diamon.org/ctf/[CTF] metadata stream.
2016
2017 Each Unix user can have its own session daemon instance. The
2018 tracing sessions managed by different session daemons are completely
2019 independent.
2020
2021 The root user's session daemon is the only one which is
2022 allowed to control the LTTng kernel tracer, and its spawned consumer
2023 daemon is the only one which is allowed to consume trace data from the
2024 LTTng kernel tracer. Note, however, that any Unix user which is a member
2025 of the <<tracing-group,tracing group>> is allowed
2026 to create <<channel,channels>> in the
2027 Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
2028 kernel.
2029
2030 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
2031 session daemon when using its `create` command if none is currently
2032 running. You can also start the session daemon manually.
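
For example, you could start a session daemon for your Unix user in
the background like this (see man:lttng-sessiond(8) for the available
options):

[role="term"]
----
$ lttng-sessiond --daemonize
----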
2033
2034
2035 [[lttng-consumerd]]
2036 === Consumer daemon
2037
2038 [role="img-100"]
2039 .The consumer daemon.
2040 image::plumbing-consumerd.png[]
2041
2042 The _consumer daemon_, cmd:lttng-consumerd, is a daemon which shares
2043 ring buffers with user applications or with the LTTng kernel modules to
2044 collect trace data and send it to some location (on disk or to a
2045 <<lttng-relayd,relay daemon>> over the network). The consumer daemon
2046 is part of LTTng-tools.
2047
2048 You do not start a consumer daemon manually: a consumer daemon is always
2049 spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
2050 <<event,event rule>>, that is, before you start tracing. When you kill
2051 its owner session daemon, the consumer daemon also exits because it is
2052 the session daemon's child process. Command-line options of
2053 man:lttng-sessiond(8) target the consumer daemon process.
2054
2055 There are up to two running consumer daemons per Unix user, whereas only
2056 one session daemon can run per user. This is because each process can be
2057 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
2058 and 64-bit processes, it is more efficient to have separate
2059 corresponding 32-bit and 64-bit consumer daemons. The root user is an
2060 exception: it can have up to _three_ running consumer daemons: 32-bit
2061 and 64-bit instances for its user applications, and one more
2062 reserved for collecting kernel trace data.
2063
2064
2065 [[lttng-relayd]]
2066 === Relay daemon
2067
2068 [role="img-100"]
2069 .The relay daemon.
2070 image::plumbing-relayd.png[]
2071
2072 The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
2073 between remote session and consumer daemons, local trace files, and a
2074 remote live trace viewer. The relay daemon is part of LTTng-tools.
2075
2076 The main purpose of the relay daemon is to implement a receiver of
2077 <<sending-trace-data-over-the-network,trace data over the network>>.
2078 This is useful when the target system does not have much file system
2079 space to record trace files locally.
2080
2081 The relay daemon is also a server to which a
2082 <<lttng-live,live trace viewer>> can
2083 connect. The live trace viewer sends requests to the relay daemon to
2084 receive trace data as the target system emits events. The
2085 communication protocol is named _LTTng live_; it is used over TCP
2086 connections.
2087
2088 Note that you can start the relay daemon on the target system directly.
2089 This is the setup of choice when you want to view events as
2090 the target system emits them, without the need for a remote system.
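
For example, on the system which should receive the traces, you could
start a relay daemon which writes the trace data it receives under a
hypothetical path:{/path/to/traces} directory like this (see
man:lttng-relayd(8) for the available options):

[role="term"]
----
$ lttng-relayd --output=/path/to/traces
----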
2091
2092
2093 [[instrumenting]]
2094 == [[using-lttng]]Instrumentation
2095
2096 There are many examples of tracing and monitoring in our everyday life:
2097
2098 * You have access to real-time and historical weather reports and
2099 forecasts thanks to weather stations installed around the country.
2100 * You know your heart is safe thanks to an electrocardiogram.
2101 * You make sure not to drive your car too fast and to have enough fuel
2102 to reach your destination thanks to gauges visible on your dashboard.
2103
2104 All the previous examples have something in common: they rely on
2105 **instruments**. Without the electrodes attached to your skin,
2106 cardiac monitoring is futile.
2107
2108 LTTng, as a tracer, is no different from those real life examples. If
2109 you're about to trace a software system or, in other words, record its
2110 history of execution, you'd better have **instrumentation points** in the
2111 subject you're tracing, that is, the actual software.
2112
2113 Various ways were developed to instrument a piece of software for LTTng
2114 tracing. The most straightforward one is to manually place
2115 instrumentation points, called _tracepoints_, in the software's source
2116 code. It is also possible to add instrumentation points dynamically in
2117 the Linux kernel <<domain,tracing domain>>.
2118
2119 If you're only interested in tracing the Linux kernel, your
2120 instrumentation needs are probably already covered by LTTng's built-in
2121 <<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
2122 user application which is already instrumented for LTTng tracing.
2123 In such cases, you can skip this whole section and read the topics of
2124 the <<controlling-tracing,Tracing control>> section.
2125
2126 Many methods are available to instrument a piece of software for LTTng
2127 tracing. They are:
2128
2129 * <<c-application,User space instrumentation for C and $$C++$$
2130 applications>>.
2131 * <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
2132 * <<java-application,User space Java agent>>.
2133 * <<python-application,User space Python agent>>.
2134 * <<proc-lttng-logger-abi,LTTng logger>>.
2135 * <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
2136
2137
2138 [[c-application]]
2139 === [[cxx-application]]User space instrumentation for C and $$C++$$ applications
2140
2141 The procedure to instrument a C or $$C++$$ user application with
2142 the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
2143
2144 . <<tracepoint-provider,Create the source files of a tracepoint provider
2145 package>>.
2146 . <<probing-the-application-source-code,Add tracepoints to
2147 the application's source code>>.
2148 . <<building-tracepoint-providers-and-user-application,Build and link
2149 a tracepoint provider package and the user application>>.
2150
2151 If you need quick, man:printf(3)-like instrumentation, you can skip
2152 those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
2153 instead.
2154
2155 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
2156 instrument a user application with `liblttng-ust`.
2157
2158
2159 [[tracepoint-provider]]
2160 ==== Create the source files of a tracepoint provider package
2161
2162 A _tracepoint provider_ is a set of compiled functions which provide
2163 **tracepoints** to an application, the type of instrumentation point
2164 supported by LTTng-UST. Those functions can emit events with
2165 user-defined fields and serialize those events as event records to one
2166 or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
2167 macro, which you <<probing-the-application-source-code,insert in a user
2168 application's source code>>, calls those functions.
2169
2170 A _tracepoint provider package_ is an object file (`.o`) or a shared
2171 library (`.so`) which contains one or more tracepoint providers.
2172 Its source files are:
2173
2174 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
2175 * A <<tpp-source,tracepoint provider package source>> (`.c`).
2176
2177 A tracepoint provider package is dynamically linked with `liblttng-ust`,
2178 the LTTng user space tracer, at run time.
2179
2180 [role="img-100"]
2181 .User application linked with `liblttng-ust` and containing a tracepoint provider.
2182 image::ust-app.png[]
2183
2184 NOTE: If you need quick, man:printf(3)-like instrumentation, you can
2185 skip creating and using a tracepoint provider and use
2186 <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
2187
2188
2189 [[tpp-header]]
2190 ===== Create a tracepoint provider header file template
2191
2192 A _tracepoint provider header file_ contains the tracepoint
2193 definitions of a tracepoint provider.
2194
2195 To create a tracepoint provider header file:
2196
2197 . Start from this template:
2198 +
2199 --
2200 [source,c]
2201 .Tracepoint provider header file template (`.h` file extension).
2202 ----
2203 #undef TRACEPOINT_PROVIDER
2204 #define TRACEPOINT_PROVIDER provider_name
2205
2206 #undef TRACEPOINT_INCLUDE
2207 #define TRACEPOINT_INCLUDE "./tp.h"
2208
2209 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
2210 #define _TP_H
2211
2212 #include <lttng/tracepoint.h>
2213
2214 /*
2215 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
2216 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
2217 */
2218
2219 #endif /* _TP_H */
2220
2221 #include <lttng/tracepoint-event.h>
2222 ----
2223 --
2224
2225 . Replace:
2226 +
2227 * `provider_name` with the name of your tracepoint provider.
2228 * `"tp.h"` with the name of your tracepoint provider header file.
2229
2230 . Below the `#include <lttng/tracepoint.h>` line, put your
2231 <<defining-tracepoints,tracepoint definitions>>.
2232
2233 Your tracepoint provider name must be unique amongst all the possible
2234 tracepoint provider names used on the same target system. We suggest
2235 that you include the name of your project or company in the name,
2236 for example, `org_lttng_my_project_tpp`.
2237
2238 TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
2239 this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
2240 write are the <<defining-tracepoints,tracepoint definitions>>.
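
For example, assuming a template file named path:{my-provider.tp} which
only contains your tracepoint definitions, a command line like the
following could generate the corresponding header, source, and object
files (the file name is arbitrary; see man:lttng-gen-tp(1) for the
exact behaviour and options):

[role="term"]
----
$ lttng-gen-tp my-provider.tp
----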
2241
2242
2243 [[defining-tracepoints]]
2244 ===== Create a tracepoint definition
2245
2246 A _tracepoint definition_ defines, for a given tracepoint:
2247
2248 * Its **input arguments**. They are the macro parameters that the
2249 `tracepoint()` macro accepts for this particular tracepoint
2250 in the user application's source code.
2251 * Its **output event fields**. They are the sources of event fields
2252 that form the payload of any event that the execution of the
2253 `tracepoint()` macro emits for this particular tracepoint.
2254
2255 You can create a tracepoint definition by using the
2256 `TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2257 line in the
2258 <<tpp-header,tracepoint provider header file template>>.
2259
2260 The syntax of the `TRACEPOINT_EVENT()` macro is:
2261
2262 [source,c]
2263 .`TRACEPOINT_EVENT()` macro syntax.
2264 ----
2265 TRACEPOINT_EVENT(
2266 /* Tracepoint provider name */
2267 provider_name,
2268
2269 /* Tracepoint name */
2270 tracepoint_name,
2271
2272 /* Input arguments */
2273 TP_ARGS(
2274 arguments
2275 ),
2276
2277 /* Output event fields */
2278 TP_FIELDS(
2279 fields
2280 )
2281 )
2282 ----
2283
2284 Replace:
2285
2286 * `provider_name` with your tracepoint provider name.
2287 * `tracepoint_name` with your tracepoint name.
2288 * `arguments` with the <<tpp-def-input-args,input arguments>>.
2289 * `fields` with the <<tpp-def-output-fields,output event field>>
2290 definitions.
2291
2292 This tracepoint emits events named `provider_name:tracepoint_name`.
2293
2294 [IMPORTANT]
2295 .Event name's length limitation
2296 ====
2297 The concatenation of the tracepoint provider name and the
2298 tracepoint name must not exceed **254 characters**. If it does, the
2299 instrumented application compiles and runs, but LTTng throws multiple
2300 warnings and you could experience serious issues.
2301 ====
2302
2303 [[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
2304
2305 [source,c]
2306 .`TP_ARGS()` macro syntax.
2307 ----
2308 TP_ARGS(
2309 type, arg_name
2310 )
2311 ----
2312
2313 Replace:
2314
2315 * `type` with the C type of the argument.
2316 * `arg_name` with the argument name.
2317
2318 You can repeat `type` and `arg_name` up to 10 times to have
2319 more than one argument.
2320
2321 .`TP_ARGS()` usage with three arguments.
2322 ====
2323 [source,c]
2324 ----
2325 TP_ARGS(
2326 int, count,
2327 float, ratio,
2328 const char*, query
2329 )
2330 ----
2331 ====
2332
2333 The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2334 tracepoint definition with no input arguments.
2335
2336 [[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2337 `ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2338 man:lttng-ust(3) for a complete description of the available `ctf_*()`
2339 macros. A `ctf_*()` macro specifies the type, size, and byte order of
2340 one event field.
2341
2342 Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
2343 C expression that the tracer evaluates at the `tracepoint()` macro site
2344 in the application's source code. This expression provides a field's
2345 source of data. The argument expression can include input argument names
2346 listed in the `TP_ARGS()` macro.
2347
2348 Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2349 must be unique within a given tracepoint definition.
2350
2351 Here's a complete tracepoint definition example:
2352
2353 .Tracepoint definition.
2354 ====
2355 The following tracepoint definition defines a tracepoint which takes
2356 three input arguments and has four output event fields.
2357
2358 [source,c]
2359 ----
2360 #include "my-custom-structure.h"
2361
2362 TRACEPOINT_EVENT(
2363 my_provider,
2364 my_tracepoint,
2365 TP_ARGS(
2366 const struct my_custom_structure*, my_custom_structure,
2367 float, ratio,
2368 const char*, query
2369 ),
2370 TP_FIELDS(
2371 ctf_string(query_field, query)
2372 ctf_float(double, ratio_field, ratio)
2373 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2374 ctf_integer(int, send_size, my_custom_structure->send_size)
2375 )
2376 )
2377 ----
2378
2379 You can refer to this tracepoint definition with the `tracepoint()`
2380 macro in your application's source code like this:
2381
2382 [source,c]
2383 ----
2384 tracepoint(my_provider, my_tracepoint,
2385 my_structure, some_ratio, the_query);
2386 ----
2387 ====
2388
2389 NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2390 if they satisfy an enabled <<event,event rule>>.
2391
2392
2393 [[using-tracepoint-classes]]
2394 ===== Use a tracepoint class
2395
2396 A _tracepoint class_ is a class of tracepoints which share the same
2397 output event field definitions. A _tracepoint instance_ is one
2398 instance of such a defined tracepoint class, with its own tracepoint
2399 name.
2400
2401 The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2402 shorthand which defines both a tracepoint class and a tracepoint
2403 instance at the same time.
2404
2405 When you build a tracepoint provider package, the C or $$C++$$ compiler
2406 creates one serialization function for each **tracepoint class**. A
2407 serialization function is responsible for serializing the event fields
2408 of a tracepoint to a sub-buffer when tracing.
2409
2410 For various performance reasons, when your situation requires multiple
2411 tracepoint definitions with different names, but with the same event
2412 fields, we recommend that you manually create a tracepoint class
2413 and instantiate as many tracepoint instances as needed. One positive
2414 effect of such a design, amongst other advantages, is that all
2415 tracepoint instances of the same tracepoint class reuse the same
2416 serialization function, thus reducing
2417 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2418
2419 .Use a tracepoint class and tracepoint instances.
2420 ====
2421 Consider the following three tracepoint definitions:
2422
2423 [source,c]
2424 ----
2425 TRACEPOINT_EVENT(
2426 my_app,
2427 get_account,
2428 TP_ARGS(
2429 int, userid,
2430 size_t, len
2431 ),
2432 TP_FIELDS(
2433 ctf_integer(int, userid, userid)
2434 ctf_integer(size_t, len, len)
2435 )
2436 )
2437
2438 TRACEPOINT_EVENT(
2439 my_app,
2440 get_settings,
2441 TP_ARGS(
2442 int, userid,
2443 size_t, len
2444 ),
2445 TP_FIELDS(
2446 ctf_integer(int, userid, userid)
2447 ctf_integer(size_t, len, len)
2448 )
2449 )
2450
2451 TRACEPOINT_EVENT(
2452 my_app,
2453 get_transaction,
2454 TP_ARGS(
2455 int, userid,
2456 size_t, len
2457 ),
2458 TP_FIELDS(
2459 ctf_integer(int, userid, userid)
2460 ctf_integer(size_t, len, len)
2461 )
2462 )
2463 ----
2464
2465 In this case, we create three tracepoint classes, with one implicit
2466 tracepoint instance for each of them: `get_account`, `get_settings`, and
2467 `get_transaction`. However, they all share the same event field names
2468 and types. Hence three identical, yet independent serialization
2469 functions are created when you build the tracepoint provider package.
2470
2471 A better design choice is to define a single tracepoint class and three
2472 tracepoint instances:
2473
2474 [source,c]
2475 ----
2476 /* The tracepoint class */
2477 TRACEPOINT_EVENT_CLASS(
2478 /* Tracepoint provider name */
2479 my_app,
2480
2481 /* Tracepoint class name */
2482 my_class,
2483
2484 /* Input arguments */
2485 TP_ARGS(
2486 int, userid,
2487 size_t, len
2488 ),
2489
2490 /* Output event fields */
2491 TP_FIELDS(
2492 ctf_integer(int, userid, userid)
2493 ctf_integer(size_t, len, len)
2494 )
2495 )
2496
2497 /* The tracepoint instances */
2498 TRACEPOINT_EVENT_INSTANCE(
2499 /* Tracepoint provider name */
2500 my_app,
2501
2502 /* Tracepoint class name */
2503 my_class,
2504
2505 /* Tracepoint name */
2506 get_account,
2507
2508 /* Input arguments */
2509 TP_ARGS(
2510 int, userid,
2511 size_t, len
2512 )
2513 )
2514 TRACEPOINT_EVENT_INSTANCE(
2515 my_app,
2516 my_class,
2517 get_settings,
2518 TP_ARGS(
2519 int, userid,
2520 size_t, len
2521 )
2522 )
2523 TRACEPOINT_EVENT_INSTANCE(
2524 my_app,
2525 my_class,
2526 get_transaction,
2527 TP_ARGS(
2528 int, userid,
2529 size_t, len
2530 )
2531 )
2532 ----
2533 ====
2534
2535
2536 [[assigning-log-levels]]
2537 ===== Assign a log level to a tracepoint definition
2538
2539 You can assign an optional _log level_ to a
2540 <<defining-tracepoints,tracepoint definition>>.
2541
2542 Assigning different levels of severity to tracepoint definitions can
2543 be useful: when you <<enabling-disabling-events,create an event rule>>,
2544 you can target tracepoints having a log level as severe as a specific
2545 value.
2546
2547 The concept of LTTng-UST log levels is similar to the levels found
2548 in typical logging frameworks:
2549
2550 * In a logging framework, the log level is given by the function
2551 or method name you use at the log statement site: `debug()`,
2552 `info()`, `warn()`, `error()`, and so on.
2553 * In LTTng-UST, you statically assign the log level to a tracepoint
2554 definition; any `tracepoint()` macro invocation which refers to
2555 this definition has this log level.
2556
2557 You can assign a log level to a tracepoint definition with the
2558 `TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2559 <<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
2560 <<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a given
2561 tracepoint.
2562
2563 The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2564
2565 [source,c]
2566 .`TRACEPOINT_LOGLEVEL()` macro syntax.
2567 ----
2568 TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2569 ----
2570
2571 Replace:
2572
2573 * `provider_name` with the tracepoint provider name.
2574 * `tracepoint_name` with the tracepoint name.
2575 * `log_level` with the log level to assign to the tracepoint
2576 definition named `tracepoint_name` in the `provider_name`
2577 tracepoint provider.
2578 +
2579 See man:lttng-ust(3) for a list of available log level names.
2580
2581 .Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2582 ====
2583 [source,c]
2584 ----
2585 /* Tracepoint definition */
2586 TRACEPOINT_EVENT(
2587 my_app,
2588 get_transaction,
2589 TP_ARGS(
2590 int, userid,
2591 size_t, len
2592 ),
2593 TP_FIELDS(
2594 ctf_integer(int, userid, userid)
2595 ctf_integer(size_t, len, len)
2596 )
2597 )
2598
2599 /* Log level assignment */
2600 TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2601 ----
2602 ====
2603
2604
2605 [[tpp-source]]
2606 ===== Create a tracepoint provider package source file
2607
2608 A _tracepoint provider package source file_ is a C source file which
2609 includes a <<tpp-header,tracepoint provider header file>> to expand its
2610 macros into event serialization and other functions.
2611
2612 You can always use the following tracepoint provider package source
2613 file template:
2614
2615 [source,c]
2616 .Tracepoint provider package source file template.
2617 ----
2618 #define TRACEPOINT_CREATE_PROBES
2619
2620 #include "tp.h"
2621 ----
2622
2623 Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
2624 header file>>. You may also include more than one tracepoint
2625 provider header file here to create a tracepoint provider package
2626 holding more than one tracepoint provider.
2627
2628
2629 [[probing-the-application-source-code]]
2630 ==== Add tracepoints to an application's source code
2631
2632 Once you <<tpp-header,create a tracepoint provider header file>>, you
2633 can use the `tracepoint()` macro in your application's
2634 source code to insert the tracepoints that this header
2635 <<defining-tracepoints,defines>>.
2636
2637 The `tracepoint()` macro takes at least two parameters: the tracepoint
2638 provider name and the tracepoint name. The corresponding tracepoint
2639 definition defines the other parameters.
2640
2641 .`tracepoint()` usage.
2642 ====
2643 The following <<defining-tracepoints,tracepoint definition>> defines a
2644 tracepoint which takes two input arguments and has two output event
2645 fields.
2646
2647 [source,c]
2648 .Tracepoint provider header file.
2649 ----
2650 #include "my-custom-structure.h"
2651
2652 TRACEPOINT_EVENT(
2653 my_provider,
2654 my_tracepoint,
2655 TP_ARGS(
2656 int, argc,
2657 const char*, cmd_name
2658 ),
2659 TP_FIELDS(
2660 ctf_string(cmd_name, cmd_name)
2661 ctf_integer(int, number_of_args, argc)
2662 )
2663 )
2664 ----
2665
2666 You can refer to this tracepoint definition with the `tracepoint()`
2667 macro in your application's source code like this:
2668
2669 [source,c]
2670 .Application's source file.
2671 ----
2672 #include "tp.h"
2673
2674 int main(int argc, char* argv[])
2675 {
2676 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2677
2678 return 0;
2679 }
2680 ----
2681
2682 Note how the application's source code includes
2683 the tracepoint provider header file containing the tracepoint
2684 definitions to use, path:{tp.h}.
2685 ====
2686
2687 .`tracepoint()` usage with a complex tracepoint definition.
2688 ====
2689 Consider this complex tracepoint definition, where multiple event
2690 fields refer to the same input arguments in their argument expression
2691 parameter:
2692
2693 [source,c]
2694 .Tracepoint provider header file.
2695 ----
2696 /* For `struct stat` */
2697 #include <sys/types.h>
2698 #include <sys/stat.h>
2699 #include <unistd.h>
2700
2701 TRACEPOINT_EVENT(
2702 my_provider,
2703 my_tracepoint,
2704 TP_ARGS(
2705 int, my_int_arg,
2706 char*, my_str_arg,
2707 struct stat*, st
2708 ),
2709 TP_FIELDS(
2710 ctf_integer(int, my_constant_field, 23 + 17)
2711 ctf_integer(int, my_int_arg_field, my_int_arg)
2712 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2713 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2714 my_str_arg[2] + my_str_arg[3])
2715 ctf_string(my_str_arg_field, my_str_arg)
2716 ctf_integer_hex(off_t, size_field, st->st_size)
2717 ctf_float(double, size_dbl_field, (double) st->st_size)
2718 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2719 size_t, strlen(my_str_arg) / 2)
2720 )
2721 )
2722 ----
2723
2724 You can refer to this tracepoint definition with the `tracepoint()`
2725 macro in your application's source code like this:
2726
2727 [source,c]
2728 .Application's source file.
2729 ----
2730 #define TRACEPOINT_DEFINE
2731 #include "tp.h"
2732
2733 int main(void)
2734 {
2735 struct stat s;
2736
2737 stat("/etc/fstab", &s);
2738 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2739
2740 return 0;
2741 }
2742 ----
2743
2744 If you look at the event record that LTTng writes when tracing this
2745 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2746 it should look like this:
2747
2748 .Event record fields
2749 |====
2750 |Field's name |Field's value
2751 |`my_constant_field` |40
2752 |`my_int_arg_field` |23
2753 |`my_int_arg_field2` |529
2754 |`sum4_field` |389
2755 |`my_str_arg_field` |`Hello, World!`
2756 |`size_field` |0x12d
2757 |`size_dbl_field` |301.0
2758 |`half_my_str_arg_field` |`Hello,`
2759 |====
2760 ====
2761
2762 Sometimes, the arguments you pass to `tracepoint()` are expensive to
2763 compute--they use the call stack, for example. To avoid this
2764 computation when the tracepoint is disabled, you can use the
2765 `tracepoint_enabled()` and `do_tracepoint()` macros.
2766
2767 The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2768 is:
2769
2770 [source,c]
2771 .`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2772 ----
2773 tracepoint_enabled(provider_name, tracepoint_name)
2774 do_tracepoint(provider_name, tracepoint_name, ...)
2775 ----
2776
2777 Replace:
2778
2779 * `provider_name` with the tracepoint provider name.
2780 * `tracepoint_name` with the tracepoint name.
2781
2782 `tracepoint_enabled()` returns a non-zero value if the tracepoint named
2783 `tracepoint_name` from the provider named `provider_name` is enabled
2784 **at run time**.
2785
2786 `do_tracepoint()` is like `tracepoint()`, except that it doesn't check
2787 if the tracepoint is enabled. Using `tracepoint()` with
2788 `tracepoint_enabled()` is dangerous since `tracepoint()` also contains
2789 the `tracepoint_enabled()` check, thus a race condition is
2790 possible in this situation:
2791
2792 [source,c]
2793 .Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2794 ----
2795 if (tracepoint_enabled(my_provider, my_tracepoint)) {
2796 stuff = prepare_stuff();
2797 }
2798
2799 tracepoint(my_provider, my_tracepoint, stuff);
2800 ----
2801
2802 If the tracepoint is enabled after the condition, then `stuff` is not
2803 prepared: the emitted event either contains wrong data, or the whole
2804 application crashes (with a segmentation fault, for example).
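
To avoid both the race condition and the useless preparation when the
tracepoint is disabled, combine `tracepoint_enabled()` with
`do_tracepoint()` instead, as in this minimal sketch, where
`prepare_stuff()` stands for a hypothetical, expensive function:

[source,c]
.Recommended combination of `tracepoint_enabled()` and `do_tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Only prepare the expensive arguments when the tracepoint is enabled. */
    stuff = prepare_stuff();
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----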
2805
2806 NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` has an
2807 `STAP_PROBEV()` call. If you need it, you must emit
2808 this call yourself.
2809
2810
2811 [[building-tracepoint-providers-and-user-application]]
2812 ==== Build and link a tracepoint provider package and an application
2813
2814 Once you have one or more <<tpp-header,tracepoint provider header
2815 files>> and a <<tpp-source,tracepoint provider package source file>>,
2816 you can create the tracepoint provider package by compiling its source
2817 file. From here, multiple build and run scenarios are possible. The
2818 following table shows common application and library configurations
2819 along with the required command lines to achieve them.
2820
2821 In the following diagrams, we use the following file names:
2822
2823 `app`::
2824 Executable application.
2825
2826 `app.o`::
2827 Application's object file.
2828
2829 `tpp.o`::
2830 Tracepoint provider package object file.
2831
2832 `tpp.a`::
2833 Tracepoint provider package archive file.
2834
2835 `libtpp.so`::
2836 Tracepoint provider package shared object file.
2837
2838 `emon.o`::
2839 User library object file.
2840
2841 `libemon.so`::
2842 User library shared object file.
2843
2844 We use the following symbols in the diagrams of the table below:
2845
2846 [role="img-100"]
2847 .Symbols used in the build scenario diagrams.
2848 image::ust-sit-symbols.png[]
2849
2850 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2851 variable in the following instructions.
2852
2853 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2854 .Common tracepoint provider package scenarios.
2855 |====
2856 |Scenario |Instructions
2857
2858 |
2859 The instrumented application is statically linked with
2860 the tracepoint provider package object.
2861
2862 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2863
2864 |
2865 include::../common/ust-sit-step-tp-o.txt[]
2866
2867 To build the instrumented application:
2868
2869 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2870 +
2871 --
2872 [source,c]
2873 ----
2874 #define TRACEPOINT_DEFINE
2875 ----
2876 --
2877
2878 . Compile the application source file:
2879 +
2880 --
2881 [role="term"]
2882 ----
2883 $ gcc -c app.c
2884 ----
2885 --
2886
2887 . Build the application:
2888 +
2889 --
2890 [role="term"]
2891 ----
2892 $ gcc -o app app.o tpp.o -llttng-ust -ldl
2893 ----
2894 --
2895
2896 To run the instrumented application:
2897
2898 * Start the application:
2899 +
2900 --
2901 [role="term"]
2902 ----
2903 $ ./app
2904 ----
2905 --
2906
2907 |
2908 The instrumented application is statically linked with the
2909 tracepoint provider package archive file.
2910
2911 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2912
2913 |
2914 To create the tracepoint provider package archive file:
2915
2916 . Compile the <<tpp-source,tracepoint provider package source file>>:
2917 +
2918 --
2919 [role="term"]
2920 ----
2921 $ gcc -I. -c tpp.c
2922 ----
2923 --
2924
2925 . Create the tracepoint provider package archive file:
2926 +
2927 --
2928 [role="term"]
2929 ----
2930 $ ar rcs tpp.a tpp.o
2931 ----
2932 --
2933
2934 To build the instrumented application:
2935
2936 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2937 +
2938 --
2939 [source,c]
2940 ----
2941 #define TRACEPOINT_DEFINE
2942 ----
2943 --
2944
2945 . Compile the application source file:
2946 +
2947 --
2948 [role="term"]
2949 ----
2950 $ gcc -c app.c
2951 ----
2952 --
2953
2954 . Build the application:
2955 +
2956 --
2957 [role="term"]
2958 ----
2959 $ gcc -o app app.o tpp.a -llttng-ust -ldl
2960 ----
2961 --
2962
2963 To run the instrumented application:
2964
2965 * Start the application:
2966 +
2967 --
2968 [role="term"]
2969 ----
2970 $ ./app
2971 ----
2972 --
2973
2974 |
2975 The instrumented application is linked with the tracepoint provider
2976 package shared object.
2977
2978 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
2979
2980 |
2981 include::../common/ust-sit-step-tp-so.txt[]
2982
2983 To build the instrumented application:
2984
2985 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2986 +
2987 --
2988 [source,c]
2989 ----
2990 #define TRACEPOINT_DEFINE
2991 ----
2992 --
2993
2994 . Compile the application source file:
2995 +
2996 --
2997 [role="term"]
2998 ----
2999 $ gcc -c app.c
3000 ----
3001 --
3002
3003 . Build the application:
3004 +
3005 --
3006 [role="term"]
3007 ----
3008 $ gcc -o app app.o -ldl -L. -ltpp
3009 ----
3010 --
3011
3012 To run the instrumented application:
3013
3014 * Start the application:
3015 +
3016 --
3017 [role="term"]
3018 ----
3019 $ ./app
3020 ----
3021 --
3022
3023 |
3024 The tracepoint provider package shared object is preloaded before the
3025 instrumented application starts.
3026
3027 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3028
3029 |
3030 include::../common/ust-sit-step-tp-so.txt[]
3031
3032 To build the instrumented application:
3033
3034 . In path:{app.c}, before including path:{tpp.h}, add the
3035 following lines:
3036 +
3037 --
3038 [source,c]
3039 ----
3040 #define TRACEPOINT_DEFINE
3041 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3042 ----
3043 --
3044
3045 . Compile the application source file:
3046 +
3047 --
3048 [role="term"]
3049 ----
3050 $ gcc -c app.c
3051 ----
3052 --
3053
3054 . Build the application:
3055 +
3056 --
3057 [role="term"]
3058 ----
3059 $ gcc -o app app.o -ldl
3060 ----
3061 --
3062
3063 To run the instrumented application with tracing support:
3064
3065 * Preload the tracepoint provider package shared object and
3066 start the application:
3067 +
3068 --
3069 [role="term"]
3070 ----
3071 $ LD_PRELOAD=./libtpp.so ./app
3072 ----
3073 --
3074
3075 To run the instrumented application without tracing support:
3076
3077 * Start the application:
3078 +
3079 --
3080 [role="term"]
3081 ----
3082 $ ./app
3083 ----
3084 --
3085
3086 |
3087 The instrumented application dynamically loads the tracepoint provider
3088 package shared object.
3089
3090 See the <<dlclose-warning,warning about `dlclose()`>>.
3091
3092 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3093
3094 |
3095 include::../common/ust-sit-step-tp-so.txt[]
3096
3097 To build the instrumented application:
3098
3099 . In path:{app.c}, before including path:{tpp.h}, add the
3100 following lines:
3101 +
3102 --
3103 [source,c]
3104 ----
3105 #define TRACEPOINT_DEFINE
3106 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3107 ----
3108 --
3109
3110 . Compile the application source file:
3111 +
3112 --
3113 [role="term"]
3114 ----
3115 $ gcc -c app.c
3116 ----
3117 --
3118
3119 . Build the application:
3120 +
3121 --
3122 [role="term"]
3123 ----
3124 $ gcc -o app app.o -ldl
3125 ----
3126 --
3127
3128 To run the instrumented application:
3129
3130 * Start the application:
3131 +
3132 --
3133 [role="term"]
3134 ----
3135 $ ./app
3136 ----
3137 --
3138
3139 |
3140 The application is linked with the instrumented user library.
3141
3142 The instrumented user library is statically linked with the tracepoint
3143 provider package object file.
3144
3145 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3146
3147 |
3148 include::../common/ust-sit-step-tp-o-fpic.txt[]
3149
3150 To build the instrumented user library:
3151
3152 . In path:{emon.c}, before including path:{tpp.h}, add the
3153 following line:
3154 +
3155 --
3156 [source,c]
3157 ----
3158 #define TRACEPOINT_DEFINE
3159 ----
3160 --
3161
3162 . Compile the user library source file:
3163 +
3164 --
3165 [role="term"]
3166 ----
3167 $ gcc -I. -fpic -c emon.c
3168 ----
3169 --
3170
3171 . Build the user library shared object:
3172 +
3173 --
3174 [role="term"]
3175 ----
3176 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3177 ----
3178 --
3179
3180 To build the application:
3181
3182 . Compile the application source file:
3183 +
3184 --
3185 [role="term"]
3186 ----
3187 $ gcc -c app.c
3188 ----
3189 --
3190
3191 . Build the application:
3192 +
3193 --
3194 [role="term"]
3195 ----
3196 $ gcc -o app app.o -L. -lemon
3197 ----
3198 --
3199
3200 To run the application:
3201
3202 * Start the application:
3203 +
3204 --
3205 [role="term"]
3206 ----
3207 $ ./app
3208 ----
3209 --
3210
3211 |
3212 The application is linked with the instrumented user library.
3213
3214 The instrumented user library is linked with the tracepoint provider
3215 package shared object.
3216
3217 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3218
3219 |
3220 include::../common/ust-sit-step-tp-so.txt[]
3221
3222 To build the instrumented user library:
3223
3224 . In path:{emon.c}, before including path:{tpp.h}, add the
3225 following line:
3226 +
3227 --
3228 [source,c]
3229 ----
3230 #define TRACEPOINT_DEFINE
3231 ----
3232 --
3233
3234 . Compile the user library source file:
3235 +
3236 --
3237 [role="term"]
3238 ----
3239 $ gcc -I. -fpic -c emon.c
3240 ----
3241 --
3242
3243 . Build the user library shared object:
3244 +
3245 --
3246 [role="term"]
3247 ----
3248 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3249 ----
3250 --
3251
3252 To build the application:
3253
3254 . Compile the application source file:
3255 +
3256 --
3257 [role="term"]
3258 ----
3259 $ gcc -c app.c
3260 ----
3261 --
3262
3263 . Build the application:
3264 +
3265 --
3266 [role="term"]
3267 ----
3268 $ gcc -o app app.o -L. -lemon
3269 ----
3270 --
3271
3272 To run the application:
3273
3274 * Start the application:
3275 +
3276 --
3277 [role="term"]
3278 ----
3279 $ ./app
3280 ----
3281 --
3282
3283 |
3284 The tracepoint provider package shared object is preloaded before the
3285 application starts.
3286
3287 The application is linked with the instrumented user library.
3288
3289 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3290
3291 |
3292 include::../common/ust-sit-step-tp-so.txt[]
3293
3294 To build the instrumented user library:
3295
3296 . In path:{emon.c}, before including path:{tpp.h}, add the
3297 following lines:
3298 +
3299 --
3300 [source,c]
3301 ----
3302 #define TRACEPOINT_DEFINE
3303 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3304 ----
3305 --
3306
3307 . Compile the user library source file:
3308 +
3309 --
3310 [role="term"]
3311 ----
3312 $ gcc -I. -fpic -c emon.c
3313 ----
3314 --
3315
3316 . Build the user library shared object:
3317 +
3318 --
3319 [role="term"]
3320 ----
3321 $ gcc -shared -o libemon.so emon.o -ldl
3322 ----
3323 --
3324
3325 To build the application:
3326
3327 . Compile the application source file:
3328 +
3329 --
3330 [role="term"]
3331 ----
3332 $ gcc -c app.c
3333 ----
3334 --
3335
3336 . Build the application:
3337 +
3338 --
3339 [role="term"]
3340 ----
3341 $ gcc -o app app.o -L. -lemon
3342 ----
3343 --
3344
3345 To run the application with tracing support:
3346
3347 * Preload the tracepoint provider package shared object and
3348 start the application:
3349 +
3350 --
3351 [role="term"]
3352 ----
3353 $ LD_PRELOAD=./libtpp.so ./app
3354 ----
3355 --
3356
3357 To run the application without tracing support:
3358
3359 * Start the application:
3360 +
3361 --
3362 [role="term"]
3363 ----
3364 $ ./app
3365 ----
3366 --
3367
3368 |
3369 The application is linked with the instrumented user library.
3370
3371 The instrumented user library dynamically loads the tracepoint provider
3372 package shared object.
3373
3374 See the <<dlclose-warning,warning about `dlclose()`>>.
3375
3376 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3377
3378 |
3379 include::../common/ust-sit-step-tp-so.txt[]
3380
3381 To build the instrumented user library:
3382
3383 . In path:{emon.c}, before including path:{tpp.h}, add the
3384 following lines:
3385 +
3386 --
3387 [source,c]
3388 ----
3389 #define TRACEPOINT_DEFINE
3390 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3391 ----
3392 --
3393
3394 . Compile the user library source file:
3395 +
3396 --
3397 [role="term"]
3398 ----
3399 $ gcc -I. -fpic -c emon.c
3400 ----
3401 --
3402
3403 . Build the user library shared object:
3404 +
3405 --
3406 [role="term"]
3407 ----
3408 $ gcc -shared -o libemon.so emon.o -ldl
3409 ----
3410 --
3411
3412 To build the application:
3413
3414 . Compile the application source file:
3415 +
3416 --
3417 [role="term"]
3418 ----
3419 $ gcc -c app.c
3420 ----
3421 --
3422
3423 . Build the application:
3424 +
3425 --
3426 [role="term"]
3427 ----
3428 $ gcc -o app app.o -L. -lemon
3429 ----
3430 --
3431
3432 To run the application:
3433
3434 * Start the application:
3435 +
3436 --
3437 [role="term"]
3438 ----
3439 $ ./app
3440 ----
3441 --
3442
3443 |
3444 The application dynamically loads the instrumented user library.
3445
3446 The instrumented user library is linked with the tracepoint provider
3447 package shared object.
3448
3449 See the <<dlclose-warning,warning about `dlclose()`>>.
3450
3451 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3452
3453 |
3454 include::../common/ust-sit-step-tp-so.txt[]
3455
3456 To build the instrumented user library:
3457
3458 . In path:{emon.c}, before including path:{tpp.h}, add the
3459 following line:
3460 +
3461 --
3462 [source,c]
3463 ----
3464 #define TRACEPOINT_DEFINE
3465 ----
3466 --
3467
3468 . Compile the user library source file:
3469 +
3470 --
3471 [role="term"]
3472 ----
3473 $ gcc -I. -fpic -c emon.c
3474 ----
3475 --
3476
3477 . Build the user library shared object:
3478 +
3479 --
3480 [role="term"]
3481 ----
3482 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3483 ----
3484 --
3485
3486 To build the application:
3487
3488 . Compile the application source file:
3489 +
3490 --
3491 [role="term"]
3492 ----
3493 $ gcc -c app.c
3494 ----
3495 --
3496
3497 . Build the application:
3498 +
3499 --
3500 [role="term"]
3501 ----
3502 $ gcc -o app app.o -ldl -L. -lemon
3503 ----
3504 --
3505
3506 To run the application:
3507
3508 * Start the application:
3509 +
3510 --
3511 [role="term"]
3512 ----
3513 $ ./app
3514 ----
3515 --
3516
3517 |
3518 The application dynamically loads the instrumented user library.
3519
3520 The instrumented user library dynamically loads the tracepoint provider
3521 package shared object.
3522
3523 See the <<dlclose-warning,warning about `dlclose()`>>.
3524
3525 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3526
3527 |
3528 include::../common/ust-sit-step-tp-so.txt[]
3529
3530 To build the instrumented user library:
3531
3532 . In path:{emon.c}, before including path:{tpp.h}, add the
3533 following lines:
3534 +
3535 --
3536 [source,c]
3537 ----
3538 #define TRACEPOINT_DEFINE
3539 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3540 ----
3541 --
3542
3543 . Compile the user library source file:
3544 +
3545 --
3546 [role="term"]
3547 ----
3548 $ gcc -I. -fpic -c emon.c
3549 ----
3550 --
3551
3552 . Build the user library shared object:
3553 +
3554 --
3555 [role="term"]
3556 ----
3557 $ gcc -shared -o libemon.so emon.o -ldl
3558 ----
3559 --
3560
3561 To build the application:
3562
3563 . Compile the application source file:
3564 +
3565 --
3566 [role="term"]
3567 ----
3568 $ gcc -c app.c
3569 ----
3570 --
3571
3572 . Build the application:
3573 +
3574 --
3575 [role="term"]
3576 ----
3577 $ gcc -o app app.o -ldl -L. -lemon
3578 ----
3579 --
3580
3581 To run the application:
3582
3583 * Start the application:
3584 +
3585 --
3586 [role="term"]
3587 ----
3588 $ ./app
3589 ----
3590 --
3591
3592 |
3593 The tracepoint provider package shared object is preloaded before the
3594 application starts.
3595
3596 The application dynamically loads the instrumented user library.
3597
3598 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3599
3600 |
3601 include::../common/ust-sit-step-tp-so.txt[]
3602
3603 To build the instrumented user library:
3604
3605 . In path:{emon.c}, before including path:{tpp.h}, add the
3606 following lines:
3607 +
3608 --
3609 [source,c]
3610 ----
3611 #define TRACEPOINT_DEFINE
3612 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3613 ----
3614 --
3615
3616 . Compile the user library source file:
3617 +
3618 --
3619 [role="term"]
3620 ----
3621 $ gcc -I. -fpic -c emon.c
3622 ----
3623 --
3624
3625 . Build the user library shared object:
3626 +
3627 --
3628 [role="term"]
3629 ----
3630 $ gcc -shared -o libemon.so emon.o -ldl
3631 ----
3632 --
3633
3634 To build the application:
3635
3636 . Compile the application source file:
3637 +
3638 --
3639 [role="term"]
3640 ----
3641 $ gcc -c app.c
3642 ----
3643 --
3644
3645 . Build the application:
3646 +
3647 --
3648 [role="term"]
3649 ----
3650 $ gcc -o app app.o -L. -lemon
3651 ----
3652 --
3653
3654 To run the application with tracing support:
3655
3656 * Preload the tracepoint provider package shared object and
3657 start the application:
3658 +
3659 --
3660 [role="term"]
3661 ----
3662 $ LD_PRELOAD=./libtpp.so ./app
3663 ----
3664 --
3665
3666 To run the application without tracing support:
3667
3668 * Start the application:
3669 +
3670 --
3671 [role="term"]
3672 ----
3673 $ ./app
3674 ----
3675 --
3676
3677 |
3678 The application is statically linked with the tracepoint provider
3679 package object file.
3680
3681 The application is linked with the instrumented user library.
3682
3683 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3684
3685 |
3686 include::../common/ust-sit-step-tp-o.txt[]
3687
3688 To build the instrumented user library:
3689
3690 . In path:{emon.c}, before including path:{tpp.h}, add the
3691 following line:
3692 +
3693 --
3694 [source,c]
3695 ----
3696 #define TRACEPOINT_DEFINE
3697 ----
3698 --
3699
3700 . Compile the user library source file:
3701 +
3702 --
3703 [role="term"]
3704 ----
3705 $ gcc -I. -fpic -c emon.c
3706 ----
3707 --
3708
3709 . Build the user library shared object:
3710 +
3711 --
3712 [role="term"]
3713 ----
3714 $ gcc -shared -o libemon.so emon.o
3715 ----
3716 --
3717
3718 To build the application:
3719
3720 . Compile the application source file:
3721 +
3722 --
3723 [role="term"]
3724 ----
3725 $ gcc -c app.c
3726 ----
3727 --
3728
3729 . Build the application:
3730 +
3731 --
3732 [role="term"]
3733 ----
3734 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3735 ----
3736 --
3737
3738 To run the instrumented application:
3739
3740 * Start the application:
3741 +
3742 --
3743 [role="term"]
3744 ----
3745 $ ./app
3746 ----
3747 --
3748
3749 |
3750 The application is statically linked with the tracepoint provider
3751 package object file.
3752
3753 The application dynamically loads the instrumented user library.
3754
3755 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3756
3757 |
3758 include::../common/ust-sit-step-tp-o.txt[]
3759
3760 To build the application:
3761
3762 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3763 +
3764 --
3765 [source,c]
3766 ----
3767 #define TRACEPOINT_DEFINE
3768 ----
3769 --
3770
3771 . Compile the application source file:
3772 +
3773 --
3774 [role="term"]
3775 ----
3776 $ gcc -c app.c
3777 ----
3778 --
3779
3780 . Build the application:
3781 +
3782 --
3783 [role="term"]
3784 ----
3785 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3786 -llttng-ust -ldl
3787 ----
3788 --
3789 +
3790 The `--export-dynamic` option passed to the linker is necessary for the
3791 dynamically loaded library to ``see'' the tracepoint symbols defined in
3792 the application.
3793
3794 To build the instrumented user library:
3795
3796 . Compile the user library source file:
3797 +
3798 --
3799 [role="term"]
3800 ----
3801 $ gcc -I. -fpic -c emon.c
3802 ----
3803 --
3804
3805 . Build the user library shared object:
3806 +
3807 --
3808 [role="term"]
3809 ----
3810 $ gcc -shared -o libemon.so emon.o
3811 ----
3812 --
3813
3814 To run the application:
3815
3816 * Start the application:
3817 +
3818 --
3819 [role="term"]
3820 ----
3821 $ ./app
3822 ----
3823 --
3824 |====
3825
3826 [[dlclose-warning]]
3827 [IMPORTANT]
3828 .Do not use man:dlclose(3) on a tracepoint provider package
3829 ====
3830 Never use man:dlclose(3) on any shared object which:
3831
3832 * Is linked with, statically or dynamically, a tracepoint provider
3833 package.
3834 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3835 package shared object.
3836
3837 This is currently considered **unsafe** due to a lack of reference
3838 counting from LTTng-UST to the shared object.
3839
3840 A known workaround (available since glibc 2.2) is to use the
3841 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3842 effect of not unloading the loaded shared object, even if man:dlclose(3)
3843 is called.
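
For example, a minimal sketch of an application which opens a
hypothetical path:{tp.so} tracepoint provider package shared object
with `RTLD_NODELETE`:

[source,c]
----
#include <dlfcn.h>

/* ... */

/* RTLD_NODELETE keeps the shared object loaded even after dlclose() */
void *handle = dlopen("/path/to/tp.so", RTLD_NOW | RTLD_NODELETE);

if (!handle) {
    /* handle the dlopen() error */
}
----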
3844
3845 You can also preload the tracepoint provider package shared object with
3846 the env:LD_PRELOAD environment variable to overcome this limitation.
3847 ====
3848
3849
3850 [[using-lttng-ust-with-daemons]]
3851 ===== Use noch:{LTTng-UST} with daemons
3852
3853 If your instrumented application calls man:fork(2), man:clone(2),
3854 or BSD's man:rfork(2), without a following man:exec(3)-family
3855 system call, you must preload the path:{liblttng-ust-fork.so} shared
3856 object when you start the application.
3857
3858 [role="term"]
3859 ----
3860 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3861 ----
3862
3863 If your tracepoint provider package is
3864 a shared library which you also preload, you must put both
3865 shared objects in env:LD_PRELOAD:
3866
3867 [role="term"]
3868 ----
3869 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3870 ----
3871
3872
3873 [role="since-2.9"]
3874 [[liblttng-ust-fd]]
3875 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3876
3877 If your instrumented application closes one or more file descriptors
3878 which it did not open itself, you must preload the
3879 path:{liblttng-ust-fd.so} shared object when you start the application:
3880
3881 [role="term"]
3882 ----
3883 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3884 ----
3885
3886 Typical use cases include closing all the file descriptors after
3887 man:fork(2) or man:rfork(2) and buggy applications doing
3888 ``double closes''.
3889
3890
3891 [[lttng-ust-pkg-config]]
3892 ===== Use noch:{pkg-config}
3893
3894 On some distributions, LTTng-UST ships with a
3895 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3896 metadata file. If this is your case, then you can use cmd:pkg-config to
3897 build an application on the command line:
3898
3899 [role="term"]
3900 ----
3901 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3902 ----
3903
3904
3905 [[instrumenting-32-bit-app-on-64-bit-system]]
3906 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3907
3908 In order to trace a 32-bit application running on a 64-bit system,
3909 LTTng must use a dedicated 32-bit
3910 <<lttng-consumerd,consumer daemon>>.
3911
3912 The following steps show how to build and install a 32-bit consumer
3913 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3914 build and install the 32-bit LTTng-UST libraries, and how to build and
3915 link an instrumented 32-bit application in that context.
3916
3917 To build a 32-bit instrumented application for a 64-bit target system,
3918 assuming you have a fresh target system with no installed Userspace RCU
3919 or LTTng packages:
3920
3921 . Download, build, and install a 32-bit version of Userspace RCU:
3922 +
3923 --
3924 [role="term"]
3925 ----
3926 $ cd $(mktemp -d) &&
3927 wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3928 tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3929 cd userspace-rcu-0.9.* &&
3930 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3931 make &&
3932 sudo make install &&
3933 sudo ldconfig
3934 ----
3935 --
3936
. Using your distribution's package manager, or from source, install
  the 32-bit versions of the following dependencies of
  LTTng-tools and LTTng-UST:
3940 +
3941 --
3942 * https://sourceforge.net/projects/libuuid/[libuuid]
3943 * http://directory.fsf.org/wiki/Popt[popt]
3944 * http://www.xmlsoft.org/[libxml2]
3945 --
3946
3947 . Download, build, and install a 32-bit version of the latest
3948 LTTng-UST{nbsp}{revision}:
3949 +
3950 --
3951 [role="term"]
3952 ----
3953 $ cd $(mktemp -d) &&
3954 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
3955 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
3956 cd lttng-ust-2.9.* &&
3957 ./configure --libdir=/usr/local/lib32 \
3958 CFLAGS=-m32 CXXFLAGS=-m32 \
3959 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
3960 make &&
3961 sudo make install &&
3962 sudo ldconfig
3963 ----
3964 --
3965 +
3966 [NOTE]
3967 ====
3968 Depending on your distribution,
3969 32-bit libraries could be installed at a different location than
3970 `/usr/lib32`. For example, Debian is known to install
3971 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
3972
3973 In this case, make sure to set `LDFLAGS` to all the
3974 relevant 32-bit library paths, for example:
3975
3976 [role="term"]
3977 ----
3978 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
3979 ----
3980 ====
3981
3982 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
3983 the 32-bit consumer daemon:
3984 +
3985 --
3986 [role="term"]
3987 ----
3988 $ cd $(mktemp -d) &&
3989 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
3990 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
3991 cd lttng-tools-2.9.* &&
3992 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3993 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
3994 --disable-bin-lttng --disable-bin-lttng-crash \
3995 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
3996 make &&
3997 cd src/bin/lttng-consumerd &&
3998 sudo make install &&
3999 sudo ldconfig
4000 ----
4001 --
4002
4003 . From your distribution or from source,
4004 <<installing-lttng,install>> the 64-bit versions of
4005 LTTng-UST and Userspace RCU.
4006 . Download, build, and install the 64-bit version of the
4007 latest LTTng-tools{nbsp}{revision}:
4008 +
4009 --
4010 [role="term"]
4011 ----
4012 $ cd $(mktemp -d) &&
4013 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
4014 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
4015 cd lttng-tools-2.9.* &&
4016 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4017 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4018 make &&
4019 sudo make install &&
4020 sudo ldconfig
4021 ----
4022 --
4023
4024 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4025 when linking your 32-bit application:
4026 +
4027 ----
4028 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4029 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4030 ----
4031 +
4032 For example, let's rebuild the quick start example in
4033 <<tracing-your-own-user-application,Trace a user application>> as an
4034 instrumented 32-bit application:
4035 +
4036 --
4037 [role="term"]
4038 ----
4039 $ gcc -m32 -c -I. hello-tp.c
4040 $ gcc -m32 -c hello.c
4041 $ gcc -m32 -o hello hello.o hello-tp.o \
4042 -L/usr/lib32 -L/usr/local/lib32 \
4043 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4044 -llttng-ust -ldl
4045 ----
4046 --
4047
4048 No special action is required to execute the 32-bit application and
4049 to trace it: use the command-line man:lttng(1) tool as usual.
4050
4051
4052 [role="since-2.5"]
4053 [[tracef]]
4054 ==== Use `tracef()`
4055
4056 man:tracef(3) is a small LTTng-UST API designed for quick,
4057 man:printf(3)-like instrumentation without the burden of
4058 <<tracepoint-provider,creating>> and
4059 <<building-tracepoint-providers-and-user-application,building>>
4060 a tracepoint provider package.
4061
4062 To use `tracef()` in your application:
4063
4064 . In the C or C++ source files where you need to use `tracef()`,
4065 include `<lttng/tracef.h>`:
4066 +
4067 --
4068 [source,c]
4069 ----
4070 #include <lttng/tracef.h>
4071 ----
4072 --
4073
4074 . In the application's source code, use `tracef()` like you would use
4075 man:printf(3):
4076 +
4077 --
4078 [source,c]
4079 ----
4080 /* ... */
4081
4082 tracef("my message: %d (%s)", my_integer, my_string);
4083
4084 /* ... */
4085 ----
4086 --
4087
4088 . Link your application with `liblttng-ust`:
4089 +
4090 --
4091 [role="term"]
4092 ----
4093 $ gcc -o app app.c -llttng-ust
4094 ----
4095 --
4096
4097 To trace the events that `tracef()` calls emit:
4098
4099 * <<enabling-disabling-events,Create an event rule>> which matches the
4100 `lttng_ust_tracef:*` event name:
4101 +
4102 --
4103 [role="term"]
4104 ----
4105 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
4106 ----
4107 --
4108
4109 [IMPORTANT]
4110 .Limitations of `tracef()`
4111 ====
4112 The `tracef()` utility function was developed to make user space tracing
4113 super simple, albeit with notable disadvantages compared to
4114 <<defining-tracepoints,user-defined tracepoints>>:
4115
4116 * All the emitted events have the same tracepoint provider and
4117 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4118 * There is no static type checking.
4119 * The only event record field you actually get, named `msg`, is a string
4120 potentially containing the values you passed to `tracef()`
4121 using your own format string. This also means that you cannot filter
4122 events with a custom expression at run time because there are no
4123 isolated fields.
4124 * Since `tracef()` uses the C standard library's man:vasprintf(3)
4125 function behind the scenes to format the strings at run time, its
4126 expected performance is lower than with user-defined tracepoints,
4127 which do not require a conversion to a string.
4128
4129 Taking this into consideration, `tracef()` is useful for some quick
4130 prototyping and debugging, but you should not consider it for any
permanent and serious application instrumentation.
4132 ====
4133
4134
4135 [role="since-2.7"]
4136 [[tracelog]]
4137 ==== Use `tracelog()`
4138
4139 The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
4140 the difference that it accepts an additional log level parameter.
4141
4142 The goal of `tracelog()` is to ease the migration from logging to
4143 tracing.
4144
4145 To use `tracelog()` in your application:
4146
4147 . In the C or C++ source files where you need to use `tracelog()`,
4148 include `<lttng/tracelog.h>`:
4149 +
4150 --
4151 [source,c]
4152 ----
4153 #include <lttng/tracelog.h>
4154 ----
4155 --
4156
4157 . In the application's source code, use `tracelog()` like you would use
4158 man:printf(3), except for the first parameter which is the log
4159 level:
4160 +
4161 --
4162 [source,c]
4163 ----
4164 /* ... */
4165
4166 tracelog(TRACE_WARNING, "my message: %d (%s)",
4167 my_integer, my_string);
4168
4169 /* ... */
4170 ----
4171 --
4172 +
4173 See man:lttng-ust(3) for a list of available log level names.
4174
4175 . Link your application with `liblttng-ust`:
4176 +
4177 --
4178 [role="term"]
4179 ----
4180 $ gcc -o app app.c -llttng-ust
4181 ----
4182 --
4183
To trace the events that `tracelog()` calls emit with a log level
_at least as severe as_ a specific log level:
4186
4187 * <<enabling-disabling-events,Create an event rule>> which matches the
4188 `lttng_ust_tracelog:*` event name and a minimum level
4189 of severity:
4190 +
4191 --
4192 [role="term"]
4193 ----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
                     --loglevel=TRACE_WARNING
4196 ----
4197 --
4198
4199 To trace the events that `tracelog()` calls emit with a
4200 _specific log level_:
4201
4202 * Create an event rule which matches the `lttng_ust_tracelog:*`
4203 event name and a specific log level:
4204 +
4205 --
4206 [role="term"]
4207 ----
$ lttng enable-event --userspace 'lttng_ust_tracelog:*' \
                     --loglevel-only=TRACE_INFO
4210 ----
4211 --
4212
4213
4214 [[prebuilt-ust-helpers]]
4215 === Prebuilt user space tracing helpers
4216
The LTTng-UST package provides a few helpers in the form of preloadable
shared objects which automatically instrument system functions and
calls.
4220
4221 The helper shared objects are normally found in dir:{/usr/lib}. If you
4222 built LTTng-UST <<building-from-source,from source>>, they are probably
4223 located in dir:{/usr/local/lib}.
4224
4225 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4226 are:
4227
4228 path:{liblttng-ust-libc-wrapper.so}::
4229 path:{liblttng-ust-pthread-wrapper.so}::
4230 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4231 memory and POSIX threads function tracing>>.
4232
4233 path:{liblttng-ust-cyg-profile.so}::
4234 path:{liblttng-ust-cyg-profile-fast.so}::
4235 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4236
4237 path:{liblttng-ust-dl.so}::
4238 <<liblttng-ust-dl,Dynamic linker tracing>>.
4239
4240 To use a user space tracing helper with any user application:
4241
4242 * Preload the helper shared object when you start the application:
4243 +
4244 --
4245 [role="term"]
4246 ----
4247 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4248 ----
4249 --
4250 +
4251 You can preload more than one helper:
4252 +
4253 --
4254 [role="term"]
4255 ----
4256 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4257 ----
4258 --
4259
4260
4261 [role="since-2.3"]
4262 [[liblttng-ust-libc-pthread-wrapper]]
4263 ==== Instrument C standard library memory and POSIX threads functions
4264
4265 The path:{liblttng-ust-libc-wrapper.so} and
4266 path:{liblttng-ust-pthread-wrapper.so} helpers
4267 add instrumentation to some C standard library and POSIX
4268 threads functions.
4269
4270 [role="growable"]
4271 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4272 |====
4273 |TP provider name |TP name |Instrumented function
4274
4275 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4276 |`calloc` |man:calloc(3)
4277 |`realloc` |man:realloc(3)
4278 |`free` |man:free(3)
4279 |`memalign` |man:memalign(3)
4280 |`posix_memalign` |man:posix_memalign(3)
4281 |====
4282
4283 [role="growable"]
4284 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4285 |====
4286 |TP provider name |TP name |Instrumented function
4287
4288 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4289 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4290 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4291 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4292 |====
4293
When you preload the shared object, it replaces the functions listed
in the previous tables with wrappers which contain tracepoints and call
the replaced functions.
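
For example, to record the libc memory events of an application, you
could create an event rule which matches the `lttng_ust_libc` provider
and preload the wrapper (this assumes an existing tracing session):

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_libc:*'
$ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
----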
4297
4298
4299 [[liblttng-ust-cyg-profile]]
4300 ==== Instrument function entry and exit
4301
4302 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4303 to the entry and exit points of functions.
4304
4305 man:gcc(1) and man:clang(1) have an option named
4306 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4307 which generates instrumentation calls for entry and exit to functions.
4308 The LTTng-UST function tracing helpers,
4309 path:{liblttng-ust-cyg-profile.so} and
4310 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
to add tracepoints to the two generated functions (which contain
`cyg_profile` in their names, hence the helpers' names).
4313
4314 To use the LTTng-UST function tracing helper, the source files to
4315 instrument must be built using the `-finstrument-functions` compiler
4316 flag.
4317
4318 There are two versions of the LTTng-UST function tracing helper:
4319
4320 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4321 that you should only use when it can be _guaranteed_ that the
4322 complete event stream is recorded without any lost event record.
4323 Any kind of duplicate information is left out.
4324 +
4325 Assuming no event record is lost, having only the function addresses on
4326 entry is enough to create a call graph, since an event record always
4327 contains the ID of the CPU that generated it.
4328 +
4329 You can use a tool like man:addr2line(1) to convert function addresses
4330 back to source file names and line numbers.
4331
4332 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4333 which also works in use cases where event records might get discarded or
4334 not recorded from application startup.
4335 In these cases, the trace analyzer needs more information to be
4336 able to reconstruct the program flow.
4337
4338 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4339 points of this helper.
4340
4341 All the tracepoints that this helper provides have the
4342 log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
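
For example, assuming an application built with the
`-finstrument-functions` flag as described above and an existing
tracing session, you could record its function entry and exit events
with the more robust variant like this:

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_cyg_profile*'
$ LD_PRELOAD=liblttng-ust-cyg-profile.so ./my-app
----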
4343
4344 TIP: It's sometimes a good idea to limit the number of source files that
4345 you compile with the `-finstrument-functions` option to prevent LTTng
4346 from writing an excessive amount of trace data at run time. When using
man:gcc(1), you can use the
`-finstrument-functions-exclude-function-list` option to avoid
instrumenting the entries and exits of specific function names.
4350
4351
4352 [role="since-2.4"]
4353 [[liblttng-ust-dl]]
4354 ==== Instrument the dynamic linker
4355
4356 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4357 man:dlopen(3) and man:dlclose(3) function calls.
4358
4359 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4360 of this helper.
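
For example, to record the man:dlopen(3) and man:dlclose(3) events of
an application (this assumes an existing tracing session and the
`lttng_ust_dl` provider name):

[role="term"]
----
$ lttng enable-event --userspace 'lttng_ust_dl:*'
$ LD_PRELOAD=liblttng-ust-dl.so my-app
----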
4361
4362
4363 [role="since-2.4"]
4364 [[java-application]]
4365 === User space Java agent
4366
4367 You can instrument any Java application which uses one of the following
4368 logging frameworks:
4369
4370 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4371 (JUL) core logging facilities.
4372 * http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4373 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4374
4375 [role="img-100"]
4376 .LTTng-UST Java agent imported by a Java application.
4377 image::java-app.png[]
4378
4379 Note that the methods described below are new in LTTng{nbsp}2.8.
4380 Previous LTTng versions use another technique.
4381
4382 NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4383 and https://ci.lttng.org/[continuous integration], thus this version is
4384 directly supported. However, the LTTng-UST Java agent is also tested
4385 with OpenJDK{nbsp}7.
4386
4387
4388 [role="since-2.8"]
4389 [[jul]]
4390 ==== Use the LTTng-UST Java agent for `java.util.logging`
4391
4392 To use the LTTng-UST Java agent in a Java application which uses
4393 `java.util.logging` (JUL):
4394
4395 . In the Java application's source code, import the LTTng-UST
4396 log handler package for `java.util.logging`:
4397 +
4398 --
4399 [source,java]
4400 ----
4401 import org.lttng.ust.agent.jul.LttngLogHandler;
4402 ----
4403 --
4404
4405 . Create an LTTng-UST JUL log handler:
4406 +
4407 --
4408 [source,java]
4409 ----
4410 Handler lttngUstLogHandler = new LttngLogHandler();
4411 ----
4412 --
4413
4414 . Add this handler to the JUL loggers which should emit LTTng events:
4415 +
4416 --
4417 [source,java]
4418 ----
4419 Logger myLogger = Logger.getLogger("some-logger");
4420
4421 myLogger.addHandler(lttngUstLogHandler);
4422 ----
4423 --
4424
4425 . Use `java.util.logging` log statements and configuration as usual.
4426 The loggers with an attached LTTng-UST log handler can emit
4427 LTTng events.
4428
4429 . Before exiting the application, remove the LTTng-UST log handler from
4430 the loggers attached to it and call its `close()` method:
4431 +
4432 --
4433 [source,java]
4434 ----
4435 myLogger.removeHandler(lttngUstLogHandler);
4436 lttngUstLogHandler.close();
4437 ----
4438 --
4439 +
4440 This is not strictly necessary, but it is recommended for a clean
4441 disposal of the handler's resources.
4442
4443 . Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4444 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4445 in the
4446 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4447 path] when you build the Java application.
4448 +
4449 The JAR files are typically located in dir:{/usr/share/java}.
4450 +
4451 IMPORTANT: The LTTng-UST Java agent must be
4452 <<installing-lttng,installed>> for the logging framework your
4453 application uses.
4454
4455 .Use the LTTng-UST Java agent for `java.util.logging`.
4456 ====
4457 [source,java]
4458 .path:{Test.java}
4459 ----
4460 import java.io.IOException;
4461 import java.util.logging.Handler;
4462 import java.util.logging.Logger;
4463 import org.lttng.ust.agent.jul.LttngLogHandler;
4464
4465 public class Test
4466 {
4467 private static final int answer = 42;
4468
4469 public static void main(String[] argv) throws Exception
4470 {
4471 // Create a logger
4472 Logger logger = Logger.getLogger("jello");
4473
4474 // Create an LTTng-UST log handler
4475 Handler lttngUstLogHandler = new LttngLogHandler();
4476
4477 // Add the LTTng-UST log handler to our logger
4478 logger.addHandler(lttngUstLogHandler);
4479
4480 // Log at will!
4481 logger.info("some info");
4482 logger.warning("some warning");
4483 Thread.sleep(500);
4484 logger.finer("finer information; the answer is " + answer);
4485 Thread.sleep(123);
4486 logger.severe("error!");
4487
4488 // Not mandatory, but cleaner
4489 logger.removeHandler(lttngUstLogHandler);
4490 lttngUstLogHandler.close();
4491 }
4492 }
4493 ----
4494
4495 Build this example:
4496
4497 [role="term"]
4498 ----
4499 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4500 ----
4501
4502 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4503 <<enabling-disabling-events,create an event rule>> matching the
4504 `jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4505
4506 [role="term"]
4507 ----
4508 $ lttng create
4509 $ lttng enable-event --jul jello
4510 $ lttng start
4511 ----
4512
4513 Run the compiled class:
4514
4515 [role="term"]
4516 ----
4517 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4518 ----
4519
4520 <<basic-tracing-session-control,Stop tracing>> and inspect the
4521 recorded events:
4522
4523 [role="term"]
4524 ----
4525 $ lttng stop
4526 $ lttng view
4527 ----
4528 ====
4529
4530 In the resulting trace, an <<event,event record>> generated by a Java
4531 application using `java.util.logging` is named `lttng_jul:event` and
4532 has the following fields:
4533
4534 `msg`::
4535 Log record's message.
4536
4537 `logger_name`::
4538 Logger name.
4539
4540 `class_name`::
4541 Name of the class in which the log statement was executed.
4542
4543 `method_name`::
4544 Name of the method in which the log statement was executed.
4545
4546 `long_millis`::
4547 Logging time (timestamp in milliseconds).
4548
4549 `int_loglevel`::
4550 Log level integer value.
4551
4552 `int_threadid`::
4553 ID of the thread in which the log statement was executed.
4554
4555 You can use the opt:lttng-enable-event(1):--loglevel or
4556 opt:lttng-enable-event(1):--loglevel-only option of the
4557 man:lttng-enable-event(1) command to target a range of JUL log levels
4558 or a specific JUL log level.
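
For example, the following rule matches only the JUL log records of the
`jello` logger with a level at least as severe as `WARNING` (this
assumes the `JUL_WARNING` log level name):

[role="term"]
----
$ lttng enable-event --jul jello --loglevel=JUL_WARNING
----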
4559
4560
4561 [role="since-2.8"]
4562 [[log4j]]
4563 ==== Use the LTTng-UST Java agent for Apache log4j
4564
4565 To use the LTTng-UST Java agent in a Java application which uses
4566 Apache log4j 1.2:
4567
4568 . In the Java application's source code, import the LTTng-UST
4569 log appender package for Apache log4j:
4570 +
4571 --
4572 [source,java]
4573 ----
4574 import org.lttng.ust.agent.log4j.LttngLogAppender;
4575 ----
4576 --
4577
4578 . Create an LTTng-UST log4j log appender:
4579 +
4580 --
4581 [source,java]
4582 ----
4583 Appender lttngUstLogAppender = new LttngLogAppender();
4584 ----
4585 --
4586
4587 . Add this appender to the log4j loggers which should emit LTTng events:
4588 +
4589 --
4590 [source,java]
4591 ----
4592 Logger myLogger = Logger.getLogger("some-logger");
4593
4594 myLogger.addAppender(lttngUstLogAppender);
4595 ----
4596 --
4597
4598 . Use Apache log4j log statements and configuration as usual. The
4599 loggers with an attached LTTng-UST log appender can emit LTTng events.
4600
4601 . Before exiting the application, remove the LTTng-UST log appender from
4602 the loggers attached to it and call its `close()` method:
4603 +
4604 --
4605 [source,java]
4606 ----
4607 myLogger.removeAppender(lttngUstLogAppender);
4608 lttngUstLogAppender.close();
4609 ----
4610 --
4611 +
4612 This is not strictly necessary, but it is recommended for a clean
4613 disposal of the appender's resources.
4614
4615 . Include the LTTng-UST Java agent's common and log4j-specific JAR
4616 files, path:{lttng-ust-agent-common.jar} and
4617 path:{lttng-ust-agent-log4j.jar}, in the
4618 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4619 path] when you build the Java application.
4620 +
4621 The JAR files are typically located in dir:{/usr/share/java}.
4622 +
4623 IMPORTANT: The LTTng-UST Java agent must be
4624 <<installing-lttng,installed>> for the logging framework your
4625 application uses.
4626
4627 .Use the LTTng-UST Java agent for Apache log4j.
4628 ====
4629 [source,java]
4630 .path:{Test.java}
4631 ----
4632 import org.apache.log4j.Appender;
4633 import org.apache.log4j.Logger;
4634 import org.lttng.ust.agent.log4j.LttngLogAppender;
4635
4636 public class Test
4637 {
4638 private static final int answer = 42;
4639
4640 public static void main(String[] argv) throws Exception
4641 {
4642 // Create a logger
4643 Logger logger = Logger.getLogger("jello");
4644
4645 // Create an LTTng-UST log appender
4646 Appender lttngUstLogAppender = new LttngLogAppender();
4647
4648 // Add the LTTng-UST log appender to our logger
4649 logger.addAppender(lttngUstLogAppender);
4650
4651 // Log at will!
4652 logger.info("some info");
4653 logger.warn("some warning");
4654 Thread.sleep(500);
4655 logger.debug("debug information; the answer is " + answer);
4656 Thread.sleep(123);
4657 logger.fatal("error!");
4658
4659 // Not mandatory, but cleaner
4660 logger.removeAppender(lttngUstLogAppender);
4661 lttngUstLogAppender.close();
4662 }
4663 }
4664
4665 ----
4666
4667 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4668 file):
4669
4670 [role="term"]
4671 ----
4672 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4673 ----
4674
4675 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4676 <<enabling-disabling-events,create an event rule>> matching the
4677 `jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4678
4679 [role="term"]
4680 ----
4681 $ lttng create
4682 $ lttng enable-event --log4j jello
4683 $ lttng start
4684 ----
4685
4686 Run the compiled class:
4687
4688 [role="term"]
4689 ----
4690 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4691 ----
4692
4693 <<basic-tracing-session-control,Stop tracing>> and inspect the
4694 recorded events:
4695
4696 [role="term"]
4697 ----
4698 $ lttng stop
4699 $ lttng view
4700 ----
4701 ====
4702
4703 In the resulting trace, an <<event,event record>> generated by a Java
4704 application using log4j is named `lttng_log4j:event` and
4705 has the following fields:
4706
4707 `msg`::
4708 Log record's message.
4709
4710 `logger_name`::
4711 Logger name.
4712
4713 `class_name`::
4714 Name of the class in which the log statement was executed.
4715
4716 `method_name`::
4717 Name of the method in which the log statement was executed.
4718
4719 `filename`::
4720 Name of the file in which the executed log statement is located.
4721
4722 `line_number`::
4723 Line number at which the log statement was executed.
4724
4725 `timestamp`::
4726 Logging timestamp.
4727
4728 `int_loglevel`::
4729 Log level integer value.
4730
4731 `thread_name`::
4732 Name of the Java thread in which the log statement was executed.
4733
4734 You can use the opt:lttng-enable-event(1):--loglevel or
4735 opt:lttng-enable-event(1):--loglevel-only option of the
4736 man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4737 or a specific log4j log level.
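
For example, the following rule matches only the log4j log records of
the `jello` logger with a level at least as severe as `WARN` (this
assumes the `LOG4J_WARN` log level name):

[role="term"]
----
$ lttng enable-event --log4j jello --loglevel=LOG4J_WARN
----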
4738
4739
4740 [role="since-2.8"]
4741 [[java-application-context]]
4742 ==== Provide application-specific context fields in a Java application
4743
4744 A Java application-specific context field is a piece of state provided
4745 by the application which <<adding-context,you can add>>, using the
4746 man:lttng-add-context(1) command, to each <<event,event record>>
4747 produced by the log statements of this application.
4748
4749 For example, a given object might have a current request ID variable.
4750 You can create a context information retriever for this object and
4751 assign a name to this current request ID. You can then, using the
4752 man:lttng-add-context(1) command, add this context field by name to
4753 the JUL or log4j <<channel,channel>>.
4754
4755 To provide application-specific context fields in a Java application:
4756
4757 . In the Java application's source code, import the LTTng-UST
4758 Java agent context classes and interfaces:
4759 +
4760 --
4761 [source,java]
4762 ----
4763 import org.lttng.ust.agent.context.ContextInfoManager;
4764 import org.lttng.ust.agent.context.IContextInfoRetriever;
4765 ----
4766 --
4767
4768 . Create a context information retriever class, that is, a class which
4769 implements the `IContextInfoRetriever` interface:
4770 +
4771 --
4772 [source,java]
4773 ----
4774 class MyContextInfoRetriever implements IContextInfoRetriever
4775 {
4776 @Override
4777 public Object retrieveContextInfo(String key)
4778 {
4779 if (key.equals("intCtx")) {
4780 return (short) 17;
4781 } else if (key.equals("strContext")) {
4782 return "context value!";
4783 } else {
4784 return null;
4785 }
4786 }
4787 }
4788 ----
4789 --
4790 +
4791 This `retrieveContextInfo()` method is the only member of the
4792 `IContextInfoRetriever` interface. Its role is to return the current
value of a state by name to create a context field. The names of the
context fields and the state variables they return depend on your
specific scenario.
4796 +
4797 All primitive types and objects are supported as context fields.
4798 When `retrieveContextInfo()` returns an object, the context field
4799 serializer calls its `toString()` method to add a string field to
4800 event records. The method can also return `null`, which means that
4801 no context field is available for the required name.
4802
4803 . Register an instance of your context information retriever class to
4804 the context information manager singleton:
4805 +
4806 --
4807 [source,java]
4808 ----
4809 IContextInfoRetriever cir = new MyContextInfoRetriever();
4810 ContextInfoManager cim = ContextInfoManager.getInstance();
4811 cim.registerContextInfoRetriever("retrieverName", cir);
4812 ----
4813 --
4814
4815 . Before exiting the application, remove your context information
4816 retriever from the context information manager singleton:
4817 +
4818 --
4819 [source,java]
4820 ----
4821 ContextInfoManager cim = ContextInfoManager.getInstance();
4822 cim.unregisterContextInfoRetriever("retrieverName");
4823 ----
4824 --
4825 +
4826 This is not strictly necessary, but it is recommended for a clean
disposal of the manager's resources.
4828
4829 . Build your Java application with LTTng-UST Java agent support as
4830 usual, following the procedure for either the <<jul,JUL>> or
4831 <<log4j,Apache log4j>> framework.
4832
4833
4834 .Provide application-specific context fields in a Java application.
4835 ====
4836 [source,java]
4837 .path:{Test.java}
4838 ----
4839 import java.util.logging.Handler;
4840 import java.util.logging.Logger;
4841 import org.lttng.ust.agent.jul.LttngLogHandler;
4842 import org.lttng.ust.agent.context.ContextInfoManager;
4843 import org.lttng.ust.agent.context.IContextInfoRetriever;
4844
4845 public class Test
4846 {
4847 // Our context information retriever class
4848 private static class MyContextInfoRetriever
4849 implements IContextInfoRetriever
4850 {
4851 @Override
4852 public Object retrieveContextInfo(String key) {
4853 if (key.equals("intCtx")) {
4854 return (short) 17;
4855 } else if (key.equals("strContext")) {
4856 return "context value!";
4857 } else {
4858 return null;
4859 }
4860 }
4861 }
4862
4863 private static final int answer = 42;
4864
4865 public static void main(String args[]) throws Exception
4866 {
4867 // Get the context information manager instance
4868 ContextInfoManager cim = ContextInfoManager.getInstance();
4869
4870 // Create and register our context information retriever
4871 IContextInfoRetriever cir = new MyContextInfoRetriever();
4872 cim.registerContextInfoRetriever("myRetriever", cir);
4873
4874 // Create a logger
4875 Logger logger = Logger.getLogger("jello");
4876
4877 // Create an LTTng-UST log handler
4878 Handler lttngUstLogHandler = new LttngLogHandler();
4879
4880 // Add the LTTng-UST log handler to our logger
4881 logger.addHandler(lttngUstLogHandler);
4882
4883 // Log at will!
4884 logger.info("some info");
4885 logger.warning("some warning");
4886 Thread.sleep(500);
4887 logger.finer("finer information; the answer is " + answer);
4888 Thread.sleep(123);
4889 logger.severe("error!");
4890
4891 // Not mandatory, but cleaner
4892 logger.removeHandler(lttngUstLogHandler);
4893 lttngUstLogHandler.close();
4894 cim.unregisterContextInfoRetriever("myRetriever");
4895 }
4896 }
4897 ----
4898
4899 Build this example:
4900
4901 [role="term"]
4902 ----
4903 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4904 ----
4905
4906 <<creating-destroying-tracing-sessions,Create a tracing session>>
4907 and <<enabling-disabling-events,create an event rule>> matching the
4908 `jello` JUL logger:
4909
4910 [role="term"]
4911 ----
4912 $ lttng create
4913 $ lttng enable-event --jul jello
4914 ----
4915
4916 <<adding-context,Add the application-specific context fields>> to the
4917 JUL channel:
4918
4919 [role="term"]
4920 ----
4921 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
4922 $ lttng add-context --jul --type='$app.myRetriever:strContext'
4923 ----
4924
4925 <<basic-tracing-session-control,Start tracing>>:
4926
4927 [role="term"]
4928 ----
4929 $ lttng start
4930 ----
4931
4932 Run the compiled class:
4933
4934 [role="term"]
4935 ----
4936 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4937 ----
4938
4939 <<basic-tracing-session-control,Stop tracing>> and inspect the
4940 recorded events:
4941
4942 [role="term"]
4943 ----
4944 $ lttng stop
4945 $ lttng view
4946 ----
4947 ====
4948
4949
4950 [role="since-2.7"]
4951 [[python-application]]
4952 === User space Python agent
4953
4954 You can instrument a Python 2 or Python 3 application which uses the
4955 standard https://docs.python.org/3/library/logging.html[`logging`]
4956 package.
4957
4958 Each log statement emits an LTTng event once the
4959 application module imports the
4960 <<lttng-ust-agents,LTTng-UST Python agent>> package.
4961
4962 [role="img-100"]
4963 .A Python application importing the LTTng-UST Python agent.
4964 image::python-app.png[]
4965
4966 To use the LTTng-UST Python agent:
4967
4968 . In the Python application's source code, import the LTTng-UST Python
4969 agent:
4970 +
4971 --
4972 [source,python]
4973 ----
4974 import lttngust
4975 ----
4976 --
4977 +
4978 The LTTng-UST Python agent automatically adds its logging handler to the
4979 root logger at import time.
4980 +
4981 Any log statement that the application executes before this import does
4982 not emit an LTTng event.
4983 +
4984 IMPORTANT: The LTTng-UST Python agent must be
4985 <<installing-lttng,installed>>.
4986
4987 . Use log statements and logging configuration as usual.
4988 Since the LTTng-UST Python agent adds a handler to the _root_
4989 logger, you can trace any log statement from any logger.
4990
4991 .Use the LTTng-UST Python agent.
4992 ====
4993 [source,python]
4994 .path:{test.py}
4995 ----
4996 import lttngust
4997 import logging
4998 import time
4999
5000
5001 def example():
5002 logging.basicConfig()
5003 logger = logging.getLogger('my-logger')
5004
5005 while True:
5006 logger.debug('debug message')
5007 logger.info('info message')
5008 logger.warn('warn message')
5009 logger.error('error message')
5010 logger.critical('critical message')
5011 time.sleep(1)
5012
5013
5014 if __name__ == '__main__':
5015 example()
5016 ----
5017
5018 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
5019 logging handler which prints to the standard error stream, is not
5020 strictly required for LTTng-UST tracing to work, but in versions of
5021 Python preceding 3.2, you could see a warning message which indicates
5022 that no handler exists for the logger `my-logger`.
5023
5024 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5025 <<enabling-disabling-events,create an event rule>> matching the
5026 `my-logger` Python logger, and <<basic-tracing-session-control,start
5027 tracing>>:
5028
5029 [role="term"]
5030 ----
5031 $ lttng create
5032 $ lttng enable-event --python my-logger
5033 $ lttng start
5034 ----
5035
5036 Run the Python script:
5037
5038 [role="term"]
5039 ----
5040 $ python test.py
5041 ----
5042
5043 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5044 events:
5045
5046 [role="term"]
5047 ----
5048 $ lttng stop
5049 $ lttng view
5050 ----
5051 ====
5052
5053 In the resulting trace, an <<event,event record>> generated by a Python
5054 application is named `lttng_python:event` and has the following fields:
5055
5056 `asctime`::
5057 Logging time (string).
5058
5059 `msg`::
5060 Log record's message.
5061
5062 `logger_name`::
5063 Logger name.
5064
5065 `funcName`::
5066 Name of the function in which the log statement was executed.
5067
5068 `lineno`::
5069 Line number at which the log statement was executed.
5070
5071 `int_loglevel`::
5072 Log level integer value.
5073
5074 `thread`::
5075 ID of the Python thread in which the log statement was executed.
5076
5077 `threadName`::
5078 Name of the Python thread in which the log statement was executed.
5079
5080 You can use the opt:lttng-enable-event(1):--loglevel or
5081 opt:lttng-enable-event(1):--loglevel-only option of the
5082 man:lttng-enable-event(1) command to target a range of Python log levels
5083 or a specific Python log level.
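
For example, the following rule matches only the Python log records of
the `my-logger` logger with a level at least as severe as `WARNING`
(this assumes the `PYTHON_WARNING` log level name):

[role="term"]
----
$ lttng enable-event --python my-logger --loglevel=PYTHON_WARNING
----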
5084
5085 When an application imports the LTTng-UST Python agent, the agent tries
5086 to register to a <<lttng-sessiond,session daemon>>. Note that you must
5087 <<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
to it for up to 5{nbsp}seconds, after which the application continues
without LTTng tracing support. You can override this timeout value with
5091 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
5092 (milliseconds).
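
For example, to let the agent wait up to 10{nbsp}seconds for a session
daemon when the application starts:

[role="term"]
----
$ LTTNG_UST_PYTHON_REGISTER_TIMEOUT=10000 python test.py
----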
5093
5094 If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent tries to reconnect and register
to a session daemon every 3{nbsp}seconds. You can override this
5097 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
5098 variable.
5099
5100
5101 [role="since-2.5"]
5102 [[proc-lttng-logger-abi]]
5103 === LTTng logger
5104
5105 The `lttng-tracer` Linux kernel module, part of
5106 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
5107 path:{/proc/lttng-logger} when it's loaded. Any application can write
5108 text data to this file to emit an LTTng event.
5109
5110 [role="img-100"]
5111 .An application writes to the LTTng logger file to emit an LTTng event.
5112 image::lttng-logger.png[]
5113
5114 The LTTng logger is the quickest method--not the most efficient,
5115 however--to add instrumentation to an application. It is designed
5116 mostly to instrument shell scripts:
5117
5118 [role="term"]
5119 ----
5120 $ echo "Some message, some $variable" > /proc/lttng-logger
5121 ----
5122
5123 Any event that the LTTng logger emits is named `lttng_logger` and
5124 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5125 other instrumentation points in the kernel tracing domain, **any Unix
5126 user** can <<enabling-disabling-events,create an event rule>> which
5127 matches its event name, not only the root user or users in the
5128 <<tracing-group,tracing group>>.
5129
5130 To use the LTTng logger:
5131
5132 * From any application, write text data to the path:{/proc/lttng-logger}
5133 file.
5134
5135 The `msg` field of `lttng_logger` event records contains the
5136 recorded message.
5137
5138 NOTE: The maximum message length of an LTTng logger event is
5139 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5140 than one event to contain the remaining data.
5141
5142 You should not use the LTTng logger to trace a user application which
5143 can be instrumented in a more efficient way, namely:
5144
5145 * <<c-application,C and $$C++$$ applications>>.
5146 * <<java-application,Java applications>>.
5147 * <<python-application,Python applications>>.
5148
5149 .Use the LTTng logger.
5150 ====
5151 [source,bash]
5152 .path:{test.bash}
5153 ----
5154 echo 'Hello, World!' > /proc/lttng-logger
5155 sleep 2
5156 df --human-readable --print-type / > /proc/lttng-logger
5157 ----
5158
5159 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5160 <<enabling-disabling-events,create an event rule>> matching the
5161 `lttng_logger` Linux kernel tracepoint, and
5162 <<basic-tracing-session-control,start tracing>>:
5163
5164 [role="term"]
5165 ----
5166 $ lttng create
5167 $ lttng enable-event --kernel lttng_logger
5168 $ lttng start
5169 ----
5170
5171 Run the Bash script:
5172
5173 [role="term"]
5174 ----
5175 $ bash test.bash
5176 ----
5177
5178 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5179 events:
5180
5181 [role="term"]
5182 ----
5183 $ lttng stop
5184 $ lttng view
5185 ----
5186 ====
5187
5188
5189 [[instrumenting-linux-kernel]]
5190 === LTTng kernel tracepoints
5191
5192 NOTE: This section shows how to _add_ instrumentation points to the
5193 Linux kernel. The kernel's subsystems are already thoroughly
5194 instrumented at strategic places for LTTng when you
5195 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5196 package.
5197
5198 ////
5199 There are two methods to instrument the Linux kernel:
5200
5201 . <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
5202 tracepoint which uses the `TRACE_EVENT()` API.
5203 +
Choose this if you want to instrument a Linux kernel tree with an
5205 instrumentation point compatible with ftrace, perf, and SystemTap.
5206
5207 . Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
5208 instrument an out-of-tree kernel module.
5209 +
5210 Choose this if you don't need ftrace, perf, or SystemTap support.
5211 ////
5212
5213
5214 [[linux-add-lttng-layer]]
5215 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5216
5217 This section shows how to add an LTTng layer to existing ftrace
5218 instrumentation using the `TRACE_EVENT()` API.
5219
5220 This section does not document the `TRACE_EVENT()` macro. You can
5221 read the following articles to learn more about this API:
5222
5223 * http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
5224 * http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
5225 * http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
5226
5227 The following procedure assumes that your ftrace tracepoints are
5228 correctly defined in their own header and that they are created in
5229 one source file using the `CREATE_TRACE_POINTS` definition.
5230
5231 To add an LTTng layer over an existing ftrace tracepoint:
5232
5233 . Make sure the following kernel configuration options are
5234 enabled:
5235 +
5236 --
5237 * `CONFIG_MODULES`
5238 * `CONFIG_KALLSYMS`
5239 * `CONFIG_HIGH_RES_TIMERS`
5240 * `CONFIG_TRACEPOINTS`
5241 --
5242
5243 . Build the Linux source tree with your custom ftrace tracepoints.
5244 . Boot the resulting Linux image on your target system.
5245 +
5246 Confirm that the tracepoints exist by looking for their names in the
5247 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5248 is your subsystem's name.
5249
5250 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5251 +
5252 --
5253 [role="term"]
5254 ----
5255 $ cd $(mktemp -d) &&
5256 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
5257 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
5258 cd lttng-modules-2.9.*
5259 ----
5260 --
5261
5262 . In dir:{instrumentation/events/lttng-module}, relative to the root
5263 of the LTTng-modules source tree, create a header file named
5264 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5265 LTTng-modules tracepoint definitions using the LTTng-modules
5266 macros in it.
5267 +
5268 Start with this template:
5269 +
5270 --
5271 [source,c]
5272 .path:{instrumentation/events/lttng-module/my_subsys.h}
5273 ----
5274 #undef TRACE_SYSTEM
5275 #define TRACE_SYSTEM my_subsys
5276
5277 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5278 #define _LTTNG_MY_SUBSYS_H
5279
5280 #include "../../../probes/lttng-tracepoint-event.h"
5281 #include <linux/tracepoint.h>
5282
5283 LTTNG_TRACEPOINT_EVENT(
5284 /*
5285 * Format is identical to TRACE_EVENT()'s version for the three
5286 * following macro parameters:
5287 */
5288 my_subsys_my_event,
5289 TP_PROTO(int my_int, const char *my_string),
5290 TP_ARGS(my_int, my_string),
5291
5292 /* LTTng-modules specific macros */
5293 TP_FIELDS(
5294 ctf_integer(int, my_int_field, my_int)
        ctf_string(my_string_field, my_string)
5296 )
5297 )
5298
5299 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5300
5301 #include "../../../probes/define_trace.h"
5302 ----
5303 --
5304 +
5305 The entries in the `TP_FIELDS()` section are the list of fields for the
5306 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5307 ftrace's `TRACE_EVENT()` macro.
5308 +
5309 See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
5310 complete description of the available `ctf_*()` macros.
5311
5312 . Create the LTTng-modules probe's kernel module C source file,
5313 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5314 subsystem name:
5315 +
5316 --
5317 [source,c]
5318 .path:{probes/lttng-probe-my-subsys.c}
5319 ----
5320 #include <linux/module.h>
5321 #include "../lttng-tracer.h"
5322
5323 /*
5324 * Build-time verification of mismatch between mainline
5325 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5326 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5327 */
5328 #include <trace/events/my_subsys.h>
5329
5330 /* Create LTTng tracepoint probes */
5331 #define LTTNG_PACKAGE_BUILD
5332 #define CREATE_TRACE_POINTS
5333 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5334
5335 #include "../instrumentation/events/lttng-module/my_subsys.h"
5336
5337 MODULE_LICENSE("GPL and additional rights");
5338 MODULE_AUTHOR("Your name <your-email>");
5339 MODULE_DESCRIPTION("LTTng my_subsys probes");
5340 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5341 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5342 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5343 LTTNG_MODULES_EXTRAVERSION);
5344 ----
5345 --
5346
5347 . Edit path:{probes/KBuild} and add your new kernel module object
5348 next to the existing ones:
5349 +
5350 --
5351 [source,make]
5352 .path:{probes/KBuild}
5353 ----
5354 # ...
5355
5356 obj-m += lttng-probe-module.o
5357 obj-m += lttng-probe-power.o
5358
5359 obj-m += lttng-probe-my-subsys.o
5360
5361 # ...
5362 ----
5363 --
5364
5365 . Build and install the LTTng kernel modules:
5366 +
5367 --
5368 [role="term"]
5369 ----
5370 $ make KERNELDIR=/path/to/linux
5371 # make modules_install && depmod -a
5372 ----
5373 --
5374 +
5375 Replace `/path/to/linux` with the path to the Linux source tree where
5376 you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5377
5378 Note that you can also use the
5379 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5380 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5381 C code that need to be executed before the event fields are recorded.
5382
5383 The best way to learn how to use the previous LTTng-modules macros is to
5384 inspect the existing LTTng-modules tracepoint definitions in the
5385 dir:{instrumentation/events/lttng-module} header files. Compare them
5386 with the Linux kernel mainline versions in the
5387 dir:{include/trace/events} directory of the Linux source tree.
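
For comparison, a mainline `TRACE_EVENT()` definition matching the
`my_subsys_my_event` LTTng-modules template above could look like the
following sketch (the field names are illustrative):

[source,c]
.Hypothetical mainline counterpart in path:{include/trace/events/my_subsys.h}.
----
TRACE_EVENT(my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* Declare the event record fields */
    TP_STRUCT__entry(
        __field(int, my_int_field)
        __string(my_string_field, my_string)
    ),

    /* Assign the field values from the tracepoint arguments */
    TP_fast_assign(
        __entry->my_int_field = my_int;
        __assign_str(my_string_field, my_string);
    ),

    TP_printk("my_int_field=%d my_string_field=%s",
              __entry->my_int_field, __get_str(my_string_field))
);
----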
5388
5389
5390 [role="since-2.7"]
5391 [[lttng-tracepoint-event-code]]
5392 ===== Use custom C code to access the data for tracepoint fields
5393
Although we recommend always using the
5395 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5396 the arguments and fields of an LTTng-modules tracepoint when possible,
5397 sometimes you need a more complex process to access the data that the
5398 tracer records as event record fields. In other words, you need local
5399 variables and multiple C{nbsp}statements instead of simple
5400 argument-based expressions that you pass to the
5401 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5402
5403 You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5404 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5405 a block of C{nbsp}code to be executed before LTTng records the fields.
5406 The structure of this macro is:
5407
5408 [source,c]
5409 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5410 ----
5411 LTTNG_TRACEPOINT_EVENT_CODE(
5412 /*
5413 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5414 * version for the following three macro parameters:
5415 */
5416 my_subsys_my_event,
5417 TP_PROTO(int my_int, const char *my_string),
5418 TP_ARGS(my_int, my_string),
5419
5420 /* Declarations of custom local variables */
5421 TP_locvar(
5422 int a = 0;
5423 unsigned long b = 0;
5424 const char *name = "(undefined)";
5425 struct my_struct *my_struct;
5426 ),
5427
5428 /*
5429 * Custom code which uses both tracepoint arguments
5430 * (in TP_ARGS()) and local variables (in TP_locvar()).
5431 *
5432 * Local variables are actually members of a structure pointed
5433 * to by the special variable tp_locvar.
5434 */
5435 TP_code(
5436 if (my_int) {
5437 tp_locvar->a = my_int + 17;
5438 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5439 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5440 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5441 put_my_struct(tp_locvar->my_struct);
5442
5443 if (tp_locvar->b) {
5444 tp_locvar->a = 1;
5445 }
5446 }
5447 ),
5448
5449 /*
5450 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5451 * version for this, except that tp_locvar members can be
5452 * used in the argument expression parameters of
5453 * the ctf_*() macros.
5454 */
5455 TP_FIELDS(
5456 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5457 ctf_integer(int, my_struct_a, tp_locvar->a)
5458 ctf_string(my_string_field, my_string)
5459 ctf_string(my_struct_name, tp_locvar->name)
5460 )
5461 )
5462 ----
5463
5464 IMPORTANT: The C code defined in `TP_code()` must not have any side
5465 effects when executed. In particular, the code must not allocate
5466 memory or get resources without deallocating this memory or putting
5467 those resources afterwards.
5468
5469
5470 [[instrumenting-linux-kernel-tracing]]
5471 ==== Load and unload a custom probe kernel module
5472
5473 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5474 kernel module>> in the kernel before it can emit LTTng events.
5475
5476 To load the default probe kernel modules and a custom probe kernel
5477 module:
5478
5479 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5480 probe modules to load when starting a root <<lttng-sessiond,session
5481 daemon>>:
5482 +
5483 --
5484 .Load the `my_subsys`, `usb`, and the default probe modules.
5485 ====
5486 [role="term"]
5487 ----
5488 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
5489 ----
5490 ====
5491 --
5492 +
5493 You only need to pass the subsystem name, not the whole kernel module
5494 name.
5495
5496 To load _only_ a given custom probe kernel module:
5497
5498 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5499 modules to load when starting a root session daemon:
5500 +
5501 --
5502 .Load only the `my_subsys` and `usb` probe modules.
5503 ====
5504 [role="term"]
5505 ----
5506 # lttng-sessiond --kmod-probes=my_subsys,usb
5507 ----
5508 ====
5509 --
5510
5511 To confirm that a probe module is loaded:
5512
5513 * Use man:lsmod(8):
5514 +
5515 --
5516 [role="term"]
5517 ----
5518 $ lsmod | grep lttng_probe_usb
5519 ----
5520 --
5521
5522 To unload the loaded probe modules:
5523
5524 * Kill the session daemon with `SIGTERM`:
5525 +
5526 --
5527 [role="term"]
5528 ----
5529 # pkill lttng-sessiond
5530 ----
5531 --
5532 +
5533 You can also use man:modprobe(8)'s `--remove` option if the session
5534 daemon terminates abnormally.
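
For example, a sketch of manually unloading the custom probe module
after an abnormal session daemon exit, assuming it is named
`lttng-probe-my-subsys` as in the build steps above:

[role="term"]
----
# modprobe --remove lttng-probe-my-subsys
----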
5535
5536
5537 [[controlling-tracing]]
5538 == Tracing control
5539
5540 Once an application or a Linux kernel is
5541 <<instrumenting,instrumented>> for LTTng tracing,
5542 you can _trace_ it.
5543
5544 This section is divided into topics on how to use the various
5545 <<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5546 command-line tool>>, to _control_ the LTTng daemons and tracers.
5547
5548 NOTE: In the following subsections, we refer to an man:lttng(1) command
5549 using its man page name. For example, instead of _Run the `create`
5550 command to..._, we use _Run the man:lttng-create(1) command to..._.
5551
5552
5553 [[start-sessiond]]
5554 === Start a session daemon
5555
5556 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5557 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5558 command-line tool.
5559
5560 You will see the following error when you run a command while no session
5561 daemon is running:
5562
5563 ----
5564 Error: No session daemon is available
5565 ----
5566
5567 The only command that automatically runs a session daemon is
5568 man:lttng-create(1), which you use to
5569 <<creating-destroying-tracing-sessions,create a tracing session>>. While
5570 this is usually the first operation that you perform, sometimes it's
5571 not. Some examples are:
5572
5573 * <<list-instrumentation-points,List the available instrumentation points>>.
5574 * <<saving-loading-tracing-session,Load a tracing session configuration>>.
5575
5576 [[tracing-group]] Each Unix user must have its own running session
5577 daemon to trace user applications. The session daemon that the root user
5578 starts is the only one allowed to control the LTTng kernel tracer. Users
5579 that are part of the _tracing group_ can control the root session
5580 daemon. The default tracing group name is `tracing`; you can set it to
5581 something else with the opt:lttng-sessiond(8):--group option when you
5582 start the root session daemon.
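
For example, a sketch of starting the root session daemon with a
non-default tracing group (the group name `mytracers` is hypothetical):

[role="term"]
----
# lttng-sessiond --group=mytracers
----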
5583
5584 To start a user session daemon:
5585
5586 * Run man:lttng-sessiond(8):
5587 +
5588 --
5589 [role="term"]
5590 ----
5591 $ lttng-sessiond --daemonize
5592 ----
5593 --
5594
5595 To start the root session daemon:
5596
5597 * Run man:lttng-sessiond(8) as the root user:
5598 +
5599 --
5600 [role="term"]
5601 ----
5602 # lttng-sessiond --daemonize
5603 ----
5604 --
5605
5606 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5607 start the session daemon in foreground.
5608
5609 To stop a session daemon, use man:kill(1) on its process ID (standard
5610 `TERM` signal).
5611
5612 Note that some Linux distributions could manage the LTTng session daemon
5613 as a service. In this case, you should use the service manager to
5614 start, restart, and stop session daemons.
5615
5616
5617 [[creating-destroying-tracing-sessions]]
5618 === Create and destroy a tracing session
5619
5620 Almost all the LTTng control operations happen in the scope of
5621 a <<tracing-session,tracing session>>, which is the dialogue between the
5622 <<lttng-sessiond,session daemon>> and you.
5623
5624 To create a tracing session with a generated name:
5625
5626 * Use the man:lttng-create(1) command:
5627 +
5628 --
5629 [role="term"]
5630 ----
5631 $ lttng create
5632 ----
5633 --
5634
5635 The created tracing session's name is `auto` followed by the
5636 creation date.
5637
5638 To create a tracing session with a specific name:
5639
5640 * Use the optional argument of the man:lttng-create(1) command:
5641 +
5642 --
5643 [role="term"]
5644 ----
5645 $ lttng create my-session
5646 ----
5647 --
5648 +
5649 Replace `my-session` with the specific tracing session name.
5650
5651 LTTng appends the creation date to the created tracing session's name.
5652
5653 LTTng writes the traces of a tracing session in
5654 +$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
5655 name of the tracing session. Note that the env:LTTNG_HOME environment
5656 variable defaults to `$HOME` if not set.
5657
5658 To output LTTng traces to a non-default location:
5659
5660 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5661 +
5662 --
5663 [role="term"]
5664 ----
5665 $ lttng create my-session --output=/tmp/some-directory
5666 ----
5667 --
5668
5669 You may create as many tracing sessions as you wish.
5670
5671 To list all the existing tracing sessions for your Unix user:
5672
5673 * Use the man:lttng-list(1) command:
5674 +
5675 --
5676 [role="term"]
5677 ----
5678 $ lttng list
5679 ----
5680 --
5681
5682 When you create a tracing session, it is set as the _current tracing
5683 session_. The following man:lttng(1) commands operate on the current
5684 tracing session when you don't specify one:
5685
5686 [role="list-3-cols"]
5687 * `add-context`
5688 * `destroy`
5689 * `disable-channel`
5690 * `disable-event`
5691 * `enable-channel`
5692 * `enable-event`
5693 * `load`
5694 * `regenerate`
5695 * `save`
5696 * `snapshot`
5697 * `start`
5698 * `stop`
5699 * `track`
5700 * `untrack`
5701 * `view`
5702
5703 To change the current tracing session:
5704
5705 * Use the man:lttng-set-session(1) command:
5706 +
5707 --
5708 [role="term"]
5709 ----
5710 $ lttng set-session new-session
5711 ----
5712 --
5713 +
5714 Replace `new-session` by the name of the new current tracing session.
5715
5716 When you are done tracing in a given tracing session, you can destroy
5717 it. This operation frees the resources taken by the tracing session
5718 to destroy; it does not destroy the trace data that LTTng wrote for
5719 this tracing session.
5720
5721 To destroy the current tracing session:
5722
5723 * Use the man:lttng-destroy(1) command:
5724 +
5725 --
5726 [role="term"]
5727 ----
5728 $ lttng destroy
5729 ----
5730 --
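
To destroy a tracing session other than the current one, pass its name
as the optional argument of man:lttng-destroy(1). For example, assuming
a tracing session named `my-session` exists:

[role="term"]
----
$ lttng destroy my-session
----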
5731
5732
5733 [[list-instrumentation-points]]
5734 === List the available instrumentation points
5735
5736 The <<lttng-sessiond,session daemon>> can query the running instrumented
5737 user applications and the Linux kernel to get a list of available
5738 instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5739 they are tracepoints and system calls. For the user space tracing
5740 domain, they are tracepoints. For the other tracing domains, they are
5741 logger names.
5742
5743 To list the available instrumentation points:
5744
5745 * Use the man:lttng-list(1) command with the requested tracing domain's
5746 option amongst:
5747 +
5748 --
5749 * opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5750 must be a root user, or it must be a member of the
5751 <<tracing-group,tracing group>>).
5752 * opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5753 kernel system calls (your Unix user must be a root user, or it must be
5754 a member of the tracing group).
5755 * opt:lttng-list(1):--userspace: user space tracepoints.
5756 * opt:lttng-list(1):--jul: `java.util.logging` loggers.
5757 * opt:lttng-list(1):--log4j: Apache log4j loggers.
5758 * opt:lttng-list(1):--python: Python loggers.
5759 --
5760
5761 .List the available user space tracepoints.
5762 ====
5763 [role="term"]
5764 ----
5765 $ lttng list --userspace
5766 ----
5767 ====
5768
5769 .List the available Linux kernel system call tracepoints.
5770 ====
5771 [role="term"]
5772 ----
5773 $ lttng list --kernel --syscall
5774 ----
5775 ====
5776
5777
5778 [[enabling-disabling-events]]
5779 === Create and enable an event rule
5780
5781 Once you <<creating-destroying-tracing-sessions,create a tracing
5782 session>>, you can create <<event,event rules>> with the
5783 man:lttng-enable-event(1) command.
5784
5785 You specify each condition with a command-line option. The available
5786 condition options are shown in the following table.
5787
5788 [role="growable",cols="asciidoc,asciidoc,default"]
5789 .Condition command-line options for the man:lttng-enable-event(1) command.
5790 |====
5791 |Option |Description |Applicable tracing domains
5792
5793 |
5794 One of:
5795
5796 . `--syscall`
5797 . +--probe=__ADDR__+
5798 . +--function=__ADDR__+
5799
5800 |
5801 Instead of using the default _tracepoint_ instrumentation type, use:
5802
5803 . A Linux system call.
5804 . A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5805 . The entry and return points of a Linux function (symbol or address).
5806
5807 |Linux kernel.
5808
5809 |First positional argument.
5810
5811 |
5812 Tracepoint or system call name. In the case of a Linux KProbe or
5813 function, this is a custom name given to the event rule. With the
5814 JUL, log4j, and Python domains, this is a logger name.
5815
5816 With a tracepoint, logger, or system call name, the last character
5817 can be `*` to match anything that remains.
5818
5819 |All.
5820
5821 |
5822 One of:
5823
5824 . +--loglevel=__LEVEL__+
5825 . +--loglevel-only=__LEVEL__+
5826
5827 |
5828 . Match only tracepoints or log statements with a logging level at
5829 least as severe as +__LEVEL__+.
5830 . Match only tracepoints or log statements with a logging level
5831 equal to +__LEVEL__+.
5832
5833 See man:lttng-enable-event(1) for the list of available logging level
5834 names.
5835
5836 |User space, JUL, log4j, and Python.
5837
5838 |+--exclude=__EXCLUSIONS__+
5839
5840 |
5841 When you use a `*` character at the end of the tracepoint or logger
5842 name (first positional argument), exclude the specific names in the
5843 comma-delimited list +__EXCLUSIONS__+.
5844
5845 |
5846 User space, JUL, log4j, and Python.
5847
5848 |+--filter=__EXPR__+
5849
5850 |
5851 Match only events which satisfy the expression +__EXPR__+.
5852
5853 See man:lttng-enable-event(1) to learn more about the syntax of a
5854 filter expression.
5855
5856 |All.
5857
5858 |====
5859
5860 You attach an event rule to a <<channel,channel>> on creation. If you do
5861 not specify the channel with the opt:lttng-enable-event(1):--channel
5862 option, and if the event rule to create is the first in its
5863 <<domain,tracing domain>> for a given tracing session, then LTTng
5864 creates a _default channel_ for you. This default channel is reused in
5865 subsequent invocations of the man:lttng-enable-event(1) command for the
5866 same tracing domain.
5867
5868 An event rule is always enabled at creation time.
5869
5870 The following examples show how you can combine the previous
5871 command-line options to create simple to more complex event rules.
5872
5873 .Create an event rule targeting a Linux kernel tracepoint (default channel).
5874 ====
5875 [role="term"]
5876 ----
5877 $ lttng enable-event --kernel sched_switch
5878 ----
5879 ====
5880
5881 .Create an event rule matching four Linux kernel system calls (default channel).
5882 ====
5883 [role="term"]
5884 ----
5885 $ lttng enable-event --kernel --syscall open,write,read,close
5886 ----
5887 ====
5888
5889 .Create event rules matching tracepoints with filter expressions (default channel).
5890 ====
5891 [role="term"]
5892 ----
5893 $ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5894 ----
5895
5896 [role="term"]
5897 ----
5898 $ lttng enable-event --kernel --all \
5899 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5900 ----
5901
5902 [role="term"]
5903 ----
5904 $ lttng enable-event --jul my_logger \
5905 --filter='$app.retriever:cur_msg_id > 3'
5906 ----
5907
5908 IMPORTANT: Make sure to always quote the filter string when you
5909 use man:lttng(1) from a shell.
5910 ====
5911
5912 .Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5913 ====
5914 [role="term"]
5915 ----
5916 $ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5917 ----
5918
5919 IMPORTANT: Make sure to always quote the wildcard character when you
5920 use man:lttng(1) from a shell.
5921 ====
5922
5923 .Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5924 ====
5925 [role="term"]
5926 ----
5927 $ lttng enable-event --python my-app.'*' \
5928 --exclude='my-app.module,my-app.hello'
5929 ----
5930 ====
5931
5932 .Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5933 ====
5934 [role="term"]
5935 ----
5936 $ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5937 ----
5938 ====
5939
5940 .Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5941 ====
5942 [role="term"]
5943 ----
5944 $ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5945 ----
5946 ====
5947
5948 The event rules of a given channel form a whitelist: as soon as an
5949 emitted event passes one of them, LTTng can record the event. For
5950 example, an event named `my_app:my_tracepoint` emitted from a user space
5951 tracepoint with a `TRACE_ERROR` log level passes both of the following
5952 rules:
5953
5954 [role="term"]
5955 ----
5956 $ lttng enable-event --userspace my_app:my_tracepoint
5957 $ lttng enable-event --userspace my_app:my_tracepoint \
5958 --loglevel=TRACE_INFO
5959 ----
5960
5961 The second event rule is redundant: the first one includes
5962 the second one.
5963
5964
5965 [[disable-event-rule]]
5966 === Disable an event rule
5967
5968 To disable an event rule that you <<enabling-disabling-events,created>>
5969 previously, use the man:lttng-disable-event(1) command. This command
5970 disables _all_ the event rules (of a given tracing domain and channel)
5971 which match an instrumentation point. The other conditions are not
5972 supported as of LTTng{nbsp}{revision}.
5973
5974 The LTTng tracer does not record an emitted event which passes
5975 a _disabled_ event rule.
5976
5977 .Disable an event rule matching a Python logger (default channel).
5978 ====
5979 [role="term"]
5980 ----
5981 $ lttng disable-event --python my-logger
5982 ----
5983 ====
5984
5985 .Disable an event rule matching all `java.util.logging` loggers (default channel).
5986 ====
5987 [role="term"]
5988 ----
5989 $ lttng disable-event --jul '*'
5990 ----
5991 ====
5992
5993 .Disable _all_ the event rules of the default channel.
5994 ====
5995 Unlike the opt:lttng-enable-event(1):--all option of
5996 man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
5997 option is not the equivalent of the event name `*` (wildcard): it
5998 disables _all_ the event rules of a given channel.
5999
6000 [role="term"]
6001 ----
6002 $ lttng disable-event --jul --all-events
6003 ----
6004 ====
6005
6006 NOTE: You cannot delete an event rule once you create it.
6007
6008
6009 [[status]]
6010 === Get the status of a tracing session
6011
6012 To get the status of the current tracing session, that is, its
6013 parameters, its channels, event rules, and their attributes:
6014
6015 * Use the man:lttng-status(1) command:
6016 +
6017 --
6018 [role="term"]
6019 ----
6020 $ lttng status
6021 ----
6022 --
6024
6025 To get the status of any tracing session:
6026
6027 * Use the man:lttng-list(1) command with the tracing session's name:
6028 +
6029 --
6030 [role="term"]
6031 ----
6032 $ lttng list my-session
6033 ----
6034 --
6035 +
6036 Replace `my-session` with the desired tracing session's name.
6037
6038
6039 [[basic-tracing-session-control]]
6040 === Start and stop a tracing session
6041
6042 Once you <<creating-destroying-tracing-sessions,create a tracing
6043 session>> and
6044 <<enabling-disabling-events,create one or more event rules>>,
6045 you can start and stop the tracers for this tracing session.
6046
6047 To start tracing in the current tracing session:
6048
6049 * Use the man:lttng-start(1) command:
6050 +
6051 --
6052 [role="term"]
6053 ----
6054 $ lttng start
6055 ----
6056 --
6057
6058 LTTng is very flexible: you can launch user applications before
6059 or after you start the tracers. The tracers only record the events
6060 if they pass enabled event rules and if they occur while the tracers are
6061 started.
6062
6063 To stop tracing in the current tracing session:
6064
6065 * Use the man:lttng-stop(1) command:
6066 +
6067 --
6068 [role="term"]
6069 ----
6070 $ lttng stop
6071 ----
6072 --
6073 +
6074 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6075 records>> or lost sub-buffers since the last time you ran
6076 man:lttng-start(1), warnings are printed when you run the
6077 man:lttng-stop(1) command.
6078
6079
6080 [[enabling-disabling-channels]]
6081 === Create a channel
6082
6083 Once you create a tracing session, you can create a <<channel,channel>>
6084 with the man:lttng-enable-channel(1) command.
6085
6086 Note that LTTng automatically creates a default channel when, for a
6087 given <<domain,tracing domain>>, no channels exist and you
6088 <<enabling-disabling-events,create>> the first event rule. This default
6089 channel is named `channel0` and its attributes are set to reasonable
6090 values. Therefore, you only need to create a channel when you need
6091 non-default attributes.
6092
6093 You specify each non-default channel attribute with a command-line
6094 option when you use the man:lttng-enable-channel(1) command. The
6095 available command-line options are:
6096
6097 [role="growable",cols="asciidoc,asciidoc"]
6098 .Command-line options for the man:lttng-enable-channel(1) command.
6099 |====
6100 |Option |Description
6101
6102 |`--overwrite`
6103
6104 |
6105 Use the _overwrite_
6106 <<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
6107 the default _discard_ mode.
6108
6109 |`--buffers-pid` (user space tracing domain only)
6110
6111 |
6112 Use the per-process <<channel-buffering-schemes,buffering scheme>>
6113 instead of the default per-user buffering scheme.
6114
6115 |+--subbuf-size=__SIZE__+
6116
6117 |
6118 Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
6119 either for each Unix user (default), or for each instrumented process.
6120
6121 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6122
6123 |+--num-subbuf=__COUNT__+
6124
6125 |
6126 Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
6127 for each Unix user (default), or for each instrumented process.
6128
6129 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6130
6131 |+--tracefile-size=__SIZE__+
6132
6133 |
6134 Set the maximum size of each trace file that this channel writes within
6135 a stream to +__SIZE__+ bytes instead of no maximum.
6136
6137 See <<tracefile-rotation,Trace file count and size>>.
6138
6139 |+--tracefile-count=__COUNT__+
6140
6141 |
6142 Limit the number of trace files that this channel creates to
6143 +__COUNT__+ files instead of no limit.
6144
6145 See <<tracefile-rotation,Trace file count and size>>.
6146
6147 |+--switch-timer=__PERIODUS__+
6148
6149 |
6150 Set the <<channel-switch-timer,switch timer period>>
6151 to +__PERIODUS__+{nbsp}µs.
6152
6153 |+--read-timer=__PERIODUS__+
6154
6155 |
6156 Set the <<channel-read-timer,read timer period>>
6157 to +__PERIODUS__+{nbsp}µs.
6158
6159 |+--output=__TYPE__+ (Linux kernel tracing domain only)
6160
6161 |
6162 Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
6163
6164 |====
6165
6166 You can only create a channel in the Linux kernel and user space
6167 <<domain,tracing domains>>: other tracing domains have their own channel
6168 created on the fly when <<enabling-disabling-events,creating event
6169 rules>>.
6170
6171 [IMPORTANT]
6172 ====
6173 Because of a current LTTng limitation, you must create all channels
6174 _before_ you <<basic-tracing-session-control,start tracing>> in a given
6175 tracing session, that is, before the first time you run
6176 man:lttng-start(1).
6177
6178 Since LTTng automatically creates a default channel when you use the
6179 man:lttng-enable-event(1) command with a specific tracing domain, you
6180 cannot, for example, create a Linux kernel event rule, start tracing,
6181 and then create a user space event rule, because no user space channel
6182 exists yet and it's too late to create one.
6183
6184 For this reason, make sure to configure your channels properly
6185 before starting the tracers for the first time!
6186 ====
6187
6188 The following examples show how you can combine the previous
6189 command-line options to create simple to more complex channels.
6190
6191 .Create a Linux kernel channel with default attributes.
6192 ====
6193 [role="term"]
6194 ----
6195 $ lttng enable-channel --kernel my-channel
6196 ----
6197 ====
6198
6199 .Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
6200 ====
6201 [role="term"]
6202 ----
6203 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6204 --buffers-pid my-channel
6205 ----
6206 ====
6207
6208 .Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
6209 ====
6210 [role="term"]
6211 ----
6212 $ lttng enable-channel --kernel --tracefile-count=8 \
6213 --tracefile-size=4194304 my-channel
6214 ----
6215 ====
6216
6217 .Create a user space channel in overwrite (or _flight recorder_) mode.
6218 ====
6219 [role="term"]
6220 ----
6221 $ lttng enable-channel --userspace --overwrite my-channel
6222 ----
6223 ====
6224
6225 You can <<enabling-disabling-events,create>> the same event rule in
6226 two different channels:
6227
6228 [role="term"]
6229 ----
6230 $ lttng enable-event --userspace --channel=my-channel app:tp
6231 $ lttng enable-event --userspace --channel=other-channel app:tp
6232 ----
6233
6234 If both channels are enabled, when a tracepoint named `app:tp` is
6235 reached, LTTng records two events, one for each channel.
6236
6237
6238 [[disable-channel]]
6239 === Disable a channel
6240
6241 To disable a specific channel that you <<enabling-disabling-channels,created>>
6242 previously, use the man:lttng-disable-channel(1) command.
6243
6244 .Disable a specific Linux kernel channel.
6245 ====
6246 [role="term"]
6247 ----
6248 $ lttng disable-channel --kernel my-channel
6249 ----
6250 ====
6251
6252 The state of a channel takes precedence over the individual states of event rules
6253 attached to it: event rules which belong to a disabled channel, even if
6254 they are enabled, are also considered disabled.
6255
6256
6257 [[adding-context]]
6258 === Add context fields to a channel
6259
6260 Event record fields in trace files provide important information about
6261 events that occurred previously, but sometimes some external context may
6262 help you solve a problem faster. Examples of context fields are:
6263
6264 * The **process ID**, **thread ID**, **process name**, and
6265 **process priority** of the thread in which the event occurs.
6266 * The **hostname** of the system on which the event occurs.
6267 * The current values of many possible **performance counters** using
6268 perf, for example:
6269 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6270 ** Cache misses.
6271 ** Branch instructions, misses, and loads.
6272 ** CPU faults.
6273 * Any context defined at the application level (supported for the
6274 JUL and log4j <<domain,tracing domains>>).
6275
6276 To get the full list of available context fields, see
6277 `lttng add-context --list`. Some context fields are reserved for a
6278 specific <<domain,tracing domain>> (Linux kernel or user space).
6279
6280 You add context fields to <<channel,channels>>. All the events
6281 that a channel with added context fields records contain those fields.
6282
6283 To add context fields to one or all the channels of a given tracing
6284 session:
6285
6286 * Use the man:lttng-add-context(1) command.
6287
6288 .Add context fields to all the channels of the current tracing session.
6289 ====
6290 The following command line adds the virtual process identifier and
6291 the per-thread CPU cycles count fields to all the user space channels
6292 of the current tracing session.
6293
6294 [role="term"]
6295 ----
6296 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6297 ----
6298 ====
6299
6300 .Add performance counter context fields by raw ID.
6301 ====
6302 See man:lttng-add-context(1) for the exact format of the context field
6303 type, which is partly compatible with the format used in
6304 man:perf-record(1).
6305
6306 [role="term"]
6307 ----
6308 $ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6309 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6310 ----
6311 ====
6312
6313 .Add a context field to a specific channel.
6314 ====
6315 The following command line adds the thread identifier context field
6316 to the Linux kernel channel named `my-channel` in the current
6317 tracing session.
6318
6319 [role="term"]
6320 ----
6321 $ lttng add-context --kernel --channel=my-channel --type=tid
6322 ----
6323 ====
6324
6325 .Add an application-specific context field to a specific channel.
6326 ====
6327 The following command line adds the `cur_msg_id` context field of the
6328 `retriever` context retriever for all the instrumented
6329 <<java-application,Java applications>> recording <<event,event records>>
6330 in the channel named `my-channel`:
6331
6332 [role="term"]
6333 ----
6334 $ lttng add-context --jul --channel=my-channel \
6335 --type='$app.retriever:cur_msg_id'
6336 ----
6337
6338 IMPORTANT: Make sure to always quote the `$` character when you
6339 use man:lttng-add-context(1) from a shell.
6340 ====
6341
6342 NOTE: You cannot remove context fields from a channel once you add them.
6343
6344
6345 [role="since-2.7"]
6346 [[pid-tracking]]
6347 === Track process IDs
6348
6349 It's often useful to allow only specific process IDs (PIDs) to emit
6350 events. For example, you may wish to record all the system calls made by
6351 a given process (à la http://linux.die.net/man/1/strace[strace]).
6352
6353 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6354 purpose. Both commands operate on a whitelist of process IDs. You _add_
6355 entries to this whitelist with the man:lttng-track(1) command and remove
6356 entries with the man:lttng-untrack(1) command. Any process which has one
6357 of the PIDs in the whitelist is allowed to emit LTTng events which pass
6358 an enabled <<event,event rule>>.
6359
6360 NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6361 process with a given tracked ID exit and another process be given this
6362 ID, then the latter would also be allowed to emit events.
6363
6364 .Track and untrack process IDs.
6365 ====
6366 For the sake of the following example, assume the target system has 16
6367 possible PIDs.
6368
6369 When you
6370 <<creating-destroying-tracing-sessions,create a tracing session>>,
6371 the whitelist contains all the possible PIDs:
6372
6373 [role="img-100"]
6374 .All PIDs are tracked.
6375 image::track-all.png[]
6376
6377 When the whitelist is full and you use the man:lttng-track(1) command to
6378 specify some PIDs to track, LTTng first clears the whitelist, then it
6379 tracks the specific PIDs. After:
6380
6381 [role="term"]
6382 ----
6383 $ lttng track --pid=3,4,7,10,13
6384 ----
6385
6386 the whitelist is:
6387
6388 [role="img-100"]
6389 .PIDs 3, 4, 7, 10, and 13 are tracked.
6390 image::track-3-4-7-10-13.png[]
6391
6392 You can add more PIDs to the whitelist afterwards:
6393
6394 [role="term"]
6395 ----
6396 $ lttng track --pid=1,15,16
6397 ----
6398
6399 The result is:
6400
6401 [role="img-100"]
6402 .PIDs 1, 15, and 16 are added to the whitelist.
6403 image::track-1-3-4-7-10-13-15-16.png[]
6404
6405 The man:lttng-untrack(1) command removes entries from the PID tracker's
6406 whitelist. Given the previous example, the following command:
6407
6408 [role="term"]
6409 ----
6410 $ lttng untrack --pid=3,7,10,13
6411 ----
6412
6413 leads to this whitelist:
6414
6415 [role="img-100"]
6416 .PIDs 3, 7, 10, and 13 are removed from the whitelist.
6417 image::track-1-4-15-16.png[]
6418
6419 LTTng can track all possible PIDs again using the
6420 opt:lttng-track(1):--all option:
6421
6422 [role="term"]
6423 ----
6424 $ lttng track --pid --all
6425 ----
6426
6427 The result is, again:
6428
6429 [role="img-100"]
6430 .All PIDs are tracked.
6431 image::track-all.png[]
6432 ====
6433
6434 .Track only specific PIDs.
6435 ====
6436 A very typical use case with PID tracking is to start with an empty
6437 whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6438 then add PIDs manually while tracers are active. You can accomplish this
6439 by using the opt:lttng-untrack(1):--all option of the
6440 man:lttng-untrack(1) command to clear the whitelist after you
6441 <<creating-destroying-tracing-sessions,create a tracing session>>:
6442
6443 [role="term"]
6444 ----
6445 $ lttng untrack --pid --all
6446 ----
6447
6448 gives:
6449
6450 [role="img-100"]
6451 .No PIDs are tracked.
6452 image::untrack-all.png[]
6453
6454 If you trace with this whitelist configuration, the tracer records no
6455 events for this <<domain,tracing domain>> because no processes are
6456 tracked. You can use the man:lttng-track(1) command as usual to track
6457 specific PIDs, for example:
6458
6459 [role="term"]
6460 ----
6461 $ lttng track --pid=6,11
6462 ----
6463
6464 Result:
6465
6466 [role="img-100"]
6467 .PIDs 6 and 11 are tracked.
6468 image::track-6-11.png[]
6469 ====
6470
6471
6472 [role="since-2.5"]
6473 [[saving-loading-tracing-session]]
6474 === Save and load tracing session configurations
6475
6476 Configuring a <<tracing-session,tracing session>> can be long. Some of
6477 the tasks involved are:
6478
6479 * <<enabling-disabling-channels,Create channels>> with
6480 specific attributes.
6481 * <<adding-context,Add context fields>> to specific channels.
6482 * <<enabling-disabling-events,Create event rules>> with specific log
6483 level and filter conditions.
6484
6485 If you use LTTng to solve real world problems, chances are you have to
6486 record events using the same tracing session setup over and over,
6487 modifying a few variables each time in your instrumented program
6488 or environment. To avoid constant tracing session reconfiguration,
6489 the man:lttng(1) command-line tool can save and load tracing session
6490 configurations to/from XML files.
6491
6492 To save a given tracing session configuration:
6493
6494 * Use the man:lttng-save(1) command:
6495 +
6496 --
6497 [role="term"]
6498 ----
6499 $ lttng save my-session
6500 ----
6501 --
6502 +
6503 Replace `my-session` with the name of the tracing session to save.
6504
6505 LTTng saves tracing session configurations to
6506 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6507 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6508 the opt:lttng-save(1):--output-path option to change this destination
6509 directory.
6510
6511 LTTng saves all configuration parameters, for example:
6512
6513 * The tracing session name.
6514 * The trace data output path.
6515 * The channels with their state and all their attributes.
6516 * The context fields you added to channels.
6517 * The event rules with their state, log level and filter conditions.
6518
6519 To load a tracing session:
6520
6521 * Use the man:lttng-load(1) command:
6522 +
6523 --
6524 [role="term"]
6525 ----
6526 $ lttng load my-session
6527 ----
6528 --
6529 +
6530 Replace `my-session` with the name of the tracing session to load.
6531
6532 When LTTng loads a configuration, it restores your saved tracing session
6533 as if you just configured it manually.
6534
6535 See man:lttng(1) for the complete list of command-line options. You
6536 can also save and load all sessions at a time, and decide in which
6537 directory to output the XML files.
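
For example, a sketch of saving a tracing session configuration to a
custom directory, then loading it back from the same location (the
directory path is hypothetical):

[role="term"]
----
$ lttng save my-session --output-path=/path/to/sessions
$ lttng load my-session --input-path=/path/to/sessions
----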
6538
6539
6540 [[sending-trace-data-over-the-network]]
6541 === Send trace data over the network
6542
6543 LTTng can send the recorded trace data to a remote system over the
6544 network instead of writing it to the local file system.
6545
6546 To send the trace data over the network:
6547
6548 . On the _remote_ system (which can also be the target system),
6549 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6550 +
6551 --
6552 [role="term"]
6553 ----
6554 $ lttng-relayd
6555 ----
6556 --
6557
6558 . On the _target_ system, create a tracing session configured to
6559 send trace data over the network:
6560 +
6561 --
6562 [role="term"]
6563 ----
6564 $ lttng create my-session --set-url=net://remote-system
6565 ----
6566 --
6567 +
6568 Replace `remote-system` by the host name or IP address of the
6569 remote system. See man:lttng-create(1) for the exact URL format.
6570
6571 . On the target system, use the man:lttng(1) command-line tool as usual.
6572 When tracing is active, the target's consumer daemon sends sub-buffers
6573 to the relay daemon running on the remote system instead of flushing
6574 them to the local file system. The relay daemon writes the received
6575 packets to the local file system.
6576
6577 The relay daemon writes trace files to
6578 +$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6579 +__hostname__+ is the host name of the target system and +__session__+
6580 is the tracing session name. Note that the env:LTTNG_HOME environment
6581 variable defaults to `$HOME` if not set. Use the
6582 opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6583 trace files to another base directory.
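
For example, a sketch of starting the relay daemon on the remote system
with a custom output base directory (the path is hypothetical):

[role="term"]
----
$ lttng-relayd --output=/path/to/trace-storage
----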
6584
6585
6586 [role="since-2.4"]
6587 [[lttng-live]]
6588 === View events as LTTng emits them (noch:{LTTng} live)
6589
6590 LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6591 daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6592 display events as LTTng emits them on the target system while tracing is
6593 active.
6594
6595 The relay daemon creates a _tee_: it forwards the trace data to both
6596 the local file system and to connected live viewers:
6597
6598 [role="img-90"]
6599 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6600 image::live.png[]
6601
6602 To use LTTng live:
6603
6604 . On the _target system_, create a <<tracing-session,tracing session>>
6605 in _live mode_:
6606 +
6607 --
6608 [role="term"]
6609 ----
6610 $ lttng create my-session --live
6611 ----
6612 --
6613 +
6614 This spawns a local relay daemon.
6615
6616 . Start the live viewer and configure it to connect to the relay
6617 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6618 +
6619 --
6620 [role="term"]
6621 ----
6622 $ babeltrace --input-format=lttng-live \
6623 net://localhost/host/hostname/my-session
6624 ----
6625 --
6626 +
6627 Replace:
6628 +
6629 --
6630 * `hostname` with the host name of the target system.
6631 * `my-session` with the name of the tracing session to view.
6632 --
6633
6634 . Configure the tracing session as usual with the man:lttng(1)
6635 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6636
6637 You can list the available live tracing sessions with Babeltrace:
6638
6639 [role="term"]
6640 ----
6641 $ babeltrace --input-format=lttng-live net://localhost
6642 ----
6643
6644 You can start the relay daemon on another system. In this case, you need
6645 to specify the relay daemon's URL when you create the tracing session
6646 with the opt:lttng-create(1):--set-url option. You also need to replace
6647 `localhost` in the procedure above with the host name of the system on
6648 which the relay daemon is running.
6649
6650 See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6651 command-line options.
6652
6653
6654 [role="since-2.3"]
6655 [[taking-a-snapshot]]
6656 === Take a snapshot of the current sub-buffers of a tracing session
6657
6658 The normal behavior of LTTng is to append full sub-buffers to growing
6659 trace data files. This is ideal to keep a full history of the events
6660 that occurred on the target system, but it can
6661 represent too much data in some situations. For example, you may wish
6662 to trace your application continuously until some critical situation
6663 happens, in which case you only need the latest few recorded
6664 events to perform the desired analysis, not multi-gigabyte trace files.
6665
6666 With the man:lttng-snapshot(1) command, you can take a snapshot of the
6667 current sub-buffers of a given <<tracing-session,tracing session>>.
6668 LTTng can write the snapshot to the local file system or send it over
6669 the network.
6670
6671 To take a snapshot:
6672
6673 . Create a tracing session in _snapshot mode_:
6674 +
6675 --
6676 [role="term"]
6677 ----
6678 $ lttng create my-session --snapshot
6679 ----
6680 --
6681 +
6682 The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6683 <<channel,channels>> created in this mode is automatically set to
6684 _overwrite_ (flight recorder mode).
6685
6686 . Configure the tracing session as usual with the man:lttng(1)
6687 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6688
6689 . **Optional**: When you need to take a snapshot,
6690 <<basic-tracing-session-control,stop tracing>>.
6691 +
6692 You can take a snapshot when the tracers are active, but if you stop
6693 them first, you are sure that the data in the sub-buffers does not
6694 change before you actually take the snapshot.
6695
6696 . Take a snapshot:
6697 +
6698 --
6699 [role="term"]
6700 ----
6701 $ lttng snapshot record --name=my-first-snapshot
6702 ----
6703 --
6704 +
6705 LTTng writes the current sub-buffers of all the current tracing
6706 session's channels to trace files on the local file system. Those trace
6707 files have `my-first-snapshot` in their name.
6708
6709 There is no difference between the format of a normal trace file and the
6710 format of a snapshot: viewers of LTTng traces also support LTTng
6711 snapshots.
6712
6713 By default, LTTng writes snapshot files to the path shown by
6714 `lttng snapshot list-output`. You can change this path or decide to send
6715 snapshots over the network using one of the following methods:
6716
6717 . An output path or URL that you specify when you create the
6718 tracing session.
6719 . A snapshot output path or URL that you add using
6720 `lttng snapshot add-output`.
6721 . An output path or URL that you provide directly to the
6722 `lttng snapshot record` command.
6723
6724 Method 3 overrides method 2, which overrides method 1. When you
6725 specify a URL, a relay daemon must listen on a remote system (see
6726 <<sending-trace-data-over-the-network,Send trace data over the network>>).
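
For example, a sketch of method 2: add a snapshot output over the
network, then record a snapshot to it (assuming a relay daemon listens
on `remote-system`):

[role="term"]
----
$ lttng snapshot add-output net://remote-system
$ lttng snapshot record --name=my-first-snapshot
----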
6727
6728
6729 [role="since-2.6"]
6730 [[mi]]
6731 === Use the machine interface
6732
6733 With any command of the man:lttng(1) command-line tool, you can set the
6734 opt:lttng(1):--mi option to `xml` (before the command name) to get an
6735 XML machine interface output, for example:
6736
6737 [role="term"]
6738 ----
6739 $ lttng --mi=xml enable-event --kernel --syscall open
6740 ----
6741
6742 A schema definition (XSD) is
6743 https://github.com/lttng/lttng-tools/blob/stable-2.9/src/common/mi-lttng-3.0.xsd[available]
6744 to ease the integration with external tools as much as possible.
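
For example, a sketch of saving the machine interface output of a
tracing session listing to a file that an external tool can parse (the
file name is hypothetical):

[role="term"]
----
$ lttng --mi=xml list my-session > my-session-status.xml
----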
6745
6746
6747 [role="since-2.8"]
6748 [[metadata-regenerate]]
6749 === Regenerate the metadata of an LTTng trace
6750
6751 An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6752 data stream files and a metadata file. This metadata file contains,
6753 amongst other things, information about the offset of the clock sources
6754 used to timestamp <<event,event records>> when tracing.
6755
6756 If, once a <<tracing-session,tracing session>> is
6757 <<basic-tracing-session-control,started>>, a major
6758 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6759 happens, the trace's clock offset also needs to be updated. You
6760 can use the `metadata` item of the man:lttng-regenerate(1) command
6761 to do so.
6762
6763 The main use case of this command is to allow a system to boot with
6764 an incorrect wall time and trace it with LTTng before its wall time
6765 is corrected. Once the system is known to be in a state where its
6766 wall time is correct, it can run `lttng regenerate metadata`.
6767
6768 To regenerate the metadata of an LTTng trace:
6769
6770 * Use the `metadata` item of the man:lttng-regenerate(1) command:
6771 +
6772 --
6773 [role="term"]
6774 ----
6775 $ lttng regenerate metadata
6776 ----
6777 --
6778
6779 [IMPORTANT]
6780 ====
6781 `lttng regenerate metadata` has the following limitations:
6782
6783 * You can only use it on a tracing session <<creating-destroying-tracing-sessions,created>>
6784   in non-live mode.
6785 * The user space <<channel,channels>> of the tracing session, if any, must use
6786   <<channel-buffering-schemes,per-user buffering>>.
6787 ====
6788
6789
6790 [role="since-2.9"]
6791 [[regenerate-statedump]]
6792 === Regenerate the state dump of a tracing session
6793
6794 The LTTng kernel and user space tracers generate state dump
6795 <<event,event records>> when the application starts or when you
6796 <<basic-tracing-session-control,start a tracing session>>. An analysis
6797 can use the state dump event records to set an initial state before it
6798 builds the rest of the state from the following event records.
6799 http://tracecompass.org/[Trace Compass] is a notable example of an
6800 application which uses the state dump of an LTTng trace.
6801
6802 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6803 state dump event records are not included in the snapshot because they
6804 were recorded to a sub-buffer that has been consumed or overwritten
6805 already.
6806
6807 You can use the `lttng regenerate statedump` command to emit the state
6808 dump event records again.
6809
6810 To regenerate the state dump of the current tracing session, provided
6811 you created it in snapshot mode, before you take a snapshot:
6812
6813 . Use the `statedump` item of the man:lttng-regenerate(1) command:
6814 +
6815 --
6816 [role="term"]
6817 ----
6818 $ lttng regenerate statedump
6819 ----
6820 --
6821
6822 . <<basic-tracing-session-control,Stop the tracing session>>:
6823 +
6824 --
6825 [role="term"]
6826 ----
6827 $ lttng stop
6828 ----
6829 --
6830
6831 . <<taking-a-snapshot,Take a snapshot>>:
6832 +
6833 --
6834 [role="term"]
6835 ----
6836 $ lttng snapshot record --name=my-snapshot
6837 ----
6838 --
6839
6840 Depending on the event throughput, you should run steps 1 and 2
6841 as close together in time as possible.
6842
6843 NOTE: To record the state dump events, you need to
6844 <<enabling-disabling-events,create event rules>> which enable them.
6845 LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6846 LTTng-modules state dump tracepoints start with `lttng_statedump_`.
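
For example, a sketch of event rules which match all the state dump
events of both tracers (default channels):

[role="term"]
----
$ lttng enable-event --kernel 'lttng_statedump_*'
$ lttng enable-event --userspace 'lttng_ust_statedump:*'
----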
6847
6848
6849 [role="since-2.7"]
6850 [[persistent-memory-file-systems]]
6851 === Record trace data on persistent memory file systems
6852
6853 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6854 (NVRAM) is random-access memory that retains its information when power
6855 is turned off (non-volatile). Systems with such memory can store data
6856 structures in RAM and retrieve them after a reboot, without flushing
6857 to typical _storage_.
6858
6859 Linux supports NVRAM file systems thanks to either
6860 http://pramfs.sourceforge.net/[PRAMFS] or
6861 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6862 (requires Linux 4.1+).
6863
6864 This section does not describe how to operate such file systems;
6865 we assume that you have a working persistent memory file system.
6866
6867 When you create a <<tracing-session,tracing session>>, you can specify
6868 the path of the shared memory holding the sub-buffers. If you specify a
6869 location on an NVRAM file system, then you can retrieve the latest
6870 recorded trace data when the system reboots after a crash.
6871
6872 To record trace data on a persistent memory file system and retrieve the
6873 trace data after a system crash:
6874
6875 . Create a tracing session with a sub-buffer shared memory path located
6876 on an NVRAM file system:
6877 +
6878 --
6879 [role="term"]
6880 ----
6881 $ lttng create my-session --shm-path=/path/to/shm
6882 ----
6883 --
6884
6885 . Configure the tracing session as usual with the man:lttng(1)
6886 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6887
6888 . After a system crash, use the man:lttng-crash(1) command-line tool to
6889 view the trace data recorded on the NVRAM file system:
6890 +
6891 --
6892 [role="term"]
6893 ----
6894 $ lttng-crash /path/to/shm
6895 ----
6896 --
6897
6898 The binary layout of the ring buffer files is not exactly the same as
6899 that of the trace files. This is why you need to use man:lttng-crash(1)
6900 instead of your preferred trace viewer directly.
6901
6902 To convert the ring buffer files to LTTng trace files:
6903
6904 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6905 +
6906 --
6907 [role="term"]
6908 ----
6909 $ lttng-crash --extract=/path/to/trace /path/to/shm
6910 ----
6911 --
6912
6913
6914 [[reference]]
6915 == Reference
6916
6917 [[lttng-modules-ref]]
6918 === noch:{LTTng-modules}
6919
6920
6921 [role="since-2.9"]
6922 [[lttng-tracepoint-enum]]
6923 ==== `LTTNG_TRACEPOINT_ENUM()` usage
6924
6925 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6926
6927 [source,c]
6928 ----
6929 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
6930 ----
6931
6932 Replace:
6933
6934 * `name` with the name of the enumeration (C identifier, unique
6935 amongst all the defined enumerations).
6936 * `entries` with a list of enumeration entries.
6937
6938 The available enumeration entry macros are:
6939
6940 +ctf_enum_value(__name__, __value__)+::
6941 Entry named +__name__+ mapped to the integral value +__value__+.
6942
6943 +ctf_enum_range(__name__, __begin__, __end__)+::
6944 Entry named +__name__+ mapped to the range of integral values between
6945 +__begin__+ (included) and +__end__+ (included).
6946
6947 +ctf_enum_auto(__name__)+::
6948 Entry named +__name__+ mapped to the integral value following the
6949 last mapping's value.
6950 +
6951 The last value of a `ctf_enum_value()` entry is its +__value__+
6952 parameter.
6953 +
6954 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
6955 +
6956 If `ctf_enum_auto()` is the first entry in the list, its integral
6957 value is 0.
6958
6959 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
6960 to use a defined enumeration as a tracepoint field.
6961
6962 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
6963 ====
6964 [source,c]
6965 ----
6966 LTTNG_TRACEPOINT_ENUM(
6967 my_enum,
6968 TP_ENUM_VALUES(
6969 ctf_enum_auto("AUTO: EXPECT 0")
6970 ctf_enum_value("VALUE: 23", 23)
6971 ctf_enum_value("VALUE: 27", 27)
6972 ctf_enum_auto("AUTO: EXPECT 28")
6973 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
6974 ctf_enum_auto("AUTO: EXPECT 304")
6975 )
6976 )
6977 ----
6978 ====
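
A minimal sketch of using this enumeration as a field with the
`ctf_enum()` macro, assuming a tracepoint with an `int state` argument:

[source,c]
----
TP_FIELDS(
    ctf_enum(my_enum, int, my_enum_field, state)
)
----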
6979
6980
6981 [role="since-2.7"]
6982 [[lttng-modules-tp-fields]]
6983 ==== Tracepoint fields macros (for `TP_FIELDS()`)
6984
6985 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
6986 tracepoint fields, which must be listed within `TP_FIELDS()` in
6987 `LTTNG_TRACEPOINT_EVENT()`, are:
6988
6989 [role="func-desc growable",cols="asciidoc,asciidoc"]
6990 .Available macros to define LTTng-modules tracepoint fields
6991 |====
6992 |Macro |Description and parameters
6993
6994 |
6995 +ctf_integer(__t__, __n__, __e__)+
6996
6997 +ctf_integer_nowrite(__t__, __n__, __e__)+
6998
6999 +ctf_user_integer(__t__, __n__, __e__)+
7000
7001 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
7002 |
7003 Standard integer, displayed in base 10.
7004
7005 +__t__+::
7006 Integer C type (`int`, `long`, `size_t`, ...).
7007
7008 +__n__+::
7009 Field name.
7010
7011 +__e__+::
7012 Argument expression.
7013
7014 |
7015 +ctf_integer_hex(__t__, __n__, __e__)+
7016
7017 +ctf_user_integer_hex(__t__, __n__, __e__)+
7018 |
7019 Standard integer, displayed in base 16.
7020
7021 +__t__+::
7022 Integer C type.
7023
7024 +__n__+::
7025 Field name.
7026
7027 +__e__+::
7028 Argument expression.
7029
7030 |+ctf_integer_oct(__t__, __n__, __e__)+
7031 |
7032 Standard integer, displayed in base 8.
7033
7034 +__t__+::
7035 Integer C type.
7036
7037 +__n__+::
7038 Field name.
7039
7040 +__e__+::
7041 Argument expression.
7042
7043 |
7044 +ctf_integer_network(__t__, __n__, __e__)+
7045
7046 +ctf_user_integer_network(__t__, __n__, __e__)+
7047 |
7048 Integer in network byte order (big-endian), displayed in base 10.
7049
7050 +__t__+::
7051 Integer C type.
7052
7053 +__n__+::
7054 Field name.
7055
7056 +__e__+::
7057 Argument expression.
7058
7059 |
7060 +ctf_integer_network_hex(__t__, __n__, __e__)+
7061
7062 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
7063 |
7064 Integer in network byte order, displayed in base 16.
7065
7066 +__t__+::
7067 Integer C type.
7068
7069 +__n__+::
7070 Field name.
7071
7072 +__e__+::
7073 Argument expression.
7074
7075 |
7076 +ctf_enum(__N__, __t__, __n__, __e__)+
7077
7078 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7079
7080 +ctf_user_enum(__N__, __t__, __n__, __e__)+
7081
7082 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7083 |
7084 Enumeration.
7085
7086 +__N__+::
7087 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7088
7089 +__t__+::
7090 Integer C type (`int`, `long`, `size_t`, ...).
7091
7092 +__n__+::
7093 Field name.
7094
7095 +__e__+::
7096 Argument expression.
7097
7098 |
7099 +ctf_string(__n__, __e__)+
7100
7101 +ctf_string_nowrite(__n__, __e__)+
7102
7103 +ctf_user_string(__n__, __e__)+
7104
7105 +ctf_user_string_nowrite(__n__, __e__)+
7106 |
7107 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7108
7109 +__n__+::
7110 Field name.
7111
7112 +__e__+::
7113 Argument expression.
7114
7115 |
7116 +ctf_array(__t__, __n__, __e__, __s__)+
7117
7118 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7119
7120 +ctf_user_array(__t__, __n__, __e__, __s__)+
7121
7122 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7123 |
7124 Statically-sized array of integers.
7125
7126 +__t__+::
7127 Array element C type.
7128
7129 +__n__+::
7130 Field name.
7131
7132 +__e__+::
7133 Argument expression.
7134
7135 +__s__+::
7136 Number of elements.
7137
7138 |
7139 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7140
7141 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7142
7143 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7144
7145 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7146 |
7147 Statically-sized array of bits.
7148
7149 The type of +__e__+ must be an integer type. +__s__+ is the number
7150 of elements of such type in +__e__+, not the number of bits.
7151
7152 +__t__+::
7153 Array element C type.
7154
7155 +__n__+::
7156 Field name.
7157
7158 +__e__+::
7159 Argument expression.
7160
7161 +__s__+::
7162 Number of elements.
7163
7164 |
7165 +ctf_array_text(__t__, __n__, __e__, __s__)+
7166
7167 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7168
7169 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
7170
7171 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7172 |
7173 Statically-sized array, printed as text.
7174
7175 The string does not need to be null-terminated.
7176
7177 +__t__+::
7178 Array element C type (always `char`).
7179
7180 +__n__+::
7181 Field name.
7182
7183 +__e__+::
7184 Argument expression.
7185
7186 +__s__+::
7187 Number of elements.
7188
7189 |
7190 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7191
7192 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7193
7194 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7195
7196 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7197 |
7198 Dynamically-sized array of integers.
7199
7200 The type of +__E__+ must be unsigned.
7201
7202 +__t__+::
7203 Array element C type.
7204
7205 +__n__+::
7206 Field name.
7207
7208 +__e__+::
7209 Argument expression.
7210
7211 +__T__+::
7212 Length expression C type.
7213
7214 +__E__+::
7215 Length expression.
7216
7217 |
7218 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7219
7220 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7221 |
7222 Dynamically-sized array of integers, displayed in base 16.
7223
7224 The type of +__E__+ must be unsigned.
7225
7226 +__t__+::
7227 Array element C type.
7228
7229 +__n__+::
7230 Field name.
7231
7232 +__e__+::
7233 Argument expression.
7234
7235 +__T__+::
7236 Length expression C type.
7237
7238 +__E__+::
7239 Length expression.
7240
7241 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7242 |
7243 Dynamically-sized array of integers in network byte order (big-endian),
7244 displayed in base 10.
7245
7246 The type of +__E__+ must be unsigned.
7247
7248 +__t__+::
7249 Array element C type.
7250
7251 +__n__+::
7252 Field name.
7253
7254 +__e__+::
7255 Argument expression.
7256
7257 +__T__+::
7258 Length expression C type.
7259
7260 +__E__+::
7261 Length expression.
7262
7263 |
7264 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7265
7266 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7267
7268 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7269
7270 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7271 |
7272 Dynamically-sized array of bits.
7273
7274 The type of +__e__+ must be an integer type. +__E__+ is the number
7275 of elements of such type in +__e__+, not the number of bits.
7276
7277 The type of +__E__+ must be unsigned.
7278
7279 +__t__+::
7280 Array element C type.
7281
7282 +__n__+::
7283 Field name.
7284
7285 +__e__+::
7286 Argument expression.
7287
7288 +__T__+::
7289 Length expression C type.
7290
7291 +__E__+::
7292 Length expression.
7293
7294 |
7295 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7296
7297 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7298
7299 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7300
7301 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7302 |
7303 Dynamically-sized array, displayed as text.
7304
7305 The string does not need to be null-terminated.
7306
7307 The type of +__E__+ must be unsigned.
7308
7309 The behavior is undefined if +__e__+ is `NULL`.
7310
7311 +__t__+::
7312 Sequence element C type (always `char`).
7313
7314 +__n__+::
7315 Field name.
7316
7317 +__e__+::
7318 Argument expression.
7319
7320 +__T__+::
7321 Length expression C type.
7322
7323 +__E__+::
7324 Length expression.
7325 |====
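
The following sketch shows how some of those macros could be combined
in a custom LTTng-modules tracepoint definition. The tracepoint,
argument, and field names are hypothetical, and the usual probe header
boilerplate is omitted:

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    my_subsys_rx_buffer,

    TP_PROTO(const char *label, const u32 *samples, size_t count),
    TP_ARGS(label, samples, count),

    TP_FIELDS(
        /* Statically-sized text array: always records 16 bytes of
         * label, which must therefore point to at least 16 readable
         * bytes. */
        ctf_array_text(char, label, label, 16)

        /* Dynamically-sized array of integers: count elements of
         * samples. The length type (size_t) is unsigned, as
         * required. */
        ctf_sequence(u32, samples, samples, size_t, count)
    )
)
----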

Use the `_user` versions when the argument expression, `e`, is
a user space address. In the cases of `ctf_user_integer*()` and
`ctf_user_float*()`, `&e` must be a user space address, thus `e` must
be addressable.
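
For example, a tracepoint which records a buffer passed in from user
space could look like this minimal sketch (hypothetical names, probe
header boilerplate omitted):

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    my_subsys_user_write,

    TP_PROTO(const char __user *ubuf, size_t len),
    TP_ARGS(ubuf, len),

    TP_FIELDS(
        /* ubuf is a user space address: the _user version makes the
         * tracer copy the data safely from user space memory. */
        ctf_user_sequence_text(char, data, ubuf, size_t, len)

        /* len is an ordinary kernel value: a regular macro is
         * enough. */
        ctf_integer(size_t, len, len)
    )
)
----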

The `_nowrite` versions are otherwise identical, but their fields are
not written to the recorded trace. Their primary purpose is to make
some of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.
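
As an illustration, the following sketch (hypothetical names) records a
message, but keeps a verbosity level out of the trace while still
making it available to event filters:

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    my_subsys_status,

    TP_PROTO(const char *msg, int verbosity),
    TP_ARGS(msg, verbosity),

    TP_FIELDS(
        ctf_string(msg, msg)

        /* Usable in a filter expression, but never written to the
         * recorded trace. */
        ctf_integer_nowrite(int, verbosity, verbosity)
    )
)
----

With such a definition, a command like
`lttng enable-event --kernel my_subsys_status --filter 'verbosity > 2'`
only records the events of interest, without paying the cost of storing
the `verbosity` field in the sub-buffers.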


[[glossary]]
== Glossary

Terms related to LTTng and to tracing in general:

Babeltrace::
The http://diamon.org/babeltrace[Babeltrace] project, which includes
the cmd:babeltrace command, some libraries, and Python bindings.

<<channel-buffering-schemes,buffering scheme>>::
A layout of sub-buffers applied to a given channel.

<<channel,channel>>::
An entity which is responsible for a set of ring buffers.
+
<<event,Event rules>> are always attached to a specific channel.

clock::
A reference of time for a tracer.

<<lttng-consumerd,consumer daemon>>::
A process which is responsible for consuming the full sub-buffers
and writing them to a file system or sending them over the network.

<<channel-overwrite-mode-vs-discard-mode,discard mode>>::
The event loss mode in which the tracer _discards_ new event records
when there's no sub-buffer space left to store them.

event::
The consequence of the execution of an instrumentation
point, like a tracepoint that you manually place in some source code,
or a Linux kernel kprobe.
+
An event is said to _occur_ at a specific time. Different actions can
be taken upon the occurrence of an event, like recording the event's
payload to a sub-buffer.

<<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
The mechanism by which event records of a given channel are lost
(not recorded) when there is no sub-buffer space left to store them.

[[def-event-name]]event name::
The name of an event, which is also the name of the event record.
This is also called the _instrumentation point name_.

event record::
A record, in a trace, of the payload of an event which occurred.

<<event,event rule>>::
Set of conditions which must be satisfied for one or more occurring
events to be recorded.

`java.util.logging`::
Java platform's
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
The use of LTTng probes to make a piece of software traceable.

instrumentation point::
A point in the execution path of a piece of software which, when
reached, can emit an event.

instrumentation point name::
See _<<def-event-name,event name>>_.

log4j::
A http://logging.apache.org/log4j/1.2/[logging library] for Java
developed by the Apache Software Foundation.

log level::
Level of severity of a log statement or user space
instrumentation point.

LTTng::
The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
A command-line tool provided by the LTTng-tools project which you
can use to send and receive control messages to and from a
session daemon.

LTTng analyses::
The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
a set of analysis programs used to obtain a higher-level view of an
LTTng trace.

cmd:lttng-consumerd::
The name of the consumer daemon program.

cmd:lttng-crash::
A utility provided by the LTTng-tools project which can convert
ring buffer files (usually
<<persistent-memory-file-systems,saved on a persistent memory file system>>)
to trace files.

LTTng Documentation::
This document.

<<lttng-live,LTTng live>>::
A communication protocol between the relay daemon and live viewers
which makes it possible to see events "live", as they are received by
the relay daemon.

<<lttng-modules,LTTng-modules>>::
The https://github.com/lttng/lttng-modules[LTTng-modules] project,
which contains the Linux kernel modules to make the Linux kernel
instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
The name of the relay daemon program.

cmd:lttng-sessiond::
The name of the session daemon program.

LTTng-tools::
The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
contains the various programs and libraries used to
<<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
A Java package provided by the LTTng-UST project to allow the
LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
A Python package provided by the LTTng-UST project to allow the
LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
The event loss mode in which new event records overwrite older
event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
A buffering scheme in which each instrumented process has its own
sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
A buffering scheme in which all the processes of a Unix user share the
same sub-buffers for a given user space channel.

<<lttng-relayd,relay daemon>>::
A process which is responsible for receiving the trace data sent by
a distant consumer daemon.

ring buffer::
A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
A process which receives control commands from you and orchestrates
the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
A copy of the current data of all the sub-buffers of a given tracing
session, saved as trace files.

sub-buffer::
One part of an LTTng ring buffer which contains event records.

timestamp::
The time information attached to an event when it is emitted.

trace (_noun_)::
A set of files which are the concatenations of one or more
flushed sub-buffers.

trace (_verb_)::
The action of recording the events emitted by an application
or by a system, or of initiating such a recording by controlling
a tracer.

Trace Compass::
The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
An instrumentation point using the tracepoint mechanism of the Linux
kernel or of LTTng-UST.

tracepoint definition::
The definition of a single tracepoint.

tracepoint name::
The name of a tracepoint.

tracepoint provider::
A set of functions providing tracepoints to an instrumented user
application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
One or more tracepoint providers compiled as an object file or as
a shared library.

tracer::
Software which records emitted events.

<<domain,tracing domain>>::
A namespace for event sources.

<<tracing-group,tracing group>>::
The Unix group to which a Unix user can belong to be allowed to trace
the Linux kernel.

<<tracing-session,tracing session>>::
A stateful dialogue between you and a <<lttng-sessiond,session
daemon>>.

user application::
An application running in user space, as opposed to a Linux kernel
module, for example.