1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.8, 2 December 2016
5
6
7 include::../common/copyright.txt[]
8
9
10 include::../common/welcome.txt[]
11
12
13 include::../common/audience.txt[]
14
15
16 [[chapters]]
17 === What's in this documentation?
18
19 The LTTng Documentation is divided into the following sections:
20
21 * **<<nuts-and-bolts,Nuts and bolts>>** explains the
22 rudiments of software tracing and the rationale behind the
23 LTTng project.
24 +
25 You can skip this section if you're familiar with software tracing and
26 with the LTTng project.
27
28 * **<<installing-lttng,Installation>>** describes the steps to
29 install the LTTng packages on common Linux distributions and from
30 their sources.
31 +
32 You can skip this section if you already properly installed LTTng on
33 your target system.
34
35 * **<<getting-started,Quick start>>** is a concise guide to
36 getting started quickly with LTTng kernel and user space tracing.
37 +
38 We recommend this section if you're new to LTTng or to software tracing
39 in general.
40 +
41 You can skip this section if you're not new to LTTng.
42
43 * **<<core-concepts,Core concepts>>** explains the concepts at
44 the heart of LTTng.
45 +
46 It's a good idea to become familiar with the core concepts
47 before attempting to use the toolkit.
48
49 * **<<plumbing,Components of LTTng>>** describes the various components
50 of the LTTng machinery, like the daemons, the libraries, and the
51 command-line interface.
52 * **<<instrumenting,Instrumentation>>** shows different ways to
53 instrument user applications and the Linux kernel.
54 +
55 Instrumenting source code is essential to provide a meaningful
56 source of events.
57 +
58 You can skip this section if you do not have a programming background.
59
60 * **<<controlling-tracing,Tracing control>>** is divided into topics
61 which demonstrate how to use the vast array of features that
62 LTTng{nbsp}{revision} offers.
63 * **<<reference,Reference>>** contains reference tables.
64 * **<<glossary,Glossary>>** is a specialized dictionary of terms related
65 to LTTng or to the field of software tracing.
66
67
68 include::../common/convention.txt[]
69
70
71 include::../common/acknowledgements.txt[]
72
73
74 [[whats-new]]
75 == What's new in LTTng {revision}?
76
77 * **Tracing control**:
78 ** You can attach <<java-application-context,Java application-specific
79 context fields>> to a <<channel,channel>> with the
80 man:lttng-add-context(1) command:
81 +
82 --
83 [role="term"]
84 ----
85 lttng add-context --jul --type='$app.retriever:cur_msg_id'
86 ----
87 --
88 +
89 Here, `$app` is the prefix of all application-specific context fields,
90 `retriever` names a _context information retriever_ defined at the
91 application level, and `cur_msg_id` names a context field read from this
92 retriever.
93 +
94 Both the `java.util.logging` and Apache log4j <<domain,tracing domains>>
95 are supported.
96
97 ** You can use Java application-specific <<adding-context,context>>
98 fields in the <<enabling-disabling-events,filter expression>> of an
99 <<event,event rule>>:
100 +
101 --
102 [role="term"]
103 ----
104 lttng enable-event --log4j my_logger \
105 --filter='$app.retriever:cur_msg_id == 23'
106 ----
107 --
108
109 ** New `lttng status` command which is the equivalent of +lttng list
110 __CUR__+, where +__CUR__+ is the name of the current
111 <<tracing-session,tracing session>>.
112 +
113 See man:lttng-status(1).
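+
For example, if the current tracing session is named `my-session`, the
following command prints the same details as `lttng list my-session`:
+
--
[role="term"]
----
lttng status
----
--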
114
115 ** New `lttng metadata regenerate` command to
116 <<metadata-regenerate,regenerate the metadata file of an LTTng
117 trace>> at any moment. This command is meant to be used to resample
118 the wall time following a major
119 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
120 so that a system which boots with an incorrect wall time can be
121 traced before its wall time is NTP-corrected.
122 +
123 See man:lttng-metadata(1).
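+
For example, once the wall time of the traced system has been
NTP-corrected, regenerate the metadata file of the current tracing
session with:
+
--
[role="term"]
----
lttng metadata regenerate
----
--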
124
125 ** New command-line interface warnings when <<event,event records>> or
126 whole sub-buffers are
127 <<channel-overwrite-mode-vs-discard-mode,lost>>. The warning messages
128 are printed when a <<tracing-session,tracing session>> is
129 <<basic-tracing-session-control,stopped>> (man:lttng-stop(1)
130 command).
131
132 * **User space tracing**:
133 ** Shared object base address dump in order to map <<event,event
134 records>> to original source location (file and line number).
135 +
136 If you attach the `ip` and `vpid` <<adding-context,context fields>> to a
137 user space <<channel,channel>> and if you use the
138 <<liblttng-ust-dl,path:{liblttng-ust-dl.so} helper>>, you can retrieve
139 the source location where a given event record was generated.
140 +
141 The http://diamon.org/babeltrace/[Babeltrace] trace viewer supports this
142 state dump and those context fields since version 1.4 to print the
143 source location of a given event record. http://tracecompass.org/[Trace
144 Compass] also supports this since version 2.0.
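+
For example, the following commands attach the two context fields to
all the user space channels of the current tracing session, then launch
a hypothetical instrumented application named cmd:my-app with the
helper preloaded:
+
--
[role="term"]
----
lttng add-context --userspace --type=ip --type=vpid
LD_PRELOAD=liblttng-ust-dl.so ./my-app
----
--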
145
146 ** A <<java-application,Java application>> which uses
147 `java.util.logging` now adds an LTTng-UST log handler to the desired
148 JUL loggers.
149 +
150 The previous workflow was to initialize the LTTng-UST Java agent
151 by calling `LTTngAgent.getLTTngAgent()`. This had the effect of adding
152 an LTTng-UST log handler to the root loggers.
153
154 ** A <<java-application,Java application>> which uses Apache log4j now
155 adds an LTTng-UST log appender to the desired log4j loggers.
156 +
157 The previous workflow was to initialize the LTTng-UST Java agent
158 by calling `LTTngAgent.getLTTngAgent()`. This had the effect of adding
159 an LTTng-UST appender to the root loggers.
160
161 ** Any <<java-application,Java application>> can provide
162 <<java-application-context,dynamic context fields>> while running
163 thanks to a new API provided by the <<lttng-ust-agents,LTTng-UST Java
164 agent>>. You can require LTTng to record specific context fields in
165 event records, and you can use them in the filter expression of
166 <<event,event rules>>.
167
168 * **Linux kernel tracing**:
169 ** The LTTng kernel modules can now be built into a Linux kernel image,
170 that is, not as loadable modules.
171 +
172 Follow the project's
173 https://github.com/lttng/lttng-modules/blob/stable-{revision}/README.md#kernel-built-in-support[`README.md`]
174 file to learn how.
175
176 ** New instrumentation:
177 *** ARM64 architecture support.
178 *** x86 page faults.
179 *** x86 `irq_vectors`.
180 ** New <<adding-context,context fields>>:
181 *** `interruptible`
182 *** `preemptible`
183 *** `need_reschedule`
184 *** `migratable` (specific to RT-Preempt)
185 ** Clock source plugin support for advanced cases where a custom source
186 of time is needed to timestamp LTTng event records.
187 +
188 See https://github.com/lttng/lttng-modules/blob/stable-{revision}/lttng-clock.h[`lttng-clock.h`]
189 for an overview of the small API.
190
191 * **Documentation**:
192 ** The link:/man[man pages] of the man:lttng(1) command-line tool are
193 split into one man page per command (à la Git), for example:
194 +
195 --
196 [role="term"]
197 ----
198 man lttng-enable-event
199 ----
200 --
201 +
202 You can also use the `--help` option of any man:lttng(1) command to
203 open its man page.
204 +
205 The content and formatting of all the LTTng man pages have improved
206 dramatically.
207
208
209 [[nuts-and-bolts]]
210 == Nuts and bolts
211
212 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
213 generation_ is a modern toolkit for tracing Linux systems and
214 applications. So your first question might be:
215 **what is tracing?**
216
217
218 [[what-is-tracing]]
219 === What is tracing?
220
221 As the history of software engineering progressed and led to what
222 we now take for granted--complex, numerous and
223 interdependent software applications running in parallel on
224 sophisticated operating systems like Linux--the authors of such
225 components, software developers, began feeling a natural
226 urge to have tools that would ensure the robustness and good performance
227 of their masterpieces.
228
229 One major achievement in this field is, inarguably, the
230 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
231 an essential tool for developers to find and fix bugs. But even the best
232 debugger won't help make your software run faster, and nowadays, faster
233 software means either more work done by the same hardware, or cheaper
234 hardware for the same work.
235
236 A _profiler_ is often the tool of choice to identify performance
237 bottlenecks. Profiling is suitable to identify _where_ performance is
238 lost in a given software. The profiler outputs a profile, a statistical
239 summary of observed events, which you may use to discover which
240 functions took the most time to execute. However, a profiler won't
241 report _why_ some identified functions are the bottleneck. Bottlenecks
242 might only occur when specific conditions are met, conditions that are
243 sometimes impossible to capture by a statistical profiler, or impossible
244 to reproduce with an application altered by the overhead of an
245 event-based profiler. For a thorough investigation of software
246 performance issues, a history of execution is essential, with the
247 recorded values of variables and context fields you choose, and
248 with as little influence as possible on the instrumented software. This
249 is where tracing comes in handy.
250
251 _Tracing_ is a technique used to understand what goes on in a running
252 software system. The software used for tracing is called a _tracer_,
253 which is conceptually similar to a tape recorder. When recording,
254 specific instrumentation points placed in the software source code
255 generate events that are saved on a giant tape: a _trace_ file. You
256 can trace user applications and the operating system at the same time,
257 opening the possibility of resolving a wide range of problems that would
258 otherwise be extremely challenging.
259
260 Tracing is often compared to _logging_. However, tracers and loggers are
261 two different tools, serving two different purposes. Tracers are
262 designed to record much lower-level events that occur much more
263 frequently than log messages, often in the range of thousands per
264 second, with very little execution overhead. Logging is more appropriate
265 for a very high-level analysis of less frequent events: user accesses,
266 exceptional conditions (errors and warnings, for example), database
267 transactions, instant messaging communications, and such. Simply put,
268 logging is one of the many use cases that can be satisfied with tracing.
269
270 The list of recorded events inside a trace file can be read manually
271 like a log file for the maximum level of detail, but it is generally
272 much more interesting to perform application-specific analyses to
273 produce reduced statistics and graphs that are useful to resolve a
274 given problem. Trace viewers and analyzers are specialized tools
275 designed to do this.
276
277 In the end, this is what LTTng is: a powerful, open source set of
278 tools to trace the Linux kernel and user applications at the same time.
279 LTTng is composed of several components actively maintained and
280 developed by its link:/community/#where[community].
281
282
283 [[lttng-alternatives]]
284 === Alternatives to noch:{LTTng}
285
286 Excluding proprietary solutions, a few competing software tracers
287 exist for Linux:
288
289 * https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
290 Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
291 user scripts and is responsible for loading code into the
292 Linux kernel for further execution and collecting the output data.
293 * https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
294 subsystem in the Linux kernel in which a virtual machine can execute
295 programs passed from the user space to the kernel. You can attach
296 such programs to tracepoints and KProbes thanks to a system call, and
297 they can output data to the user space when executed thanks to
298 different mechanisms (pipe, VM register values, and eBPF maps, to name
299 a few).
300 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
301 is the de facto function tracer of the Linux kernel. Its user
302 interface is a set of special files in sysfs.
303 * https://perf.wiki.kernel.org/[perf] is
304 a performance analyzing tool for Linux which supports hardware
305 performance counters, tracepoints, as well as other counters and
306 types of probes. perf's controlling utility is the cmd:perf command
307 line/curses tool.
308 * http://linux.die.net/man/1/strace[strace]
309 is a command-line utility which records system calls made by a
310 user process, as well as signal deliveries and changes of process
311 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
312 to fulfill its function.
313 * http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
314 analyze Linux kernel events. You write scripts, or _chisels_ in
315 sysdig's jargon, in Lua and sysdig executes them while the system is
316 being traced or afterwards. sysdig's interface is the cmd:sysdig
317 command-line tool as well as the curses-based cmd:csysdig tool.
318 * https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
319 user space tracer which uses custom user scripts to produce plain text
320 traces. SystemTap converts the scripts to the C language, and then
321 compiles them as Linux kernel modules which are loaded to produce
322 trace data. SystemTap's primary user interface is the cmd:stap
323 command-line tool.
324
325 The main distinctive feature of LTTng is that it produces correlated
326 kernel and user space traces, and that it does so with the lowest
327 overhead amongst the available solutions. It produces trace files in
328 the http://diamon.org/ctf[CTF] format, a file format optimized
329 for the production and analysis of multi-gigabyte data.
330
331 LTTng is the result of more than 10 years of active open source
332 development by a community of passionate developers.
333 LTTng{nbsp}{revision} is currently available on major desktop and server
334 Linux distributions.
335
336 The main interface for tracing control is a single command-line tool
337 named cmd:lttng. This tool can create several tracing sessions, enable
338 and disable events on the fly, filter events efficiently with custom
339 user expressions, start and stop tracing, and much more. LTTng can
340 record the traces on the file system or send them over the network, and
341 keep them totally or partially. You can view the traces once tracing
342 is inactive, or in real time as the events are recorded.
343
344 <<installing-lttng,Install LTTng now>> and
345 <<getting-started,start tracing>>!
346
347
348 [[installing-lttng]]
349 == Installation
350
351 **LTTng** is a set of software <<plumbing,components>> which interact to
352 <<instrumenting,instrument>> the Linux kernel and user applications, and
353 to <<controlling-tracing,control tracing>> (start and stop
354 tracing, enable and disable event rules, and the rest). Those
355 components are bundled into the following packages:
356
357 * **LTTng-tools**: Libraries and command-line interface to
358 control tracing.
359 * **LTTng-modules**: Linux kernel modules to instrument and
360 trace the kernel.
361 * **LTTng-UST**: Libraries and Java/Python packages to instrument and
362 trace user applications.
363
364 Most distributions mark the LTTng-modules and LTTng-UST packages as
365 optional when installing LTTng-tools (which is always required). In the
366 following sections, we always provide the steps to install all three,
367 but note that:
368
369 * You only need to install LTTng-modules if you intend to trace the
370 Linux kernel.
371 * You only need to install LTTng-UST if you intend to trace user
372 applications.
373
374 [role="growable"]
375 .Availability of LTTng{nbsp}{revision} for major Linux distributions as of 2 December 2016.
376 |====
377 |Distribution |Available in releases |Alternatives
378
379 |https://www.ubuntu.com/[Ubuntu]
380 |<<ubuntu,Ubuntu{nbsp}16.10 _Yakkety Yak_>>.
381 |LTTng{nbsp}{revision} for Ubuntu{nbsp}14.04 _Trusty Tahr_
382 and Ubuntu{nbsp}16.04 _Xenial Xerus_:
383 <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
384
385 LTTng{nbsp}2.9 for Ubuntu{nbsp}14.04 _Trusty Tahr_
386 and Ubuntu{nbsp}16.04 _Xenial Xerus_:
387 link:/docs/v2.9#doc-ubuntu-ppa[use the LTTng Stable{nbsp}2.9 PPA].
388
389 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
390 other Ubuntu releases.
391
392 |https://getfedora.org/[Fedora]
393 |<<fedora,Fedora{nbsp}25>>.
394 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
395 other Fedora releases.
396
397 |https://www.debian.org/[Debian]
398 |<<debian,Debian "stretch" (testing)>>.
399 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
400 previous Debian releases.
401
402 |https://www.opensuse.org/[openSUSE]
403 |_Not available_
404 |<<building-from-source,Build LTTng{nbsp}{revision} from source>>.
405
406 |https://www.archlinux.org/[Arch Linux]
407 |_Not available_
408 |link:/docs/v2.9#doc-arch-linux[LTTng{nbsp}2.9 from the AUR].
409
410 |https://alpinelinux.org/[Alpine Linux]
411 |<<alpine-linux,Alpine Linux "edge">>.
412 |LTTng{nbsp}{revision} for Alpine Linux{nbsp}3.5 (not released yet).
413
414 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
415 other Alpine Linux releases.
416
417 |https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
418 |See http://packages.efficios.com/[EfficiOS Enterprise Packages].
419 |
420
421 |https://buildroot.org/[Buildroot]
422 |<<buildroot,Buildroot 2016.11>>.
423 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
424 other Buildroot releases.
425
426 |http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
427 https://www.yoctoproject.org/[Yocto]
428 |<<oe-yocto,Yocto Project{nbsp}2.2 _Morty_>> (`openembedded-core` layer).
429 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
430 other OpenEmbedded releases.
431 |====
432
433
434 [[ubuntu]]
435 === [[ubuntu-official-repositories]]Ubuntu
436
437 LTTng{nbsp}{revision} is available on Ubuntu{nbsp}16.10 _Yakkety Yak_.
438 For previous releases of Ubuntu, <<ubuntu-ppa,use the LTTng
439 Stable{nbsp}{revision} PPA>>.
440
441 To install LTTng{nbsp}{revision} on Ubuntu{nbsp}16.10 _Yakkety Yak_:
442
443 . Install the main LTTng{nbsp}{revision} packages:
444 +
445 --
446 [role="term"]
447 ----
448 sudo apt-get install lttng-tools
449 sudo apt-get install lttng-modules-dkms
450 sudo apt-get install liblttng-ust-dev
451 ----
452 --
453
454 . **If you need to instrument and trace
455 <<java-application,Java applications>>**, install the LTTng-UST
456 Java agent:
457 +
458 --
459 [role="term"]
460 ----
461 sudo apt-get install liblttng-ust-agent-java
462 ----
463 --
464
465 . **If you need to instrument and trace
466 <<python-application,Python{nbsp}3 applications>>**, install the
467 LTTng-UST Python agent:
468 +
469 --
470 [role="term"]
471 ----
472 sudo apt-get install python3-lttngust
473 ----
474 --
475
476
477 [[ubuntu-ppa]]
478 ==== noch:{LTTng} Stable {revision} PPA
479
480 The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
481 Stable{nbsp}{revision} PPA] offers the latest stable
482 LTTng{nbsp}{revision} packages for:
483
484 * Ubuntu{nbsp}14.04 _Trusty Tahr_
485 * Ubuntu{nbsp}16.04 _Xenial Xerus_
486
487 To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:
488
489 . Add the LTTng Stable{nbsp}{revision} PPA repository and update the
490 list of packages:
491 +
492 --
493 [role="term"]
494 ----
495 sudo apt-add-repository ppa:lttng/stable-2.8
496 sudo apt-get update
497 ----
498 --
499
500 . Install the main LTTng{nbsp}{revision} packages:
501 +
502 --
503 [role="term"]
504 ----
505 sudo apt-get install lttng-tools
506 sudo apt-get install lttng-modules-dkms
507 sudo apt-get install liblttng-ust-dev
508 ----
509 --
510
511 . **If you need to instrument and trace
512 <<java-application,Java applications>>**, install the LTTng-UST
513 Java agent:
514 +
515 --
516 [role="term"]
517 ----
518 sudo apt-get install liblttng-ust-agent-java
519 ----
520 --
521
522 . **If you need to instrument and trace
523 <<python-application,Python{nbsp}3 applications>>**, install the
524 LTTng-UST Python agent:
525 +
526 --
527 [role="term"]
528 ----
529 sudo apt-get install python3-lttngust
530 ----
531 --
532
533
534 [[fedora]]
535 === Fedora
536
537 To install LTTng{nbsp}{revision} on Fedora{nbsp}25:
538
539 . Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
540 packages:
541 +
542 --
543 [role="term"]
544 ----
545 sudo yum install lttng-tools
546 sudo yum install lttng-ust
547 ----
548 --
549
550 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
551 +
552 --
553 [role="term"]
554 ----
555 cd $(mktemp -d) &&
556 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.8.tar.bz2 &&
557 tar -xf lttng-modules-latest-2.8.tar.bz2 &&
558 cd lttng-modules-2.8.* &&
559 make &&
560 sudo make modules_install &&
561 sudo depmod -a
562 ----
563 --
564
565 [IMPORTANT]
566 .Java and Python application instrumentation and tracing
567 ====
568 If you need to instrument and trace <<java-application,Java
569 applications>> on Fedora, you need to build and install
570 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
571 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
572 `--enable-java-agent-all` options to the `configure` script, depending
573 on which Java logging framework you use.
574
575 If you need to instrument and trace <<python-application,Python
576 applications>> on Fedora, you need to build and install
577 LTTng-UST{nbsp}{revision} from source and pass the
578 `--enable-python-agent` option to the `configure` script.
579 ====
580
581
582 [[debian]]
583 === Debian
584
585 To install LTTng{nbsp}{revision} on Debian "stretch" (testing):
586
587 . Install the main LTTng{nbsp}{revision} packages:
588 +
589 --
590 [role="term"]
591 ----
592 sudo apt-get install lttng-modules-dkms
593 sudo apt-get install liblttng-ust-dev
594 sudo apt-get install lttng-tools
595 ----
596 --
597
598 . **If you need to instrument and trace <<java-application,Java
599 applications>>**, install the LTTng-UST Java agent:
600 +
601 --
602 [role="term"]
603 ----
604 sudo apt-get install liblttng-ust-agent-java
605 ----
606 --
607
608 . **If you need to instrument and trace <<python-application,Python
609 applications>>**, install the LTTng-UST Python agent:
610 +
611 --
612 [role="term"]
613 ----
614 sudo apt-get install python3-lttngust
615 ----
616 --
617
618
619 [[alpine-linux]]
620 === Alpine Linux
621
622 To install LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} on
623 Alpine Linux "edge":
624
625 . Make sure your system is
626 https://wiki.alpinelinux.org/wiki/Edge[configured for "edge"].
627 . Enable the _testing_ repository by uncommenting the corresponding
628 line in path:{/etc/apk/repositories}.
629 . Add the LTTng packages:
630 +
631 --
632 [role="term"]
633 ----
634 sudo apk add lttng-tools
635 sudo apk add lttng-ust-dev
636 ----
637 --
638
639 To install LTTng-modules{nbsp}{revision} (Linux kernel tracing support)
640 on Alpine Linux "edge":
641
642 . Add the vanilla Linux kernel:
643 +
644 --
645 [role="term"]
646 ----
647 sudo apk add linux-vanilla linux-vanilla-dev
648 ----
649 --
650
651 . Reboot with the vanilla Linux kernel.
652 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
653 +
654 --
655 [role="term"]
656 ----
657 cd $(mktemp -d) &&
658 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.8.tar.bz2 &&
659 tar -xf lttng-modules-latest-2.8.tar.bz2 &&
660 cd lttng-modules-2.8.* &&
661 make &&
662 sudo make modules_install &&
663 sudo depmod -a
664 ----
665 --
666
667
668 [[enterprise-distributions]]
669 === RHEL, SUSE, and other enterprise distributions
670
671 To install LTTng on enterprise Linux distributions, such as Red Hat
672 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
673 see http://packages.efficios.com/[EfficiOS Enterprise Packages].
674
675
676 [[buildroot]]
677 === Buildroot
678
679 To install LTTng{nbsp}{revision} on Buildroot{nbsp}2016.11:
680
681 . Launch the Buildroot configuration tool:
682 +
683 --
684 [role="term"]
685 ----
686 make menuconfig
687 ----
688 --
689
690 . In **Kernel**, check **Linux kernel**.
691 . In **Toolchain**, check **Enable WCHAR support**.
692 . In **Target packages**{nbsp}&#8594; **Debugging, profiling and benchmark**,
693 check **lttng-modules** and **lttng-tools**.
694 . In **Target packages**{nbsp}&#8594; **Libraries**{nbsp}&#8594;
695 **Other**, check **lttng-libust**.
696
697
698 [[oe-yocto]]
699 === OpenEmbedded and Yocto
700
701 LTTng{nbsp}{revision} recipes are available in the
702 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
703 layer for Yocto Project{nbsp}2.2 _Morty_ under the following names:
704
705 * `lttng-tools`
706 * `lttng-modules`
707 * `lttng-ust`
708
709 With BitBake, the simplest way to include LTTng recipes in your target
710 image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:
711
712 ----
713 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
714 ----
715
716 If you use Hob:
717
718 . Select a machine and an image recipe.
719 . Click **Edit image recipe**.
720 . Under the **All recipes** tab, search for **lttng**.
721 . Check the desired LTTng recipes.
722
723 [IMPORTANT]
724 .Java and Python application instrumentation and tracing
725 ====
726 If you need to instrument and trace <<java-application,Java
727 applications>> on OpenEmbedded or Yocto, you need to build and install
728 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
729 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
730 `--enable-java-agent-all` options to the `configure` script, depending
731 on which Java logging framework you use.
732
733 If you need to instrument and trace <<python-application,Python
734 applications>> on OpenEmbedded or Yocto, you need to build and install
735 LTTng-UST{nbsp}{revision} from source and pass the
736 `--enable-python-agent` option to the `configure` script.
737 ====
738
739
740 [[building-from-source]]
741 === Build from source
742
743 To build and install LTTng{nbsp}{revision} from source:
744
745 . Using your distribution's package manager, or from source, install
746 the following dependencies of LTTng-tools and LTTng-UST:
747 +
748 --
749 * https://sourceforge.net/projects/libuuid/[libuuid]
750 * http://directory.fsf.org/wiki/Popt[popt]
751 * http://liburcu.org/[Userspace RCU]
752 * http://www.xmlsoft.org/[libxml2]
753 --
754
755 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
756 +
757 --
758 [role="term"]
759 ----
760 cd $(mktemp -d) &&
761 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.8.tar.bz2 &&
762 tar -xf lttng-modules-latest-2.8.tar.bz2 &&
763 cd lttng-modules-2.8.* &&
764 make &&
765 sudo make modules_install &&
766 sudo depmod -a
767 ----
768 --
769
770 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
771 +
772 --
773 [role="term"]
774 ----
775 cd $(mktemp -d) &&
776 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.8.tar.bz2 &&
777 tar -xf lttng-ust-latest-2.8.tar.bz2 &&
778 cd lttng-ust-2.8.* &&
779 ./configure &&
780 make &&
781 sudo make install &&
782 sudo ldconfig
783 ----
784 --
785 +
786 --
787 [IMPORTANT]
788 .Java and Python application tracing
789 ====
790 If you need to instrument and trace <<java-application,Java
791 applications>>, pass the `--enable-java-agent-jul`,
792 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
793 `configure` script, depending on which Java logging framework you use.
794
795 If you need to instrument and trace <<python-application,Python
796 applications>>, pass the `--enable-python-agent` option to the
797 `configure` script. You can set the `PYTHON` environment variable to the
798 path to the Python interpreter for which to install the LTTng-UST Python
799 agent package.
800 ====
801 --
802 +
803 --
804 [NOTE]
805 ====
806 By default, LTTng-UST libraries are installed to
807 dir:{/usr/local/lib}, which is the de facto directory in which to
808 keep self-compiled and third-party libraries.
809
810 When <<building-tracepoint-providers-and-user-application,linking an
811 instrumented user application with `liblttng-ust`>>:
812
813 * Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
814 variable.
815 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
816 man:gcc(1), man:g++(1), or man:clang(1).
817 ====
818 --
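+
--
For example, assuming the hypothetical object files path:{hello.o} and
path:{hello-tp.o} of an application instrumented against the LTTng-UST
libraries installed above, the link command could look like this:

[role="term"]
----
gcc -o hello hello.o hello-tp.o -L/usr/local/lib \
    -Wl,-rpath,/usr/local/lib -llttng-ust -ldl
----
--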
819
820 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
821 +
822 --
823 [role="term"]
824 ----
825 cd $(mktemp -d) &&
826 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.8.tar.bz2 &&
827 tar -xf lttng-tools-latest-2.8.tar.bz2 &&
828 cd lttng-tools-2.8.* &&
829 ./configure &&
830 make &&
831 sudo make install &&
832 sudo ldconfig
833 ----
834 --
835
836 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
837 previous steps automatically for a given version of LTTng and confine
838 the installed files to a specific directory. This can be useful to test
839 LTTng without installing it on your system.
840
841
842 [[getting-started]]
843 == Quick start
844
845 This is a short guide to get started quickly with LTTng kernel and user
846 space tracing.
847
848 Before you follow this guide, make sure to <<installing-lttng,install>>
849 LTTng.
850
851 This tutorial walks you through the steps to:
852
853 . <<tracing-the-linux-kernel,Trace the Linux kernel>>.
854 . <<tracing-your-own-user-application,Trace a user application>> written
855 in C.
856 . <<viewing-and-analyzing-your-traces,View and analyze the
857 recorded events>>.
858
859
860 [[tracing-the-linux-kernel]]
861 === Trace the Linux kernel
862
863 The following command lines start with cmd:sudo because you need root
864 privileges to trace the Linux kernel. You can avoid using cmd:sudo if
865 your Unix user is a member of the <<lttng-sessiond,tracing group>>.
866
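For example, assuming the tracing group of your system is named
`tracing` (a common default), the following command adds your Unix user
to it (log out and log back in for the change to take effect):

[role="term"]
----
sudo usermod --append --groups tracing $(whoami)
----
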
867 . Create a <<tracing-session,tracing session>>:
868 +
869 --
870 [role="term"]
871 ----
872 sudo lttng create my-kernel-session
873 ----
874 --
875
876 . List the available kernel tracepoints and system calls:
877 +
878 --
879 [role="term"]
880 ----
881 lttng list --kernel
882 ----
883 --
884
885 . Create an <<event,event rule>> which matches the desired event names,
886 for example `sched_switch` and `sched_process_fork`:
887 +
888 --
889 [role="term"]
890 ----
891 sudo lttng enable-event --kernel sched_switch,sched_process_fork
892 ----
893 --
894 +
895 You can also create an event rule which _matches_ all the Linux kernel
896 tracepoints (this will generate a lot of data when tracing):
897 +
898 --
899 [role="term"]
900 ----
901 sudo lttng enable-event --kernel --all
902 ----
903 --
904
905 . Start tracing:
906 +
907 --
908 [role="term"]
909 ----
910 sudo lttng start
911 ----
912 --
913
914 . Do some operations on your system for a few seconds. For example,
915 load a website, or list the files of a directory.
916 . Stop tracing and destroy the tracing session:
917 +
918 --
919 [role="term"]
920 ----
921 sudo lttng stop
922 sudo lttng destroy
923 ----
924 --
925 +
926 The man:lttng-destroy(1) command does not destroy the trace data; it
927 only destroys the state of the tracing session.
928
929 By default, LTTng saves the traces in
930 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
931 where +__name__+ is the tracing session name. Note that the
932 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
933
934 See <<viewing-and-analyzing-your-traces,View and analyze the
935 recorded events>> to view the recorded events.
936
937
938 [[tracing-your-own-user-application]]
939 === Trace a user application
940
941 This section steps you through a simple example to trace a
942 _Hello world_ program written in C.
943
944 To create the traceable user application:
945
946 . Create the tracepoint provider header file, which defines the
947 tracepoints and the events they can generate:
948 +
949 --
950 [source,c]
951 .path:{hello-tp.h}
952 ----
953 #undef TRACEPOINT_PROVIDER
954 #define TRACEPOINT_PROVIDER hello_world
955
956 #undef TRACEPOINT_INCLUDE
957 #define TRACEPOINT_INCLUDE "./hello-tp.h"
958
959 #if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
960 #define _HELLO_TP_H
961
962 #include <lttng/tracepoint.h>
963
TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)
976
977 #endif /* _HELLO_TP_H */
978
979 #include <lttng/tracepoint-event.h>
980 ----
981 --
982
983 . Create the tracepoint provider package source file:
984 +
985 --
986 [source,c]
987 .path:{hello-tp.c}
988 ----
989 #define TRACEPOINT_CREATE_PROBES
990 #define TRACEPOINT_DEFINE
991
992 #include "hello-tp.h"
993 ----
994 --
995
996 . Build the tracepoint provider package:
997 +
998 --
999 [role="term"]
1000 ----
1001 gcc -c -I. hello-tp.c
1002 ----
1003 --
1004
1005 . Create the _Hello World_ application source file:
1006 +
1007 --
1008 [source,c]
1009 .path:{hello.c}
1010 ----
1011 #include <stdio.h>
1012 #include "hello-tp.h"
1013
int main(int argc, char *argv[])
{
    int x;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call is only placed here for the purpose
     * of this demonstration, to pause the application in order for
     * you to have time to list its tracepoints. It is not needed
     * otherwise.
     */
    getchar();

    /*
     * A tracepoint() call.
     *
     * Arguments, as defined in hello-tp.h:
     *
     * 1. Tracepoint provider name   (required)
     * 2. Tracepoint name            (required)
     * 3. my_integer_arg             (first user-defined argument)
     * 4. my_string_arg              (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * NOT strings: they are in fact parts of variables that the
     * macros in hello-tp.h create.
     */
    tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (x = 0; x < argc; ++x) {
        tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
    }

    puts("Quitting now!");
    tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

    return 0;
}
1053 ----
1054 --
1055
1056 . Build the application:
1057 +
1058 --
1059 [role="term"]
1060 ----
1061 gcc -c hello.c
1062 ----
1063 --
1064
1065 . Link the application with the tracepoint provider package,
1066 `liblttng-ust`, and `libdl`:
1067 +
1068 --
1069 [role="term"]
1070 ----
1071 gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1072 ----
1073 --
1074
1075 Here's the whole build process:
1076
1077 [role="img-100"]
1078 .User space tracing tutorial's build steps.
1079 image::ust-flow.png[]
1080
1081 To trace the user application:
1082
1083 . Run the application with a few arguments:
1084 +
1085 --
1086 [role="term"]
1087 ----
1088 ./hello world and beyond
1089 ----
1090 --
1091 +
1092 You see:
1093 +
1094 --
1095 ----
1096 Hello, World!
1097 Press Enter to continue...
1098 ----
1099 --
1100
1101 . Start an LTTng <<lttng-sessiond,session daemon>>:
1102 +
1103 --
1104 [role="term"]
1105 ----
1106 lttng-sessiond --daemonize
1107 ----
1108 --
1109 +
1110 Note that a session daemon might already be running, for example as
1111 a service that the distribution's service manager started.
1112
1113 . List the available user space tracepoints:
1114 +
1115 --
1116 [role="term"]
1117 ----
1118 lttng list --userspace
1119 ----
1120 --
1121 +
1122 You see the `hello_world:my_first_tracepoint` tracepoint listed
1123 under the `./hello` process.
1124
1125 . Create a <<tracing-session,tracing session>>:
1126 +
1127 --
1128 [role="term"]
1129 ----
1130 lttng create my-user-space-session
1131 ----
1132 --
1133
1134 . Create an <<event,event rule>> which matches the
1135 `hello_world:my_first_tracepoint` event name:
1136 +
1137 --
1138 [role="term"]
1139 ----
1140 lttng enable-event --userspace hello_world:my_first_tracepoint
1141 ----
1142 --
1143
1144 . Start tracing:
1145 +
1146 --
1147 [role="term"]
1148 ----
1149 lttng start
1150 ----
1151 --
1152
1153 . Go back to the running `hello` application and press Enter. The
1154 program executes all `tracepoint()` instrumentation points and exits.
1155 . Stop tracing and destroy the tracing session:
1156 +
1157 --
1158 [role="term"]
1159 ----
lttng stop
lttng destroy
1162 ----
1163 --
1164 +
1165 The man:lttng-destroy(1) command does not destroy the trace data; it
1166 only destroys the state of the tracing session.
1167
1168 By default, LTTng saves the traces in
1169 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
1170 where +__name__+ is the tracing session name. Note that the
1171 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
1172
1173 See <<viewing-and-analyzing-your-traces,View and analyze the
1174 recorded events>> to view the recorded events.
1175
1176
1177 [[viewing-and-analyzing-your-traces]]
1178 === View and analyze the recorded events
1179
1180 Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
1181 kernel>> and <<tracing-your-own-user-application,Trace a user
1182 application>> tutorials, you can inspect the recorded events.
1183
1184 Many tools are available to read LTTng traces:
1185
1186 * **cmd:babeltrace** is a command-line utility which converts trace
1187 formats; it supports the format that LTTng produces, CTF, as well as a
1188 basic text output which can be ++grep++ed. The cmd:babeltrace command
1189 is part of the http://diamon.org/babeltrace[Babeltrace] project.
1190 * Babeltrace also includes
1191 **https://www.python.org/[Python] bindings** so
1192 that you can easily open and read an LTTng trace with your own script,
1193 benefiting from the power of Python.
1194 * http://tracecompass.org/[**Trace Compass**]
1195 is a graphical user interface for viewing and analyzing any type of
1196 logs or traces, including LTTng's.
1197 * https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
1198 project which includes many high-level analyses of LTTng kernel
1199 traces, like scheduling statistics, interrupt frequency distribution,
1200 top CPU usage, and more.
1201
1202 NOTE: This section assumes that the traces recorded during the previous
1203 tutorials were saved to their default location, in the
1204 dir:{$LTTNG_HOME/lttng-traces} directory. Note that the env:LTTNG_HOME
1205 environment variable defaults to `$HOME` if not set.
1206
1207
1208 [[viewing-and-analyzing-your-traces-bt]]
1209 ==== Use the cmd:babeltrace command-line tool
1210
1211 The simplest way to list all the recorded events of a trace is to pass
1212 its path to cmd:babeltrace with no options:
1213
1214 [role="term"]
1215 ----
1216 babeltrace ~/lttng-traces/my-user-space-session*
1217 ----
1218
1219 cmd:babeltrace finds all traces recursively within the given path and
1220 prints all their events, merging them in chronological order.
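
As a mental model only (this is not how cmd:babeltrace is implemented),
the chronological merge of several recorded streams amounts to an
ordinary k-way merge by timestamp; a short, hypothetical Python sketch:

[source,python]
----
import heapq

def merge_streams(*streams):
    # Merge per-stream (timestamp, event_name) lists, each already
    # sorted by timestamp, into one chronological sequence of names.
    return [name for ts, name in heapq.merge(*streams)]

# Two fictitious per-CPU event streams:
cpu0 = [(10, 'sched_switch'), (40, 'sys_read')]
cpu1 = [(20, 'sys_open'), (30, 'sched_switch')]

merged = merge_streams(cpu0, cpu1)
----

Here, `merged` holds the four events ordered by timestamp, just like
the cmd:babeltrace output interleaves the event records of all the
streams of a trace.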
1221
1222 You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
1223 further filtering:
1224
1225 [role="term"]
1226 ----
1227 babeltrace ~/lttng-traces/my-kernel-session* | grep sys_
1228 ----
1229
1230 You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
1231 count the recorded events:
1232
1233 [role="term"]
1234 ----
1235 babeltrace ~/lttng-traces/my-kernel-session* | grep sys_read | wc --lines
1236 ----
1237
1238
1239 [[viewing-and-analyzing-your-traces-bt-python]]
1240 ==== Use the Babeltrace Python bindings
1241
1242 The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
1243 is useful to isolate events by simple matching using man:grep(1) and
1244 similar utilities. However, more elaborate filters, such as keeping only
1245 event records with a field value falling within a specific range, are
1246 not trivial to write using a shell. Moreover, reductions and even the
1247 most basic computations involving multiple event records are virtually
1248 impossible to implement.
1249
1250 Fortunately, Babeltrace ships with Python 3 bindings which makes it easy
1251 to read the event records of an LTTng trace sequentially and compute the
1252 desired information.
1253
1254 The following script accepts an LTTng Linux kernel trace path as its
1255 first argument and prints the short names of the top 5 running processes
1256 on CPU 0 during the whole trace:
1257
1258 [source,python]
1259 .path:{top5proc.py}
1260 ----
1261 from collections import Counter
1262 import babeltrace
1263 import sys
1264
1265
def top5proc():
    if len(sys.argv) != 2:
        msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
        print(msg, file=sys.stderr)
        return False

    # A trace collection contains one or more traces
    col = babeltrace.TraceCollection()

    # Add the trace provided by the user (LTTng traces always have
    # the 'ctf' format)
    if col.add_trace(sys.argv[1], 'ctf') is None:
        raise RuntimeError('Cannot add trace')

    # This counter dict contains execution times:
    #
    #   task command name -> total execution time (ns)
    exec_times = Counter()

    # This contains the last `sched_switch` timestamp
    last_ts = None

    # Iterate on events
    for event in col.events:
        # Keep only `sched_switch` events
        if event.name != 'sched_switch':
            continue

        # Keep only events which happened on CPU 0
        if event['cpu_id'] != 0:
            continue

        # Event timestamp
        cur_ts = event.timestamp

        if last_ts is None:
            # We start here
            last_ts = cur_ts

        # Previous task command (short) name
        prev_comm = event['prev_comm']

        # Initialize entry in our dict if not yet done
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Display top 5
    for name, ns in exec_times.most_common(5):
        s = ns / 1000000000
        print('{:20}{} s'.format(name, s))

    return True


if __name__ == '__main__':
    sys.exit(0 if top5proc() else 1)
1331 ----
1332
1333 Run this script:
1334
1335 [role="term"]
1336 ----
1337 python3 top5proc.py ~/lttng-traces/my-kernel-session*/kernel
1338 ----
1339
1340 Output example:
1341
1342 ----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
1348 ----
1349
1350 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
1351 weren't using the CPU that much when tracing, its first position in the
1352 list makes sense.
1353
1354
1355 [[core-concepts]]
1356 == [[understanding-lttng]]Core concepts
1357
1358 From a user's perspective, the LTTng system is built on a few concepts,
1359 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1360 operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to
mastering the toolkit.
1363
1364 The core concepts are:
1365
1366 * <<tracing-session,Tracing session>>
1367 * <<domain,Tracing domain>>
1368 * <<channel,Channel and ring buffer>>
1369 * <<"event","Instrumentation point, event rule, event, and event record">>
1370
1371
1372 [[tracing-session]]
1373 === Tracing session
1374
1375 A _tracing session_ is a stateful dialogue between you and
1376 a <<lttng-sessiond,session daemon>>. You can
1377 <<creating-destroying-tracing-sessions,create a new tracing
1378 session>> with the `lttng create` command.
1379
1380 Anything that you do when you control LTTng tracers happens within a
1381 tracing session. In particular, a tracing session:
1382
1383 * Has its own name.
1384 * Has its own set of trace files.
1385 * Has its own state of activity (started or stopped).
1386 * Has its own <<tracing-session-mode,mode>> (local, network streaming,
1387 snapshot, or live).
1388 * Has its own <<channel,channels>> which have their own
1389 <<event,event rules>>.
1390
1391 [role="img-100"]
1392 .A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1393 image::concepts.png[]
1394
1395 Those attributes and objects are completely isolated between different
1396 tracing sessions.
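
You can picture the state that a tracing session holds with a toy
structure. The following is illustrative Python only, not an actual
LTTng data structure:

[source,python]
----
from dataclasses import dataclass, field

@dataclass
class TracingSession:
    # Toy model of a tracing session's user-visible attributes.
    name: str
    mode: str = 'local'     # local, network streaming, snapshot, or live
    active: bool = False    # started or stopped
    channels: dict = field(default_factory=dict)  # name -> event rules

session = TracingSession('my-kernel-session')
session.channels['channel0'] = ['sched_switch', 'sched_process_fork']
session.active = True
----

Each `TracingSession` instance owns its own `channels` dictionary,
which mirrors how real tracing sessions are isolated from one another.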
1397
1398 A tracing session is analogous to a cash machine session:
1399 the operations you do on the banking system through the cash machine do
1400 not alter the data of other users of the same system. In the case of
1401 the cash machine, a session lasts as long as your bank card is inside.
1402 In the case of LTTng, a tracing session lasts from the `lttng create`
1403 command to the `lttng destroy` command.
1404
1405 [role="img-100"]
1406 .Each Unix user has its own set of tracing sessions.
1407 image::many-sessions.png[]
1408
1409
1410 [[tracing-session-mode]]
1411 ==== Tracing session mode
1412
1413 LTTng can send the generated trace data to different locations. The
1414 _tracing session mode_ dictates where to send it. The following modes
1415 are available in LTTng{nbsp}{revision}:
1416
1417 Local mode::
1418 LTTng writes the traces to the file system of the machine being traced
1419 (target system).
1420
1421 Network streaming mode::
1422 LTTng sends the traces over the network to a
1423 <<lttng-relayd,relay daemon>> running on a remote system.
1424
1425 Snapshot mode::
1426 LTTng does not write the traces by default. Instead, you can request
1427 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1428 current tracing buffers, and to write it to the target's file system
1429 or to send it over the network to a <<lttng-relayd,relay daemon>>
1430 running on a remote system.
1431
Live mode::
This mode is similar to the network streaming mode, but a live
trace viewer can connect to the distant relay daemon to
<<lttng-live,view event records as the tracers generate them>>.
1437
1438
1439 [[domain]]
1440 === Tracing domain
1441
1442 A _tracing domain_ is a namespace for event sources. A tracing domain
1443 has its own properties and features.
1444
1445 There are currently five available tracing domains:
1446
1447 * Linux kernel
1448 * User space
1449 * `java.util.logging` (JUL)
1450 * log4j
1451 * Python
1452
1453 You must specify a tracing domain when using some commands to avoid
1454 ambiguity. For example, since all the domains support named tracepoints
1455 as event sources (instrumentation points that you manually insert in the
1456 source code), you need to specify a tracing domain when
1457 <<enabling-disabling-events,creating an event rule>> because all the
1458 tracing domains could have tracepoints with the same names.
1459
1460 Some features are reserved to specific tracing domains. Dynamic function
1461 entry and return instrumentation points, for example, are currently only
1462 supported in the Linux kernel tracing domain, but support for other
1463 tracing domains could be added in the future.
1464
1465 You can create <<channel,channels>> in the Linux kernel and user space
1466 tracing domains. The other tracing domains have a single default
1467 channel.
1468
1469
1470 [[channel]]
1471 === Channel and ring buffer
1472
1473 A _channel_ is an object which is responsible for a set of ring buffers.
1474 Each ring buffer is divided into multiple sub-buffers. When an LTTng
1475 tracer emits an event, it can record it to one or more
1476 sub-buffers. The attributes of a channel determine what to do when
1477 there's no space left for a new event record because all sub-buffers
1478 are full, where to send a full sub-buffer, and other behaviours.
1479
1480 A channel is always associated to a <<domain,tracing domain>>. The
1481 `java.util.logging` (JUL), log4j, and Python tracing domains each have
1482 a default channel which you cannot configure.
1483
1484 A channel also owns <<event,event rules>>. When an LTTng tracer emits
1485 an event, it records it to the sub-buffers of all
1486 the enabled channels with a satisfied event rule, as long as those
1487 channels are part of active <<tracing-session,tracing sessions>>.
1488
1489
1490 [[channel-buffering-schemes]]
1491 ==== Per-user vs. per-process buffering schemes
1492
1493 A channel has at least one ring buffer _per CPU_. LTTng always
1494 records an event to the ring buffer associated to the CPU on which it
1495 occurred.
1496
1497 Two _buffering schemes_ are available when you
1498 <<enabling-disabling-channels,create a channel>> in the
1499 user space <<domain,tracing domain>>:
1500
1501 Per-user buffering::
1502 Allocate one set of ring buffers--one per CPU--shared by all the
1503 instrumented processes of each Unix user.
1504 +
1505 --
1506 [role="img-100"]
1507 .Per-user buffering scheme.
1508 image::per-user-buffering.png[]
1509 --
1510
1511 Per-process buffering::
1512 Allocate one set of ring buffers--one per CPU--for each
1513 instrumented process.
1514 +
1515 --
1516 [role="img-100"]
1517 .Per-process buffering scheme.
1518 image::per-process-buffering.png[]
1519 --
1520 +
1521 The per-process buffering scheme tends to consume more memory than the
1522 per-user option because systems generally have more instrumented
1523 processes than Unix users running instrumented processes. However, the
1524 per-process buffering scheme ensures that one process having a high
1525 event throughput won't fill all the shared sub-buffers of the same
1526 user, only its own.
1527
1528 The Linux kernel tracing domain has only one available buffering scheme
1529 which is to allocate a single set of ring buffers for the whole system.
1530 This scheme is similar to the per-user option, but with a single, global
1531 user "running" the kernel.
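
Back-of-the-envelope arithmetic makes the memory trade-off between the
two buffering schemes concrete. The function and numbers below are
illustrative only, not LTTng defaults:

[source,python]
----
def buffering_memory(n_cpus, subbuf_count, subbuf_size, n_owners):
    # Total ring buffer memory: one set of ring buffers (one ring
    # buffer per CPU) for each owner, where an owner is a Unix user
    # (per-user scheme) or an instrumented process (per-process scheme).
    return n_owners * n_cpus * subbuf_count * subbuf_size

mib = 1024 * 1024

# Per-user scheme: one tracing user, 4 CPUs, 4 sub-buffers of 1 MiB.
per_user = buffering_memory(4, 4, mib, n_owners=1)

# Per-process scheme: same channel attributes, 16 instrumented processes.
per_process = buffering_memory(4, 4, mib, n_owners=16)
----

With those numbers, `per_user` amounts to 16{nbsp}MiB while
`per_process` amounts to 256{nbsp}MiB, which illustrates why the
per-process scheme tends to consume more memory.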
1532
1533
1534 [[channel-overwrite-mode-vs-discard-mode]]
1535 ==== Overwrite vs. discard event loss modes
1536
1537 When an event occurs, LTTng records it to a specific sub-buffer (yellow
1538 arc in the following animation) of a specific channel's ring buffer.
1539 When there's no space left in a sub-buffer, the tracer marks it as
1540 consumable (red) and another, empty sub-buffer starts receiving the
1541 following event records. A <<lttng-consumerd,consumer daemon>>
1542 eventually consumes the marked sub-buffer (returns to white).
1543
1544 [NOTE]
1545 [role="docsvg-channel-subbuf-anim"]
1546 ====
1547 {note-no-anim}
1548 ====
1549
1550 In an ideal world, sub-buffers are consumed faster than they are filled,
1551 as is the case in the previous animation. In the real world,
1552 however, all sub-buffers can be full at some point, leaving no space to
1553 record the following events.
1554
1555 By design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer is
1556 available, it is acceptable to lose event records when the alternative
1557 would be to cause substantial delays in the instrumented application's
1558 execution. LTTng privileges performance over integrity; it aims at
1559 perturbing the traced system as little as possible in order to make
1560 tracing of subtle race conditions and rare interrupt cascades possible.
1561
1562 When it comes to losing event records because no empty sub-buffer is
1563 available, the channel's _event loss mode_ determines what to do. The
1564 available event loss modes are:
1565
1566 Discard mode::
Drop the newest event records until the tracer
releases a sub-buffer.
1569
1570 Overwrite mode::
1571 Clear the sub-buffer containing the oldest event records and start
1572 writing the newest event records there.
1573 +
1574 This mode is sometimes called _flight recorder mode_ because it's
1575 similar to a
1576 https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1577 always keep a fixed amount of the latest data.
1578
1579 Which mechanism you should choose depends on your context: prioritize
1580 the newest or the oldest event records in the ring buffer?
1581
Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.
1586
1587 In discard mode, LTTng increments a count of lost event records when
1588 an event record is lost and saves this count to the trace. In
1589 overwrite mode, LTTng keeps no information when it overwrites a
1590 sub-buffer before consuming it.
1591
1592 There are a few ways to decrease your probability of losing event
1593 records.
1594 <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
1596 virtually stop losing event records, though at the cost of greater
1597 memory usage.
1598
1599
1600 [[channel-subbuf-size-vs-subbuf-count]]
1601 ==== Sub-buffer count and size
1602
1603 When you <<enabling-disabling-channels,create a channel>>, you can
1604 set its number of sub-buffers and their size.
1605
1606 Note that there is noticeable CPU overhead introduced when
1607 switching sub-buffers (marking a full one as consumable and switching
1608 to an empty one for the following events to be recorded). Knowing this,
1609 the following list presents a few practical situations along with how
1610 to configure the sub-buffer count and size for them:
1611
1612 * **High event throughput**: In general, prefer bigger sub-buffers to
1613 lower the risk of losing event records.
1614 +
1615 Having bigger sub-buffers also ensures a lower sub-buffer switching
1616 frequency.
1617 +
1618 The number of sub-buffers is only meaningful if you create the channel
1619 in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1620 other sub-buffers are left unaltered.
1621
1622 * **Low event throughput**: In general, prefer smaller sub-buffers
1623 since the risk of losing event records is low.
1624 +
1625 Because events occur less frequently, the sub-buffer switching frequency
1626 should remain low and thus the tracer's overhead should not be a
1627 problem.
1628
* **Low memory system**: If your target system has a low memory
limit, prefer fewer sub-buffers first, then smaller ones.
1631 +
1632 Even if the system is limited in memory, you want to keep the
1633 sub-buffers as big as possible to avoid a high sub-buffer switching
1634 frequency.
1635
1636 Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1637 which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
1639 sub-buffer size of 1{nbsp}MiB is considered big.
1640
1641 The previous situations highlight the major trade-off between a few big
1642 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1643 frequency vs. how much data is lost in overwrite mode. Assuming a
1644 constant event throughput and using the overwrite mode, the two
1645 following configurations have the same ring buffer total size:
1646
1647 [NOTE]
1648 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1649 ====
1650 {note-no-anim}
1651 ====
1652
1653 * **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1654 switching frequency, but if a sub-buffer overwrite happens, half of
1655 the event records so far (4{nbsp}MiB) are definitely lost.
* **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the
sub-buffer switching overhead of the previous configuration, but if a
sub-buffer overwrite happens, only one eighth of the event records so
far are definitely lost.
1660
In discard mode, the sub-buffer count parameter is pointless: use two
1662 sub-buffers and set their size according to the requirements of your
1663 situation.
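
You can check the loss figures given above with a toy model of the
writer side. This is hypothetical Python: it counts event records
instead of bytes and assumes the consumer daemon never runs during the
burst (the worst case):

[source,python]
----
def lost_records(n_events, subbuf_count, subbuf_size, mode):
    # Worst-case event record loss: the tracer fills sub-buffers and
    # nothing is consumed. `mode` is 'discard' or 'overwrite'.
    full = []   # full (consumable) sub-buffers, oldest first
    cur = 0     # event records in the sub-buffer being written
    lost = 0
    for _ in range(n_events):
        if cur == subbuf_size:                # sub-buffer switch needed
            if len(full) < subbuf_count - 1:  # an empty sub-buffer exists
                full.append(cur)
                cur = 0
            elif mode == 'overwrite':         # clear the oldest full one
                full.append(cur)
                lost += full.pop(0)
                cur = 0
            else:                             # discard the new record
                lost += 1
                continue
        cur += 1
    return lost
----

For example, writing 9 records to a ring of 2 sub-buffers of 4 records
each in overwrite mode loses a whole sub-buffer (4 records), while 8
sub-buffers of 1 record each lose a single one; in discard mode, only
the records that do not fit are dropped.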
1664
1665
1666 [[channel-switch-timer]]
1667 ==== Switch timer period
1668
1669 The _switch timer period_ is an important configurable attribute of
1670 a channel to ensure periodic sub-buffer flushing.
1671
1672 When the _switch timer_ expires, a sub-buffer switch happens. You can
1673 set the switch timer period attribute when you
1674 <<enabling-disabling-channels,create a channel>> to ensure that event
1675 data is consumed and committed to trace files or to a distant relay
1676 daemon periodically in case of a low event throughput.
1677
1678 [NOTE]
1679 [role="docsvg-channel-switch-timer"]
1680 ====
1681 {note-no-anim}
1682 ====
1683
1684 This attribute is also convenient when you use big sub-buffers to cope
1685 with a sporadic high event throughput, even if the throughput is
1686 normally low.
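
A little illustrative arithmetic (hypothetical numbers, not LTTng
defaults) shows why the switch timer matters when the event throughput
is low:

[source,python]
----
def worst_case_flush_delay(subbuf_size, avg_record_size,
                           events_per_second, switch_timer_period=None):
    # Toy upper bound, in seconds, on how long an event record can sit
    # in an unflushed sub-buffer: the time needed to fill the
    # sub-buffer, capped by the switch timer period when one is set.
    # Ignores sub-buffer headers and padding.
    fill_time = subbuf_size / (avg_record_size * events_per_second)
    if switch_timer_period is None:
        return fill_time
    return min(fill_time, switch_timer_period)

# A 1 MiB sub-buffer, 32-byte records, 10 events/second:
no_timer = worst_case_flush_delay(1024 * 1024, 32, 10)

# The same channel with a 1-second switch timer:
with_timer = worst_case_flush_delay(1024 * 1024, 32, 10, 1.0)
----

Without a switch timer, `no_timer` is 3276.8{nbsp}seconds: almost an
hour before the sub-buffer fills and its event records become
consumable. With a 1-second switch timer, `with_timer` is capped at
1{nbsp}second.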
1687
1688
1689 [[channel-read-timer]]
1690 ==== Read timer period
1691
1692 By default, the LTTng tracers use a notification mechanism to signal a
1693 full sub-buffer so that a consumer daemon can consume it. When such
1694 notifications must be avoided, for example in real-time applications,
1695 you can use the channel's _read timer_ instead. When the read timer
1696 fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1697 consumable sub-buffers.
1698
1699
1700 [[tracefile-rotation]]
1701 ==== Trace file count and size
1702
1703 By default, trace files can grow as large as needed. You can set the
1704 maximum size of each trace file that a channel writes when you
1705 <<enabling-disabling-channels,create a channel>>. When the size of
1706 a trace file reaches the channel's fixed maximum size, LTTng creates
1707 another file to contain the next event records. LTTng appends a file
1708 count to each trace file name in this case.
1709
1710 If you set the trace file size attribute when you create a channel, the
1711 maximum number of trace files that LTTng creates is _unlimited_ by
1712 default. To limit them, you can also set a maximum number of trace
1713 files. When the number of trace files reaches the channel's fixed
1714 maximum count, the oldest trace file is overwritten. This mechanism is
1715 called _trace file rotation_.
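
The rotation bookkeeping can be sketched as follows (hypothetical
Python; sizes are arbitrary byte counts, not real LTTng trace file
sizes):

[source,python]
----
from collections import deque

def remaining_trace_files(record_sizes, max_file_size, max_file_count):
    # Return the sizes of the trace files left on disk after writing
    # the given event records with trace file rotation enabled.
    files = deque()  # closed trace files, oldest first
    cur = 0          # size of the trace file being written
    for size in record_sizes:
        if cur + size > max_file_size and cur > 0:
            files.append(cur)                 # close the current file
            if len(files) >= max_file_count:  # rotation: drop the oldest
                files.popleft()
            cur = 0
        cur += size
    files.append(cur)
    while len(files) > max_file_count:
        files.popleft()
    return list(files)
----

For example, ten 30-byte records written with a 100-byte maximum file
size and a maximum count of 2 leave only two files on disk: the oldest
complete files are overwritten as tracing progresses.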
1716
1717
1718 [[event]]
1719 === Instrumentation point, event rule, event, and event record
1720
An _event rule_ is a set of conditions which must **all** be satisfied
for LTTng to record an occurring event.
1723
1724 You set the conditions when you <<enabling-disabling-events,create
1725 an event rule>>.
1726
You always attach an event rule to a <<channel,channel>> when you
create it.
1729
1730 When an event passes the conditions of an event rule, LTTng records it
1731 in one of the attached channel's sub-buffers.
1732
1733 The available conditions, as of LTTng{nbsp}{revision}, are:
1734
1735 * The event rule _is enabled_.
1736 * The instrumentation point's type _is{nbsp}T_.
1737 * The instrumentation point's name (sometimes called _event name_)
1738 _matches{nbsp}N_, but _is not{nbsp}E_.
1739 * The instrumentation point's log level _is as severe as{nbsp}L_, or
1740 _is exactly{nbsp}L_.
1741 * The fields of the event's payload _satisfy_ a filter
1742 expression{nbsp}__F__.
1743
1744 As you can see, all the conditions but the dynamic filter are related to
1745 the event rule's status or to the instrumentation point, not to the
1746 occurring events. This is why, without a filter, checking if an event
1747 passes an event rule is not a dynamic task: when you create or modify an
1748 event rule, all the tracers of its tracing domain enable or disable the
1749 instrumentation points themselves once. This is possible because the
1750 attributes of an instrumentation point (type, name, and log level) are
1751 defined statically. In other words, without a dynamic filter, the tracer
1752 _does not evaluate_ the arguments of an instrumentation point unless it
1753 matches an enabled event rule.
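
The conditions above can be restated as a simple predicate. None of the
names below come from the LTTng API; this is a toy Python sketch only:

[source,python]
----
from fnmatch import fnmatchcase

def rule_matches(rule, name, loglevel, payload):
    # Toy event-rule evaluation; `rule` is a plain dict. LTTng's real
    # condition checks are split between the tracers and the session
    # daemon and are not structured like this.
    if not rule.get('enabled', True):
        return False
    if not fnmatchcase(name, rule['pattern']):        # name matches N
        return False
    if any(fnmatchcase(name, e) for e in rule.get('exclusions', [])):
        return False                                  # but is not E
    threshold = rule.get('loglevel')
    if threshold is not None and loglevel > threshold:
        return False  # numerically greater means less severe here
    event_filter = rule.get('filter')                 # dynamic part
    if event_filter is not None and not event_filter(payload):
        return False
    return True

rule = {
    'pattern': 'hello_world:*',
    'exclusions': ['hello_world:debug_*'],
    'filter': lambda fields: fields['my_integer_field'] > 10,
}
----

Only the `filter` callable depends on the event's payload; every other
condition can be checked once against the instrumentation point itself,
which is why the tracers can enable or disable instrumentation points
statically.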
1754
1755 Note that, for LTTng to record an event, the <<channel,channel>> to
1756 which a matching event rule is attached must also be enabled, and the
1757 tracing session owning this channel must be active.
1758
1759 [role="img-100"]
1760 .Logical path from an instrumentation point to an event record.
1761 image::event-rule.png[]
1762
1763 .Event, event record, or event rule?
1764 ****
1765 With so many similar terms, it's easy to get confused.
1766
1767 An **event** is the consequence of the execution of an _instrumentation
1768 point_, like a tracepoint that you manually place in some source code,
1769 or a Linux kernel KProbe. An event is said to _occur_ at a specific
time. Different actions can be taken upon the occurrence of an event,
like recording the event's payload to a buffer.
1772
1773 An **event record** is the representation of an event in a sub-buffer. A
1774 tracer is responsible for capturing the payload of an event, current
1775 context variables, the event's ID, and the event's timestamp. LTTng
1776 can append this sub-buffer to a trace file.
1777
1778 An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
1780 satisfying event rules, but LTTng does not record them.
1781 ****
1782
1783
1784 [[plumbing]]
1785 == Components of noch:{LTTng}
1786
1787 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1788 to call LTTng a simple _tool_ since it is composed of multiple
1789 interacting components. This section describes those components,
1790 explains their respective roles, and shows how they connect together to
1791 form the LTTng ecosystem.
1792
1793 The following diagram shows how the most important components of LTTng
1794 interact with user applications, the Linux kernel, and you:
1795
1796 [role="img-100"]
1797 .Control and trace data paths between LTTng components.
1798 image::plumbing.png[]
1799
1800 The LTTng project incorporates:
1801
1802 * **LTTng-tools**: Libraries and command-line interface to
1803 control tracing sessions.
1804 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1805 ** <<lttng-consumerd,Consumer daemon>> (man:lttng-consumerd(8)).
1806 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1807 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1808 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1809 * **LTTng-UST**: Libraries and Java/Python packages to trace user
1810 applications.
1811 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1812 headers to instrument and trace any native user application.
1813 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1814 *** `liblttng-ust-libc-wrapper`
1815 *** `liblttng-ust-pthread-wrapper`
1816 *** `liblttng-ust-cyg-profile`
1817 *** `liblttng-ust-cyg-profile-fast`
1818 *** `liblttng-ust-dl`
1819 ** User space tracepoint provider source files generator command-line
1820 tool (man:lttng-gen-tp(1)).
1821 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1822 Java applications using `java.util.logging` or
1823 Apache log4j 1.2 logging.
1824 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1825 Python applications using the standard `logging` package.
1826 * **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1827 the kernel.
1828 ** LTTng kernel tracer module.
1829 ** Tracing ring buffer kernel modules.
1830 ** Probe kernel modules.
1831 ** LTTng logger kernel module.
1832
1833
1834 [[lttng-cli]]
1835 === Tracing control command-line interface
1836
1837 [role="img-100"]
1838 .The tracing control command-line interface.
1839 image::plumbing-lttng-cli.png[]
1840
1841 The _man:lttng(1) command-line tool_ is the standard user interface to
1842 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1843 is part of LTTng-tools.
1844
1845 The cmd:lttng tool is linked with
1846 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1847 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1848
1849 The cmd:lttng tool has a Git-like interface:
1850
1851 [role="term"]
1852 ----
1853 lttng <general options> <command> <command options>
1854 ----
1855
1856 The <<controlling-tracing,Tracing control>> section explores the
1857 available features of LTTng using the cmd:lttng tool.
1858
1859
1860 [[liblttng-ctl-lttng]]
1861 === Tracing control library
1862
1863 [role="img-100"]
1864 .The tracing control library.
1865 image::plumbing-liblttng-ctl.png[]
1866
1867 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1868 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1869 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1870
1871 The <<lttng-cli,cmd:lttng command-line tool>>
1872 is linked with `liblttng-ctl`.
1873
1874 You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1875 "master" header:
1876
1877 [source,c]
1878 ----
1879 #include <lttng/lttng.h>
1880 ----
1881
Some objects are referenced by name (C string), such as tracing
sessions, but most of them require that you create a handle first with
`lttng_create_handle()`.
1885
1886 The best available developer documentation for `liblttng-ctl` is, as of
1887 LTTng{nbsp}{revision}, its installed header files. Every function and
1888 structure is thoroughly documented.
1889
1890
1891 [[lttng-ust]]
1892 === User space tracing library
1893
1894 [role="img-100"]
1895 .The user space tracing library.
1896 image::plumbing-liblttng-ust.png[]
1897
1898 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1899 is the LTTng user space tracer. It receives commands from a
1900 <<lttng-sessiond,session daemon>>, for example to
1901 enable and disable specific instrumentation points, and writes event
1902 records to ring buffers shared with a
1903 <<lttng-consumerd,consumer daemon>>.
1904 `liblttng-ust` is part of LTTng-UST.
1905
1906 Public C header files are installed beside `liblttng-ust` to
1907 instrument any <<c-application,C or $$C++$$ application>>.
1908
1909 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
packages, use their own tracepoint provider library, which is
linked with `liblttng-ust`.
1912
1913 An application or library does not have to initialize `liblttng-ust`
1914 manually: its constructor does the necessary tasks to properly register
1915 to a session daemon. The initialization phase also enables the
1916 instrumentation points matching the <<event,event rules>> that you
1917 already created.
1918
1919
1920 [[lttng-ust-agents]]
1921 === User space tracing agents
1922
1923 [role="img-100"]
1924 .The user space tracing agents.
1925 image::plumbing-lttng-ust-agents.png[]
1926
1927 The _LTTng-UST Java and Python agents_ are regular Java and Python
1928 packages which add LTTng tracing capabilities to the
1929 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1930
1931 In the case of Java, the
1932 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1933 core logging facilities] and
1934 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1936
1937 In the case of Python, the standard
1938 https://docs.python.org/3/library/logging.html[`logging`] package
1939 is supported. Both Python 2 and Python 3 modules can import the
1940 LTTng-UST Python agent package.
1941
1942 The applications using the LTTng-UST agents are in the
1943 `java.util.logging` (JUL),
1944 log4j, and Python <<domain,tracing domains>>.
1945
1946 Both agents use the same mechanism to trace the log statements. When an
1947 agent is initialized, it creates a log handler that attaches to the root
1948 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1949 When the application executes a log statement, it is passed to the
1950 agent's log handler by the root logger. The agent's log handler calls a
1951 native function in a tracepoint provider package shared library linked
1952 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1953 other fields, like its logger name and its log level. This native
1954 function contains a user space instrumentation point, hence tracing the
1955 log statement.
1956
1957 The log level condition of an
1958 <<event,event rule>> is considered when tracing
1959 a Java or a Python application, and it's compatible with the standard
1960 JUL, log4j, and Python log levels.
1961
1962
1963 [[lttng-modules]]
1964 === LTTng kernel modules
1965
1966 [role="img-100"]
1967 .The LTTng kernel modules.
1968 image::plumbing-lttng-modules.png[]
1969
1970 The _LTTng kernel modules_ are a set of Linux kernel modules
1971 which implement the kernel tracer of the LTTng project. The LTTng
1972 kernel modules are part of LTTng-modules.
1973
1974 The LTTng kernel modules include:
1975
1976 * A set of _probe_ modules.
1977 +
1978 Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points. There are
1980 also modules to attach to the entry and return points of the Linux
1981 system call functions.
1982
1983 * _Ring buffer_ modules.
1984 +
1985 A ring buffer implementation is provided as kernel modules. The LTTng
1986 kernel tracer writes to the ring buffer; a
1987 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1988
1989 * The _LTTng kernel tracer_ module.
1990 * The _LTTng logger_ module.
1991 +
1992 The LTTng logger module implements the special path:{/proc/lttng-logger}
1993 file so that any executable can generate LTTng events by opening and
1994 writing to this file.
1995 +
1996 See <<proc-lttng-logger-abi,LTTng logger>>.
1997
1998 Generally, you do not have to load the LTTng kernel modules manually
1999 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
2000 daemon>> loads the necessary modules when starting. If you have extra
probe modules, you can specify them on the session daemon's command
line so that it loads them.
2003
2004 The LTTng kernel modules are installed in
2005 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
2006 the kernel release (see `uname --kernel-release`).
2007
2008
2009 [[lttng-sessiond]]
2010 === Session daemon
2011
2012 [role="img-100"]
2013 .The session daemon.
2014 image::plumbing-sessiond.png[]
2015
2016 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
2017 managing tracing sessions and for controlling the various components of
2018 LTTng. The session daemon is part of LTTng-tools.
2019
2020 The session daemon sends control requests to and receives control
2021 responses from:
2022
2023 * The <<lttng-ust,user space tracing library>>.
2024 +
2025 Any instance of the user space tracing library first registers to
2026 a session daemon. Then, the session daemon can send requests to
2027 this instance, such as:
2028 +
2029 --
2030 ** Get the list of tracepoints.
2031 ** Share an <<event,event rule>> so that the user space tracing library
2032 can enable or disable tracepoints. Amongst the possible conditions
of an event rule is a filter expression which `liblttng-ust` evaluates
2034 when an event occurs.
2035 ** Share <<channel,channel>> attributes and ring buffer locations.
2036 --
2037 +
2038 The session daemon and the user space tracing library use a Unix
2039 domain socket for their communication.
2040
2041 * The <<lttng-ust-agents,user space tracing agents>>.
2042 +
2043 Any instance of a user space tracing agent first registers to
2044 a session daemon. Then, the session daemon can send requests to
2045 this instance, such as:
2046 +
2047 --
2048 ** Get the list of loggers.
2049 ** Enable or disable a specific logger.
2050 --
2051 +
2052 The session daemon and the user space tracing agent use a TCP connection
2053 for their communication.
2054
2055 * The <<lttng-modules,LTTng kernel tracer>>.
2056 * The <<lttng-consumerd,consumer daemon>>.
2057 +
2058 The session daemon sends requests to the consumer daemon to instruct
2059 it where to send the trace data streams, amongst other information.
2060
2061 * The <<lttng-relayd,relay daemon>>.
2062
2063 The session daemon receives commands from the
2064 <<liblttng-ctl-lttng,tracing control library>>.
2065
2066 The root session daemon loads the appropriate
2067 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2068 a <<lttng-consumerd,consumer daemon>> as soon as you create
2069 an <<event,event rule>>.
2070
2071 The session daemon does not send and receive trace data: this is the
2072 role of the <<lttng-consumerd,consumer daemon>> and
2073 <<lttng-relayd,relay daemon>>. It does, however, generate the
2074 http://diamon.org/ctf/[CTF] metadata stream.
2075
2076 Each Unix user can have its own session daemon instance. The
2077 tracing sessions managed by different session daemons are completely
2078 independent.
2079
2080 The root user's session daemon is the only one which is
2081 allowed to control the LTTng kernel tracer, and its spawned consumer
2082 daemon is the only one which is allowed to consume trace data from the
LTTng kernel tracer. Note, however, that any Unix user who is a member
2084 of the <<tracing-group,tracing group>> is allowed
2085 to create <<channel,channels>> in the
2086 Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
2087 kernel.
2088
2089 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
2090 session daemon when using its `create` command if none is currently
2091 running. You can also start the session daemon manually.
2092
2093
2094 [[lttng-consumerd]]
2095 === Consumer daemon
2096
2097 [role="img-100"]
2098 .The consumer daemon.
2099 image::plumbing-consumerd.png[]
2100
2101 The _consumer daemon_, man:lttng-consumerd(8), is a daemon which shares
2102 ring buffers with user applications or with the LTTng kernel modules to
2103 collect trace data and send it to some location (on disk or to a
2104 <<lttng-relayd,relay daemon>> over the network). The consumer daemon
2105 is part of LTTng-tools.
2106
2107 You do not start a consumer daemon manually: a consumer daemon is always
2108 spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
2109 <<event,event rule>>, that is, before you start tracing. When you kill
2110 its owner session daemon, the consumer daemon also exits because it is
2111 the session daemon's child process. Command-line options of
2112 man:lttng-sessiond(8) target the consumer daemon process.
2113
2114 There are up to two running consumer daemons per Unix user, whereas only
2115 one session daemon can run per user. This is because each process can be
2116 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
2117 and 64-bit processes, it is more efficient to have separate
2118 corresponding 32-bit and 64-bit consumer daemons. The root user is an
2119 exception: it can have up to _three_ running consumer daemons: 32-bit
2120 and 64-bit instances for its user applications, and one more
2121 reserved for collecting kernel trace data.
2122
2123
2124 [[lttng-relayd]]
2125 === Relay daemon
2126
2127 [role="img-100"]
2128 .The relay daemon.
2129 image::plumbing-relayd.png[]
2130
2131 The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
2132 between remote session and consumer daemons, local trace files, and a
2133 remote live trace viewer. The relay daemon is part of LTTng-tools.
2134
2135 The main purpose of the relay daemon is to implement a receiver of
2136 <<sending-trace-data-over-the-network,trace data over the network>>.
2137 This is useful when the target system does not have much file system
2138 space to record trace files locally.
2139
2140 The relay daemon is also a server to which a
2141 <<lttng-live,live trace viewer>> can
2142 connect. The live trace viewer sends requests to the relay daemon to
2143 receive trace data as the target system emits events. The
2144 communication protocol is named _LTTng live_; it is used over TCP
2145 connections.
2146
2147 Note that you can start the relay daemon on the target system directly.
2148 This is the setup of choice when the use case is to view events as
the target system emits them without the need for a remote system.
2150
2151
2152 [[instrumenting]]
2153 == [[using-lttng]]Instrumentation
2154
2155 There are many examples of tracing and monitoring in our everyday life:
2156
2157 * You have access to real-time and historical weather reports and
2158 forecasts thanks to weather stations installed around the country.
2159 * You know your heart is safe thanks to an electrocardiogram.
2160 * You make sure not to drive your car too fast and to have enough fuel
2161 to reach your destination thanks to gauges visible on your dashboard.
2162
2163 All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to your skin,
cardiac monitoring is futile.
2166
2167 LTTng, as a tracer, is no different from those real life examples. If
2168 you're about to trace a software system or, in other words, record its
history of execution, you'd better have **instrumentation points** in the
2170 subject you're tracing, that is, the actual software.
2171
2172 Various ways were developed to instrument a piece of software for LTTng
2173 tracing. The most straightforward one is to manually place
2174 instrumentation points, called _tracepoints_, in the software's source
2175 code. It is also possible to add instrumentation points dynamically in
2176 the Linux kernel <<domain,tracing domain>>.
2177
2178 If you're only interested in tracing the Linux kernel, your
2179 instrumentation needs are probably already covered by LTTng's built-in
2180 <<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
2181 user application which is already instrumented for LTTng tracing.
2182 In such cases, you can skip this whole section and read the topics of
2183 the <<controlling-tracing,Tracing control>> section.
2184
2185 Many methods are available to instrument a piece of software for LTTng
2186 tracing. They are:
2187
2188 * <<c-application,User space instrumentation for C and $$C++$$
2189 applications>>.
2190 * <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
2191 * <<java-application,User space Java agent>>.
2192 * <<python-application,User space Python agent>>.
2193 * <<proc-lttng-logger-abi,LTTng logger>>.
2194 * <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
2195
2196
2197 [[c-application]]
2198 === [[cxx-application]]User space instrumentation for C and $$C++$$ applications
2199
2200 The procedure to instrument a C or $$C++$$ user application with
2201 the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
2202
2203 . <<tracepoint-provider,Create the source files of a tracepoint provider
2204 package>>.
2205 . <<probing-the-application-source-code,Add tracepoints to
2206 the application's source code>>.
2207 . <<building-tracepoint-providers-and-user-application,Build and link
2208 a tracepoint provider package and the user application>>.
2209
2210 If you need quick, man:printf(3)-like instrumentation, you can skip
2211 those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
2212 instead.
2213
2214 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
2215 instrument a user application with `liblttng-ust`.
2216
2217
2218 [[tracepoint-provider]]
2219 ==== Create the source files of a tracepoint provider package
2220
2221 A _tracepoint provider_ is a set of compiled functions which provide
2222 **tracepoints** to an application, the type of instrumentation point
2223 supported by LTTng-UST. Those functions can emit events with
2224 user-defined fields and serialize those events as event records to one
2225 or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
2226 macro, which you <<probing-the-application-source-code,insert in a user
2227 application's source code>>, calls those functions.
2228
2229 A _tracepoint provider package_ is an object file (`.o`) or a shared
2230 library (`.so`) which contains one or more tracepoint providers.
2231 Its source files are:
2232
2233 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
2234 * A <<tpp-source,tracepoint provider package source>> (`.c`).
2235
2236 A tracepoint provider package is dynamically linked with `liblttng-ust`,
2237 the LTTng user space tracer, at run time.
2238
2239 [role="img-100"]
2240 .User application linked with `liblttng-ust` and containing a tracepoint provider.
2241 image::ust-app.png[]
2242
2243 NOTE: If you need quick, man:printf(3)-like instrumentation, you can
2244 skip creating and using a tracepoint provider and use
2245 <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
2246
2247
2248 [[tpp-header]]
2249 ===== Create a tracepoint provider header file template
2250
2251 A _tracepoint provider header file_ contains the tracepoint
2252 definitions of a tracepoint provider.
2253
2254 To create a tracepoint provider header file:
2255
2256 . Start from this template:
2257 +
2258 --
2259 [source,c]
2260 .Tracepoint provider header file template (`.h` file extension).
2261 ----
2262 #undef TRACEPOINT_PROVIDER
2263 #define TRACEPOINT_PROVIDER provider_name
2264
2265 #undef TRACEPOINT_INCLUDE
2266 #define TRACEPOINT_INCLUDE "./tp.h"
2267
2268 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
2269 #define _TP_H
2270
2271 #include <lttng/tracepoint.h>
2272
2273 /*
2274 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
2275 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
2276 */
2277
2278 #endif /* _TP_H */
2279
2280 #include <lttng/tracepoint-event.h>
2281 ----
2282 --
2283
2284 . Replace:
2285 +
2286 * `provider_name` with the name of your tracepoint provider.
2287 * `"tp.h"` with the name of your tracepoint provider header file.
2288
2289 . Below the `#include <lttng/tracepoint.h>` line, put your
2290 <<defining-tracepoints,tracepoint definitions>>.
2291
2292 Your tracepoint provider name must be unique amongst all the possible
2293 tracepoint provider names used on the same target system. We
suggest that you include the name of your project or company in the name,
2295 for example, `org_lttng_my_project_tpp`.
2296
2297 TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
2298 this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
2299 write are the <<defining-tracepoints,tracepoint definitions>>.
2300
2301
2302 [[defining-tracepoints]]
2303 ===== Create a tracepoint definition
2304
2305 A _tracepoint definition_ defines, for a given tracepoint:
2306
2307 * Its **input arguments**. They are the macro parameters that the
2308 `tracepoint()` macro accepts for this particular tracepoint
2309 in the user application's source code.
2310 * Its **output event fields**. They are the sources of event fields
2311 that form the payload of any event that the execution of the
2312 `tracepoint()` macro emits for this particular tracepoint.
2313
2314 You can create a tracepoint definition by using the
2315 `TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2316 line in the
2317 <<tpp-header,tracepoint provider header file template>>.
2318
2319 The syntax of the `TRACEPOINT_EVENT()` macro is:
2320
2321 [source,c]
2322 .`TRACEPOINT_EVENT()` macro syntax.
2323 ----
2324 TRACEPOINT_EVENT(
2325 /* Tracepoint provider name */
2326 provider_name,
2327
2328 /* Tracepoint name */
2329 tracepoint_name,
2330
2331 /* Input arguments */
2332 TP_ARGS(
2333 arguments
2334 ),
2335
2336 /* Output event fields */
2337 TP_FIELDS(
2338 fields
2339 )
2340 )
2341 ----
2342
2343 Replace:
2344
2345 * `provider_name` with your tracepoint provider name.
2346 * `tracepoint_name` with your tracepoint name.
2347 * `arguments` with the <<tpp-def-input-args,input arguments>>.
2348 * `fields` with the <<tpp-def-output-fields,output event field>>
2349 definitions.
2350
2351 This tracepoint emits events named `provider_name:tracepoint_name`.
2352
2353 [IMPORTANT]
2354 .Event name's length limitation
2355 ====
2356 The concatenation of the tracepoint provider name and the
2357 tracepoint name must not exceed **254 characters**. If it does, the
2358 instrumented application compiles and runs, but LTTng throws multiple
2359 warnings and you could experience serious issues.
2360 ====
2361
2362 [[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
2363
2364 [source,c]
2365 .`TP_ARGS()` macro syntax.
2366 ----
2367 TP_ARGS(
2368 type, arg_name
2369 )
2370 ----
2371
2372 Replace:
2373
2374 * `type` with the C type of the argument.
2375 * `arg_name` with the argument name.
2376
2377 You can repeat `type` and `arg_name` up to 10 times to have
2378 more than one argument.
2379
2380 .`TP_ARGS()` usage with three arguments.
2381 ====
2382 [source,c]
2383 ----
2384 TP_ARGS(
2385 int, count,
2386 float, ratio,
2387 const char*, query
2388 )
2389 ----
2390 ====
2391
2392 The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2393 tracepoint definition with no input arguments.
2394
2395 [[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2396 `ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2397 man:lttng-ust(3) for a complete description of the available `ctf_*()`
2398 macros. A `ctf_*()` macro specifies the type, size, and byte order of
2399 one event field.
2400
2401 Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
C expression that the tracer evaluates at the `tracepoint()` macro site
2403 in the application's source code. This expression provides a field's
2404 source of data. The argument expression can include input argument names
2405 listed in the `TP_ARGS()` macro.
2406
2407 Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2408 must be unique within a given tracepoint definition.
2409
2410 Here's a complete tracepoint definition example:
2411
2412 .Tracepoint definition.
2413 ====
2414 The following tracepoint definition defines a tracepoint which takes
2415 three input arguments and has four output event fields.
2416
2417 [source,c]
2418 ----
2419 #include "my-custom-structure.h"
2420
2421 TRACEPOINT_EVENT(
2422 my_provider,
2423 my_tracepoint,
2424 TP_ARGS(
2425 const struct my_custom_structure*, my_custom_structure,
2426 float, ratio,
2427 const char*, query
2428 ),
2429 TP_FIELDS(
2430 ctf_string(query_field, query)
2431 ctf_float(double, ratio_field, ratio)
2432 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2433 ctf_integer(int, send_size, my_custom_structure->send_size)
2434 )
2435 )
2436 ----
2437
2438 You can refer to this tracepoint definition with the `tracepoint()`
2439 macro in your application's source code like this:
2440
2441 [source,c]
2442 ----
2443 tracepoint(my_provider, my_tracepoint,
2444 my_structure, some_ratio, the_query);
2445 ----
2446 ====
2447
NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
if an enabled <<event,event rule>> matches the tracepoint.
2450
2451
2452 [[using-tracepoint-classes]]
2453 ===== Use a tracepoint class
2454
2455 A _tracepoint class_ is a class of tracepoints which share the same
2456 output event field definitions. A _tracepoint instance_ is one
2457 instance of such a defined tracepoint class, with its own tracepoint
2458 name.
2459
2460 The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2461 shorthand which defines both a tracepoint class and a tracepoint
2462 instance at the same time.
2463
2464 When you build a tracepoint provider package, the C or $$C++$$ compiler
2465 creates one serialization function for each **tracepoint class**. A
2466 serialization function is responsible for serializing the event fields
2467 of a tracepoint to a sub-buffer when tracing.
2468
2469 For various performance reasons, when your situation requires multiple
2470 tracepoint definitions with different names, but with the same event
2471 fields, we recommend that you manually create a tracepoint class
2472 and instantiate as many tracepoint instances as needed. One positive
2473 effect of such a design, amongst other advantages, is that all
2474 tracepoint instances of the same tracepoint class reuse the same
2475 serialization function, thus reducing
2476 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2477
2478 .Use a tracepoint class and tracepoint instances.
2479 ====
2480 Consider the following three tracepoint definitions:
2481
2482 [source,c]
2483 ----
2484 TRACEPOINT_EVENT(
2485 my_app,
2486 get_account,
2487 TP_ARGS(
2488 int, userid,
2489 size_t, len
2490 ),
2491 TP_FIELDS(
2492 ctf_integer(int, userid, userid)
2493 ctf_integer(size_t, len, len)
2494 )
2495 )
2496
2497 TRACEPOINT_EVENT(
2498 my_app,
2499 get_settings,
2500 TP_ARGS(
2501 int, userid,
2502 size_t, len
2503 ),
2504 TP_FIELDS(
2505 ctf_integer(int, userid, userid)
2506 ctf_integer(size_t, len, len)
2507 )
2508 )
2509
2510 TRACEPOINT_EVENT(
2511 my_app,
2512 get_transaction,
2513 TP_ARGS(
2514 int, userid,
2515 size_t, len
2516 ),
2517 TP_FIELDS(
2518 ctf_integer(int, userid, userid)
2519 ctf_integer(size_t, len, len)
2520 )
2521 )
2522 ----
2523
2524 In this case, we create three tracepoint classes, with one implicit
2525 tracepoint instance for each of them: `get_account`, `get_settings`, and
2526 `get_transaction`. However, they all share the same event field names
2527 and types. Hence three identical, yet independent serialization
2528 functions are created when you build the tracepoint provider package.
2529
2530 A better design choice is to define a single tracepoint class and three
2531 tracepoint instances:
2532
2533 [source,c]
2534 ----
2535 /* The tracepoint class */
2536 TRACEPOINT_EVENT_CLASS(
2537 /* Tracepoint provider name */
2538 my_app,
2539
2540 /* Tracepoint class name */
2541 my_class,
2542
2543 /* Input arguments */
2544 TP_ARGS(
2545 int, userid,
2546 size_t, len
2547 ),
2548
2549 /* Output event fields */
2550 TP_FIELDS(
2551 ctf_integer(int, userid, userid)
2552 ctf_integer(size_t, len, len)
2553 )
2554 )
2555
2556 /* The tracepoint instances */
2557 TRACEPOINT_EVENT_INSTANCE(
2558 /* Tracepoint provider name */
2559 my_app,
2560
2561 /* Tracepoint class name */
2562 my_class,
2563
2564 /* Tracepoint name */
2565 get_account,
2566
2567 /* Input arguments */
2568 TP_ARGS(
2569 int, userid,
2570 size_t, len
2571 )
2572 )
2573 TRACEPOINT_EVENT_INSTANCE(
2574 my_app,
2575 my_class,
2576 get_settings,
2577 TP_ARGS(
2578 int, userid,
2579 size_t, len
2580 )
2581 )
2582 TRACEPOINT_EVENT_INSTANCE(
2583 my_app,
2584 my_class,
2585 get_transaction,
2586 TP_ARGS(
2587 int, userid,
2588 size_t, len
2589 )
2590 )
2591 ----
2592 ====
2593
2594
2595 [[assigning-log-levels]]
2596 ===== Assign a log level to a tracepoint definition
2597
2598 You can assign an optional _log level_ to a
2599 <<defining-tracepoints,tracepoint definition>>.
2600
2601 Assigning different levels of severity to tracepoint definitions can
2602 be useful: when you <<enabling-disabling-events,create an event rule>>,
2603 you can target tracepoints having a log level as severe as a specific
2604 value.
2605
2606 The concept of LTTng-UST log levels is similar to the levels found
2607 in typical logging frameworks:
2608
2609 * In a logging framework, the log level is given by the function
2610 or method name you use at the log statement site: `debug()`,
2611 `info()`, `warn()`, `error()`, and so on.
2612 * In LTTng-UST, you statically assign the log level to a tracepoint
2613 definition; any `tracepoint()` macro invocation which refers to
2614 this definition has this log level.
2615
2616 You can assign a log level to a tracepoint definition with the
2617 `TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2618 <<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a given
2620 tracepoint.
2621
2622 The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2623
2624 [source,c]
2625 .`TRACEPOINT_LOGLEVEL()` macro syntax.
2626 ----
2627 TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2628 ----
2629
2630 Replace:
2631
2632 * `provider_name` with the tracepoint provider name.
2633 * `tracepoint_name` with the tracepoint name.
2634 * `log_level` with the log level to assign to the tracepoint
2635 definition named `tracepoint_name` in the `provider_name`
2636 tracepoint provider.
2637 +
2638 See man:lttng-ust(3) for a list of available log level names.
2639
2640 .Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2641 ====
2642 [source,c]
2643 ----
2644 /* Tracepoint definition */
2645 TRACEPOINT_EVENT(
2646 my_app,
2647 get_transaction,
2648 TP_ARGS(
2649 int, userid,
2650 size_t, len
2651 ),
2652 TP_FIELDS(
2653 ctf_integer(int, userid, userid)
2654 ctf_integer(size_t, len, len)
2655 )
2656 )
2657
2658 /* Log level assignment */
2659 TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2660 ----
2661 ====
2662
2663
2664 [[tpp-source]]
2665 ===== Create a tracepoint provider package source file
2666
2667 A _tracepoint provider package source file_ is a C source file which
2668 includes a <<tpp-header,tracepoint provider header file>> to expand its
2669 macros into event serialization and other functions.
2670
2671 You can always use the following tracepoint provider package source
2672 file template:
2673
2674 [source,c]
2675 .Tracepoint provider package source file template.
2676 ----
2677 #define TRACEPOINT_CREATE_PROBES
2678
2679 #include "tp.h"
2680 ----
2681
Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You may also include more than one tracepoint
provider header file here to create a tracepoint provider package
holding more than one tracepoint provider.
2686
2687
2688 [[probing-the-application-source-code]]
2689 ==== Add tracepoints to an application's source code
2690
2691 Once you <<tpp-header,create a tracepoint provider header file>>, you
2692 can use the `tracepoint()` macro in your application's
2693 source code to insert the tracepoints that this header
2694 <<defining-tracepoints,defines>>.
2695
2696 The `tracepoint()` macro takes at least two parameters: the tracepoint
2697 provider name and the tracepoint name. The corresponding tracepoint
2698 definition defines the other parameters.
2699
2700 .`tracepoint()` usage.
2701 ====
2702 The following <<defining-tracepoints,tracepoint definition>> defines a
2703 tracepoint which takes two input arguments and has two output event
2704 fields.
2705
2706 [source,c]
2707 .Tracepoint provider header file.
2708 ----
2709 #include "my-custom-structure.h"
2710
2711 TRACEPOINT_EVENT(
2712 my_provider,
2713 my_tracepoint,
2714 TP_ARGS(
2715 int, argc,
2716 const char*, cmd_name
2717 ),
2718 TP_FIELDS(
2719 ctf_string(cmd_name, cmd_name)
2720 ctf_integer(int, number_of_args, argc)
2721 )
2722 )
2723 ----
2724
2725 You can refer to this tracepoint definition with the `tracepoint()`
2726 macro in your application's source code like this:
2727
2728 [source,c]
2729 .Application's source file.
2730 ----
#define TRACEPOINT_DEFINE
#include "tp.h"
2732
int main(int argc, char* argv[])
{
    tracepoint(my_provider, my_tracepoint, argc, argv[0]);

    return 0;
}
2739 ----
2740
2741 Note how the application's source code includes
2742 the tracepoint provider header file containing the tracepoint
2743 definitions to use, path:{tp.h}.
2744 ====
2745
2746 .`tracepoint()` usage with a complex tracepoint definition.
2747 ====
2748 Consider this complex tracepoint definition, where multiple event
2749 fields refer to the same input arguments in their argument expression
2750 parameter:
2751
2752 [source,c]
2753 .Tracepoint provider header file.
2754 ----
2755 /* For `struct stat` */
2756 #include <sys/types.h>
2757 #include <sys/stat.h>
2758 #include <unistd.h>
2759
TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, my_int_arg,
        char*, my_str_arg,
        struct stat*, st
    ),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23 + 17)
        ctf_integer(int, my_int_arg_field, my_int_arg)
        ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
        ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
                                     my_str_arg[2] + my_str_arg[3])
        ctf_string(my_str_arg_field, my_str_arg)
        ctf_integer_hex(off_t, size_field, st->st_size)
        ctf_float(double, size_dbl_field, (double) st->st_size)
        ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
                          size_t, strlen(my_str_arg) / 2)
    )
)
2781 ----
2782
2783 You can refer to this tracepoint definition with the `tracepoint()`
2784 macro in your application's source code like this:
2785
2786 [source,c]
2787 .Application's source file.
2788 ----
2789 #define TRACEPOINT_DEFINE
2790 #include "tp.h"
2791
int main(void)
{
    struct stat s;

    stat("/etc/fstab", &s);
    tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);

    return 0;
}
2801 ----
2802
2803 If you look at the event record that LTTng writes when tracing this
2804 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2805 it should look like this:
2806
.Event record fields
|====
|Field's name |Field's value

|`my_constant_field` |40
|`my_int_arg_field` |23
|`my_int_arg_field2` |529
|`sum4_field` |389
|`my_str_arg_field` |`Hello, World!`
|`size_field` |0x12d
|`size_dbl_field` |301.0
|`half_my_str_arg_field` |`Hello,`
|====
2819 ====
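
As a cross-check, you can reproduce the computed field values of the
table above with plain C, without any LTTng machinery (a quick sketch,
not part of the instrumentation itself):

[source,c]
.Reproducing the field values (sketch).
----
#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *my_str_arg = "Hello, World!";
    int my_int_arg = 23;

    /* my_constant_field: 23 + 17 */
    assert(23 + 17 == 40);

    /* my_int_arg_field2: my_int_arg * my_int_arg */
    assert(my_int_arg * my_int_arg == 529);

    /* sum4_field: sum of the first four character codes */
    assert(my_str_arg[0] + my_str_arg[1] +
           my_str_arg[2] + my_str_arg[3] == 389);

    /* half_my_str_arg_field: first half of the string */
    printf("%.*s\n", (int) (strlen(my_str_arg) / 2), my_str_arg);

    return 0;
}
----

This program prints `Hello,`, the value of the
`half_my_str_arg_field` field.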
2820
2821 Sometimes, the arguments you pass to `tracepoint()` are expensive to
2822 compute--they use the call stack, for example. To avoid this
2823 computation when the tracepoint is disabled, you can use the
2824 `tracepoint_enabled()` and `do_tracepoint()` macros.
2825
2826 The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2827 is:
2828
2829 [source,c]
2830 .`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2831 ----
2832 tracepoint_enabled(provider_name, tracepoint_name)
2833 do_tracepoint(provider_name, tracepoint_name, ...)
2834 ----
2835
2836 Replace:
2837
2838 * `provider_name` with the tracepoint provider name.
2839 * `tracepoint_name` with the tracepoint name.
2840
2841 `tracepoint_enabled()` returns a non-zero value if the tracepoint named
2842 `tracepoint_name` from the provider named `provider_name` is enabled
2843 **at run time**.
2844
`do_tracepoint()` is like `tracepoint()`, except that it doesn't check
whether the tracepoint is enabled. Using `tracepoint()` in combination
with `tracepoint_enabled()` is dangerous, because `tracepoint()` also
performs its own `tracepoint_enabled()` check, which makes a race
condition possible in this situation:
2850
2851 [source,c]
2852 .Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2853 ----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
}
2857
2858 tracepoint(my_provider, my_tracepoint, stuff);
2859 ----
2860
If the tracepoint becomes enabled after the condition is evaluated,
then `stuff` is not prepared: the emitted event either contains wrong
data, or the whole application could crash (with a segmentation fault,
for example).
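
To avoid this race, check the condition and emit the event in the same
block with `do_tracepoint()`, for example (a sketch based on the
hypothetical `my_provider`/`my_tracepoint` pair used above):

[source,c]
.Race-free usage of `tracepoint_enabled()` with `do_tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Expensive computation only when the tracepoint is enabled */
    stuff = prepare_stuff();

    /* Emit the event without a second enabled check */
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----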
2864
NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` has an
`STAP_PROBEV()` call. If you need it, you must emit
this call yourself.
2868
2869
2870 [[building-tracepoint-providers-and-user-application]]
2871 ==== Build and link a tracepoint provider package and an application
2872
2873 Once you have one or more <<tpp-header,tracepoint provider header
2874 files>> and a <<tpp-source,tracepoint provider package source file>>,
2875 you can create the tracepoint provider package by compiling its source
2876 file. From here, multiple build and run scenarios are possible. The
2877 following table shows common application and library configurations
2878 along with the required command lines to achieve them.
2879
We use the following file names in the diagrams below:
2881
2882 `app`::
2883 Executable application.
2884
2885 `app.o`::
2886 Application's object file.
2887
2888 `tpp.o`::
2889 Tracepoint provider package object file.
2890
2891 `tpp.a`::
2892 Tracepoint provider package archive file.
2893
2894 `libtpp.so`::
2895 Tracepoint provider package shared object file.
2896
2897 `emon.o`::
2898 User library object file.
2899
2900 `libemon.so`::
2901 User library shared object file.
2902
We use the following symbols in the diagrams of the table below:
2904
2905 [role="img-100"]
2906 .Symbols used in the build scenario diagrams.
2907 image::ust-sit-symbols.png[]
2908
2909 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2910 variable in the following instructions.
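
For example, with a Bourne-style shell:

[role="term"]
----
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
----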
2911
2912 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2913 .Common tracepoint provider package scenarios.
2914 |====
2915 |Scenario |Instructions
2916
2917 |
2918 The instrumented application is statically linked with
2919 the tracepoint provider package object.
2920
2921 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2922
2923 |
2924 include::../common/ust-sit-step-tp-o.txt[]
2925
2926 To build the instrumented application:
2927
2928 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2929 +
2930 --
2931 [source,c]
2932 ----
2933 #define TRACEPOINT_DEFINE
2934 ----
2935 --
2936
2937 . Compile the application source file:
2938 +
2939 --
2940 [role="term"]
2941 ----
2942 gcc -c app.c
2943 ----
2944 --
2945
2946 . Build the application:
2947 +
2948 --
2949 [role="term"]
2950 ----
2951 gcc -o app app.o tpp.o -llttng-ust -ldl
2952 ----
2953 --
2954
2955 To run the instrumented application:
2956
2957 * Start the application:
2958 +
2959 --
2960 [role="term"]
2961 ----
2962 ./app
2963 ----
2964 --
2965
2966 |
2967 The instrumented application is statically linked with the
2968 tracepoint provider package archive file.
2969
2970 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2971
2972 |
2973 To create the tracepoint provider package archive file:
2974
2975 . Compile the <<tpp-source,tracepoint provider package source file>>:
2976 +
2977 --
2978 [role="term"]
2979 ----
2980 gcc -I. -c tpp.c
2981 ----
2982 --
2983
2984 . Create the tracepoint provider package archive file:
2985 +
2986 --
2987 [role="term"]
2988 ----
2989 ar rcs tpp.a tpp.o
2990 ----
2991 --
2992
2993 To build the instrumented application:
2994
2995 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2996 +
2997 --
2998 [source,c]
2999 ----
3000 #define TRACEPOINT_DEFINE
3001 ----
3002 --
3003
3004 . Compile the application source file:
3005 +
3006 --
3007 [role="term"]
3008 ----
3009 gcc -c app.c
3010 ----
3011 --
3012
3013 . Build the application:
3014 +
3015 --
3016 [role="term"]
3017 ----
3018 gcc -o app app.o tpp.a -llttng-ust -ldl
3019 ----
3020 --
3021
3022 To run the instrumented application:
3023
3024 * Start the application:
3025 +
3026 --
3027 [role="term"]
3028 ----
3029 ./app
3030 ----
3031 --
3032
3033 |
3034 The instrumented application is linked with the tracepoint provider
3035 package shared object.
3036
3037 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3038
3039 |
3040 include::../common/ust-sit-step-tp-so.txt[]
3041
3042 To build the instrumented application:
3043
3044 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3045 +
3046 --
3047 [source,c]
3048 ----
3049 #define TRACEPOINT_DEFINE
3050 ----
3051 --
3052
3053 . Compile the application source file:
3054 +
3055 --
3056 [role="term"]
3057 ----
3058 gcc -c app.c
3059 ----
3060 --
3061
3062 . Build the application:
3063 +
3064 --
3065 [role="term"]
3066 ----
3067 gcc -o app app.o -ldl -L. -ltpp
3068 ----
3069 --
3070
3071 To run the instrumented application:
3072
3073 * Start the application:
3074 +
3075 --
3076 [role="term"]
3077 ----
3078 ./app
3079 ----
3080 --
3081
3082 |
3083 The tracepoint provider package shared object is preloaded before the
3084 instrumented application starts.
3085
3086 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3087
3088 |
3089 include::../common/ust-sit-step-tp-so.txt[]
3090
3091 To build the instrumented application:
3092
3093 . In path:{app.c}, before including path:{tpp.h}, add the
3094 following lines:
3095 +
3096 --
3097 [source,c]
3098 ----
3099 #define TRACEPOINT_DEFINE
3100 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3101 ----
3102 --
3103
3104 . Compile the application source file:
3105 +
3106 --
3107 [role="term"]
3108 ----
3109 gcc -c app.c
3110 ----
3111 --
3112
3113 . Build the application:
3114 +
3115 --
3116 [role="term"]
3117 ----
3118 gcc -o app app.o -ldl
3119 ----
3120 --
3121
3122 To run the instrumented application with tracing support:
3123
3124 * Preload the tracepoint provider package shared object and
3125 start the application:
3126 +
3127 --
3128 [role="term"]
3129 ----
3130 LD_PRELOAD=./libtpp.so ./app
3131 ----
3132 --
3133
3134 To run the instrumented application without tracing support:
3135
3136 * Start the application:
3137 +
3138 --
3139 [role="term"]
3140 ----
3141 ./app
3142 ----
3143 --
3144
3145 |
3146 The instrumented application dynamically loads the tracepoint provider
3147 package shared object.
3148
3149 See the <<dlclose-warning,warning about `dlclose()`>>.
3150
3151 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3152
3153 |
3154 include::../common/ust-sit-step-tp-so.txt[]
3155
3156 To build the instrumented application:
3157
3158 . In path:{app.c}, before including path:{tpp.h}, add the
3159 following lines:
3160 +
3161 --
3162 [source,c]
3163 ----
3164 #define TRACEPOINT_DEFINE
3165 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3166 ----
3167 --
3168
3169 . Compile the application source file:
3170 +
3171 --
3172 [role="term"]
3173 ----
3174 gcc -c app.c
3175 ----
3176 --
3177
3178 . Build the application:
3179 +
3180 --
3181 [role="term"]
3182 ----
3183 gcc -o app app.o -ldl
3184 ----
3185 --
3186
3187 To run the instrumented application:
3188
3189 * Start the application:
3190 +
3191 --
3192 [role="term"]
3193 ----
3194 ./app
3195 ----
3196 --
3197
3198 |
3199 The application is linked with the instrumented user library.
3200
3201 The instrumented user library is statically linked with the tracepoint
3202 provider package object file.
3203
3204 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3205
3206 |
3207 include::../common/ust-sit-step-tp-o-fpic.txt[]
3208
3209 To build the instrumented user library:
3210
3211 . In path:{emon.c}, before including path:{tpp.h}, add the
3212 following line:
3213 +
3214 --
3215 [source,c]
3216 ----
3217 #define TRACEPOINT_DEFINE
3218 ----
3219 --
3220
3221 . Compile the user library source file:
3222 +
3223 --
3224 [role="term"]
3225 ----
3226 gcc -I. -fpic -c emon.c
3227 ----
3228 --
3229
3230 . Build the user library shared object:
3231 +
3232 --
3233 [role="term"]
3234 ----
3235 gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3236 ----
3237 --
3238
3239 To build the application:
3240
3241 . Compile the application source file:
3242 +
3243 --
3244 [role="term"]
3245 ----
3246 gcc -c app.c
3247 ----
3248 --
3249
3250 . Build the application:
3251 +
3252 --
3253 [role="term"]
3254 ----
3255 gcc -o app app.o -L. -lemon
3256 ----
3257 --
3258
3259 To run the application:
3260
3261 * Start the application:
3262 +
3263 --
3264 [role="term"]
3265 ----
3266 ./app
3267 ----
3268 --
3269
3270 |
3271 The application is linked with the instrumented user library.
3272
3273 The instrumented user library is linked with the tracepoint provider
3274 package shared object.
3275
3276 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3277
3278 |
3279 include::../common/ust-sit-step-tp-so.txt[]
3280
3281 To build the instrumented user library:
3282
3283 . In path:{emon.c}, before including path:{tpp.h}, add the
3284 following line:
3285 +
3286 --
3287 [source,c]
3288 ----
3289 #define TRACEPOINT_DEFINE
3290 ----
3291 --
3292
3293 . Compile the user library source file:
3294 +
3295 --
3296 [role="term"]
3297 ----
3298 gcc -I. -fpic -c emon.c
3299 ----
3300 --
3301
3302 . Build the user library shared object:
3303 +
3304 --
3305 [role="term"]
3306 ----
3307 gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3308 ----
3309 --
3310
3311 To build the application:
3312
3313 . Compile the application source file:
3314 +
3315 --
3316 [role="term"]
3317 ----
3318 gcc -c app.c
3319 ----
3320 --
3321
3322 . Build the application:
3323 +
3324 --
3325 [role="term"]
3326 ----
3327 gcc -o app app.o -L. -lemon
3328 ----
3329 --
3330
3331 To run the application:
3332
3333 * Start the application:
3334 +
3335 --
3336 [role="term"]
3337 ----
3338 ./app
3339 ----
3340 --
3341
3342 |
3343 The tracepoint provider package shared object is preloaded before the
3344 application starts.
3345
3346 The application is linked with the instrumented user library.
3347
3348 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3349
3350 |
3351 include::../common/ust-sit-step-tp-so.txt[]
3352
3353 To build the instrumented user library:
3354
3355 . In path:{emon.c}, before including path:{tpp.h}, add the
3356 following lines:
3357 +
3358 --
3359 [source,c]
3360 ----
3361 #define TRACEPOINT_DEFINE
3362 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3363 ----
3364 --
3365
3366 . Compile the user library source file:
3367 +
3368 --
3369 [role="term"]
3370 ----
3371 gcc -I. -fpic -c emon.c
3372 ----
3373 --
3374
3375 . Build the user library shared object:
3376 +
3377 --
3378 [role="term"]
3379 ----
3380 gcc -shared -o libemon.so emon.o -ldl
3381 ----
3382 --
3383
3384 To build the application:
3385
3386 . Compile the application source file:
3387 +
3388 --
3389 [role="term"]
3390 ----
3391 gcc -c app.c
3392 ----
3393 --
3394
3395 . Build the application:
3396 +
3397 --
3398 [role="term"]
3399 ----
3400 gcc -o app app.o -L. -lemon
3401 ----
3402 --
3403
3404 To run the application with tracing support:
3405
3406 * Preload the tracepoint provider package shared object and
3407 start the application:
3408 +
3409 --
3410 [role="term"]
3411 ----
3412 LD_PRELOAD=./libtpp.so ./app
3413 ----
3414 --
3415
3416 To run the application without tracing support:
3417
3418 * Start the application:
3419 +
3420 --
3421 [role="term"]
3422 ----
3423 ./app
3424 ----
3425 --
3426
3427 |
3428 The application is linked with the instrumented user library.
3429
3430 The instrumented user library dynamically loads the tracepoint provider
3431 package shared object.
3432
3433 See the <<dlclose-warning,warning about `dlclose()`>>.
3434
3435 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3436
3437 |
3438 include::../common/ust-sit-step-tp-so.txt[]
3439
3440 To build the instrumented user library:
3441
3442 . In path:{emon.c}, before including path:{tpp.h}, add the
3443 following lines:
3444 +
3445 --
3446 [source,c]
3447 ----
3448 #define TRACEPOINT_DEFINE
3449 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3450 ----
3451 --
3452
3453 . Compile the user library source file:
3454 +
3455 --
3456 [role="term"]
3457 ----
3458 gcc -I. -fpic -c emon.c
3459 ----
3460 --
3461
3462 . Build the user library shared object:
3463 +
3464 --
3465 [role="term"]
3466 ----
3467 gcc -shared -o libemon.so emon.o -ldl
3468 ----
3469 --
3470
3471 To build the application:
3472
3473 . Compile the application source file:
3474 +
3475 --
3476 [role="term"]
3477 ----
3478 gcc -c app.c
3479 ----
3480 --
3481
3482 . Build the application:
3483 +
3484 --
3485 [role="term"]
3486 ----
3487 gcc -o app app.o -L. -lemon
3488 ----
3489 --
3490
3491 To run the application:
3492
3493 * Start the application:
3494 +
3495 --
3496 [role="term"]
3497 ----
3498 ./app
3499 ----
3500 --
3501
3502 |
3503 The application dynamically loads the instrumented user library.
3504
3505 The instrumented user library is linked with the tracepoint provider
3506 package shared object.
3507
3508 See the <<dlclose-warning,warning about `dlclose()`>>.
3509
3510 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3511
3512 |
3513 include::../common/ust-sit-step-tp-so.txt[]
3514
3515 To build the instrumented user library:
3516
3517 . In path:{emon.c}, before including path:{tpp.h}, add the
3518 following line:
3519 +
3520 --
3521 [source,c]
3522 ----
3523 #define TRACEPOINT_DEFINE
3524 ----
3525 --
3526
3527 . Compile the user library source file:
3528 +
3529 --
3530 [role="term"]
3531 ----
3532 gcc -I. -fpic -c emon.c
3533 ----
3534 --
3535
3536 . Build the user library shared object:
3537 +
3538 --
3539 [role="term"]
3540 ----
3541 gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3542 ----
3543 --
3544
3545 To build the application:
3546
3547 . Compile the application source file:
3548 +
3549 --
3550 [role="term"]
3551 ----
3552 gcc -c app.c
3553 ----
3554 --
3555
3556 . Build the application:
3557 +
3558 --
3559 [role="term"]
3560 ----
3561 gcc -o app app.o -ldl -L. -lemon
3562 ----
3563 --
3564
3565 To run the application:
3566
3567 * Start the application:
3568 +
3569 --
3570 [role="term"]
3571 ----
3572 ./app
3573 ----
3574 --
3575
3576 |
3577 The application dynamically loads the instrumented user library.
3578
3579 The instrumented user library dynamically loads the tracepoint provider
3580 package shared object.
3581
3582 See the <<dlclose-warning,warning about `dlclose()`>>.
3583
3584 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3585
3586 |
3587 include::../common/ust-sit-step-tp-so.txt[]
3588
3589 To build the instrumented user library:
3590
3591 . In path:{emon.c}, before including path:{tpp.h}, add the
3592 following lines:
3593 +
3594 --
3595 [source,c]
3596 ----
3597 #define TRACEPOINT_DEFINE
3598 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3599 ----
3600 --
3601
3602 . Compile the user library source file:
3603 +
3604 --
3605 [role="term"]
3606 ----
3607 gcc -I. -fpic -c emon.c
3608 ----
3609 --
3610
3611 . Build the user library shared object:
3612 +
3613 --
3614 [role="term"]
3615 ----
3616 gcc -shared -o libemon.so emon.o -ldl
3617 ----
3618 --
3619
3620 To build the application:
3621
3622 . Compile the application source file:
3623 +
3624 --
3625 [role="term"]
3626 ----
3627 gcc -c app.c
3628 ----
3629 --
3630
3631 . Build the application:
3632 +
3633 --
3634 [role="term"]
3635 ----
3636 gcc -o app app.o -ldl -L. -lemon
3637 ----
3638 --
3639
3640 To run the application:
3641
3642 * Start the application:
3643 +
3644 --
3645 [role="term"]
3646 ----
3647 ./app
3648 ----
3649 --
3650
3651 |
3652 The tracepoint provider package shared object is preloaded before the
3653 application starts.
3654
3655 The application dynamically loads the instrumented user library.
3656
3657 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3658
3659 |
3660 include::../common/ust-sit-step-tp-so.txt[]
3661
3662 To build the instrumented user library:
3663
3664 . In path:{emon.c}, before including path:{tpp.h}, add the
3665 following lines:
3666 +
3667 --
3668 [source,c]
3669 ----
3670 #define TRACEPOINT_DEFINE
3671 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3672 ----
3673 --
3674
3675 . Compile the user library source file:
3676 +
3677 --
3678 [role="term"]
3679 ----
3680 gcc -I. -fpic -c emon.c
3681 ----
3682 --
3683
3684 . Build the user library shared object:
3685 +
3686 --
3687 [role="term"]
3688 ----
3689 gcc -shared -o libemon.so emon.o -ldl
3690 ----
3691 --
3692
3693 To build the application:
3694
3695 . Compile the application source file:
3696 +
3697 --
3698 [role="term"]
3699 ----
3700 gcc -c app.c
3701 ----
3702 --
3703
3704 . Build the application:
3705 +
3706 --
3707 [role="term"]
3708 ----
3709 gcc -o app app.o -L. -lemon
3710 ----
3711 --
3712
3713 To run the application with tracing support:
3714
3715 * Preload the tracepoint provider package shared object and
3716 start the application:
3717 +
3718 --
3719 [role="term"]
3720 ----
3721 LD_PRELOAD=./libtpp.so ./app
3722 ----
3723 --
3724
3725 To run the application without tracing support:
3726
3727 * Start the application:
3728 +
3729 --
3730 [role="term"]
3731 ----
3732 ./app
3733 ----
3734 --
3735
3736 |
3737 The application is statically linked with the tracepoint provider
3738 package object file.
3739
3740 The application is linked with the instrumented user library.
3741
3742 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3743
3744 |
3745 include::../common/ust-sit-step-tp-o.txt[]
3746
3747 To build the instrumented user library:
3748
3749 . In path:{emon.c}, before including path:{tpp.h}, add the
3750 following line:
3751 +
3752 --
3753 [source,c]
3754 ----
3755 #define TRACEPOINT_DEFINE
3756 ----
3757 --
3758
3759 . Compile the user library source file:
3760 +
3761 --
3762 [role="term"]
3763 ----
3764 gcc -I. -fpic -c emon.c
3765 ----
3766 --
3767
3768 . Build the user library shared object:
3769 +
3770 --
3771 [role="term"]
3772 ----
3773 gcc -shared -o libemon.so emon.o
3774 ----
3775 --
3776
3777 To build the application:
3778
3779 . Compile the application source file:
3780 +
3781 --
3782 [role="term"]
3783 ----
3784 gcc -c app.c
3785 ----
3786 --
3787
3788 . Build the application:
3789 +
3790 --
3791 [role="term"]
3792 ----
3793 gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3794 ----
3795 --
3796
3797 To run the instrumented application:
3798
3799 * Start the application:
3800 +
3801 --
3802 [role="term"]
3803 ----
3804 ./app
3805 ----
3806 --
3807
3808 |
3809 The application is statically linked with the tracepoint provider
3810 package object file.
3811
3812 The application dynamically loads the instrumented user library.
3813
3814 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3815
3816 |
3817 include::../common/ust-sit-step-tp-o.txt[]
3818
3819 To build the application:
3820
3821 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3822 +
3823 --
3824 [source,c]
3825 ----
3826 #define TRACEPOINT_DEFINE
3827 ----
3828 --
3829
3830 . Compile the application source file:
3831 +
3832 --
3833 [role="term"]
3834 ----
3835 gcc -c app.c
3836 ----
3837 --
3838
3839 . Build the application:
3840 +
3841 --
3842 [role="term"]
3843 ----
gcc -Wl,--export-dynamic -o app app.o tpp.o \
    -llttng-ust -ldl
3846 ----
3847 --
3848 +
3849 The `--export-dynamic` option passed to the linker is necessary for the
3850 dynamically loaded library to ``see'' the tracepoint symbols defined in
3851 the application.
3852
3853 To build the instrumented user library:
3854
3855 . Compile the user library source file:
3856 +
3857 --
3858 [role="term"]
3859 ----
3860 gcc -I. -fpic -c emon.c
3861 ----
3862 --
3863
3864 . Build the user library shared object:
3865 +
3866 --
3867 [role="term"]
3868 ----
3869 gcc -shared -o libemon.so emon.o
3870 ----
3871 --
3872
3873 To run the application:
3874
3875 * Start the application:
3876 +
3877 --
3878 [role="term"]
3879 ----
3880 ./app
3881 ----
3882 --
3883 |====
3884
3885 [[dlclose-warning]]
3886 [IMPORTANT]
3887 .Do not use man:dlclose(3) on a tracepoint provider package
3888 ====
3889 Never use man:dlclose(3) on any shared object which:
3890
3891 * Is linked with, statically or dynamically, a tracepoint provider
3892 package.
3893 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3894 package shared object.
3895
3896 This is currently considered **unsafe** due to a lack of reference
3897 counting from LTTng-UST to the shared object.
3898
3899 A known workaround (available since glibc 2.2) is to use the
3900 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3901 effect of not unloading the loaded shared object, even if man:dlclose(3)
3902 is called.
3903
3904 You can also preload the tracepoint provider package shared object with
3905 the env:LD_PRELOAD environment variable to overcome this limitation.
3906 ====
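
For example, here is how the `RTLD_NODELETE` workaround looks when an
application dynamically opens a tracepoint provider package shared
object named path:{libtpp.so} (a sketch; the file name is
hypothetical):

[source,c]
.Dynamic loading with the `RTLD_NODELETE` workaround (sketch).
----
/* RTLD_NODELETE needs _GNU_SOURCE with glibc */
#define _GNU_SOURCE
#include <dlfcn.h>

/* ... */

void *handle = dlopen("libtpp.so", RTLD_NOW | RTLD_NODELETE);

/*
 * Even if dlclose(handle) is called later, the shared object
 * stays loaded, which keeps the tracepoint probes safe.
 */
----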
3907
3908
3909 [[using-lttng-ust-with-daemons]]
3910 ===== Use noch:{LTTng-UST} with daemons
3911
3912 If your instrumented application calls man:fork(2), man:clone(2),
3913 or BSD's man:rfork(2), without a following man:exec(3)-family
3914 system call, you must preload the path:{liblttng-ust-fork.so} shared
3915 object when starting the application.
3916
3917 [role="term"]
3918 ----
3919 LD_PRELOAD=liblttng-ust-fork.so ./my-app
3920 ----
3921
3922 If your tracepoint provider package is
3923 a shared library which you also preload, you must put both
3924 shared objects in env:LD_PRELOAD:
3925
3926 [role="term"]
3927 ----
3928 LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3929 ----
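
As a minimal sketch, the following (hypothetical) application forks
without calling a man:exec(3)-family function, so it needs the
path:{liblttng-ust-fork.so} preloading shown above for tracing to keep
working in the child process:

[source,c]
.Application which forks without calling `exec()` (sketch).
----
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /*
         * Child process: no exec() follows, so tracepoints
         * fired here need liblttng-ust-fork.so to be preloaded.
         */
        printf("child\n");
        return 0;
    }

    /* Parent process waits for the child */
    waitpid(pid, NULL, 0);

    return 0;
}
----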
3930
3931
3932 [[lttng-ust-pkg-config]]
3933 ===== Use noch:{pkg-config}
3934
3935 On some distributions, LTTng-UST ships with a
3936 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3937 metadata file. If this is your case, then you can use cmd:pkg-config to
3938 build an application on the command line:
3939
3940 [role="term"]
3941 ----
3942 gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3943 ----
3944
3945
3946 [[instrumenting-32-bit-app-on-64-bit-system]]
3947 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3948
3949 In order to trace a 32-bit application running on a 64-bit system,
3950 LTTng must use a dedicated 32-bit
3951 <<lttng-consumerd,consumer daemon>>.
3952
3953 The following steps show how to build and install a 32-bit consumer
3954 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3955 build and install the 32-bit LTTng-UST libraries, and how to build and
3956 link an instrumented 32-bit application in that context.
3957
3958 To build a 32-bit instrumented application for a 64-bit target system,
3959 assuming you have a fresh target system with no installed Userspace RCU
3960 or LTTng packages:
3961
3962 . Download, build, and install a 32-bit version of Userspace RCU:
3963 +
3964 --
3965 [role="term"]
3966 ----
3967 cd $(mktemp -d) &&
3968 wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3969 tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3970 cd userspace-rcu-0.9.* &&
3971 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3972 make &&
3973 sudo make install &&
3974 sudo ldconfig
3975 ----
3976 --
3977
. Using your distribution's package manager, or from source, install
the 32-bit versions of the following dependencies of LTTng-tools
and LTTng-UST:
3981 +
3982 --
3983 * https://sourceforge.net/projects/libuuid/[libuuid]
3984 * http://directory.fsf.org/wiki/Popt[popt]
3985 * http://www.xmlsoft.org/[libxml2]
3986 --
3987
3988 . Download, build, and install a 32-bit version of the latest
3989 LTTng-UST{nbsp}{revision}:
3990 +
3991 --
3992 [role="term"]
3993 ----
3994 cd $(mktemp -d) &&
3995 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.8.tar.bz2 &&
3996 tar -xf lttng-ust-latest-2.8.tar.bz2 &&
3997 cd lttng-ust-2.8.* &&
./configure --libdir=/usr/local/lib32 \
            CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4001 make &&
4002 sudo make install &&
4003 sudo ldconfig
4004 ----
4005 --
4006 +
4007 [NOTE]
4008 ====
4009 Depending on your distribution,
4010 32-bit libraries could be installed at a different location than
4011 `/usr/lib32`. For example, Debian is known to install
4012 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4013
4014 In this case, make sure to set `LDFLAGS` to all the
4015 relevant 32-bit library paths, for example:
4016
4017 [role="term"]
4018 ----
4019 LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4020 ----
4021 ====
4022
4023 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4024 the 32-bit consumer daemon:
4025 +
4026 --
4027 [role="term"]
4028 ----
4029 cd $(mktemp -d) &&
4030 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.8.tar.bz2 &&
4031 tar -xf lttng-tools-latest-2.8.tar.bz2 &&
4032 cd lttng-tools-2.8.* &&
./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4035 make &&
4036 cd src/bin/lttng-consumerd &&
4037 sudo make install &&
4038 sudo ldconfig
4039 ----
4040 --
4041
4042 . From your distribution or from source,
4043 <<installing-lttng,install>> the 64-bit versions of
4044 LTTng-UST and Userspace RCU.
4045 . Download, build, and install the 64-bit version of the
4046 latest LTTng-tools{nbsp}{revision}:
4047 +
4048 --
4049 [role="term"]
4050 ----
4051 cd $(mktemp -d) &&
4052 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.8.tar.bz2 &&
4053 tar -xf lttng-tools-latest-2.8.tar.bz2 &&
4054 cd lttng-tools-2.8.* &&
./configure --with-consumerd32-libdir=/usr/local/lib32 \
            --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4057 make &&
4058 sudo make install &&
4059 sudo ldconfig
4060 ----
4061 --
4062
4063 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4064 when linking your 32-bit application:
4065 +
4066 ----
4067 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4068 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4069 ----
4070 +
4071 For example, let's rebuild the quick start example in
4072 <<tracing-your-own-user-application,Trace a user application>> as an
4073 instrumented 32-bit application:
4074 +
4075 --
4076 [role="term"]
4077 ----
4078 gcc -m32 -c -I. hello-tp.c
4079 gcc -m32 -c hello.c
gcc -m32 -o hello hello.o hello-tp.o \
    -L/usr/lib32 -L/usr/local/lib32 \
    -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
    -llttng-ust -ldl
4084 ----
4085 --
4086
4087 No special action is required to execute the 32-bit application and
4088 to trace it: use the command-line man:lttng(1) tool as usual.
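
For example, assuming the `hello_world` tracepoint provider of the
quick start example (the provider name comes from that section), a
typical tracing session could look like this:

[role="term"]
----
lttng create
lttng enable-event --userspace 'hello_world:*'
lttng start
./hello
lttng stop
lttng view
----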
4089
4090
4091 [role="since-2.5"]
4092 [[tracef]]
4093 ==== Use `tracef()`
4094
4095 man:tracef(3) is a small LTTng-UST API designed for quick,
4096 man:printf(3)-like instrumentation without the burden of
4097 <<tracepoint-provider,creating>> and
4098 <<building-tracepoint-providers-and-user-application,building>>
4099 a tracepoint provider package.
4100
4101 To use `tracef()` in your application:
4102
4103 . In the C or C++ source files where you need to use `tracef()`,
4104 include `<lttng/tracef.h>`:
4105 +
4106 --
4107 [source,c]
4108 ----
4109 #include <lttng/tracef.h>
4110 ----
4111 --
4112
4113 . In the application's source code, use `tracef()` like you would use
4114 man:printf(3):
4115 +
4116 --
4117 [source,c]
4118 ----
4119 /* ... */
4120
4121 tracef("my message: %d (%s)", my_integer, my_string);
4122
4123 /* ... */
4124 ----
4125 --
4126
4127 . Link your application with `liblttng-ust`:
4128 +
4129 --
4130 [role="term"]
4131 ----
4132 gcc -o app app.c -llttng-ust
4133 ----
4134 --
4135
4136 To trace the events that `tracef()` calls emit:
4137
4138 * <<enabling-disabling-events,Create an event rule>> which matches the
4139 `lttng_ust_tracef:*` event name:
4140 +
4141 --
4142 [role="term"]
4143 ----
4144 lttng enable-event --userspace 'lttng_ust_tracef:*'
4145 ----
4146 --
4147
4148 [IMPORTANT]
4149 .Limitations of `tracef()`
4150 ====
4151 The `tracef()` utility function was developed to make user space tracing
4152 super simple, albeit with notable disadvantages compared to
4153 <<defining-tracepoints,user-defined tracepoints>>:
4154
4155 * All the emitted events have the same tracepoint provider and
4156 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4157 * There is no static type checking.
4158 * The only event record field you actually get, named `msg`, is a string
4159 potentially containing the values you passed to `tracef()`
4160 using your own format string. This also means that you cannot filter
4161 events with a custom expression at run time because there are no
4162 isolated fields.
4163 * Since `tracef()` uses the C standard library's man:vasprintf(3)
4164 function behind the scenes to format the strings at run time, its
4165 expected performance is lower than with user-defined tracepoints,
4166 which do not require a conversion to a string.
4167
4168 Taking this into consideration, `tracef()` is useful for some quick
4169 prototyping and debugging, but you should not consider it for any
4170 permanent and serious application instrumentation.
4171 ====
4172
4173
4174 [role="since-2.7"]
4175 [[tracelog]]
4176 ==== Use `tracelog()`
4177
4178 The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
4179 the difference that it accepts an additional log level parameter.
4180
4181 The goal of `tracelog()` is to ease the migration from logging to
4182 tracing.
4183
4184 To use `tracelog()` in your application:
4185
4186 . In the C or C++ source files where you need to use `tracelog()`,
4187 include `<lttng/tracelog.h>`:
4188 +
4189 --
4190 [source,c]
4191 ----
4192 #include <lttng/tracelog.h>
4193 ----
4194 --
4195
4196 . In the application's source code, use `tracelog()` like you would use
4197 man:printf(3), except for the first parameter which is the log
4198 level:
4199 +
4200 --
4201 [source,c]
4202 ----
4203 /* ... */
4204
tracelog(TRACE_WARNING, "my message: %d (%s)",
         my_integer, my_string);
4207
4208 /* ... */
4209 ----
4210 --
4211 +
4212 See man:lttng-ust(3) for a list of available log level names.
4213
4214 . Link your application with `liblttng-ust`:
4215 +
4216 --
4217 [role="term"]
4218 ----
4219 gcc -o app app.c -llttng-ust
4220 ----
4221 --
4222
4223 To trace the events that `tracelog()` calls emit with a log level
4224 _as severe as_ a specific log level:
4225
4226 * <<enabling-disabling-events,Create an event rule>> which matches the
4227 `lttng_ust_tracelog:*` event name and a minimum level
4228 of severity:
4229 +
4230 --
4231 [role="term"]
4232 ----
lttng enable-event --userspace 'lttng_ust_tracelog:*' \
                   --loglevel=TRACE_WARNING
4235 ----
4236 --
4237
4238 To trace the events that `tracelog()` calls emit with a
4239 _specific log level_:
4240
4241 * Create an event rule which matches the `lttng_ust_tracelog:*`
4242 event name and a specific log level:
4243 +
4244 --
4245 [role="term"]
4246 ----
lttng enable-event --userspace 'lttng_ust_tracelog:*' \
                   --loglevel-only=TRACE_INFO
4249 ----
4250 --
4251
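As with `tracef()`, the steps combine into a small program. The
following sketch assumes that LTTng-UST is installed:

[source,c]
----
/* Minimal tracelog() sketch; assumes LTTng-UST is installed. */
#include <lttng/tracelog.h>

int main(void)
{
    tracelog(TRACE_INFO, "application started");
    tracelog(TRACE_WARNING, "resource usage at %d%%", 85);
    tracelog(TRACE_ERR, "giving up after %d retries", 3);
    return 0;
}
----

With the first event rule above (`--loglevel=TRACE_WARNING`), only the
last two calls emit event records, because `TRACE_INFO` is less severe
than `TRACE_WARNING`.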
4252
4253 [[prebuilt-ust-helpers]]
4254 === Prebuilt user space tracing helpers
4255
4256 The LTTng-UST package provides a few helpers in the form of preloadable
4257 shared objects which automatically instrument system functions and
4258 calls.
4259
4260 The helper shared objects are normally found in dir:{/usr/lib}. If you
4261 built LTTng-UST <<building-from-source,from source>>, they are probably
4262 located in dir:{/usr/local/lib}.
4263
4264 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4265 are:
4266
4267 path:{liblttng-ust-libc-wrapper.so}::
4268 path:{liblttng-ust-pthread-wrapper.so}::
4269 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4270 memory and POSIX threads function tracing>>.
4271
4272 path:{liblttng-ust-cyg-profile.so}::
4273 path:{liblttng-ust-cyg-profile-fast.so}::
4274 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4275
4276 path:{liblttng-ust-dl.so}::
4277 <<liblttng-ust-dl,Dynamic linker tracing>>.
4278
4279 To use a user space tracing helper with any user application:
4280
4281 * Preload the helper shared object when you start the application:
4282 +
4283 --
4284 [role="term"]
4285 ----
4286 LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4287 ----
4288 --
4289 +
4290 You can preload more than one helper:
4291 +
4292 --
4293 [role="term"]
4294 ----
4295 LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4296 ----
4297 --
4298
4299
4300 [role="since-2.3"]
4301 [[liblttng-ust-libc-pthread-wrapper]]
4302 ==== Instrument C standard library memory and POSIX threads functions
4303
4304 The path:{liblttng-ust-libc-wrapper.so} and
4305 path:{liblttng-ust-pthread-wrapper.so} helpers
4306 add instrumentation to some C standard library and POSIX
4307 threads functions.
4308
4309 [role="growable"]
4310 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4311 |====
4312 |TP provider name |TP name |Instrumented function
4313
4314 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4315 |`calloc` |man:calloc(3)
4316 |`realloc` |man:realloc(3)
4317 |`free` |man:free(3)
4318 |`memalign` |man:memalign(3)
4319 |`posix_memalign` |man:posix_memalign(3)
4320 |====
4321
4322 [role="growable"]
4323 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4324 |====
4325 |TP provider name |TP name |Instrumented function
4326
4327 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4328 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4329 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4330 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4331 |====
4332
4333 When you preload the shared object, it replaces the functions listed
4334 in the previous tables with wrappers which contain tracepoints and call
4335 the replaced functions.
4336
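For example, the following plain C program performs allocations which
the helper can trace; the program itself has no LTTng dependency, and
the path:{alloc-demo.c} file name is arbitrary:

[source,c]
----
/* alloc-demo.c: no LTTng dependency of its own. Preloading
 * liblttng-ust-libc-wrapper.so makes the malloc(), realloc(),
 * and free() calls below emit `lttng_ust_libc:*` events. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);

    if (!buf) {
        return 1;
    }

    strcpy(buf, "hello");
    buf = realloc(buf, 32);

    if (!buf) {
        return 1;
    }

    printf("buffer contains: %s\n", buf);
    free(buf);
    return 0;
}
----

Build it with `gcc -o alloc-demo alloc-demo.c`, then run
`LD_PRELOAD=liblttng-ust-libc-wrapper.so ./alloc-demo` while a tracing
session with an `lttng_ust_libc:*` event rule is active.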
4337
4338 [[liblttng-ust-cyg-profile]]
4339 ==== Instrument function entry and exit
4340
4341 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4342 to the entry and exit points of functions.
4343
4344 man:gcc(1) and man:clang(1) have an option named
4345 https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html[`-finstrument-functions`]
4346 which generates instrumentation calls for entry and exit to functions.
4347 The LTTng-UST function tracing helpers,
4348 path:{liblttng-ust-cyg-profile.so} and
4349 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
4350 to add tracepoints to the two generated functions (which contain
4351 `cyg_profile` in their names, hence the helper's name).
4352
4353 To use the LTTng-UST function tracing helper, the source files to
4354 instrument must be built using the `-finstrument-functions` compiler
4355 flag.
4356
4357 There are two versions of the LTTng-UST function tracing helper:
4358
4359 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4360 that you should only use when it can be _guaranteed_ that the
4361 complete event stream is recorded without any lost event record.
4362 Any kind of duplicate information is left out.
4363 +
4364 Assuming no event record is lost, having only the function addresses on
4365 entry is enough to create a call graph, since an event record always
4366 contains the ID of the CPU that generated it.
4367 +
4368 You can use a tool like man:addr2line(1) to convert function addresses
4369 back to source file names and line numbers.
4370
4371 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4372 which also works in use cases where event records might get discarded or
4373 not recorded from application startup.
4374 In these cases, the trace analyzer needs more information to be
4375 able to reconstruct the program flow.
4376
4377 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4378 points of this helper.
4379
4380 All the tracepoints that this helper provides have the
4381 log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4382
4383 TIP: It's sometimes a good idea to limit the number of source files that
4384 you compile with the `-finstrument-functions` option to prevent LTTng
4385 from writing an excessive amount of trace data at run time. When using
4386 man:gcc(1), you can use the
4387 `-finstrument-functions-exclude-function-list` option to avoid
4388 instrumenting the entries and exits of specific functions.
4389
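The following sketch only illustrates the mechanism on which those
helpers rely: the compiler generates calls to entry and exit handlers,
and here the program defines its own handlers which print to the
standard error stream. With LTTng, you do not define these handlers
yourself; you preload one of the helpers above instead.

[source,c]
----
/* cyg-demo.c: illustrates the -finstrument-functions mechanism
 * by defining its own handlers. With LTTng, preload a helper
 * instead of defining these.
 *
 * Build: gcc -finstrument-functions -o cyg-demo cyg-demo.c */
#include <stdio.h>

/* The handlers themselves must not be instrumented. */
void __cyg_profile_func_enter(void *fn, void *caller)
        __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *fn, void *caller)
        __attribute__((no_instrument_function));

void __cyg_profile_func_enter(void *fn, void *caller)
{
    fprintf(stderr, "enter %p (called from %p)\n", fn, caller);
}

void __cyg_profile_func_exit(void *fn, void *caller)
{
    fprintf(stderr, "exit  %p (called from %p)\n", fn, caller);
}

static int square(int x)
{
    return x * x;
}

int main(void)
{
    printf("square(7) = %d\n", square(7));
    return 0;
}
----

You can feed the printed addresses to man:addr2line(1) to recover
function names, just like with the trace data the helpers record.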
4390
4391 [role="since-2.4"]
4392 [[liblttng-ust-dl]]
4393 ==== Instrument the dynamic linker
4394
4395 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4396 man:dlopen(3) and man:dlclose(3) function calls.
4397
4398 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4399 of this helper.
4400
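For example, the following sketch, which has no LTTng dependency of its
own (the path:{dl-demo.c} name is arbitrary), produces trace events when
you preload the helper:

[source,c]
----
/* dl-demo.c: plain dlopen()/dlclose() calls. Preloading
 * liblttng-ust-dl.so makes them emit events.
 *
 * Build: gcc -o dl-demo dl-demo.c -ldl
 * (the -ldl flag is unneeded with glibc 2.34 and later) */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* dlopen(NULL, ...) returns a handle to the main program. */
    void *handle = dlopen(NULL, RTLD_NOW);

    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    printf("dlopen succeeded\n");
    dlclose(handle);
    return 0;
}
----

Run it as `LD_PRELOAD=liblttng-ust-dl.so ./dl-demo` while a tracing
session with a matching event rule is active.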
4401
4402 [role="since-2.4"]
4403 [[java-application]]
4404 === User space Java agent
4405
4406 You can instrument any Java application which uses one of the following
4407 logging frameworks:
4408
4409 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4410 (JUL) core logging facilities.
4411 * http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4412 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4413
4414 [role="img-100"]
4415 .LTTng-UST Java agent imported by a Java application.
4416 image::java-app.png[]
4417
4418 Note that the methods described below are new in LTTng{nbsp}{revision}.
4419 Previous LTTng versions use another technique.
4420
4421 NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4422 and https://ci.lttng.org/[continuous integration], thus this version is
4423 directly supported. However, the LTTng-UST Java agent is also tested
4424 with OpenJDK{nbsp}7.
4425
4426
4427 [role="since-2.8"]
4428 [[jul]]
4429 ==== Use the LTTng-UST Java agent for `java.util.logging`
4430
4431 To use the LTTng-UST Java agent in a Java application which uses
4432 `java.util.logging` (JUL):
4433
4434 . In the Java application's source code, import the LTTng-UST
4435 log handler package for `java.util.logging`:
4436 +
4437 --
4438 [source,java]
4439 ----
4440 import org.lttng.ust.agent.jul.LttngLogHandler;
4441 ----
4442 --
4443
4444 . Create an LTTng-UST JUL log handler:
4445 +
4446 --
4447 [source,java]
4448 ----
4449 Handler lttngUstLogHandler = new LttngLogHandler();
4450 ----
4451 --
4452
4453 . Add this handler to the JUL loggers which should emit LTTng events:
4454 +
4455 --
4456 [source,java]
4457 ----
4458 Logger myLogger = Logger.getLogger("some-logger");
4459
4460 myLogger.addHandler(lttngUstLogHandler);
4461 ----
4462 --
4463
4464 . Use `java.util.logging` log statements and configuration as usual.
4465 The loggers with an attached LTTng-UST log handler can emit
4466 LTTng events.
4467
4468 . Before exiting the application, remove the LTTng-UST log handler from
4469 the loggers attached to it and call its `close()` method:
4470 +
4471 --
4472 [source,java]
4473 ----
4474 myLogger.removeHandler(lttngUstLogHandler);
4475 lttngUstLogHandler.close();
4476 ----
4477 --
4478 +
4479 This is not strictly necessary, but it is recommended for a clean
4480 disposal of the handler's resources.
4481
4482 . Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4483 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4484 in the
4485 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4486 path] when you build the Java application.
4487 +
4488 The JAR files are typically located in dir:{/usr/share/java}.
4489 +
4490 IMPORTANT: The LTTng-UST Java agent must be
4491 <<installing-lttng,installed>> for the logging framework your
4492 application uses.
4493
4494 .Use the LTTng-UST Java agent for `java.util.logging`.
4495 ====
4496 [source,java]
4497 .path:{Test.java}
4498 ----
import java.io.IOException;
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
    }
}
4532 ----
4533
4534 Build this example:
4535
4536 [role="term"]
4537 ----
4538 javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4539 ----
4540
4541 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4542 <<enabling-disabling-events,create an event rule>> matching the
4543 `jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4544
4545 [role="term"]
4546 ----
4547 lttng create
4548 lttng enable-event --jul jello
4549 lttng start
4550 ----
4551
4552 Run the compiled class:
4553
4554 [role="term"]
4555 ----
4556 java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4557 ----
4558
4559 <<basic-tracing-session-control,Stop tracing>> and inspect the
4560 recorded events:
4561
4562 [role="term"]
4563 ----
4564 lttng stop
4565 lttng view
4566 ----
4567 ====
4568
4569 You can use the opt:lttng-enable-event(1):--loglevel or
4570 opt:lttng-enable-event(1):--loglevel-only option of the
4571 man:lttng-enable-event(1) command to target a range of JUL log levels
4572 or a specific JUL log level.
4573
4574
4575 [role="since-2.8"]
4576 [[log4j]]
4577 ==== Use the LTTng-UST Java agent for Apache log4j
4578
4579 To use the LTTng-UST Java agent in a Java application which uses
4580 Apache log4j 1.2:
4581
4582 . In the Java application's source code, import the LTTng-UST
4583 log appender package for Apache log4j:
4584 +
4585 --
4586 [source,java]
4587 ----
4588 import org.lttng.ust.agent.log4j.LttngLogAppender;
4589 ----
4590 --
4591
4592 . Create an LTTng-UST log4j log appender:
4593 +
4594 --
4595 [source,java]
4596 ----
4597 Appender lttngUstLogAppender = new LttngLogAppender();
4598 ----
4599 --
4600
4601 . Add this appender to the log4j loggers which should emit LTTng events:
4602 +
4603 --
4604 [source,java]
4605 ----
4606 Logger myLogger = Logger.getLogger("some-logger");
4607
4608 myLogger.addAppender(lttngUstLogAppender);
4609 ----
4610 --
4611
4612 . Use Apache log4j log statements and configuration as usual. The
4613 loggers with an attached LTTng-UST log appender can emit LTTng events.
4614
4615 . Before exiting the application, remove the LTTng-UST log appender from
4616 the loggers attached to it and call its `close()` method:
4617 +
4618 --
4619 [source,java]
4620 ----
4621 myLogger.removeAppender(lttngUstLogAppender);
4622 lttngUstLogAppender.close();
4623 ----
4624 --
4625 +
4626 This is not strictly necessary, but it is recommended for a clean
4627 disposal of the appender's resources.
4628
4629 . Include the LTTng-UST Java agent's common and log4j-specific JAR
4630 files, path:{lttng-ust-agent-common.jar} and
4631 path:{lttng-ust-agent-log4j.jar}, in the
4632 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4633 path] when you build the Java application.
4634 +
4635 The JAR files are typically located in dir:{/usr/share/java}.
4636 +
4637 IMPORTANT: The LTTng-UST Java agent must be
4638 <<installing-lttng,installed>> for the logging framework your
4639 application uses.
4640
4641 .Use the LTTng-UST Java agent for Apache log4j.
4642 ====
4643 [source,java]
4644 .path:{Test.java}
4645 ----
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.lttng.ust.agent.log4j.LttngLogAppender;

public class Test
{
    private static final int answer = 42;

    public static void main(String[] argv) throws Exception
    {
        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log appender
        Appender lttngUstLogAppender = new LttngLogAppender();

        // Add the LTTng-UST log appender to our logger
        logger.addAppender(lttngUstLogAppender);

        // Log at will!
        logger.info("some info");
        logger.warn("some warning");
        Thread.sleep(500);
        logger.debug("debug information; the answer is " + answer);
        Thread.sleep(123);
        logger.fatal("error!");

        // Not mandatory, but cleaner
        logger.removeAppender(lttngUstLogAppender);
        lttngUstLogAppender.close();
    }
}
4679 ----
4680
4681 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4682 file):
4683
4684 [role="term"]
4685 ----
4686 javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4687 ----
4688
4689 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4690 <<enabling-disabling-events,create an event rule>> matching the
4691 `jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4692
4693 [role="term"]
4694 ----
4695 lttng create
4696 lttng enable-event --log4j jello
4697 lttng start
4698 ----
4699
4700 Run the compiled class:
4701
4702 [role="term"]
4703 ----
4704 java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4705 ----
4706
4707 <<basic-tracing-session-control,Stop tracing>> and inspect the
4708 recorded events:
4709
4710 [role="term"]
4711 ----
4712 lttng stop
4713 lttng view
4714 ----
4715 ====
4716
4717 You can use the opt:lttng-enable-event(1):--loglevel or
4718 opt:lttng-enable-event(1):--loglevel-only option of the
4719 man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4720 or a specific log4j log level.
4721
4722
4723 [role="since-2.8"]
4724 [[java-application-context]]
4725 ==== Provide application-specific context fields in a Java application
4726
4727 A Java application-specific context field is a piece of state provided
4728 by the application which <<adding-context,you can add>>, using the
4729 man:lttng-add-context(1) command, to each <<event,event record>>
4730 produced by the log statements of this application.
4731
4732 For example, a given object might have a current request ID variable.
4733 You can create a context information retriever for this object and
4734 assign a name to this current request ID. You can then, using the
4735 man:lttng-add-context(1) command, add this context field by name to
4736 the JUL or log4j <<channel,channel>>.
4737
4738 To provide application-specific context fields in a Java application:
4739
4740 . In the Java application's source code, import the LTTng-UST
4741 Java agent context classes and interfaces:
4742 +
4743 --
4744 [source,java]
4745 ----
4746 import org.lttng.ust.agent.context.ContextInfoManager;
4747 import org.lttng.ust.agent.context.IContextInfoRetriever;
4748 ----
4749 --
4750
4751 . Create a context information retriever class, that is, a class which
4752 implements the `IContextInfoRetriever` interface:
4753 +
4754 --
4755 [source,java]
4756 ----
class MyContextInfoRetriever implements IContextInfoRetriever
{
    @Override
    public Object retrieveContextInfo(String key)
    {
        if (key.equals("intCtx")) {
            return (short) 17;
        } else if (key.equals("strContext")) {
            return "context value!";
        } else {
            return null;
        }
    }
}
4771 ----
4772 --
4773 +
4774 This `retrieveContextInfo()` method is the only member of the
4775 `IContextInfoRetriever` interface. Its role is to return the current
4776 value of a state by name to create a context field. The names of the
4777 context fields and which state variables they return depend on your
4778 specific scenario.
4779 +
4780 All primitive types and objects are supported as context fields.
4781 When `retrieveContextInfo()` returns an object, the context field
4782 serializer calls its `toString()` method to add a string field to
4783 event records. The method can also return `null`, which means that
4784 no context field is available for the required name.
4785
4786 . Register an instance of your context information retriever class to
4787 the context information manager singleton:
4788 +
4789 --
4790 [source,java]
4791 ----
4792 IContextInfoRetriever cir = new MyContextInfoRetriever();
4793 ContextInfoManager cim = ContextInfoManager.getInstance();
4794 cim.registerContextInfoRetriever("retrieverName", cir);
4795 ----
4796 --
4797
4798 . Before exiting the application, remove your context information
4799 retriever from the context information manager singleton:
4800 +
4801 --
4802 [source,java]
4803 ----
4804 ContextInfoManager cim = ContextInfoManager.getInstance();
4805 cim.unregisterContextInfoRetriever("retrieverName");
4806 ----
4807 --
4808 +
4809 This is not strictly necessary, but it is recommended for a clean
4810 disposal of the manager's resources.
4811
4812 . Build your Java application with LTTng-UST Java agent support as
4813 usual, following the procedure for either the <<jul,JUL>> or
4814 <<log4j,Apache log4j>> framework.
4815
4816
4817 .Provide application-specific context fields in a Java application.
4818 ====
4819 [source,java]
4820 .path:{Test.java}
4821 ----
import java.util.logging.Handler;
import java.util.logging.Logger;
import org.lttng.ust.agent.jul.LttngLogHandler;
import org.lttng.ust.agent.context.ContextInfoManager;
import org.lttng.ust.agent.context.IContextInfoRetriever;

public class Test
{
    // Our context information retriever class
    private static class MyContextInfoRetriever
            implements IContextInfoRetriever
    {
        @Override
        public Object retrieveContextInfo(String key) {
            if (key.equals("intCtx")) {
                return (short) 17;
            } else if (key.equals("strContext")) {
                return "context value!";
            } else {
                return null;
            }
        }
    }

    private static final int answer = 42;

    public static void main(String args[]) throws Exception
    {
        // Get the context information manager instance
        ContextInfoManager cim = ContextInfoManager.getInstance();

        // Create and register our context information retriever
        IContextInfoRetriever cir = new MyContextInfoRetriever();
        cim.registerContextInfoRetriever("myRetriever", cir);

        // Create a logger
        Logger logger = Logger.getLogger("jello");

        // Create an LTTng-UST log handler
        Handler lttngUstLogHandler = new LttngLogHandler();

        // Add the LTTng-UST log handler to our logger
        logger.addHandler(lttngUstLogHandler);

        // Log at will!
        logger.info("some info");
        logger.warning("some warning");
        Thread.sleep(500);
        logger.finer("finer information; the answer is " + answer);
        Thread.sleep(123);
        logger.severe("error!");

        // Not mandatory, but cleaner
        logger.removeHandler(lttngUstLogHandler);
        lttngUstLogHandler.close();
        cim.unregisterContextInfoRetriever("myRetriever");
    }
}
4880 ----
4881
4882 Build this example:
4883
4884 [role="term"]
4885 ----
4886 javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4887 ----
4888
4889 <<creating-destroying-tracing-sessions,Create a tracing session>>
4890 and <<enabling-disabling-events,create an event rule>> matching the
4891 `jello` JUL logger:
4892
4893 [role="term"]
4894 ----
4895 lttng create
4896 lttng enable-event --jul jello
4897 ----
4898
4899 <<adding-context,Add the application-specific context fields>> to the
4900 JUL channel:
4901
4902 [role="term"]
4903 ----
4904 lttng add-context --jul --type='$app.myRetriever:intCtx'
4905 lttng add-context --jul --type='$app.myRetriever:strContext'
4906 ----
4907
4908 <<basic-tracing-session-control,Start tracing>>:
4909
4910 [role="term"]
4911 ----
4912 lttng start
4913 ----
4914
4915 Run the compiled class:
4916
4917 [role="term"]
4918 ----
4919 java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4920 ----
4921
4922 <<basic-tracing-session-control,Stop tracing>> and inspect the
4923 recorded events:
4924
4925 [role="term"]
4926 ----
4927 lttng stop
4928 lttng view
4929 ----
4930 ====
4931
4932
4933 [role="since-2.7"]
4934 [[python-application]]
4935 === User space Python agent
4936
4937 You can instrument a Python 2 or Python 3 application which uses the
4938 standard https://docs.python.org/3/library/logging.html[`logging`]
4939 package.
4940
4941 Each log statement emits an LTTng event once the
4942 application module imports the
4943 <<lttng-ust-agents,LTTng-UST Python agent>> package.
4944
4945 [role="img-100"]
4946 .A Python application importing the LTTng-UST Python agent.
4947 image::python-app.png[]
4948
4949 To use the LTTng-UST Python agent:
4950
4951 . In the Python application's source code, import the LTTng-UST Python
4952 agent:
4953 +
4954 --
4955 [source,python]
4956 ----
4957 import lttngust
4958 ----
4959 --
4960 +
4961 The LTTng-UST Python agent automatically adds its logging handler to the
4962 root logger at import time.
4963 +
4964 Any log statement that the application executes before this import does
4965 not emit an LTTng event.
4966 +
4967 IMPORTANT: The LTTng-UST Python agent must be
4968 <<installing-lttng,installed>>.
4969
4970 . Use log statements and logging configuration as usual.
4971 Since the LTTng-UST Python agent adds a handler to the _root_
4972 logger, you can trace any log statement from any logger.
4973
4974 .Use the LTTng-UST Python agent.
4975 ====
4976 [source,python]
4977 .path:{test.py}
4978 ----
import lttngust
import logging
import time


def example():
    logging.basicConfig()
    logger = logging.getLogger('my-logger')

    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warn('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(1)


if __name__ == '__main__':
    example()
4999 ----
5000
5001 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
5002 logging handler which prints to the standard error stream, is not
5003 strictly required for LTTng-UST tracing to work, but in versions of
5004 Python preceding 3.2, you could see a warning message which indicates
5005 that no handler exists for the logger `my-logger`.
5006
5007 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5008 <<enabling-disabling-events,create an event rule>> matching the
5009 `my-logger` Python logger, and <<basic-tracing-session-control,start
5010 tracing>>:
5011
5012 [role="term"]
5013 ----
5014 lttng create
5015 lttng enable-event --python my-logger
5016 lttng start
5017 ----
5018
5019 Run the Python script:
5020
5021 [role="term"]
5022 ----
5023 python test.py
5024 ----
5025
5026 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5027 events:
5028
5029 [role="term"]
5030 ----
5031 lttng stop
5032 lttng view
5033 ----
5034 ====
5035
5036 You can use the opt:lttng-enable-event(1):--loglevel or
5037 opt:lttng-enable-event(1):--loglevel-only option of the
5038 man:lttng-enable-event(1) command to target a range of Python log levels
5039 or a specific Python log level.
5040
5041 When an application imports the LTTng-UST Python agent, the agent tries
5042 to register to a <<lttng-sessiond,session daemon>>. Note that you must
5043 <<start-sessiond,start the session daemon>> _before_ you run the Python
5044 application. If a session daemon is found, the agent tries to register
5045 to it for 5{nbsp}seconds, after which the application continues
5046 without LTTng tracing support. You can override this timeout value with
5047 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
5048 (milliseconds).
5049
5050 If the session daemon stops while a Python application with an imported
5051 LTTng-UST Python agent runs, the agent tries to reconnect and
5052 re-register to a session daemon every 3{nbsp}seconds. You can override this
5053 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
5054 variable.
5055
5056
5057 [role="since-2.5"]
5058 [[proc-lttng-logger-abi]]
5059 === LTTng logger
5060
5061 The `lttng-tracer` Linux kernel module, part of
5062 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
5063 path:{/proc/lttng-logger} when it's loaded. Any application can write
5064 text data to this file to emit an LTTng event.
5065
5066 [role="img-100"]
5067 .An application writes to the LTTng logger file to emit an LTTng event.
5068 image::lttng-logger.png[]
5069
5070 The LTTng logger is the quickest method--not the most efficient,
5071 however--to add instrumentation to an application. It is designed
5072 mostly to instrument shell scripts:
5073
5074 [role="term"]
5075 ----
5076 echo "Some message, some $variable" > /proc/lttng-logger
5077 ----
5078
5079 Any event that the LTTng logger emits is named `lttng_logger` and
5080 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5081 other instrumentation points in the kernel tracing domain, **any Unix
5082 user** can <<enabling-disabling-events,create an event rule>> which
5083 matches its event name, not only the root user or users in the tracing
5084 group.
5085
5086 To use the LTTng logger:
5087
5088 * From any application, write text data to the path:{/proc/lttng-logger}
5089 file.
5090
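A compiled program can write to this file just like a shell script does.
The following sketch assumes that the `lttng-tracer` module is loaded,
that is, that path:{/proc/lttng-logger} exists:

[source,c]
----
/* logger-demo.c: emit an lttng_logger event from C. Assumes
 * the lttng-tracer module is loaded; otherwise open() fails. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "Hello from C";
    int fd = open("/proc/lttng-logger", O_WRONLY);

    if (fd < 0) {
        perror("open /proc/lttng-logger");
        return 1;
    }

    /* Each write of at most 1024 bytes becomes one event record. */
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
----
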
5091 The `msg` field of `lttng_logger` event records contains the
5092 recorded message.
5093
5094 NOTE: The maximum message length of an LTTng logger event is
5095 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5096 than one event to contain the remaining data.
5097
5098 You should not use the LTTng logger to trace a user application which
5099 can be instrumented in a more efficient way, namely:
5100
5101 * <<c-application,C and $$C++$$ applications>>.
5102 * <<java-application,Java applications>>.
5103 * <<python-application,Python applications>>.
5104
5105 .Use the LTTng logger.
5106 ====
5107 [source,bash]
5108 .path:{test.bash}
5109 ----
5110 echo 'Hello, World!' > /proc/lttng-logger
5111 sleep 2
5112 df --human-readable --print-type / > /proc/lttng-logger
5113 ----
5114
5115 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5116 <<enabling-disabling-events,create an event rule>> matching the
5117 `lttng_logger` Linux kernel tracepoint, and
5118 <<basic-tracing-session-control,start tracing>>:
5119
5120 [role="term"]
5121 ----
5122 lttng create
5123 lttng enable-event --kernel lttng_logger
5124 lttng start
5125 ----
5126
5127 Run the Bash script:
5128
5129 [role="term"]
5130 ----
5131 bash test.bash
5132 ----
5133
5134 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5135 events:
5136
5137 [role="term"]
5138 ----
5139 lttng stop
5140 lttng view
5141 ----
5142 ====
5143
5144
5145 [[instrumenting-linux-kernel]]
5146 === LTTng kernel tracepoints
5147
5148 NOTE: This section shows how to _add_ instrumentation points to the
5149 Linux kernel. The kernel's subsystems are already thoroughly
5150 instrumented at strategic places for LTTng when you
5151 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5152 package.
5153
5154 ////
5155 There are two methods to instrument the Linux kernel:
5156
5157 . <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
5158 tracepoint which uses the `TRACE_EVENT()` API.
5159 +
5160 Choose this if you want to instrument a Linux kernel tree with an
5161 instrumentation point compatible with ftrace, perf, and SystemTap.
5162
5163 . Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
5164 instrument an out-of-tree kernel module.
5165 +
5166 Choose this if you don't need ftrace, perf, or SystemTap support.
5167 ////
5168
5169
5170 [[linux-add-lttng-layer]]
5171 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5172
5173 This section shows how to add an LTTng layer to existing ftrace
5174 instrumentation using the `TRACE_EVENT()` API.
5175
5176 This section does not document the `TRACE_EVENT()` macro. You can
5177 read the following articles to learn more about this API:
5178
5179 * http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
5180 * http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
5181 * http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
5182
5183 The following procedure assumes that your ftrace tracepoints are
5184 correctly defined in their own header and that they are created in
5185 one source file using the `CREATE_TRACE_POINTS` definition.
5186
5187 To add an LTTng layer over an existing ftrace tracepoint:
5188
5189 . Make sure the following kernel configuration options are
5190 enabled:
5191 +
5192 --
5193 * `CONFIG_MODULES`
5194 * `CONFIG_KALLSYMS`
5195 * `CONFIG_HIGH_RES_TIMERS`
5196 * `CONFIG_TRACEPOINTS`
5197 --
5198
5199 . Build the Linux source tree with your custom ftrace tracepoints.
5200 . Boot the resulting Linux image on your target system.
5201 +
5202 Confirm that the tracepoints exist by looking for their names in the
5203 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5204 is your subsystem's name.
5205
5206 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5207 +
5208 --
5209 [role="term"]
5210 ----
5211 cd $(mktemp -d) &&
5212 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.8.tar.bz2 &&
5213 tar -xf lttng-modules-latest-2.8.tar.bz2 &&
5214 cd lttng-modules-2.8.*
5215 ----
5216 --
5217
5218 . In dir:{instrumentation/events/lttng-module}, relative to the root
5219 of the LTTng-modules source tree, create a header file named
5220 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5221 LTTng-modules tracepoint definitions using the LTTng-modules
5222 macros in it.
5223 +
5224 Start with this template:
5225 +
5226 --
5227 [source,c]
5228 .path:{instrumentation/events/lttng-module/my_subsys.h}
5229 ----
5230 #undef TRACE_SYSTEM
5231 #define TRACE_SYSTEM my_subsys
5232
5233 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5234 #define _LTTNG_MY_SUBSYS_H
5235
5236 #include "../../../probes/lttng-tracepoint-event.h"
5237 #include <linux/tracepoint.h>
5238
5239 LTTNG_TRACEPOINT_EVENT(
5240 /*
5241 * Format is identical to TRACE_EVENT()'s version for the three
5242 * following macro parameters:
5243 */
5244 my_subsys_my_event,
5245 TP_PROTO(int my_int, const char *my_string),
5246 TP_ARGS(my_int, my_string),
5247
5248 /* LTTng-modules specific macros */
5249 TP_FIELDS(
5250 ctf_integer(int, my_int_field, my_int)
5251         ctf_string(my_string_field, my_string)
5252 )
5253 )
5254
5255 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5256
5257 #include "../../../probes/define_trace.h"
5258 ----
5259 --
5260 +
5261 The entries in the `TP_FIELDS()` section are the list of fields for the
5262 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5263 ftrace's `TRACE_EVENT()` macro.
5264 +
5265 See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
5266 complete description of the available `ctf_*()` macros.
5267
5268 . Create the LTTng-modules probe's kernel module C source file,
5269 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5270 subsystem name:
5271 +
5272 --
5273 [source,c]
5274 .path:{probes/lttng-probe-my-subsys.c}
5275 ----
5276 #include <linux/module.h>
5277 #include "../lttng-tracer.h"
5278
5279 /*
5280 * Build-time verification of mismatch between mainline
5281 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5282 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5283 */
5284 #include <trace/events/my_subsys.h>
5285
5286 /* Create LTTng tracepoint probes */
5287 #define LTTNG_PACKAGE_BUILD
5288 #define CREATE_TRACE_POINTS
5289 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5290
5291 #include "../instrumentation/events/lttng-module/my_subsys.h"
5292
5293 MODULE_LICENSE("GPL and additional rights");
5294 MODULE_AUTHOR("Your name <your-email>");
5295 MODULE_DESCRIPTION("LTTng my_subsys probes");
5296 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5297 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5298 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5299 LTTNG_MODULES_EXTRAVERSION);
5300 ----
5301 --
5302
5303 . Edit path:{probes/Makefile} and add your new kernel module object
5304 next to the existing ones:
5305 +
5306 --
5307 [source,make]
5308 .path:{probes/Makefile}
5309 ----
5310 # ...
5311
5312 obj-m += lttng-probe-module.o
5313 obj-m += lttng-probe-power.o
5314
5315 obj-m += lttng-probe-my-subsys.o
5316
5317 # ...
5318 ----
5319 --
5320
5321 . Build and install the LTTng kernel modules:
5322 +
5323 --
5324 [role="term"]
5325 ----
5326 make KERNELDIR=/path/to/linux
5327 sudo make modules_install
5328 ----
5329 --
5330 +
5331 Replace `/path/to/linux` with the path to the Linux source tree where
5332 you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5333
5334 Note that you can also use the
5335 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5336 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5337 C code that need to be executed before the event fields are recorded.
5338
5339 The best way to learn how to use the previous LTTng-modules macros is to
5340 inspect the existing LTTng-modules tracepoint definitions in the
5341 dir:{instrumentation/events/lttng-module} header files. Compare them
5342 with the Linux kernel mainline versions in the
5343 dir:{include/trace/events} directory of the Linux source tree.
5344
5345
5346 [role="since-2.7"]
5347 [[lttng-tracepoint-event-code]]
5348 ===== Use custom C code to access the data for tracepoint fields
5349
5350 Although we recommend that you always use the
5351 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5352 the arguments and fields of an LTTng-modules tracepoint when possible,
5353 sometimes you need a more complex process to access the data that the
5354 tracer records as event record fields. In other words, you need local
5355 variables and multiple C{nbsp}statements instead of simple
5356 argument-based expressions that you pass to the
5357 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5358
5359 You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5360 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5361 a block of C{nbsp}code to be executed before LTTng records the fields.
5362 The structure of this macro is:
5363
5364 [source,c]
5365 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5366 ----
5367 LTTNG_TRACEPOINT_EVENT_CODE(
5368 /*
5369 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5370 * version for the following three macro parameters:
5371 */
5372 my_subsys_my_event,
5373 TP_PROTO(int my_int, const char *my_string),
5374 TP_ARGS(my_int, my_string),
5375
5376 /* Declarations of custom local variables */
5377 TP_locvar(
5378 int a = 0;
5379 unsigned long b = 0;
5380 const char *name = "(undefined)";
5381 struct my_struct *my_struct;
5382 ),
5383
5384 /*
5385 * Custom code which uses both tracepoint arguments
5386 * (in TP_ARGS()) and local variables (in TP_locvar()).
5387 *
5388 * Local variables are actually members of a structure pointed
5389 * to by the special variable tp_locvar.
5390 */
5391 TP_code(
5392 if (my_int) {
5393 tp_locvar->a = my_int + 17;
5394 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5395 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5396 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5397 put_my_struct(tp_locvar->my_struct);
5398
5399 if (tp_locvar->b) {
5400 tp_locvar->a = 1;
5401 }
5402 }
5403 ),
5404
5405 /*
5406 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5407 * version for this, except that tp_locvar members can be
5408 * used in the argument expression parameters of
5409 * the ctf_*() macros.
5410 */
5411 TP_FIELDS(
5412 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5413 ctf_integer(int, my_struct_a, tp_locvar->a)
5414 ctf_string(my_string_field, my_string)
5415 ctf_string(my_struct_name, tp_locvar->name)
5416 )
5417 )
5418 ----
5419
5420 IMPORTANT: The C code defined in `TP_code()` must not have any side
5421 effects when executed. In particular, the code must not allocate
5422 memory or get resources without deallocating this memory or putting
5423 those resources afterwards.
5424
5425
5426 [[instrumenting-linux-kernel-tracing]]
5427 ==== Load and unload a custom probe kernel module
5428
5429 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5430 kernel module>> in the kernel before it can emit LTTng events.
5431
5432 To load the default probe kernel modules and a custom probe kernel
5433 module:
5434
5435 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5436 probe modules to load when starting a root <<lttng-sessiond,session
5437 daemon>>:
5438 +
5439 --
5440 .Load the `my_subsys`, `usb`, and the default probe modules.
5441 ====
5442 [role="term"]
5443 ----
5444 sudo lttng-sessiond --extra-kmod-probes=my_subsys,usb
5445 ----
5446 ====
5447 --
5448 +
5449 You only need to pass the subsystem name, not the whole kernel module
5450 name.
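The naming convention can be sketched as follows (illustrative Bash only): each subsystem name you pass corresponds to a kernel module named `lttng-probe-<subsystem>`, like the path:{probes/lttng-probe-my-subsys.c} module created earlier.

```shell
# Sketch: each subsystem name passed to --extra-kmod-probes or
# --kmod-probes corresponds to a kernel module named
# lttng-probe-<subsystem>.
subsystems=(my_subsys usb)

for subsys in "${subsystems[@]}"; do
    echo "loads: lttng-probe-${subsys}"
done
```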
5451
5452 To load _only_ a given custom probe kernel module:
5453
5454 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5455 modules to load when starting a root session daemon:
5456 +
5457 --
5458 .Load only the `my_subsys` and `usb` probe modules.
5459 ====
5460 [role="term"]
5461 ----
5462 sudo lttng-sessiond --kmod-probes=my_subsys,usb
5463 ----
5464 ====
5465 --
5466
5467 To confirm that a probe module is loaded:
5468
5469 * Use man:lsmod(8):
5470 +
5471 --
5472 [role="term"]
5473 ----
5474 lsmod | grep lttng_probe_usb
5475 ----
5476 --
5477
5478 To unload the loaded probe modules:
5479
5480 * Kill the session daemon with `SIGTERM`:
5481 +
5482 --
5483 [role="term"]
5484 ----
5485 sudo pkill lttng-sessiond
5486 ----
5487 --
5488 +
5489 You can also use man:modprobe(8)'s `--remove` option if the session
5490 daemon terminates abnormally.
5491
5492
5493 [[controlling-tracing]]
5494 == Tracing control
5495
5496 Once an application or a Linux kernel is
5497 <<instrumenting,instrumented>> for LTTng tracing,
5498 you can _trace_ it.
5499
5500 This section is divided into topics on how to use the various
5501 <<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5502 command-line tool>>, to _control_ the LTTng daemons and tracers.
5503
5504 NOTE: In the following subsections, we refer to a man:lttng(1) command
5505 using its man page name. For example, instead of _Run the `create`
5506 command to..._, we use _Run the man:lttng-create(1) command to..._.
5507
5508
5509 [[start-sessiond]]
5510 === Start a session daemon
5511
5512 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5513 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5514 command-line tool.
5515
5516 You will see the following error when you run a command while no session
5517 daemon is running:
5518
5519 ----
5520 Error: No session daemon is available
5521 ----
5522
5523 The only command that automatically runs a session daemon is
5524 man:lttng-create(1), which you use to
5525 <<creating-destroying-tracing-sessions,create a tracing session>>. While
5526 this is usually the first operation that you perform, sometimes it's
5527 not. Some examples are:
5528
5529 * <<list-instrumentation-points,List the available instrumentation points>>.
5530 * <<saving-loading-tracing-session,Load a tracing session configuration>>.
5531
5532 [[tracing-group]] Each Unix user must have their own running session
5533 daemon to trace user applications. The session daemon that the root user
5534 starts is the only one allowed to control the LTTng kernel tracer. Users
5535 that are part of the _tracing group_ can control the root session
5536 daemon. The default tracing group name is `tracing`; you can set it to
5537 something else with the opt:lttng-sessiond(8):--group option when you
5538 start the root session daemon.
5539
5540 To start a user session daemon:
5541
5542 * Run man:lttng-sessiond(8):
5543 +
5544 --
5545 [role="term"]
5546 ----
5547 lttng-sessiond --daemonize
5548 ----
5549 --
5550
5551 To start the root session daemon:
5552
5553 * Run man:lttng-sessiond(8) as the root user:
5554 +
5555 --
5556 [role="term"]
5557 ----
5558 sudo lttng-sessiond --daemonize
5559 ----
5560 --
5561
5562 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5563 start the session daemon in foreground.
5564
5565 To stop a session daemon, use man:kill(1) on its process ID (standard
5566 `TERM` signal).
5567
5568 Note that some Linux distributions may manage the LTTng session daemon
5569 as a service. In this case, you should use the service manager to
5570 start, restart, and stop session daemons.
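If you script your tracing setup, you may want to start a session daemon only when none is already running for your user. A sketch, assuming pgrep(1) from procps-ng is available:

```shell
# Sketch: start a user session daemon only if one is not already
# running for the current user.
start_sessiond_if_needed() {
    if pgrep --euid "$(id -u)" --exact lttng-sessiond > /dev/null; then
        echo 'session daemon already running'
    else
        lttng-sessiond --daemonize
    fi
}
```

Call `start_sessiond_if_needed` from your tracing scripts before the first man:lttng(1) command that is not man:lttng-create(1).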
5571
5572
5573 [[creating-destroying-tracing-sessions]]
5574 === Create and destroy a tracing session
5575
5576 Almost all the LTTng control operations happen in the scope of
5577 a <<tracing-session,tracing session>>, which is the dialogue between the
5578 <<lttng-sessiond,session daemon>> and you.
5579
5580 To create a tracing session with a generated name:
5581
5582 * Use the man:lttng-create(1) command:
5583 +
5584 --
5585 [role="term"]
5586 ----
5587 lttng create
5588 ----
5589 --
5590
5591 The created tracing session's name is `auto` followed by the
5592 creation date.
5593
5594 To create a tracing session with a specific name:
5595
5596 * Use the optional argument of the man:lttng-create(1) command:
5597 +
5598 --
5599 [role="term"]
5600 ----
5601 lttng create my-session
5602 ----
5603 --
5604 +
5605 Replace `my-session` with the specific tracing session name.
5606
5607 LTTng appends the creation date to the created tracing session's name.
5608
5609 LTTng writes the traces of a tracing session in
5610 +$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
5611 name of the tracing session. Note that the env:LTTNG_HOME environment
5612 variable defaults to `$HOME` if not set.
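The resolution of the default output directory can be sketched like this (the real directory name also carries the creation date that LTTng appends to the session name):

```shell
# Sketch: default trace output directory for a tracing session
# named "my-session" (env:LTTNG_HOME falls back to $HOME).
session=my-session
trace_root=${LTTNG_HOME:-$HOME}/lttng-traces

echo "traces are written under: $trace_root/$session-<creation date>"
```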
5613
5614 To output LTTng traces to a non-default location:
5615
5616 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5617 +
5618 --
5619 [role="term"]
5620 ----
5621 lttng create --output=/tmp/some-directory my-session
5622 ----
5623 --
5624
5625 You may create as many tracing sessions as you wish.
5626
5627 To list all the existing tracing sessions for your Unix user:
5628
5629 * Use the man:lttng-list(1) command:
5630 +
5631 --
5632 [role="term"]
5633 ----
5634 lttng list
5635 ----
5636 --
5637
5638 When you create a tracing session, it is set as the _current tracing
5639 session_. The following man:lttng(1) commands operate on the current
5640 tracing session when you don't specify one:
5641
5642 [role="list-3-cols"]
5643 * `add-context`
5644 * `destroy`
5645 * `disable-channel`
5646 * `disable-event`
5647 * `enable-channel`
5648 * `enable-event`
5649 * `load`
5650 * `save`
5651 * `snapshot`
5652 * `start`
5653 * `stop`
5654 * `track`
5655 * `untrack`
5656 * `view`
5657
5658 To change the current tracing session:
5659
5660 * Use the man:lttng-set-session(1) command:
5661 +
5662 --
5663 [role="term"]
5664 ----
5665 lttng set-session new-session
5666 ----
5667 --
5668 +
5669 Replace `new-session` by the name of the new current tracing session.
5670
5671 When you are done tracing in a given tracing session, you can destroy
5672 it. This operation frees the resources taken by the tracing session;
5673 it does not delete the trace data that LTTng wrote for this
5674 tracing session.
5675
5676 To destroy the current tracing session:
5677
5678 * Use the man:lttng-destroy(1) command:
5679 +
5680 --
5681 [role="term"]
5682 ----
5683 lttng destroy
5684 ----
5685 --
5686
5687
5688 [[list-instrumentation-points]]
5689 === List the available instrumentation points
5690
5691 The <<lttng-sessiond,session daemon>> can query the running instrumented
5692 user applications and the Linux kernel to get a list of available
5693 instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5694 they are tracepoints and system calls. For the user space tracing
5695 domain, they are tracepoints. For the other tracing domains, they are
5696 logger names.
5697
5698 To list the available instrumentation points:
5699
5700 * Use the man:lttng-list(1) command with the requested tracing domain's
5701 option amongst:
5702 +
5703 --
5704 * opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5705 must be a root user, or it must be a member of the
5706 <<tracing-group,tracing group>>).
5707 * opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5708 kernel system calls (your Unix user must be a root user, or it must be
5709 a member of the tracing group).
5710 * opt:lttng-list(1):--userspace: user space tracepoints.
5711 * opt:lttng-list(1):--jul: `java.util.logging` loggers.
5712 * opt:lttng-list(1):--log4j: Apache log4j loggers.
5713 * opt:lttng-list(1):--python: Python loggers.
5714 --
5715
5716 .List the available user space tracepoints.
5717 ====
5718 [role="term"]
5719 ----
5720 lttng list --userspace
5721 ----
5722 ====
5723
5724 .List the available Linux kernel system call tracepoints.
5725 ====
5726 [role="term"]
5727 ----
5728 lttng list --kernel --syscall
5729 ----
5730 ====
5731
5732
5733 [[enabling-disabling-events]]
5734 === Create and enable an event rule
5735
5736 Once you <<creating-destroying-tracing-sessions,create a tracing
5737 session>>, you can create <<event,event rules>> with the
5738 man:lttng-enable-event(1) command.
5739
5740 You specify each condition with a command-line option. The available
5741 condition options are shown in the following table.
5742
5743 [role="growable",cols="asciidoc,asciidoc,default"]
5744 .Condition command-line options for the man:lttng-enable-event(1) command.
5745 |====
5746 |Option |Description |Applicable tracing domains
5747
5748 |
5749 One of:
5750
5751 . `--syscall`
5752 . +--probe=__ADDR__+
5753 . +--function=__ADDR__+
5754
5755 |
5756 Instead of using the default _tracepoint_ instrumentation type, use:
5757
5758 . A Linux system call.
5759 . A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5760 . The entry and return points of a Linux function (symbol or address).
5761
5762 |Linux kernel.
5763
5764 |First positional argument.
5765
5766 |
5767 Tracepoint or system call name. In the case of a Linux KProbe or
5768 function, this is a custom name given to the event rule. With the
5769 JUL, log4j, and Python domains, this is a logger name.
5770
5771 With a tracepoint, logger, or system call name, the last character
5772 can be `*` to match anything that remains.
5773
5774 |All.
5775
5776 |
5777 One of:
5778
5779 . +--loglevel=__LEVEL__+
5780 . +--loglevel-only=__LEVEL__+
5781
5782 |
5783 . Match only tracepoints or log statements with a logging level at
5784 least as severe as +__LEVEL__+.
5785 . Match only tracepoints or log statements with a logging level
5786 equal to +__LEVEL__+.
5787
5788 See man:lttng-enable-event(1) for the list of available logging level
5789 names.
5790
5791 |User space, JUL, log4j, and Python.
5792
5793 |+--exclude=__EXCLUSIONS__+
5794
5795 |
5796 When you use a `*` character at the end of the tracepoint or logger
5797 name (first positional argument), exclude the specific names in the
5798 comma-delimited list +__EXCLUSIONS__+.
5799
5800 |
5801 User space, JUL, log4j, and Python.
5802
5803 |+--filter=__EXPR__+
5804
5805 |
5806 Match only events which satisfy the expression +__EXPR__+.
5807
5808 See man:lttng-enable-event(1) to learn more about the syntax of a
5809 filter expression.
5810
5811 |All.
5812
5813 |====
5814
5815 You attach an event rule to a <<channel,channel>> on creation. If you do
5816 not specify the channel with the opt:lttng-enable-event(1):--channel
5817 option, and if the event rule to create is the first in its
5818 <<domain,tracing domain>> for a given tracing session, then LTTng
5819 creates a _default channel_ for you. This default channel is reused in
5820 subsequent invocations of the man:lttng-enable-event(1) command for the
5821 same tracing domain.
5822
5823 An event rule is always enabled at creation time.
5824
5825 The following examples show how you can combine the previous
5826 command-line options to create simple to more complex event rules.
5827
5828 .Create an event rule targeting a Linux kernel tracepoint (default channel).
5829 ====
5830 [role="term"]
5831 ----
5832 lttng enable-event --kernel sched_switch
5833 ----
5834 ====
5835
5836 .Create an event rule matching four Linux kernel system calls (default channel).
5837 ====
5838 [role="term"]
5839 ----
5840 lttng enable-event --kernel --syscall open,write,read,close
5841 ----
5842 ====
5843
5844 .Create event rules matching tracepoints with filter expressions (default channel).
5845 ====
5846 [role="term"]
5847 ----
5848 lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5849 ----
5850
5851 [role="term"]
5852 ----
5853 lttng enable-event --kernel --all \
5854 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5855 ----
5856
5857 [role="term"]
5858 ----
5859 lttng enable-event --jul my_logger \
5860 --filter='$app.retriever:cur_msg_id > 3'
5861 ----
5862
5863 IMPORTANT: Make sure to always quote the filter string when you
5864 use man:lttng(1) from a shell.
5865 ====
5866
5867 .Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5868 ====
5869 [role="term"]
5870 ----
5871 lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5872 ----
5873
5874 IMPORTANT: Make sure to always quote the wildcard character when you
5875 use man:lttng(1) from a shell.
5876 ====
5877
5878 .Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5879 ====
5880 [role="term"]
5881 ----
5882 lttng enable-event --python my-app.'*' \
5883 --exclude='my-app.module,my-app.hello'
5884 ----
5885 ====
5886
5887 .Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5888 ====
5889 [role="term"]
5890 ----
5891 lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5892 ----
5893 ====
5894
5895 .Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5896 ====
5897 [role="term"]
5898 ----
5899 lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5900 ----
5901 ====
5902
5903 The event rules of a given channel form a whitelist: as soon as an
5904 emitted event passes one of them, LTTng can record the event. For
5905 example, an event named `my_app:my_tracepoint` emitted from a user space
5906 tracepoint with a `TRACE_ERROR` log level passes both of the following
5907 rules:
5908
5909 [role="term"]
5910 ----
5911 lttng enable-event --userspace my_app:my_tracepoint
5912 lttng enable-event --userspace my_app:my_tracepoint \
5913 --loglevel=TRACE_INFO
5914 ----
5915
5916 The second event rule is redundant: the first one includes
5917 the second one.
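The whitelist behaviour can be modelled with a few lines of Bash (a sketch, not the tracer's actual matching code):

```shell
# Sketch: an emitted event is recordable as soon as any enabled
# event rule of the channel matches its name. A trailing `*' in a
# rule name matches anything that remains.
event_passes() {
    local event=$1 rule
    shift
    for rule in "$@"; do
        case $event in ($rule) return 0 ;; esac
    done
    return 1
}

rules=('my_app:my_tracepoint' 'my_other_app:*')

event_passes my_app:my_tracepoint "${rules[@]}" && echo recorded
event_passes my_app:another_tp "${rules[@]}" || echo discarded
```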
5918
5919
5920 [[disable-event-rule]]
5921 === Disable an event rule
5922
5923 To disable an event rule that you <<enabling-disabling-events,created>>
5924 previously, use the man:lttng-disable-event(1) command. This command
5925 disables _all_ the event rules (of a given tracing domain and channel)
5926 which match an instrumentation point. The other conditions are not
5927 supported as of LTTng{nbsp}{revision}.
5928
5929 The LTTng tracer does not record an emitted event which passes
5930 a _disabled_ event rule.
5931
5932 .Disable an event rule matching a Python logger (default channel).
5933 ====
5934 [role="term"]
5935 ----
5936 lttng disable-event --python my-logger
5937 ----
5938 ====
5939
5940 .Disable an event rule matching all `java.util.logging` loggers (default channel).
5941 ====
5942 [role="term"]
5943 ----
5944 lttng disable-event --jul '*'
5945 ----
5946 ====
5947
5948 .Disable _all_ the event rules of the default channel.
5949 ====
5950 Unlike the opt:lttng-enable-event(1):--all option of
5951 man:lttng-enable-event(1), the opt:lttng-disable-event(1):--all-events
5952 option is not the equivalent of the event name `*` (wildcard): it
5953 disables _all_ the event rules of a given channel.
5954
5955 [role="term"]
5956 ----
5957 lttng disable-event --jul --all-events
5958 ----
5959 ====
5960
5961 NOTE: You cannot delete an event rule once you create it.
5962
5963
5964 [[status]]
5965 === Get the status of a tracing session
5966
5967 To get the status of the current tracing session, that is, its
5968 parameters, its channels, event rules, and their attributes:
5969
5970 * Use the man:lttng-status(1) command:
5971 +
5972 --
5973 [role="term"]
5974 ----
5975 lttng status
5976 ----
5977 --
5979
5980 To get the status of any tracing session:
5981
5982 * Use the man:lttng-list(1) command with the tracing session's name:
5983 +
5984 --
5985 [role="term"]
5986 ----
5987 lttng list my-session
5988 ----
5989 --
5990 +
5991 Replace `my-session` with the desired tracing session's name.
5992
5993
5994 [[basic-tracing-session-control]]
5995 === Start and stop a tracing session
5996
5997 Once you <<creating-destroying-tracing-sessions,create a tracing
5998 session>> and
5999 <<enabling-disabling-events,create one or more event rules>>,
6000 you can start and stop the tracers for this tracing session.
6001
6002 To start tracing in the current tracing session:
6003
6004 * Use the man:lttng-start(1) command:
6005 +
6006 --
6007 [role="term"]
6008 ----
6009 lttng start
6010 ----
6011 --
6012
6013 LTTng is very flexible: you can launch user applications before
6014 or after you start the tracers. The tracers only record the events
6015 if they pass enabled event rules and if they occur while the tracers are
6016 started.
6017
6018 To stop tracing in the current tracing session:
6019
6020 * Use the man:lttng-stop(1) command:
6021 +
6022 --
6023 [role="term"]
6024 ----
6025 lttng stop
6026 ----
6027 --
6028 +
6029 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6030 records>> or lost sub-buffers since the last time you ran
6031 man:lttng-start(1), warnings are printed when you run the
6032 man:lttng-stop(1) command.
6033
6034
6035 [[enabling-disabling-channels]]
6036 === Create a channel
6037
6038 Once you create a tracing session, you can create a <<channel,channel>>
6039 with the man:lttng-enable-channel(1) command.
6040
6041 Note that LTTng automatically creates a default channel when, for a
6042 given <<domain,tracing domain>>, no channels exist and you
6043 <<enabling-disabling-events,create>> the first event rule. This default
6044 channel is named `channel0` and its attributes are set to reasonable
6045 values. Therefore, you only need to create a channel when you need
6046 non-default attributes.
6047
6048 You specify each non-default channel attribute with a command-line
6049 option when you use the man:lttng-enable-channel(1) command. The
6050 available command-line options are:
6051
6052 [role="growable",cols="asciidoc,asciidoc"]
6053 .Command-line options for the man:lttng-enable-channel(1) command.
6054 |====
6055 |Option |Description
6056
6057 |`--overwrite`
6058
6059 |
6060 Use the _overwrite_
6061 <<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
6062 the default _discard_ mode.
6063
6064 |`--buffers-pid` (user space tracing domain only)
6065
6066 |
6067 Use the per-process <<channel-buffering-schemes,buffering scheme>>
6068 instead of the default per-user buffering scheme.
6069
6070 |+--subbuf-size=__SIZE__+
6071
6072 |
6073 Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
6074 either for each Unix user (default), or for each instrumented process.
6075
6076 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6077
6078 |+--num-subbuf=__COUNT__+
6079
6080 |
6081 Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
6082 for each Unix user (default), or for each instrumented process.
6083
6084 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6085
6086 |+--tracefile-size=__SIZE__+
6087
6088 |
6089 Set the maximum size of each trace file that this channel writes within
6090 a stream to +__SIZE__+ bytes instead of no maximum.
6091
6092 See <<tracefile-rotation,Trace file count and size>>.
6093
6094 |+--tracefile-count=__COUNT__+
6095
6096 |
6097 Limit the number of trace files that this channel creates to
6098 +__COUNT__+ files instead of no limit.
6099
6100 See <<tracefile-rotation,Trace file count and size>>.
6101
6102 |+--switch-timer=__PERIODUS__+
6103
6104 |
6105 Set the <<channel-switch-timer,switch timer period>>
6106 to +__PERIODUS__+{nbsp}µs.
6107
6108 |+--read-timer=__PERIODUS__+
6109
6110 |
6111 Set the <<channel-read-timer,read timer period>>
6112 to +__PERIODUS__+{nbsp}µs.
6113
6114 |+--output=__TYPE__+ (Linux kernel tracing domain only)
6115
6116 |
6117 Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
6118
6119 |====
6120
6121 You can only create a channel in the Linux kernel and user space
6122 <<domain,tracing domains>>: other tracing domains have their own channel
6123 created on the fly when <<enabling-disabling-events,creating event
6124 rules>>.
6125
6126 [IMPORTANT]
6127 ====
6128 Because of a current LTTng limitation, you must create all channels
6129 _before_ you <<basic-tracing-session-control,start tracing>> in a given
6130 tracing session, that is, before the first time you run
6131 man:lttng-start(1).
6132
6133 Since LTTng automatically creates a default channel when you use the
6134 man:lttng-enable-event(1) command with a specific tracing domain, you
6135 cannot, for example, create a Linux kernel event rule, start tracing,
6136 and then create a user space event rule, because no user space channel
6137 exists yet and it's too late to create one.
6138
6139 For this reason, make sure to configure your channels properly
6140 before starting the tracers for the first time!
6141 ====
6142
6143 The following examples show how you can combine the previous
6144 command-line options to create simple to more complex channels.
6145
6146 .Create a Linux kernel channel with default attributes.
6147 ====
6148 [role="term"]
6149 ----
6150 lttng enable-channel --kernel my-channel
6151 ----
6152 ====
6153
6154 .Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
6155 ====
6156 [role="term"]
6157 ----
6158 lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6159 --buffers-pid my-channel
6160 ----
6161 ====
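Per-user buffering allocates these sub-buffers once per CPU, so the memory cost of a channel like the one above is roughly the sub-buffer size times the sub-buffer count times the CPU count (with the per-process scheme, multiply again by the number of instrumented processes). A sketch of the arithmetic:

```shell
# Sketch: estimated ring buffer memory for a channel created with
# --num-subbuf=4 --subbuf-size=1M, allocated for each CPU.
subbuf_size=$((1024 * 1024))   # 1 MiB
num_subbuf=4
cpus=$(getconf _NPROCESSORS_ONLN)

total=$((subbuf_size * num_subbuf * cpus))
echo "about $((total / 1024 / 1024)) MiB across $cpus CPUs"
```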
6162
6163 .Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
6164 ====
6165 [role="term"]
6166 ----
6167 lttng enable-channel --kernel --tracefile-count=8 \
6168 --tracefile-size=4194304 my-channel
6169 ----
6170 ====
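With the rotation settings above, LTTng never keeps more than the trace file count times the trace file size on disk for each stream; the arithmetic, as a sketch:

```shell
# Sketch: on-disk upper bound per stream with --tracefile-count=8
# and --tracefile-size=4194304 (4 MiB).
tracefile_count=8
tracefile_size=4194304

cap=$((tracefile_count * tracefile_size))
echo "per-stream disk usage cap: $((cap / 1024 / 1024)) MiB"
```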
6171
6172 .Create a user space channel in overwrite (or _flight recorder_) mode.
6173 ====
6174 [role="term"]
6175 ----
6176 lttng enable-channel --userspace --overwrite my-channel
6177 ----
6178 ====
6179
6180 You can <<enabling-disabling-events,create>> the same event rule in
6181 two different channels:
6182
6183 [role="term"]
6184 ----
6185 lttng enable-event --userspace --channel=my-channel app:tp
6186 lttng enable-event --userspace --channel=other-channel app:tp
6187 ----
6188
6189 If both channels are enabled, when a tracepoint named `app:tp` is
6190 reached, LTTng records two events, one for each channel.
6191
6192
6193 [[disable-channel]]
6194 === Disable a channel
6195
6196 To disable a specific channel that you <<enabling-disabling-channels,created>>
6197 previously, use the man:lttng-disable-channel(1) command.
6198
6199 .Disable a specific Linux kernel channel.
6200 ====
6201 [role="term"]
6202 ----
6203 lttng disable-channel --kernel my-channel
6204 ----
6205 ====
6206
6207 The state of a channel takes precedence over the individual states of
6208 the event rules attached to it: event rules which belong to a disabled
6209 channel, even if they are enabled, are also considered disabled.
6210
6211
6212 [[adding-context]]
6213 === Add context fields to a channel
6214
6215 Event record fields in trace files provide important information about
6216 events that occurred previously, but sometimes some external context can
6217 help you solve a problem faster. Examples of context fields are:
6218
6219 * The **process ID**, **thread ID**, **process name**, and
6220 **process priority** of the thread in which the event occurs.
6221 * The **hostname** of the system on which the event occurs.
6222 * The current values of many possible **performance counters** using
6223 perf, for example:
6224 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6225 ** Cache misses.
6226 ** Branch instructions, misses, and loads.
6227 ** CPU faults.
6228 * Any context defined at the application level (supported for the
6229 JUL and log4j <<domain,tracing domains>>).
6230
6231 To get the full list of available context fields, run
6232 `lttng add-context --list`. Some context fields are reserved for a
6233 specific <<domain,tracing domain>> (Linux kernel or user space).
6234
6235 You add context fields to <<channel,channels>>. All the events
6236 that a channel with added context fields records contain those fields.
6237
6238 To add context fields to one or all the channels of a given tracing
6239 session:
6240
6241 * Use the man:lttng-add-context(1) command.
6242
6243 .Add context fields to all the channels of the current tracing session.
6244 ====
6245 The following command line adds the virtual process identifier and
6246 the per-thread CPU cycles count fields to all the user space channels
6247 of the current tracing session.
6248
6249 [role="term"]
6250 ----
6251 lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6252 ----
6253 ====
6254
6255 .Add a context field to a specific channel.
6256 ====
6257 The following command line adds the thread identifier context field
6258 to the Linux kernel channel named `my-channel` in the current
6259 tracing session.
6260
6261 [role="term"]
6262 ----
6263 lttng add-context --kernel --channel=my-channel --type=tid
6264 ----
6265 ====
6266
6267 .Add an application-specific context field to a specific channel.
6268 ====
6269 The following command line adds the `cur_msg_id` context field of the
6270 `retriever` context retriever for all the instrumented
6271 <<java-application,Java applications>> recording <<event,event records>>
6272 in the channel named `my-channel`:
6273
6274 [role="term"]
6275 ----
6276 lttng add-context --jul --channel=my-channel \
6277 --type='$app.retriever:cur_msg_id'
6278 ----
6279
6280 IMPORTANT: Make sure to always quote the `$` character when you
6281 use man:lttng-add-context(1) from a shell.
6282 ====
6283
6284 NOTE: You cannot remove context fields from a channel once you add them.
6285
6286
6287 [role="since-2.7"]
6288 [[pid-tracking]]
6289 === Track process IDs
6290
6291 It's often useful to allow only specific process IDs (PIDs) to emit
6292 events. For example, you may wish to record all the system calls made by
6293 a given process (à la http://linux.die.net/man/1/strace[strace]).
6294
6295 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6296 purpose. Both commands operate on a whitelist of process IDs. You _add_
6297 entries to this whitelist with the man:lttng-track(1) command and remove
6298 entries with the man:lttng-untrack(1) command. Any process which has one
6299 of the PIDs in the whitelist is allowed to emit LTTng events which pass
6300 an enabled <<event,event rule>>.
6301
6302 NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6303 process with a given tracked ID exit and another process be given this
6304 ID, then the latter would also be allowed to emit events.
6305
6306 .Track and untrack process IDs.
6307 ====
6308 For the sake of the following example, assume the target system has 16
6309 possible PIDs.
6310
6311 When you
6312 <<creating-destroying-tracing-sessions,create a tracing session>>,
6313 the whitelist contains all the possible PIDs:
6314
6315 [role="img-100"]
6316 .All PIDs are tracked.
6317 image::track-all.png[]
6318
6319 When the whitelist is full and you use the man:lttng-track(1) command to
6320 specify some PIDs to track, LTTng first clears the whitelist, then it
6321 tracks the specific PIDs. After:
6322
6323 [role="term"]
6324 ----
6325 lttng track --pid=3,4,7,10,13
6326 ----
6327
6328 the whitelist is:
6329
6330 [role="img-100"]
6331 .PIDs 3, 4, 7, 10, and 13 are tracked.
6332 image::track-3-4-7-10-13.png[]
6333
6334 You can add more PIDs to the whitelist afterwards:
6335
6336 [role="term"]
6337 ----
6338 lttng track --pid=1,15,16
6339 ----
6340
6341 The result is:
6342
6343 [role="img-100"]
6344 .PIDs 1, 15, and 16 are added to the whitelist.
6345 image::track-1-3-4-7-10-13-15-16.png[]
6346
6347 The man:lttng-untrack(1) command removes entries from the PID tracker's
6348 whitelist. Given the previous example, the following command:
6349
6350 [role="term"]
6351 ----
6352 lttng untrack --pid=3,7,10,13
6353 ----
6354
6355 leads to this whitelist:
6356
6357 [role="img-100"]
6358 .PIDs 3, 7, 10, and 13 are removed from the whitelist.
6359 image::track-1-4-15-16.png[]
6360
6361 LTTng can track all possible PIDs again using the
6362 opt:lttng-track(1):--all option:
6363
6364 [role="term"]
6365 ----
6366 lttng track --pid --all
6367 ----
6368
6369 The result is, again:
6370
6371 [role="img-100"]
6372 .All PIDs are tracked.
6373 image::track-all.png[]
6374 ====
6375
6376 .Track only specific PIDs
6377 ====
6378 A very typical use case with PID tracking is to start with an empty
6379 whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6380 then add PIDs manually while tracers are active. You can accomplish this
6381 by using the opt:lttng-untrack(1):--all option of the
6382 man:lttng-untrack(1) command to clear the whitelist after you
6383 <<creating-destroying-tracing-sessions,create a tracing session>>:
6384
6385 [role="term"]
6386 ----
6387 lttng untrack --pid --all
6388 ----
6389
6390 gives:
6391
6392 [role="img-100"]
6393 .No PIDs are tracked.
6394 image::untrack-all.png[]
6395
6396 If you trace with this whitelist configuration, the tracer records no
6397 events for this <<domain,tracing domain>> because no processes are
6398 tracked. You can use the man:lttng-track(1) command as usual to track
6399 specific PIDs, for example:
6400
6401 [role="term"]
6402 ----
6403 lttng track --pid=6,11
6404 ----
6405
6406 Result:
6407
6408 [role="img-100"]
6409 .PIDs 6 and 11 are tracked.
6410 image::track-6-11.png[]
6411 ====
6412
6413
6414 [role="since-2.5"]
6415 [[saving-loading-tracing-session]]
6416 === Save and load tracing session configurations
6417
6418 Configuring a <<tracing-session,tracing session>> can be long. Some of
6419 the tasks involved are:
6420
6421 * <<enabling-disabling-channels,Create channels>> with
6422 specific attributes.
6423 * <<adding-context,Add context fields>> to specific channels.
6424 * <<enabling-disabling-events,Create event rules>> with specific log
6425 level and filter conditions.
6426
6427 If you use LTTng to solve real-world problems, chances are you have to
6428 record events using the same tracing session setup over and over,
6429 modifying a few variables each time in your instrumented program
6430 or environment. To avoid constant tracing session reconfiguration,
6431 the man:lttng(1) command-line tool can save and load tracing session
6432 configurations to/from XML files.
6433
6434 To save a given tracing session configuration:
6435
6436 * Use the man:lttng-save(1) command:
6437 +
6438 --
6439 [role="term"]
6440 ----
6441 lttng save my-session
6442 ----
6443 --
6444 +
6445 Replace `my-session` with the name of the tracing session to save.
6446
6447 LTTng saves tracing session configurations to
6448 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6449 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6450 the opt:lttng-save(1):--output-path option to change this destination
6451 directory.
6452
6453 LTTng saves all configuration parameters, for example:
6454
6455 * The tracing session name.
6456 * The trace data output path.
6457 * The channels with their state and all their attributes.
6458 * The context fields you added to channels.
6459 * The event rules with their state, log level and filter conditions.
6460
6461 To load a tracing session:
6462
6463 * Use the man:lttng-load(1) command:
6464 +
6465 --
6466 [role="term"]
6467 ----
6468 lttng load my-session
6469 ----
6470 --
6471 +
6472 Replace `my-session` with the name of the tracing session to load.
6473
6474 When LTTng loads a configuration, it restores your saved tracing session
6475 as if you just configured it manually.
6476
6477 See man:lttng-save(1) and man:lttng-load(1) for the complete list of
6478 command-line options. You can also save and load several tracing
6479 sessions at a time, and decide in which directory to output the XML files.
6480
6481
6482 [[sending-trace-data-over-the-network]]
6483 === Send trace data over the network
6484
6485 LTTng can send the recorded trace data to a remote system over the
6486 network instead of writing it to the local file system.
6487
6488 To send the trace data over the network:
6489
6490 . On the _remote_ system (which can also be the target system),
6491 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6492 +
6493 --
6494 [role="term"]
6495 ----
6496 lttng-relayd
6497 ----
6498 --
6499
6500 . On the _target_ system, create a tracing session configured to
6501 send trace data over the network:
6502 +
6503 --
6504 [role="term"]
6505 ----
6506 lttng create my-session --set-url=net://remote-system
6507 ----
6508 --
6509 +
6510 Replace `remote-system` by the host name or IP address of the
6511 remote system. See man:lttng-create(1) for the exact URL format.
6512
6513 . On the target system, use the man:lttng(1) command-line tool as usual.
6514 When tracing is active, the target's consumer daemon sends sub-buffers
6515 to the relay daemon running on the remote system instead of flushing
6516 them to the local file system. The relay daemon writes the received
6517 packets to the local file system.
6518
6519 The relay daemon writes trace files to
6520 +$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6521 +__hostname__+ is the host name of the target system and +__session__+
6522 is the tracing session name. Note that the env:LTTNG_HOME environment
6523 variable defaults to `$HOME` if not set. Use the
6524 opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6525 trace files to another base directory.
6526
6527
6528 [role="since-2.4"]
6529 [[lttng-live]]
6530 === View events as LTTng emits them (noch:{LTTng} live)
6531
6532 LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6533 daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6534 display events as LTTng emits them on the target system while tracing is
6535 active.
6536
6537 The relay daemon creates a _tee_: it forwards the trace data to both
6538 the local file system and to connected live viewers:
6539
6540 [role="img-90"]
6541 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6542 image::live.png[]
6543
6544 To use LTTng live:
6545
6546 . On the _target system_, create a <<tracing-session,tracing session>>
6547 in _live mode_:
6548 +
6549 --
6550 [role="term"]
6551 ----
6552 lttng create --live my-session
6553 ----
6554 --
6555 +
6556 This spawns a local relay daemon.
6557
6558 . Start the live viewer and configure it to connect to the relay
6559 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6560 +
6561 --
6562 [role="term"]
6563 ----
6564 babeltrace --input-format=lttng-live net://localhost/host/hostname/my-session
6565 ----
6566 --
6567 +
6568 Replace:
6569 +
6570 --
6571 * `hostname` with the host name of the target system.
6572 * `my-session` with the name of the tracing session to view.
6573 --
6574
6575 . Configure the tracing session as usual with the man:lttng(1)
6576 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6577
6578 You can list the available live tracing sessions with Babeltrace:
6579
6580 [role="term"]
6581 ----
6582 babeltrace --input-format=lttng-live net://localhost
6583 ----
6584
6585 You can start the relay daemon on another system. In this case, you need
6586 to specify the relay daemon's URL when you create the tracing session
6587 with the opt:lttng-create(1):--set-url option. You also need to replace
6588 `localhost` in the procedure above with the host name of the system on
6589 which the relay daemon is running.
6590
6591 See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6592 command-line options.
6593
6594
6595 [role="since-2.3"]
6596 [[taking-a-snapshot]]
6597 === Take a snapshot of the current sub-buffers of a tracing session
6598
6599 The normal behavior of LTTng is to append full sub-buffers to growing
6600 trace data files. This is ideal to keep a full history of the events
6601 that occurred on the target system, but it can
6602 represent too much data in some situations. For example, you may wish
6603 to trace your application continuously until some critical situation
6604 happens, in which case you only need the latest few recorded
6605 events to perform the desired analysis, not multi-gigabyte trace files.
6606
6607 With the man:lttng-snapshot(1) command, you can take a snapshot of the
6608 current sub-buffers of a given <<tracing-session,tracing session>>.
6609 LTTng can write the snapshot to the local file system or send it over
6610 the network.
6611
6612 To take a snapshot:
6613
6614 . Create a tracing session in _snapshot mode_:
6615 +
6616 --
6617 [role="term"]
6618 ----
6619 lttng create --snapshot my-session
6620 ----
6621 --
6622 +
6623 The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6624 <<channel,channels>> created in this mode is automatically set to
6625 _overwrite_ (flight recorder mode).
6626
6627 . Configure the tracing session as usual with the man:lttng(1)
6628 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6629
6630 . **Optional**: When you need to take a snapshot,
6631 <<basic-tracing-session-control,stop tracing>>.
6632 +
6633 You can take a snapshot when the tracers are active, but if you stop
6634 them first, you are sure that the data in the sub-buffers does not
6635 change before you actually take the snapshot.
6636
6637 . Take a snapshot:
6638 +
6639 --
6640 [role="term"]
6641 ----
6642 lttng snapshot record --name=my-first-snapshot
6643 ----
6644 --
6645 +
6646 LTTng writes the current sub-buffers of all the current tracing
6647 session's channels to trace files on the local file system. Those trace
6648 files have `my-first-snapshot` in their name.
6649
6650 There is no difference between the format of a normal trace file and the
6651 format of a snapshot: viewers of LTTng traces also support LTTng
6652 snapshots.
6653
6654 By default, LTTng writes snapshot files to the path shown by
6655 `lttng snapshot list-output`. You can change this path or decide to send
6656 snapshots over the network using either:
6657
6658 . An output path or URL that you specify when you create the
6659 tracing session.
6660 . A snapshot output path or URL that you add using the
6661 `lttng snapshot add-output` command.
6662 . An output path or URL that you provide directly to the
6663 `lttng snapshot record` command.
6664
6665 Method 3 overrides method 2, which overrides method 1. When you
6666 specify a URL, a relay daemon must listen on a remote system (see
6667 <<sending-trace-data-over-the-network,Send trace data over the network>>).
6668
6669
6670 [role="since-2.6"]
6671 [[mi]]
6672 === Use the machine interface
6673
6674 With any command of the man:lttng(1) command-line tool, you can set the
6675 opt:lttng(1):--mi option to `xml` (before the command name) to get an
6676 XML machine interface output, for example:
6677
6678 [role="term"]
6679 ----
6680 lttng --mi=xml enable-event --kernel --syscall open
6681 ----
6682
6683 A schema definition (XSD) is
6684 https://github.com/lttng/lttng-tools/blob/stable-2.8/src/common/mi-lttng-3.0.xsd[available]
6685 to ease the integration with external tools as much as possible.
6686
6687
6688 [role="since-2.8"]
6689 [[metadata-regenerate]]
6690 === Regenerate the metadata of an LTTng trace
6691
6692 An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6693 data stream files and a metadata file. This metadata file contains,
6694 amongst other things, information about the offset of the clock sources
6695 used to timestamp <<event,event records>> when tracing.
6696
6697 If, once a <<tracing-session,tracing session>> is
6698 <<basic-tracing-session-control,started>>, a major
6699 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6700 happens, the trace's clock offset also needs to be updated. You
6701 can use the man:lttng-metadata(1) command to do so.
6702
6703 The main use case of this command is to allow a system to boot with
6704 an incorrect wall time and trace it with LTTng before its wall time
6705 is corrected. Once the system is known to be in a state where its
6706 wall time is correct, it can run `lttng metadata regenerate`.
6707
6708 To regenerate the metadata of an LTTng trace:
6709
6710 * Use the `regenerate` action of the man:lttng-metadata(1) command:
6711 +
6712 --
6713 [role="term"]
6714 ----
6715 lttng metadata regenerate
6716 ----
6717 --
6718
6719 [IMPORTANT]
6720 ====
6721 `lttng metadata regenerate` only works under the following conditions:
6722
6723 * The tracing session was <<creating-destroying-tracing-sessions,created>>
6724 in non-live mode.
6725 * The user space <<channel,channels>>, if any, use
6726 <<channel-buffering-schemes,per-user buffering>>.
6727 ====
6728
6729
6730 [role="since-2.7"]
6731 [[persistent-memory-file-systems]]
6732 === Record trace data on persistent memory file systems
6733
6734 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6735 (NVRAM) is random-access memory that retains its information when power
6736 is turned off (non-volatile). Systems with such memory can store data
6737 structures in RAM and retrieve them after a reboot, without flushing
6738 to typical _storage_.
6739
6740 Linux supports NVRAM file systems thanks to either
6741 http://pramfs.sourceforge.net/[PRAMFS] or
6742 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6743 (requires Linux 4.1+).
6744
6745 This section does not describe how to operate such file systems;
6746 we assume that you have a working persistent memory file system.
6747
6748 When you create a <<tracing-session,tracing session>>, you can specify
6749 the path of the shared memory holding the sub-buffers. If you specify a
6750 location on an NVRAM file system, then you can retrieve the latest
6751 recorded trace data when the system reboots after a crash.
6752
6753 To record trace data on a persistent memory file system and retrieve the
6754 trace data after a system crash:
6755
6756 . Create a tracing session with a sub-buffer shared memory path located
6757 on an NVRAM file system:
6758 +
6759 --
6760 [role="term"]
6761 ----
6762 lttng create --shm-path=/path/to/shm
6763 ----
6764 --
6765
6766 . Configure the tracing session as usual with the man:lttng(1)
6767 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6768
6769 . After a system crash, use the man:lttng-crash(1) command-line tool to
6770 view the trace data recorded on the NVRAM file system:
6771 +
6772 --
6773 [role="term"]
6774 ----
6775 lttng-crash /path/to/shm
6776 ----
6777 --
6778
6779 The binary layout of the ring buffer files is not exactly the same as
6780 the layout of the trace files. This is why you need to use man:lttng-crash(1)
6781 instead of your preferred trace viewer directly.
6782
6783 To convert the ring buffer files to LTTng trace files:
6784
6785 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6786 +
6787 --
6788 [role="term"]
6789 ----
6790 lttng-crash --extract=/path/to/trace /path/to/shm
6791 ----
6792 --
6793
6794
6795 [[reference]]
6796 == Reference
6797
6798 [[lttng-modules-ref]]
6799 === noch:{LTTng-modules}
6800
6801 [role="since-2.7"]
6802 [[lttng-modules-tp-fields]]
6803 ==== Tracepoint fields macros (for `TP_FIELDS()`)
6804
6805 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
6806 tracepoint fields, which must be listed within `TP_FIELDS()` in
6807 `LTTNG_TRACEPOINT_EVENT()`, are:
6808
6809 [role="func-desc growable",cols="asciidoc,asciidoc"]
6810 .Available macros to define LTTng-modules tracepoint fields
6811 |====
6812 |Macro |Description and parameters
6813
6814 |
6815 +ctf_integer(__t__, __n__, __e__)+
6816
6817 +ctf_integer_nowrite(__t__, __n__, __e__)+
6818
6819 +ctf_user_integer(__t__, __n__, __e__)+
6820
6821 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
6822 |
6823 Standard integer, displayed in base 10.
6824
6825 +__t__+::
6826 Integer C type (`int`, `long`, `size_t`, ...).
6827
6828 +__n__+::
6829 Field name.
6830
6831 +__e__+::
6832 Argument expression.
6833
6834 |
6835 +ctf_integer_hex(__t__, __n__, __e__)+
6836
6837 +ctf_user_integer_hex(__t__, __n__, __e__)+
6838 |
6839 Standard integer, displayed in base 16.
6840
6841 +__t__+::
6842 Integer C type.
6843
6844 +__n__+::
6845 Field name.
6846
6847 +__e__+::
6848 Argument expression.
6849
6850 |+ctf_integer_oct(__t__, __n__, __e__)+
6851 |
6852 Standard integer, displayed in base 8.
6853
6854 +__t__+::
6855 Integer C type.
6856
6857 +__n__+::
6858 Field name.
6859
6860 +__e__+::
6861 Argument expression.
6862
6863 |
6864 +ctf_integer_network(__t__, __n__, __e__)+
6865
6866 +ctf_user_integer_network(__t__, __n__, __e__)+
6867 |
6868 Integer in network byte order (big-endian), displayed in base 10.
6869
6870 +__t__+::
6871 Integer C type.
6872
6873 +__n__+::
6874 Field name.
6875
6876 +__e__+::
6877 Argument expression.
6878
6879 |
6880 +ctf_integer_network_hex(__t__, __n__, __e__)+
6881
6882 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
6883 |
6884 Integer in network byte order, displayed in base 16.
6885
6886 +__t__+::
6887 Integer C type.
6888
6889 +__n__+::
6890 Field name.
6891
6892 +__e__+::
6893 Argument expression.
6894
6895 |
6896 +ctf_string(__n__, __e__)+
6897
6898 +ctf_string_nowrite(__n__, __e__)+
6899
6900 +ctf_user_string(__n__, __e__)+
6901
6902 +ctf_user_string_nowrite(__n__, __e__)+
6903 |
6904 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
6905
6906 +__n__+::
6907 Field name.
6908
6909 +__e__+::
6910 Argument expression.
6911
6912 |
6913 +ctf_array(__t__, __n__, __e__, __s__)+
6914
6915 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
6916
6917 +ctf_user_array(__t__, __n__, __e__, __s__)+
6918
6919 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
6920 |
6921 Statically-sized array of integers.
6922
6923 +__t__+::
6924 Array element C type.
6925
6926 +__n__+::
6927 Field name.
6928
6929 +__e__+::
6930 Argument expression.
6931
6932 +__s__+::
6933 Number of elements.
6934
6935 |
6936 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
6937
6938 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
6939
6940 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
6941
6942 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
6943 |
6944 Statically-sized array of bits.
6945
6946 The type of +__e__+ must be an integer type. +__s__+ is the number
6947 of elements of such type in +__e__+, not the number of bits.
6948
6949 +__t__+::
6950 Array element C type.
6951
6952 +__n__+::
6953 Field name.
6954
6955 +__e__+::
6956 Argument expression.
6957
6958 +__s__+::
6959 Number of elements.
6960
6961 |
6962 +ctf_array_text(__t__, __n__, __e__, __s__)+
6963
6964 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
6965
6966 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
6967
6968 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
6969 |
6970 Statically-sized array, printed as text.
6971
6972 The string does not need to be null-terminated.
6973
6974 +__t__+::
6975 Array element C type (always `char`).
6976
6977 +__n__+::
6978 Field name.
6979
6980 +__e__+::
6981 Argument expression.
6982
6983 +__s__+::
6984 Number of elements.
6985
6986 |
6987 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
6988
6989 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
6990
6991 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
6992
6993 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
6994 |
6995 Dynamically-sized array of integers.
6996
6997 The type of +__E__+ must be unsigned.
6998
6999 +__t__+::
7000 Array element C type.
7001
7002 +__n__+::
7003 Field name.
7004
7005 +__e__+::
7006 Argument expression.
7007
7008 +__T__+::
7009 Length expression C type.
7010
7011 +__E__+::
7012 Length expression.
7013
7014 |
7015 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7016
7017 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7018 |
7019 Dynamically-sized array of integers, displayed in base 16.
7020
7021 The type of +__E__+ must be unsigned.
7022
7023 +__t__+::
7024 Array element C type.
7025
7026 +__n__+::
7027 Field name.
7028
7029 +__e__+::
7030 Argument expression.
7031
7032 +__T__+::
7033 Length expression C type.
7034
7035 +__E__+::
7036 Length expression.
7037
7038 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7039 |
7040 Dynamically-sized array of integers in network byte order (big-endian),
7041 displayed in base 10.
7042
7043 The type of +__E__+ must be unsigned.
7044
7045 +__t__+::
7046 Array element C type.
7047
7048 +__n__+::
7049 Field name.
7050
7051 +__e__+::
7052 Argument expression.
7053
7054 +__T__+::
7055 Length expression C type.
7056
7057 +__E__+::
7058 Length expression.
7059
7060 |
7061 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7062
7063 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7064
7065 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7066
7067 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7068 |
7069 Dynamically-sized array of bits.
7070
7071 The type of +__e__+ must be an integer type. +__E__+ is the number
7072 of elements of such type in +__e__+, not the number of bits.
7073
7074 The type of +__E__+ must be unsigned.
7075
7076 +__t__+::
7077 Array element C type.
7078
7079 +__n__+::
7080 Field name.
7081
7082 +__e__+::
7083 Argument expression.
7084
7085 +__T__+::
7086 Length expression C type.
7087
7088 +__E__+::
7089 Length expression.
7090
7091 |
7092 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7093
7094 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7095
7096 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7097
7098 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7099 |
7100 Dynamically-sized array, displayed as text.
7101
7102 The string does not need to be null-terminated.
7103
7104 The type of +__E__+ must be unsigned.
7105
7106 The behaviour is undefined if +__e__+ is `NULL`.
7107
7108 +__t__+::
7109 Sequence element C type (always `char`).
7110
7111 +__n__+::
7112 Field name.
7113
7114 +__e__+::
7115 Argument expression.
7116
7117 +__T__+::
7118 Length expression C type.
7119
7120 +__E__+::
7121 Length expression.
7122 |====
7123
7124 Use the `_user` versions when the argument expression, `e`, is
7125 a user space address. In the cases of `ctf_user_integer*()` and
7126 `ctf_user_float*()`, `&e` must be a user space address, thus `e` must
7127 be addressable.
7128
7129 The `_nowrite` versions omit themselves from the session trace, but are
7130 otherwise identical. This means the `_nowrite` fields won't be written
7131 in the recorded trace. Their primary purpose is to make some
7132 of the event context available to the
7133 <<enabling-disabling-events,event filters>> without having to
7134 commit the data to sub-buffers.
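
For example, the following LTTng-modules tracepoint definition combines a
few of the macros above. This is only an illustrative sketch: the
`my_subsys_my_event` name, its arguments, and its field names are made up,
and the definition assumes the usual `LTTNG_TRACEPOINT_EVENT()` skeleton
of an LTTng-modules instrumentation header.

[source,c]
----
LTTNG_TRACEPOINT_EVENT(
    /* Tracepoint name (hypothetical, for this example only) */
    my_subsys_my_event,

    /* Tracepoint arguments (prototype and names) */
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* Tracepoint fields */
    TP_FIELDS(
        /* Standard integer field, displayed in base 10 */
        ctf_integer(int, my_int_field, my_int)

        /* The same integer, displayed in base 16 */
        ctf_integer_hex(int, my_int_field_hex, my_int)

        /* Null-terminated string field */
        ctf_string(my_string_field, my_string)
    )
)
----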
7135
7136
7137 [[glossary]]
7138 == Glossary
7139
7140 Terms related to LTTng and to tracing in general:
7141
7142 Babeltrace::
7143 The http://diamon.org/babeltrace[Babeltrace] project, which includes
7144 the cmd:babeltrace command, some libraries, and Python bindings.
7145
7146 <<channel-buffering-schemes,buffering scheme>>::
7147 A layout of sub-buffers applied to a given channel.
7148
7149 <<channel,channel>>::
7150 An entity which is responsible for a set of ring buffers.
7151 +
7152 <<event,Event rules>> are always attached to a specific channel.
7153
7154 clock::
7155 A reference of time for a tracer.
7156
7157 <<lttng-consumerd,consumer daemon>>::
7158 A process which is responsible for consuming the full sub-buffers
7159 and writing them to a file system or sending them over the network.
7160
7161 <<channel-overwrite-mode-vs-discard-mode,discard mode>>:: The event loss
7162 mode in which the tracer _discards_ new event records when there's no
7163 sub-buffer space left to store them.
7164
7165 event::
7166 The consequence of the execution of an instrumentation
7167 point, like a tracepoint that you manually place in some source code,
7168 or a Linux kernel KProbe.
7169 +
7170 An event is said to _occur_ at a specific time. Different actions can
7171 be taken upon the occurrence of an event, like recording the event's
7172 payload to a sub-buffer.
7173
7174 <<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
7175 The mechanism by which event records of a given channel are lost
7176 (not recorded) when there is no sub-buffer space left to store them.
7177
7178 [[def-event-name]]event name::
7179 The name of an event, which is also the name of the event record.
7180 This is also called the _instrumentation point name_.
7181
7182 event record::
7183 A record, in a trace, of the payload of an event which occurred.
7184
7185 <<event,event rule>>::
7186 Set of conditions which must be satisfied for one or more occurring
7187 events to be recorded.

`java.util.logging`::
Java platform's
https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].

<<instrumenting,instrumentation>>::
The use of LTTng probes to make a piece of software traceable.

instrumentation point::
A point in the execution path of a piece of software that, when
reached by this execution, can emit an event.

instrumentation point name::
See _<<def-event-name,event name>>_.

log4j::
A http://logging.apache.org/log4j/1.2/[logging library] for Java
developed by the Apache Software Foundation.

log level::
Level of severity of a log statement or user space
instrumentation point.

LTTng::
The _Linux Trace Toolkit: next generation_ project.

<<lttng-cli,cmd:lttng>>::
A command-line tool provided by the LTTng-tools project which you
can use to send and receive control messages to and from a
session daemon.

LTTng analyses::
The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
which is a set of analysis programs which you can use to obtain a
higher-level view of an LTTng trace.

cmd:lttng-consumerd::
The name of the consumer daemon program.

cmd:lttng-crash::
A utility provided by the LTTng-tools project which can convert
ring buffer files (usually
<<persistent-memory-file-systems,saved on a persistent memory file system>>)
to trace files.

LTTng Documentation::
This document.

<<lttng-live,LTTng live>>::
A communication protocol between the relay daemon and live viewers
which makes it possible to see events "live", as they are received by
the relay daemon.

<<lttng-modules,LTTng-modules>>::
The https://github.com/lttng/lttng-modules[LTTng-modules] project,
which contains the Linux kernel modules to make the Linux kernel
instrumentation points available for LTTng tracing.

cmd:lttng-relayd::
The name of the relay daemon program.

cmd:lttng-sessiond::
The name of the session daemon program.

LTTng-tools::
The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
contains the various programs and libraries used to
<<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
contains libraries to instrument user applications.

<<lttng-ust-agents,LTTng-UST Java agent>>::
A Java package provided by the LTTng-UST project to allow the
LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
A Python package provided by the LTTng-UST project to allow the
LTTng instrumentation of Python logging statements.

<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
The event loss mode in which new event records _overwrite_ older
event records when there's no sub-buffer space left to store them.

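The contrast between the two event loss modes can be sketched with a
small model. This is an illustrative toy only, not LTTng's actual
lock-free ring buffer implementation; the `Channel` class and all of
its names are hypothetical:

```python
class Channel:
    """Toy model of the two LTTng event loss modes (illustrative only)."""

    def __init__(self, sub_buf_count, sub_buf_size, mode="discard"):
        self.sub_buf_count = sub_buf_count  # sub-buffers in the ring buffer
        self.sub_buf_size = sub_buf_size    # event records per sub-buffer
        self.mode = mode                    # "discard" or "overwrite"
        self.sub_bufs = []                  # full sub-buffers awaiting the consumer
        self.current = []                   # sub-buffer currently being written
        self.lost = 0                       # event records lost (discard mode)

    def record(self, event_record):
        if len(self.current) == self.sub_buf_size:
            # The current sub-buffer is full: try to switch to a new one.
            if len(self.sub_bufs) == self.sub_buf_count - 1:
                if self.mode == "discard":
                    # Discard mode: the *new* event record is dropped.
                    self.lost += 1
                    return
                # Overwrite mode: the *oldest* full sub-buffer is reused,
                # destroying its event records.
                del self.sub_bufs[0]
            self.sub_bufs.append(self.current)
            self.current = []
        self.current.append(event_record)
```

With two sub-buffers of two event records each, recording six events
keeps the oldest records in discard mode and the newest records in
overwrite mode.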
<<channel-buffering-schemes,per-process buffering>>::
A buffering scheme in which each instrumented process has its own
sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
A buffering scheme in which all the processes of a Unix user share the
same sub-buffer for a given user space channel.

<<lttng-relayd,relay daemon>>::
A process which is responsible for receiving the trace data sent by
a distant consumer daemon.

ring buffer::
A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
A process which receives control commands from you and orchestrates
the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
A copy of the current data of all the sub-buffers of a given tracing
session, saved as trace files.

sub-buffer::
One part of an LTTng ring buffer which contains event records.

timestamp::
The time information attached to an event when it is emitted.

trace (_noun_)::
A set of files which are the concatenations of one or more
flushed sub-buffers.

trace (_verb_)::
The action of recording the events emitted by an application
or by a system, or of initiating such a recording by controlling
a tracer.

Trace Compass::
The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
An instrumentation point using the tracepoint mechanism of the Linux
kernel or of LTTng-UST.

tracepoint definition::
The definition of a single tracepoint.

tracepoint name::
The name of a tracepoint.

tracepoint provider::
A set of functions providing tracepoints to an instrumented user
application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
One or more tracepoint providers compiled as an object file or as
a shared library.

tracer::
A program which records emitted events.

<<domain,tracing domain>>::
A namespace for event sources.

tracing group::
The Unix group which a Unix user must be a member of to be allowed
to trace the Linux kernel.

<<tracing-session,tracing session>>::
A stateful dialogue between you and a <<lttng-sessiond,session
daemon>>.

user application::
An application running in user space, as opposed to a Linux kernel
module, for example.