1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
4 v2.6, May 26, 2016
5
6
7 include::../common/copyright.txt[]
8
9
10 include::../common/warning-not-maintained.txt[]
11
12
13 [[welcome]]
14 == Welcome!
15
16 Welcome to the **LTTng Documentation**!
17
18 The _Linux Trace Toolkit: next generation_ is an open source software
19 toolkit which you can use to simultaneously trace the Linux kernel, user
20 applications, and user libraries.
21
22 LTTng consists of:
23
24 * Kernel modules to trace the Linux kernel.
25 * Shared libraries to trace user applications written in C or C++.
26 * Java packages to trace Java applications which use `java.util.logging`
27 or Apache log4j 1.2.
28 * A kernel module to trace shell scripts and other user applications
29 without a dedicated instrumentation mechanism.
30 * Daemons and a command-line tool, cmd:lttng, to control the
31 LTTng tracers.
32
33 [NOTE]
34 .Open source documentation
35 ====
36 This is an **open documentation**: its source is available in a
37 https://github.com/lttng/lttng-docs[public Git repository].
38
39 Should you find any error in the content of this text, any grammatical
40 mistake, or any dead link, we would be very grateful if you would file a
41 GitHub issue for it or, even better, contribute a patch to this
42 documentation by creating a pull request.
43 ====
44
45
46 include::../common/audience.txt[]
47
48
49 [[chapters]]
50 === Chapter descriptions
51
52 What follows is a brief description of each of this documentation's
53 chapters. The chapters are ordered to make the reading
54 as linear as possible.
55
56 . <<nuts-and-bolts,Nuts and bolts>> explains the
57 rudiments of software tracing and the rationale behind the
58 LTTng project.
59 . <<installing-lttng,Installing LTTng>> is divided into
60 sections describing the steps needed to get a working installation
61 of LTTng packages for common Linux distributions and from its
62 source.
63 . <<getting-started,Getting started>> is a very concise guide to
64 get started quickly with LTTng kernel and user space tracing. This
65 chapter is recommended if you're new to LTTng or software tracing
66 in general.
67 . <<understanding-lttng,Understanding LTTng>> deals with some
68 core concepts and components of the LTTng suite. Understanding
69 those is important since the next chapter assumes you're familiar
70 with them.
71 . <<using-lttng,Using LTTng>> is a complete user guide of the
72 LTTng project. It shows in great detail how to instrument user
73 applications and the Linux kernel, and how to control tracing sessions
74 using the `lttng` command line tool, covering miscellaneous practical
75 use cases.
76 . <<reference,Reference>> contains references of LTTng components,
77 like links to online manpages and various APIs.
78
79 We recommend that you read the above chapters in this order, although
80 some of them may be skipped depending on your situation. You may skip
81 <<nuts-and-bolts,Nuts and bolts>> if you're familiar with tracing
82 and LTTng. Also, you may jump over <<installing-lttng,Installing LTTng>>
83 if LTTng is already properly installed on your target system.
84
85
86 include::../common/convention.txt[]
87
88
89 include::../common/acknowledgements.txt[]
90
91
92 [[whats-new]]
93 == What's new in LTTng {revision}?
94
95 Most of the changes of LTTng {revision} are bug fixes, making the toolchain
96 more stable than ever before. Still, LTTng {revision} adds some interesting
97 features to the project.
98
99 LTTng 2.5 already supported the instrumentation and tracing of
100 <<java-application,Java applications>> through `java.util.logging`
101 (JUL). LTTng {revision} goes one step further by supporting
102 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
103 The new log4j domain is selected using the `--log4j` option in various
104 commands of the `lttng` tool.
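
For example, assuming an instrumented Java application uses a log4j
logger named `org.example.MyClass` (a hypothetical name), its events
could be enabled like this:

[role="term"]
----
lttng enable-event --log4j org.example.MyClass
----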
105
106 LTTng-modules has supported system call tracing for a long time,
107 but until now, it was only possible to record either all of them,
108 or none of them. LTTng {revision} allows the user to record specific
109 system call events, for example:
110
111 [role="term"]
112 ----
113 lttng enable-event --kernel --syscall open,fork,chdir,pipe
114 ----
115
116 Finally, the `lttng` command line tool can now communicate not only
117 with humans, as it always could, but also with machines, thanks to its
118 new <<mi,machine interface>> feature.
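
As an illustrative sketch, the machine interface is selected with the
`--mi` option of the `lttng` tool, which makes commands emit XML output
suitable for parsing by other programs, for example:

[role="term"]
----
lttng --mi xml list
----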
119
120 To learn more about the new features of LTTng {revision}, see the
121 http://lttng.org/blog/2015/02/27/lttng-2.6-released/[release announcement].
122
123
124 [[nuts-and-bolts]]
125 == Nuts and bolts
126
127 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
128 generation_ is a modern toolkit for tracing Linux systems and
129 applications. So your first question might rather be: **what is
130 tracing?**
131
132
133 [[what-is-tracing]]
134 === What is tracing?
135
136 As the history of software engineering progressed and led to what
137 we now take for granted--complex, numerous and
138 interdependent software applications running in parallel on
139 sophisticated operating systems like Linux--the authors of such
140 components, or software developers, began feeling a natural
141 urge to have tools to ensure the robustness and good performance
142 of their masterpieces.
143
144 One major achievement in this field is, inarguably, the
145 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
146 which is an essential tool for developers to find and fix
147 bugs. But even the best debugger won't help make your software run
148 faster, and nowadays, faster software means either more work done by
149 the same hardware, or cheaper hardware for the same work.
150
151 A _profiler_ is often the tool of choice to identify performance
152 bottlenecks. Profiling is suitable to identify _where_ performance is
153 lost in a given software; the profiler outputs a profile, a
154 statistical summary of observed events, which you may use to discover
155 which functions took the most time to execute. However, a profiler
156 won't report _why_ some identified functions are the bottleneck.
157 Bottlenecks might only occur when specific conditions are met, sometimes
158 almost impossible to capture by a statistical profiler, or impossible to
159 reproduce with an application altered by the overhead of an event-based
160 profiler. For a thorough investigation of software performance issues,
161 a history of execution, with the recorded values of chosen variables
162 and context, is essential. This is where tracing comes in handy.
163
164 _Tracing_ is a technique used to understand what goes on in a running
165 software system. The software used for tracing is called a _tracer_,
166 which is conceptually similar to a tape recorder. When recording,
167 specific probes placed in the software source code generate events
168 that are saved on a giant tape: a _trace_ file. Both user applications
169 and the operating system may be traced at the same time, opening the
170 possibility of resolving a wide range of problems that are otherwise
171 extremely challenging.
172
173 Tracing is often compared to _logging_. However, tracers and loggers
174 are two different tools, serving two different purposes. Tracers are
175 designed to record much lower-level events that occur much more
176 frequently than log messages, often in the thousands per second range,
177 with very little execution overhead. Logging is more appropriate for
178 very high-level analysis of less frequent events: user accesses,
179 exceptional conditions (errors and warnings, for example), database
180 transactions, instant messaging communications, and such. More formally,
181 logging is one of several use cases that can be accomplished with
182 tracing.
183
184 The list of recorded events inside a trace file may be read manually
185 like a log file for the maximum level of detail, but it is generally
186 much more interesting to perform application-specific analyses to
187 produce reduced statistics and graphs that are useful to resolve a
188 given problem. Trace viewers and analysers are specialized tools
189 designed to do this.
190
191 So, in the end, this is what LTTng is: a powerful, open source set of
192 tools to trace the Linux kernel and user applications at the same time.
193 LTTng is composed of several components actively maintained and
194 developed by its link:/community/#where[community].
195
196
197 [[lttng-alternatives]]
198 === Alternatives to LTTng
199
200 Excluding proprietary solutions, a few competing software tracers
201 exist for Linux:
202
203 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
204 is the de facto function tracer of the Linux kernel. Its user
205 interface is a set of special files in sysfs.
206 * https://perf.wiki.kernel.org/[perf] is
207 a performance analyzing tool for Linux which supports hardware
208 performance counters, tracepoints, as well as other counters and
209 types of probes. perf's controlling utility is the `perf` command
210 line/curses tool.
211 * http://linux.die.net/man/1/strace[strace]
212 is a command line utility which records system calls made by a
213 user process, as well as signal deliveries and changes of process
214 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
215 to fulfill its function.
216 * https://sourceware.org/systemtap/[SystemTap]
217 is a Linux kernel and user space tracer which uses custom user scripts
218 to produce plain text traces. Scripts are converted to the C language,
219 then compiled as Linux kernel modules which are loaded to produce
220 trace data. SystemTap's primary user interface is the `stap`
221 command line tool.
222 * http://www.sysdig.org/[sysdig], like
223 SystemTap, uses scripts to analyze Linux kernel events. Scripts,
224 or _chisels_ in sysdig's jargon, are written in Lua and executed
225 while the system is being traced, or afterwards. sysdig's interface
226 is the `sysdig` command line tool as well as the curses-based
227 `csysdig` tool.
228
229 The main distinctive feature of LTTng is that it produces correlated
230 kernel and user space traces, and that it does so with the lowest
231 overhead amongst the solutions listed above. It produces trace files in
232 the http://diamon.org/ctf[CTF] format, a file format optimized
233 for the production and analysis of multi-gigabyte data. LTTng is the
234 result of close to 10 years of
235 active development by a community of passionate developers. LTTng {revision}
236 is currently available on some major desktop, server, and embedded Linux
237 distributions.
238
239 The main interface for tracing control is a single command line tool
240 named `lttng`. The latter can create several tracing sessions,
241 enable/disable events on the fly, filter them efficiently with custom
242 user expressions, start/stop tracing, and do much more. Traces can be
243 recorded on disk or sent over the network, kept totally or partially,
244 and viewed either in real time or once tracing becomes inactive.
245
246 <<installing-lttng,Install LTTng now>> and start tracing!
247
248
249 [[installing-lttng]]
250 == Installing LTTng
251
252 include::../common/warning-no-installation.txt[]
253
254 **LTTng** is a set of software components which interact to allow
255 instrumenting the Linux kernel and user applications as well as
256 controlling tracing sessions (starting/stopping tracing,
257 enabling/disabling events, and more). Those components are bundled into
258 the following packages:
259
260 LTTng-tools::
261 Libraries and command line interface to control tracing sessions.
262
263 LTTng-modules::
264 Linux kernel modules for tracing the kernel.
265
266 LTTng-UST::
267 User space tracing library.
268
269 Most distributions mark the LTTng-modules and LTTng-UST packages as
270 optional. Note that LTTng-modules is only required if you intend to
271 trace the Linux kernel and LTTng-UST is only required if you intend to
272 trace user space applications.
273
274
275 [[getting-started]]
276 == Getting started with LTTng
277
278 This is a small guide to get started quickly with LTTng kernel and user
279 space tracing. For a more thorough understanding of LTTng and for intermediate
280 to advanced use cases, see <<understanding-lttng,Understanding LTTng>>
281 and <<using-lttng,Using LTTng>>.
282
283 Before reading this guide, make sure LTTng
284 <<installing-lttng,is installed>>. LTTng-tools is required. Also install
285 LTTng-modules for
286 <<tracing-the-linux-kernel,tracing the Linux kernel>> and LTTng-UST
287 for
288 <<tracing-your-own-user-application,tracing your own user space applications>>.
289 Once the traces are written and complete, the
290 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
291 section of this chapter will help you analyze the recorded tracepoint
292 events.
293
294
295 [[tracing-the-linux-kernel]]
296 === Tracing the Linux kernel
297
298 Make sure LTTng-tools and LTTng-modules packages
299 <<installing-lttng,are installed>>.
300
301 Since you're about to trace the Linux kernel itself, let's look at the
302 available kernel events using the `lttng` tool, which has a
303 Git-like command line structure:
304
305 [role="term"]
306 ----
307 lttng list --kernel
308 ----
309
310 Before tracing, you need to create a session:
311
312 [role="term"]
313 ----
314 sudo lttng create
315 ----
316
317 TIP: You can avoid using `sudo` in the previous and following commands
318 if your user is a member of the <<lttng-sessiond,tracing group>>.
319
320 Let's now enable some events for this session:
321
322 [role="term"]
323 ----
324 sudo lttng enable-event --kernel sched_switch,sched_process_fork
325 ----
326
327 Or you might want to simply enable all available kernel events (beware
328 that trace files grow rapidly when doing this):
329
330 [role="term"]
331 ----
332 sudo lttng enable-event --kernel --all
333 ----
334
335 Start tracing:
336
337 [role="term"]
338 ----
339 sudo lttng start
340 ----
341
342 By default, traces are saved in
343 +\~/lttng-traces/__name__-__date__-__time__+,
344 where +__name__+ is the session name.
345
346 When you're done tracing:
347
348 [role="term"]
349 ----
350 sudo lttng stop
351 sudo lttng destroy
352 ----
353
354 Although `destroy` looks scary here, it doesn't actually destroy the
355 written trace files: it only destroys the tracing session.
356
357 What's next? Have a look at
358 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
359 to view and analyze the trace you just recorded.
360
361
362 [[tracing-your-own-user-application]]
363 === Tracing your own user application
364
365 The previous section helped you create a trace out of Linux kernel
366 events. This section steps you through a simple example showing you how
367 to trace a _Hello world_ program written in C.
368
369 Make sure the LTTng-tools and LTTng-UST packages
370 <<installing-lttng,are installed>>.
371
372 Tracing is just like having `printf()` calls at specific locations of
373 your source code, albeit LTTng is much faster and more flexible than
374 `printf()`. In the LTTng realm, **`tracepoint()`** is analogous to
375 `printf()`.
376
377 Unlike `printf()`, though, `tracepoint()` does not use a format string to
378 know the types of its arguments: the formats of all tracepoints must be
379 defined before using them. So before even writing our _Hello world_ program,
380 we need to define the format of our tracepoint. This is done by creating a
381 **tracepoint provider**, which consists of a tracepoint provider header
382 (`.h` file) and a tracepoint provider definition (`.c` file).
383
384 The tracepoint provider header contains some boilerplate as well as a
385 list of tracepoint definitions and other optional definition entries
386 which we skip for this quickstart. Each tracepoint is defined using the
387 `TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:
388
389 * a **provider name**, which is the "scope" or namespace of this
390 tracepoint (this usually includes the company and project names)
391 * a **tracepoint name**
392 * a **list of arguments** for the eventual `tracepoint()` call, each
393 item being:
394 ** the argument C type
395 ** the argument name
396 * a **list of fields**, which correspond to the actual fields of the
397 recorded events for this tracepoint
398
399 Here's an example of a simple tracepoint provider header with two
400 arguments: an integer and a string:
401
402 [source,c]
403 ----
404 #undef TRACEPOINT_PROVIDER
405 #define TRACEPOINT_PROVIDER hello_world
406
407 #undef TRACEPOINT_INCLUDE
408 #define TRACEPOINT_INCLUDE "./hello-tp.h"
409
410 #if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
411 #define _HELLO_TP_H
412
413 #include <lttng/tracepoint.h>
414
415 TRACEPOINT_EVENT(
416     hello_world,
417     my_first_tracepoint,
418     TP_ARGS(
419         int, my_integer_arg,
420         char*, my_string_arg
421     ),
422     TP_FIELDS(
423         ctf_string(my_string_field, my_string_arg)
424         ctf_integer(int, my_integer_field, my_integer_arg)
425     )
426 )
427
428 #endif /* _HELLO_TP_H */
429
430 #include <lttng/tracepoint-event.h>
431 ----
432
433 The exact syntax is well explained in the
434 <<c-application,C application>> instrumentation guide of the
435 <<using-lttng,Using LTTng>> chapter, as well as in
436 man:lttng-ust(3).
437
438 Save the above snippet as path:{hello-tp.h}.
439
440 Write the tracepoint provider definition as path:{hello-tp.c}:
441
442 [source,c]
443 ----
444 #define TRACEPOINT_CREATE_PROBES
445 #define TRACEPOINT_DEFINE
446
447 #include "hello-tp.h"
448 ----
449
450 Create the tracepoint provider:
451
452 [role="term"]
453 ----
454 gcc -c -I. hello-tp.c
455 ----
456
457 Now, by including path:{hello-tp.h} in your own application, you may use the
458 tracepoint defined above by properly referring to it when calling
459 `tracepoint()`:
460
461 [source,c]
462 ----
463 #include <stdio.h>
464 #include "hello-tp.h"
465
466 int main(int argc, char *argv[])
467 {
468     int x;
469
470     puts("Hello, World!\nPress Enter to continue...");
471
472     /*
473      * The following getchar() call is only placed here for the purpose
474      * of this demonstration, for pausing the application in order for
475      * you to have time to list its events. It's not needed otherwise.
476      */
477     getchar();
478
479     /*
480      * A tracepoint() call. Arguments, as defined in hello-tp.h:
481      *
482      * 1st: provider name (always)
483      * 2nd: tracepoint name (always)
484      * 3rd: my_integer_arg (first user-defined argument)
485      * 4th: my_string_arg (second user-defined argument)
486      *
487      * Notice the provider and tracepoint names are NOT strings;
488      * they are in fact parts of variables created by macros in
489      * hello-tp.h.
490      */
491     tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
492
493     for (x = 0; x < argc; ++x) {
494         tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
495     }
496
497     puts("Quitting now!");
498
499     tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
500
501     return 0;
502 }
503 ----
504
505 Save this as path:{hello.c}, next to path:{hello-tp.c}.
506
507 Notice path:{hello-tp.h}, the tracepoint provider header, is included
508 by path:{hello.c}.
509
510 You are now ready to compile the application with LTTng-UST support:
511
512 [role="term"]
513 ----
514 gcc -c hello.c
515 gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
516 ----
517
518 Here's the whole build process:
519
520 [role="img-100"]
521 .User space tracing's build process.
522 image::ust-flow.png[]
523
524 If you followed the
525 <<tracing-the-linux-kernel,Tracing the Linux kernel>> tutorial, the
526 following steps should look familiar.
527
528 First, run the application with a few arguments:
529
530 [role="term"]
531 ----
532 ./hello world and beyond
533 ----
534
535 You should see
536
537 ----
538 Hello, World!
539 Press Enter to continue...
540 ----
541
542 Use the `lttng` tool to list all available user space events:
543
544 [role="term"]
545 ----
546 lttng list --userspace
547 ----
548
549 You should see the `hello_world:my_first_tracepoint` tracepoint listed
550 under the `./hello` process.
551
552 Create a tracing session:
553
554 [role="term"]
555 ----
556 lttng create
557 ----
558
559 Enable the `hello_world:my_first_tracepoint` tracepoint:
560
561 [role="term"]
562 ----
563 lttng enable-event --userspace hello_world:my_first_tracepoint
564 ----
565
566 Start tracing:
567
568 [role="term"]
569 ----
570 lttng start
571 ----
572
573 Go back to the running `hello` application and press Enter. All `tracepoint()`
574 calls are executed and the program finally exits.
575
576 Stop tracing:
577
578 [role="term"]
579 ----
580 lttng stop
581 ----
582
583 Done! You may use `lttng view` to list the recorded events. This command
584 starts http://diamon.org/babeltrace[`babeltrace`]
585 in the background, if it's installed:
586
587 [role="term"]
588 ----
589 lttng view
590 ----
591
592 should output something like:
593
594 ----
595 [18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
596 [18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
597 [18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
598 [18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
599 [18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
600 [18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }
601 ----
602
603 When you're done, you may destroy the tracing session, which does _not_
604 destroy the generated trace files, leaving them available for further
605 analysis:
606
607 [role="term"]
608 ----
609 lttng destroy
610 ----
611
612 The next section presents other alternatives to view and analyze your
613 LTTng traces.
614
615
616 [[viewing-and-analyzing-your-traces]]
617 === Viewing and analyzing your traces
618
619 This section describes how to visualize the data gathered after tracing
620 the Linux kernel or a user space application.
621
622 Many ways exist to read LTTng traces:
623
624 * **`babeltrace`** is a command line utility which converts trace formats;
625 it supports the format used by LTTng,
626 CTF, as well as a basic
627 text output which may be ++grep++ed. The `babeltrace` command is
628 part of the
629 http://diamon.org/babeltrace[Babeltrace] project.
630 * Babeltrace also includes **Python bindings** so that you may
631 easily open and read an LTTng trace with your own script, benefiting
632 from the power of Python.
633 * **http://tracecompass.org/[Trace Compass]**
634 is an Eclipse plugin used to visualize and analyze various types of
635 traces, including LTTng's. It also comes as a standalone application.
636
637 LTTng trace files are usually recorded in the dir:{~/lttng-traces} directory.
638 Let's now view the trace and perform a basic analysis using
639 `babeltrace`.
640
641 The simplest way to list all the recorded events of a trace is to pass its
642 path to `babeltrace` with no options:
643
644 [role="term"]
645 ----
646 babeltrace ~/lttng-traces/my-session
647 ----
648
649 `babeltrace` finds all traces recursively within the given path and
650 prints all their events, merging them in order of time.
651
652 Listing all the system calls of a Linux kernel trace with their arguments is
653 easy with `babeltrace` and `grep`:
654
655 [role="term"]
656 ----
657 babeltrace ~/lttng-traces/my-kernel-session | grep sys_
658 ----
659
660 Counting events is also straightforward:
661
662 [role="term"]
663 ----
664 babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines
665 ----
666
667 The text output of `babeltrace` is useful for isolating events by simple
668 matching using `grep` and similar utilities. However, more elaborate filters
669 such as keeping only events with a field value falling within a specific range
670 are not trivial to write using a shell. Moreover, reductions and even the
671 most basic computations involving multiple events are virtually impossible
672 to implement.
673
674 Fortunately, Babeltrace ships with Python 3 bindings which make it
675 easy to read the events of an LTTng trace sequentially and compute
676 the desired information.
677
678 Here's a simple example using the Babeltrace Python bindings. The following
679 script accepts an LTTng Linux kernel trace path as its first argument and
680 prints the short names of the top 5 running processes on CPU 0 during the
681 whole trace:
682
683 [source,python]
684 ----
685 import sys
686 from collections import Counter
687 import babeltrace
688
689
690 def top5proc():
691     if len(sys.argv) != 2:
692         msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
693         raise ValueError(msg)
694
695     # a trace collection holds one to many traces
696     col = babeltrace.TraceCollection()
697
698     # add the trace provided by the user
699     # (LTTng traces always have the 'ctf' format)
700     if col.add_trace(sys.argv[1], 'ctf') is None:
701         raise RuntimeError('Cannot add trace')
702
703     # this counter dict will hold execution times:
704     #
705     # task command name -> total execution time (ns)
706     exec_times = Counter()
707
708     # this holds the last `sched_switch` timestamp
709     last_ts = None
710
711     # iterate events
712     for event in col.events:
713         # keep only `sched_switch` events
714         if event.name != 'sched_switch':
715             continue
716
717         # keep only events which happened on CPU 0
718         if event['cpu_id'] != 0:
719             continue
720
721         # event timestamp
722         cur_ts = event.timestamp
723
724         if last_ts is None:
725             # we start here
726             last_ts = cur_ts
727
728         # previous task command (short) name
729         prev_comm = event['prev_comm']
730
731         # initialize entry in our dict if not yet done
732         if prev_comm not in exec_times:
733             exec_times[prev_comm] = 0
734
735         # compute previous command execution time
736         diff = cur_ts - last_ts
737
738         # update execution time of this command
739         exec_times[prev_comm] += diff
740
741         # update last timestamp
742         last_ts = cur_ts
743
744     # display the top 5
745     for name, ns in exec_times.most_common(5):
746         s = ns / 1000000000
747         print('{:20}{} s'.format(name, s))
748
749
750 if __name__ == '__main__':
751     top5proc()
752 ----
753
754 Save this script as path:{top5proc.py} and run it with Python 3, providing the
755 path to an LTTng Linux kernel trace as the first argument:
756
757 [role="term"]
758 ----
759 python3 top5proc.py ~/lttng-traces/my-session-.../kernel
760 ----
761
762 Make sure the path you provide is the directory containing actual trace
763 files (`channel0_0`, `metadata`, and the rest): the `babeltrace` utility
764 recurses directories, but the Python bindings do not.
765
766 Here's an example of output:
767
768 ----
769 swapper/0 48.607245889 s
770 chromium 7.192738188 s
771 pavucontrol 0.709894415 s
772 Compositor 0.660867933 s
773 Xorg.bin 0.616753786 s
774 ----
775
776 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
777 weren't using the CPU that much when tracing, its first position in the list
778 makes sense.
779
780
781 [[understanding-lttng]]
782 == Understanding LTTng
783
784 If you're going to use LTTng in any serious way, it is fundamental that
785 you become familiar with its core concepts. Technical terms like
786 _tracing sessions_, _domains_, _channels_ and _events_ are used over
787 and over in the <<using-lttng,Using LTTng>> chapter,
788 and it is assumed that you understand what they mean when reading it.
789
790 LTTng, as you already know, is a _toolkit_. It would be wrong
791 to call it a simple _tool_ since it is composed of multiple interacting
792 components. This chapter also describes the latter, providing details
793 about their respective roles and how they connect together to form
794 the current LTTng ecosystem.
795
796
797 [[core-concepts]]
798 === Core concepts
799
800 This section explains the various elementary concepts a user has to deal
801 with when using LTTng. They are:
802
803 * <<tracing-session,tracing session>>
804 * <<domain,domain>>
805 * <<channel,channel>>
806 * <<event,event>>
807
808
809 [[tracing-session]]
810 ==== Tracing session
811
812 A _tracing session_ is--like any session--a container of
813 state. Anything that is done when tracing using LTTng happens in the
814 scope of a tracing session. In this regard, it is analogous to a bank
815 website's session: you can't interact online with your bank account
816 unless you are logged in a session, except for reading a few static
817 webpages (LTTng, too, can report some static information that does not
818 need a created tracing session).
819
820 A tracing session holds the following attributes and objects (some of
821 which are described in the following sections):
822
823 * a name
824 * the tracing state (tracing started or stopped)
825 * the trace data output path/URL (local path or sent over the network)
826 * a mode (normal, snapshot or live)
827 * the snapshot output paths/URLs (if applicable)
828 * for each <<domain,domain>>, a list of <<channel,channels>>
829 * for each channel:
830 ** a name
831 ** the channel state (enabled or disabled)
832 ** its parameters (event loss mode, sub-buffers size and count,
833 timer periods, output type, trace files size and count, and the rest)
834 ** a list of added context information
835 ** a list of <<event,events>>
836 * for each event:
837 ** its state (enabled or disabled)
838 ** a list of instrumentation points (tracepoints, system calls,
839 dynamic probes, other types of probes)
840 ** associated log levels
841 ** a filter expression
842
843 All this information is completely isolated between tracing sessions.
844 As you can see in the list above, even the tracing state
845 is a per-tracing session attribute, so that you may trace your target
846 system/application in a given tracing session with a specific
847 configuration while another one stays inactive.
848
849 [role="img-100"]
850 .A _tracing session_ is a container of domains, channels, and events.
851 image::concepts.png[]
852
853 Conceptually, a tracing session is a per-user object; the
854 <<plumbing,Plumbing>> section shows how this is actually
855 implemented. Any user may create as many concurrent tracing sessions
856 as desired.
857
858 [role="img-100"]
859 .Each user may create as many tracing sessions as desired.
860 image::many-sessions.png[]
861
862 The trace data generated in a tracing session may be either saved
863 to disk, sent over the network or not saved at all (in which case
864 snapshots may still be saved to disk or sent to a remote machine).
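
For instance, here's a hedged sketch of a snapshot-mode session, where no
continuous output is written and trace data stays in the ring buffers
until a snapshot is explicitly recorded (the session name is hypothetical;
run the kernel domain commands as `root` or as a member of the tracing
group):

[role="term"]
----
lttng create my-session --snapshot
lttng enable-event --kernel --all
lttng start
lttng snapshot record
----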
865
866
867 [[domain]]
868 ==== Domain
869
870 A tracing _domain_ is the official term the LTTng project uses to
871 designate a tracer category.
872
873 There are currently four known domains:
874
875 * Linux kernel
876 * user space
877 * `java.util.logging` (JUL)
878 * log4j
879
880 Different tracers expose common features in their own interfaces, but,
881 from a user's perspective, you still need to target a specific type of
882 tracer to perform some actions. For example, since both kernel and user
883 space tracers support named tracepoints (probes manually inserted in
884 source code), you need to specify which one is concerned when enabling
885 an event because both domains could have existing events with the same
886 name.
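
For example, the domain is selected with an option of the
`lttng enable-event` command (the user space tracepoint name below is
a hypothetical example):

[role="term"]
----
lttng enable-event --kernel sched_switch
lttng enable-event --userspace my_app:my_tracepoint
----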
887
888 Some features are not available in all domains. Filtering enabled
889 events using custom expressions, for example, is currently not
890 supported in the kernel domain, but support could be added in the
891 future.
892
893
894 [[channel]]
895 ==== Channel
896
897 A _channel_ is a set of events with specific parameters and potential
898 added context information. Channels have unique names per domain within
899 a tracing session. A given event is always registered to at least one
900 channel; enabling the same event in two channels causes it to be
901 recorded twice every time it occurs.
902
903 Channels may be individually enabled or disabled. Events occurring in
904 a disabled channel are never recorded.
905
906 The fundamental role of a channel is to keep a shared ring buffer, where
907 events are eventually recorded by the tracer and consumed by a consumer
908 daemon. This internal ring buffer is divided into many sub-buffers of
909 equal size.
910
911 Channels, when created, may be fine-tuned thanks to a few parameters,
912 many of them related to sub-buffers. The following subsections explain
913 what those parameters are and in which situations you should manually
914 adjust them.
915
916
917 [[channel-overwrite-mode-vs-discard-mode]]
918 ===== Overwrite and discard event loss modes
919
920 As previously mentioned, a channel's ring buffer is divided into many
921 equally sized sub-buffers.
922
923 As events occur, they are serialized as trace data into a specific
924 sub-buffer (yellow arc in the following animation) until it is full:
925 when this happens, the sub-buffer is marked as consumable (red) and
926 another, _empty_ (white) sub-buffer starts receiving the following
927 events. The marked sub-buffer is eventually consumed by a consumer
928 daemon (returns to white).
929
930 [NOTE]
931 [role="docsvg-channel-subbuf-anim"]
932 ====
933 {note-no-anim}
934 ====
935
936 In an ideal world, sub-buffers are consumed faster than they are filled,
937 as is the case above. In the real world, however, all sub-buffers could be
938 full at some point, leaving no space to record the following events. By
939 design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer
940 exists, losing events is acceptable when the alternative would be to
941 cause substantial delays in the instrumented application's execution.
942 LTTng privileges performance over integrity, aiming at perturbing the
943 traced system as little as possible in order to make tracing of subtle
944 race conditions and rare interrupt cascades possible.
945
946 When it comes to losing events because no empty sub-buffer is available,
947 the channel's _event loss mode_ determines which of the following to do:
948
949 Discard::
950 Drop the newest events until a sub-buffer is released.
951
952 Overwrite::
953 Clear the sub-buffer containing the oldest recorded
954 events and start recording the newest events there. This mode is
955 sometimes called _flight recorder mode_ because it behaves like a
956 flight recorder: always keep a fixed amount of the latest data.
957
958 Which mechanism you should choose depends on your context: prioritize
959 the newest or the oldest events in the ring buffer?
960
961 Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon
962 as a new event doesn't find an empty sub-buffer, whereas in discard
963 mode, only the event that doesn't fit is discarded.
964
965 Also note that a count of lost events is incremented and saved in
966 the trace itself when an event is lost in discard mode, whereas no
967 information is kept when a sub-buffer gets overwritten before being
968 committed.
969
970 There are known ways to decrease your probability of losing events. The
971 next section shows how tuning the sub-buffers count and size can be
972 used to virtually stop losing events.
973
974
975 [[channel-subbuf-size-vs-subbuf-count]]
976 ===== Sub-buffers count and size
977
978 For each channel, an LTTng user may set its number of sub-buffers and
979 their size.
980
981 Note that the tracer introduces a noticeable CPU overhead when
982 switching sub-buffers (marking a full one as consumable and switching
983 to an empty one for the following events to be recorded). Knowing this,
984 the following list presents a few practical situations along with how
985 to configure sub-buffers for them:
986
987 High event throughput::
988 In general, prefer bigger sub-buffers to
989 lower the risk of losing events. Having bigger sub-buffers
990 also ensures a lower sub-buffer switching frequency. The number of
991 sub-buffers is only meaningful if the channel is enabled in
992 overwrite mode: in this case, if a sub-buffer overwrite happens, the
993 other sub-buffers are left unaltered.
994
995 Low event throughput::
996 In general, prefer smaller sub-buffers
997 since the risk of losing events is already low. Since events
998 happen less frequently, the sub-buffer switching frequency should
999 remain low and thus the tracer's overhead should not be a problem.
1000
1001 Low memory system::
1002 If your target system has a low memory
1003 limit, prefer fewer sub-buffers first, then smaller ones. Even if the
1004 system is limited in memory, you want to keep the sub-buffers as
1005 big as possible to avoid a high sub-buffer switching frequency.
1006
1007 You should know that LTTng uses CTF as its trace format, which means
1008 event data is very compact. For example, the average LTTng Linux kernel
1009 event weighs about 32{nbsp}bytes. A sub-buffer size of 1{nbsp}MiB is
1010 thus considered big.
1011
1012 The previous situations highlight the major trade-off between a few big
1013 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1014 frequency vs. how much data is lost in overwrite mode. Assuming a
1015 constant event throughput and using the overwrite mode, the two
1016 following configurations have the same ring buffer total size:
1017
1018 [NOTE]
1019 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1020 ====
1021 {note-no-anim}
1022 ====
1023
1024 * **2 sub-buffers of 4 MiB each** lead to a very low sub-buffer
1025 switching frequency, but if a sub-buffer overwrite happens, half of
1026 the recorded events so far (4{nbsp}MiB) are definitely lost.
1027 * **8 sub-buffers of 1 MiB each** lead to 4{nbsp}times the tracer's
1028 overhead of the previous configuration, but if a sub-buffer
1029 overwrite happens, only one eighth of the events recorded so far
1030 is definitely lost.
1031
1032 In discard mode, the sub-buffers count parameter is pointless: use two
1033 sub-buffers and set their size according to the requirements of your
1034 situation.
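
As an illustrative sketch, such parameters are set when creating a
channel with the `lttng` tool; the channel names and values below are
hypothetical, and sizes may also be given in bytes:

[role="term"]
----
lttng enable-channel --kernel big-subbuf-chan --subbuf-size=4M --num-subbuf=2 --overwrite
lttng enable-channel --kernel small-subbuf-chan --subbuf-size=1M --num-subbuf=8 --overwrite
----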
1035
1036
1037 [[channel-switch-timer]]
1038 ===== Switch timer
1039
1040 The _switch timer_ period is another important configurable feature of
1041 channels to ensure periodic sub-buffer flushing.
1042
1043 When the _switch timer_ fires, a sub-buffer switch happens. This timer
1044 may be used to ensure that event data is consumed and committed to
1045 trace files periodically in case of a low event throughput:
1046
1047 [NOTE]
1048 [role="docsvg-channel-switch-timer"]
1049 ====
1050 {note-no-anim}
1051 ====
1052
1053 It's also convenient when big sub-buffers are used to cope with
1054 sporadic high event throughput, even if the throughput is normally
1055 lower.
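
As an illustrative example, the switch timer period is set, in
microseconds, when creating a channel; the channel name and value below
are hypothetical:

[role="term"]
----
lttng enable-channel --userspace my-channel --switch-timer=500000
----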
1056
1057
1058 [[channel-buffering-schemes]]
1059 ===== Buffering schemes
1060
1061 In the user space tracing domain, two **buffering schemes** are
1062 available when creating a channel:
1063
1064 Per-PID buffering::
1065 Keep one ring buffer per process.
1066
1067 Per-UID buffering::
1068 Keep one ring buffer for all processes of a single user.
1069
1070 The per-PID buffering scheme consumes more memory than the per-UID
1071 option if more than one process is instrumented for LTTng-UST. However,
1072 per-PID buffering ensures that one process having a high event
1073 throughput won't fill all the shared sub-buffers, only its own.
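
Here's a hedged sketch of choosing the buffering scheme when creating a
user space channel (the channel names are hypothetical):

[role="term"]
----
lttng enable-channel --userspace per-pid-chan --buffers-pid
lttng enable-channel --userspace per-uid-chan --buffers-uid
----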
1074
1075 The Linux kernel tracing domain only has one available buffering scheme
1076 which is to use a single ring buffer for the whole system.
1077
1078
1079 [[event]]
1080 ==== Event
1081
1082 An _event_, in LTTng's realm, is a term often used metonymically,
1083 having multiple definitions depending on the context:
1084
1085 . When tracing, an event is a _point in space-time_. Space, in a
1086 tracing context, is the set of all executable positions of a
1087 compiled application by a logical processor. When a program is
1088 executed by a processor and some instrumentation point, or
1089 _probe_, is encountered, an event occurs. This event is accompanied
1090 by some contextual payload (values of specific variables at this
1091 point of execution) which may or may not be recorded.
1092 . In the context of a recorded trace file, the term _event_ implies
1093 a _recorded event_.
1094 . When configuring a tracing session, _enabled events_ refer to
1095 specific rules which could lead to the transfer of actual
1096 occurring events (1) to recorded events (2).
1097
1098 The whole <<core-concepts,Core concepts>> section focuses on the
1099 third definition. An event is always registered to _one or more_
1100 channels and may be enabled or disabled at will per channel. A disabled
1101 event never leads to a recorded event, even if its channel is enabled.
1102
1103 An event (3) is enabled with a few conditions that must _all_ be met
1104 when an event (1) happens in order to generate a recorded event (2):
1105
1106 . A _probe_ or group of probes in the traced application must be
1107 executed.
1108 . **Optionally**, the probe must have a log level matching a
1109 log level range specified when enabling the event.
1110 . **Optionally**, the occurring event must satisfy a custom
1111 expression, or _filter_, specified when enabling the event.
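
For example, conditions 2 and 3 above translate to options of the
`lttng enable-event` command; the tracepoint name and field used here
are hypothetical:

[role="term"]
----
lttng enable-event --userspace my_app:my_tracepoint --loglevel=TRACE_INFO --filter='my_field > 10'
----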
1112
1113
1114 [[plumbing]]
1115 === Plumbing
1116
1117 The previous section described the concepts at the heart of LTTng.
1118 This section summarizes LTTng's implementation: how those objects are
1119 managed by different applications and libraries working together to
1120 form the toolkit.
1121
1122
1123 [[plumbing-overview]]
1124 ==== Overview
1125
1126 As <<installing-lttng,mentioned previously>>, the whole LTTng suite
1127 is made of the LTTng-tools, LTTng-UST, and
1128 LTTng-modules packages. Together, they provide different daemons, libraries,
1129 kernel modules and command line interfaces. The following tree shows
1130 which usable component belongs to which package:
1131
1132 * **LTTng-tools**:
1133 ** session daemon (`lttng-sessiond`)
1134 ** consumer daemon (`lttng-consumerd`)
1135 ** relay daemon (`lttng-relayd`)
1136 ** tracing control library (`liblttng-ctl`)
1137 ** tracing control command line tool (`lttng`)
1138 * **LTTng-UST**:
1139 ** user space tracing library (`liblttng-ust`) and its headers
1140 ** preloadable user space tracing helpers
1141 (`liblttng-ust-libc-wrapper`, `liblttng-ust-pthread-wrapper`,
1142 `liblttng-ust-cyg-profile`, `liblttng-ust-cyg-profile-fast`
1143 and `liblttng-ust-dl`)
1144 ** user space tracepoint code generator command line tool
1145 (`lttng-gen-tp`)
1146 ** `java.util.logging`/log4j tracepoint providers
1147 (`liblttng-ust-jul-jni` and `liblttng-ust-log4j-jni`) and JAR
1148 file (path:{liblttng-ust-agent.jar})
1149 * **LTTng-modules**:
1150 ** LTTng Linux kernel tracer module
1151 ** tracing ring buffer kernel modules
1152 ** many LTTng probe kernel modules
1153
1154 The following diagram shows how the most important LTTng components
1155 interact. Plain purple arrows represent trace data paths while dashed
1156 red arrows indicate control communications. The LTTng relay daemon is
1157 shown running on a remote system, although it could as well run on the
1158 target (monitored) system.
1159
1160 [role="img-100"]
1161 .Control and data paths between LTTng components.
1162 image::plumbing-26.png[]
1163
1164 Each component is described in the following subsections.
1165
1166
1167 [[lttng-sessiond]]
1168 ==== Session daemon
1169
1170 At the heart of LTTng's plumbing is the _session daemon_, often called
1171 by its command name, `lttng-sessiond`.
1172
1173 The session daemon is responsible for managing tracing sessions and
1174 what they logically contain (channel properties, enabled/disabled
1175 events, and the rest). By communicating locally with instrumented
1176 applications (using LTTng-UST) and with the LTTng Linux kernel modules
1177 (LTTng-modules), it oversees all tracing activities.
1178
1179 One of the many things that `lttng-sessiond` does is to keep
1180 track of the available event types. User space applications and
1181 libraries actively connect and register to the session daemon when they
1182 start. By contrast, `lttng-sessiond` seeks out and loads the appropriate
1183 LTTng kernel modules as part of its own initialization. Kernel event
1184 types are _pulled_ by `lttng-sessiond`, whereas user space event types
1185 are _pushed_ to it by the various user space tracepoint providers.
1186
1187 Using a specific inter-process communication protocol with Linux kernel
1188 and user space tracers, the session daemon can send channel information
1189 so that channels are initialized, enable or disable specific probes based on
1190 events enabled or disabled by the user, send event filter information to
1191 LTTng tracers so that filtering actually happens at the tracer site,
1192 start/stop tracing a specific application or the Linux kernel, and more.
1193
1194 The session daemon is not useful without some user controlling it,
1195 because it's only a sophisticated control interchange and thus
1196 doesn't make any decision on its own. `lttng-sessiond` opens a local
1197 socket for controlling it, albeit the preferred way to control it is
1198 using `liblttng-ctl`, an installed C library hiding the communication
1199 protocol behind an easy-to-use API. The `lttng` tool makes use of
1200 `liblttng-ctl` to implement a user-friendly command line interface.
1201
1202 `lttng-sessiond` does not receive any trace data from instrumented
1203 applications; the _consumer daemons_ are the programs responsible for
1204 collecting trace data using shared ring buffers. However, the session
1205 daemon is the one that must spawn a consumer daemon and establish
1206 a control communication with it.
1207
1208 Session daemons run on a per-user basis. Knowing this, multiple
1209 instances of `lttng-sessiond` may run simultaneously, each belonging
1210 to a different user and each operating independently of the others.
1211 Only `root`'s session daemon, however, may control LTTng kernel modules
1212 (that is, the kernel tracer). With that in mind, if a user has no root
1213 access on the target system, they cannot trace the system's kernel, but
1214 can still trace their own instrumented applications.
1215
1216 It has to be noted that, although only `root`'s session daemon may
1217 control the kernel tracer, the `lttng-sessiond` command has a `--group`
1218 option which may be used to specify the name of a special user group
1219 allowed to communicate with `root`'s session daemon and thus record
1220 kernel traces. By default, this group is named `tracing`.
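
For example, a user (here, the hypothetical `alice`) can be added to the
`tracing` group like this:

[role="term"]
----
sudo usermod --append --groups tracing alice
----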
1221
1222 If not done yet, the `lttng` tool, by default, automatically starts a
1223 session daemon. `lttng-sessiond` may also be started manually:
1224
1225 [role="term"]
1226 ----
1227 lttng-sessiond
1228 ----
1229
1230 This starts the session daemon in the foreground. Use
1231
1232 [role="term"]
1233 ----
1234 lttng-sessiond --daemonize
1235 ----
1236
1237 to start it as a true daemon.
1238
1239 To kill the current user's session daemon, `pkill` may be used:
1240
1241 [role="term"]
1242 ----
1243 pkill lttng-sessiond
1244 ----
1245
1246 The default `SIGTERM` signal terminates it cleanly.
1247
1248 Several other options are available and described in
1249 man:lttng-sessiond(8) or by running `lttng-sessiond --help`.
1250
1251
1252 [[lttng-consumerd]]
1253 ==== Consumer daemon
1254
1255 The _consumer daemon_, or `lttng-consumerd`, is a program sharing some
1256 ring buffers with user applications or the LTTng kernel modules to
1257 collect trace data and output it at some place (on disk or sent over
1258 the network to an LTTng relay daemon).
1259
1260 Consumer daemons are created by a session daemon as soon as events are
1261 enabled within a tracing session, well before tracing is activated
1262 for the latter. Entirely managed by session daemons,
1263 consumer daemons survive session destruction to be reused later,
1264 should a new tracing session be created. Consumer daemons are always
1265 owned by the same user as their session daemon. When its owner session
1266 daemon is killed, the consumer daemon also exits. This is because
1267 the consumer daemon is always the child process of a session daemon.
1268 Consumer daemons should never be started manually. For this reason,
1269 they are not installed in one of the usual locations listed in the
1270 `PATH` environment variable. `lttng-sessiond` has, however, a
1271 bunch of options (see man:lttng-sessiond(8)) to
1272 specify custom consumer daemon paths if, for some reason, a consumer
1273 daemon other than the default installed one is needed.
1274
1275 There are up to two running consumer daemons per user, whereas only one
1276 session daemon may run per user. This is because each process has
1277 independent bitness: if the target system runs a mixture of 32-bit and
1278 64-bit processes, it is more efficient to have separate corresponding
1279 32-bit and 64-bit consumer daemons. The `root` user is an exception: it
1280 may have up to _three_ running consumer daemons: 32-bit and 64-bit
1281 instances for its user space applications and one more reserved for
1282 collecting kernel trace data.
1283
1284 As new tracing domains are added to LTTng, the development community's
1285 intent is to minimize the need for additional consumer daemon instances
1286 dedicated to them. For instance, the `java.util.logging` (JUL) domain
1287 events are in fact mapped to the user space domain, thus tracing this
1288 particular domain is handled by existing user space domain consumer
1289 daemons.
1290
1291
1292 [[lttng-relayd]]
1293 ==== Relay daemon
1294
1295 When a tracing session is configured to send its trace data over the
1296 network, an LTTng _relay daemon_ must be used at the other end to
1297 receive trace packets and serialize them to trace files. This setup
1298 makes it possible to trace a target system without ever committing trace
1299 data to its local storage, a feature which is useful for embedded
1300 systems, amongst others. The command implementing the relay daemon
1301 is `lttng-relayd`.
1302
1303 The basic use case of `lttng-relayd` is to transfer trace data received
1304 over the network to trace files on the local file system. The relay
1305 daemon must listen on two TCP ports to achieve this: one control port,
1306 used by the target session daemon, and one data port, used by the
1307 target consumer daemon. The relay and session daemons agree on common
1308 default ports when custom ones are not specified.
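
Here's a hedged sketch of this setup: start the relay daemon on the
receiving system, then, on the target system, create a tracing session
whose output URL points to it (`remote-host` is a placeholder host name):

[role="term"]
----
lttng-relayd
----

Then, on the target system:

[role="term"]
----
lttng create my-session --set-url=net://remote-host
----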
1309
1310 Since the communication transport protocol for both ports is standard
1311 TCP, the relay daemon may be started either remotely or locally (on the
1312 target system).
1313
1314 While two instances of consumer daemons (32-bit and 64-bit) may run
1315 concurrently for a given user, `lttng-relayd` only needs to match its
1316 host operating system's bitness.
1317
1318 The other important feature of LTTng's relay daemon is the support of
1319 _LTTng live_. LTTng live is an application protocol to view events as
1320 they arrive. The relay daemon still records events in trace files,
1321 but a _tee_ lets a live viewer inspect incoming events.
1322
1323 [role="img-100"]
1324 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
1325 image::lttng-live.png[]
1326
1327 Using LTTng live locally thus requires running a local relay daemon.
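
Here's an illustrative sketch of a local LTTng live setup (the session
name is hypothetical); by default, a live session should send its trace
data to a relay daemon on `localhost`:

[role="term"]
----
lttng-relayd --daemonize
lttng create --live my-live-session
lttng enable-event --kernel --all
lttng start
----

A live viewer, such as `babeltrace` with its `lttng-live` input format,
may then attach to the session through the relay daemon.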
1328
1329
1330 [[liblttng-ctl-lttng]]
1331 ==== [[lttng-cli]]Control library and command line interface
1332
1333 The LTTng control library, `liblttng-ctl`, can be used to communicate
1334 with the session daemon using a C API that hides the underlying
1335 protocol's details. `liblttng-ctl` is part of LTTng-tools.
1336
1337 `liblttng-ctl` may be used by including its "master" header:
1338
1339 [source,c]
1340 ----
1341 #include <lttng/lttng.h>
1342 ----
1343
1344 Some objects are referred by name (C string), such as tracing sessions,
1345 but most of them require creating a handle first using
1346 `lttng_create_handle()`. The best available developer documentation for
1347 `liblttng-ctl` is, for the moment, its installed header files:
1348 every function and structure is thoroughly documented there.
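
Here's a minimal, illustrative sketch using `liblttng-ctl` to list the
existing tracing sessions (error handling kept short; link with
`-llttng-ctl`):

[source,c]
----
#include <stdio.h>
#include <stdlib.h>
#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    int count, i;

    /* lttng_list_sessions() allocates the array and returns its length */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        fprintf(stderr, "Error: %s\n", lttng_strerror(count));
        return EXIT_FAILURE;
    }

    for (i = 0; i < count; i++) {
        printf("%s (%s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);

    return EXIT_SUCCESS;
}
----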
1349
1350 The `lttng` program is the _de facto_ standard user interface to
1351 control LTTng tracing sessions. `lttng` uses `liblttng-ctl` to
1352 communicate with session daemons behind the scenes.
1353 Its man page, man:lttng(1), is exhaustive, as well as its command
1354 line help (+lttng _cmd_ --help+, where +_cmd_+ is the command name).
1355
1356 The <<controlling-tracing,Controlling tracing>> section is a feature
1357 tour of the `lttng` tool.
1358
1359
1360 [[lttng-ust]]
1361 ==== User space tracing library
1362
1363 The user space tracing part of LTTng is possible thanks to the user
1364 space tracing library, `liblttng-ust`, which is part of the LTTng-UST
1365 package.
1366
1367 `liblttng-ust` provides header files containing macros used to define
1368 tracepoints and create tracepoint providers, as well as a shared object
1369 that must be linked to individual applications to connect to and
1370 communicate with a session daemon and a consumer daemon as soon as the
1371 application starts.
1372
1373 The exact mechanism by which an application is registered to the
1374 session daemon is beyond the scope of this documentation. The only thing
1375 you need to know is that, since the library constructor does this job
1376 automatically, tracepoints may be safely inserted anywhere in the source
1377 code without prior manual initialization of `liblttng-ust`.
1378
1379 The `liblttng-ust`-session daemon collaboration also provides an
1380 interesting feature: user space events may be enabled _before_
1381 applications actually start. By doing this and starting tracing before
1382 launching the instrumented application, you make sure that even the
1383 earliest occurring events can be recorded.
1384
1385 The <<c-application,C application>> instrumenting guide of the
1386 <<using-lttng,Using LTTng>> chapter focuses on using `liblttng-ust`:
1387 instrumenting, building/linking and running a user application.
1388
1389
1390 [[lttng-modules]]
1391 ==== LTTng kernel modules
1392
1393 The LTTng Linux kernel modules provide everything needed to trace the
1394 Linux kernel: various probes, a ring buffer implementation for a
1395 consumer daemon to read trace data and the tracer itself.
1396
1397 Only in exceptional circumstances should you ever need to load the
1398 LTTng kernel modules manually: it is normally the responsibility of
1399 `root`'s session daemon to do so. Even if you were to develop your
1400 own LTTng probe module--for tracing a custom kernel or some kernel
1401 module (this topic is covered in the
1402 <<instrumenting-linux-kernel,Linux kernel>> instrumenting guide of
1403 the <<using-lttng,Using LTTng>> chapter)&#8212;you
1404 should use the `--extra-kmod-probes` option of the session daemon to
1405 append your probe to the default list. The session and consumer daemons
1406 of regular users do not interact with the LTTng kernel modules at all.
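
For example, assuming a custom probe kernel module named
`lttng-probe-my-subsys` (hypothetical), and assuming the option takes the
probe name without the `lttng-probe-` prefix, `root`'s session daemon
could be started like this so that it loads that probe along with the
default ones:

[role="term"]
----
lttng-sessiond --daemonize --extra-kmod-probes=my-subsys
----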
1407
1408 LTTng kernel modules are installed, by default, in
1409 +/usr/lib/modules/_release_/extra+, where +_release_+ is the
1410 kernel release (see `uname --kernel-release`).
1411
1412
1413 [[using-lttng]]
1414 == Using LTTng
1415
1416 Using LTTng involves two main activities: **instrumenting** and
1417 **controlling tracing**.
1418
1419 _<<instrumenting,Instrumenting>>_ is the process of inserting probes
1420 into some source code. It can be done manually, by writing tracepoint
1421 calls at specific locations in the source code of the program to trace,
1422 or more automatically using dynamic probes (address in assembled code,
1423 symbol name, function entry/return, and others).
1424
1425 It has to be noted that, as an LTTng user, you may not have to worry
1426 about the instrumentation process. Indeed, you may want to trace a
1427 program already instrumented. As an example, the Linux kernel is
1428 thoroughly instrumented, which is why you can trace it without caring
1429 about adding probes.
1430
1431 _<<controlling-tracing,Controlling tracing>>_ is everything
1432 that can be done by the LTTng session daemon, which is controlled using
1433 `liblttng-ctl` or its command line utility, `lttng`: creating tracing
1434 sessions, listing tracing sessions and events, enabling/disabling
1435 events, starting/stopping the tracers, taking snapshots, amongst many
1436 other commands.
1437
1438 This chapter is a complete user guide of both activities,
1439 with common use cases of LTTng exposed throughout the text. It is
1440 assumed that you are familiar with LTTng's concepts (events, channels,
1441 domains, tracing sessions) and that you understand the roles of its
1442 components (daemons, libraries, command line tools); if not, we invite
1443 you to read the <<understanding-lttng,Understanding LTTng>> chapter
1444 before you begin reading this one.
1445
If you're new to LTTng, we suggest that you start with the small
<<getting-started,Getting started>> guide first, then come
back here to broaden your knowledge.
1449
1450 If you're only interested in tracing the Linux kernel with its current
1451 instrumentation, you may skip the
1452 <<instrumenting,Instrumenting>> section.
1453
1454
1455 [[instrumenting]]
1456 === Instrumenting
1457
1458 There are many examples of tracing and monitoring in our everyday life.
1459 You have access to real-time and historical weather reports and forecasts
1460 thanks to weather stations installed around the country. You know your
1461 possibly hospitalized friends' and family's hearts are safe thanks to
1462 electrocardiography. You make sure not to drive your car too fast
1463 and have enough fuel to reach your destination thanks to gauges visible
1464 on your dashboard.
1465
1466 All the previous examples have something in common: they rely on
1467 **probes**. Without electrodes attached to the surface of a body's
1468 skin, cardiac monitoring would be futile.
1469
1470 LTTng, as a tracer, is no different from the real life examples above.
If you're about to trace a software system, that is, record its
history of execution, you need probes in the subject you're
tracing: the actual software. There are various ways to do this.
The most straightforward one is to manually place probes, called
_tracepoints_, in the software's source code. The Linux kernel tracing
domain also allows probes to be added dynamically.
1477
1478 If you're only interested in tracing the Linux kernel, it may very well
1479 be that your tracing needs are already appropriately covered by LTTng's
1480 built-in Linux kernel tracepoints and other probes. Or you may be in
1481 possession of a user space application which has already been
1482 instrumented. In such cases, the work resides entirely in the design
1483 and execution of tracing sessions, allowing you to jump to
1484 <<controlling-tracing,Controlling tracing>> right now.
1485
1486 This chapter focuses on the following use cases of instrumentation:
1487
1488 * <<c-application,C>> and <<cxx-application,$$C++$$>> applications
1489 * <<prebuilt-ust-helpers,prebuilt user space tracing helpers>>
1490 * <<java-application,Java application>>
1491 * <<instrumenting-linux-kernel,Linux kernel>> module or the
1492 kernel itself
1493 * the <<proc-lttng-logger-abi,path:{/proc/lttng-logger} ABI>>
1494
1495 Some advanced techniques are also presented at the very end of this
1496 chapter.
1497
1498
1499 [[c-application]]
1500 ==== C application
1501
1502 Instrumenting a C (or $$C++$$) application, be it an executable program
1503 or a library, implies using LTTng-UST, the
1504 user space tracing component of LTTng. For C/$$C++$$ applications, the
1505 LTTng-UST package includes a dynamically loaded library
1506 (`liblttng-ust`), C headers and the `lttng-gen-tp` command line utility.
1507
Since C and $$C++$$ are the languages in which the runtimes of virtually
all other programming languages are implemented
(the Java virtual machine and the Python, Perl, PHP and Node.js
interpreters, to name a few), implementing user space tracing for an
unsupported language is just a matter of using the LTTng-UST C API at
the right places.
1513
1514 The usual work flow to instrument a user space C application with
1515 LTTng-UST is:
1516
1517 . Define tracepoints (actual probes)
1518 . Write tracepoint providers
1519 . Insert tracepoints into target source code
1520 . Package (build) tracepoint providers
1521 . Build user application and link it with tracepoint providers
1522
1523 The steps above are discussed in greater detail in the following
1524 subsections.
1525
1526
1527 [[tracepoint-provider]]
1528 ===== Tracepoint provider
1529
1530 Before jumping into defining tracepoints and inserting
1531 them into the application source code, you must understand what a
1532 _tracepoint provider_ is.
1533
1534 For the sake of this guide, consider the following two files:
1535
1536 [source,c]
1537 .path:{tp.h}
1538 ----
1539 #undef TRACEPOINT_PROVIDER
1540 #define TRACEPOINT_PROVIDER my_provider
1541
1542 #undef TRACEPOINT_INCLUDE
1543 #define TRACEPOINT_INCLUDE "./tp.h"
1544
1545 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1546 #define _TP_H
1547
1548 #include <lttng/tracepoint.h>
1549
1550 TRACEPOINT_EVENT(
1551 my_provider,
1552 my_first_tracepoint,
1553 TP_ARGS(
1554 int, my_integer_arg,
1555 char*, my_string_arg
1556 ),
1557 TP_FIELDS(
1558 ctf_string(my_string_field, my_string_arg)
1559 ctf_integer(int, my_integer_field, my_integer_arg)
1560 )
1561 )
1562
1563 TRACEPOINT_EVENT(
1564 my_provider,
1565 my_other_tracepoint,
1566 TP_ARGS(
1567 int, my_int
1568 ),
1569 TP_FIELDS(
1570 ctf_integer(int, some_field, my_int)
1571 )
1572 )
1573
1574 #endif /* _TP_H */
1575
1576 #include <lttng/tracepoint-event.h>
1577 ----
1578
1579 [source,c]
1580 .path:{tp.c}
1581 ----
1582 #define TRACEPOINT_CREATE_PROBES
1583
1584 #include "tp.h"
1585 ----
1586
The two files above define a _tracepoint provider_. A tracepoint
provider is a sort of namespace for _tracepoint definitions_. Tracepoint
definitions are written above with the `TRACEPOINT_EVENT()` macro; they allow
matching `tracepoint()` calls to be inserted
into the user application's C source code (we explore this in a
later section).
1593
1594 Many tracepoint definitions may be part of the same tracepoint provider
1595 and many tracepoint providers may coexist in a user space application. A
1596 tracepoint provider is packaged either:
1597
1598 * directly into an existing user application's C source file
1599 * as an object file
1600 * as a static library
1601 * as a shared library
1602
1603 The two files above, path:{tp.h} and path:{tp.c}, show a typical template for
1604 writing a tracepoint provider. LTTng-UST was designed so that two
1605 tracepoint providers should not be defined in the same header file.
1606
We now go through the various parts of the above files and
explain what they mean. As you may have noticed, the LTTng-UST API for
C/$$C++$$ applications is some preprocessor sorcery. The LTTng-UST macros
used in your application and those in the LTTng-UST headers are
combined to produce the actual source code needed to make tracing possible
using LTTng.
1613
1614 Let's start with the header file, path:{tp.h}. It begins with
1615
1616 [source,c]
1617 ----
1618 #undef TRACEPOINT_PROVIDER
1619 #define TRACEPOINT_PROVIDER my_provider
1620 ----
1621
1622 `TRACEPOINT_PROVIDER` defines the name of the provider to which the
1623 following tracepoint definitions belong. It is used internally by
1624 LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
1625 could have been defined by another header file also included by the same
1626 C source file, the best practice is to undefine it first.
1627
1628 NOTE: Names in LTTng-UST follow the C
1629 _identifier_ syntax (starting with a letter and containing either
1630 letters, numbers or underscores); they are _not_ C strings
1631 (not surrounded by double quotes). This is because LTTng-UST macros
1632 use those identifier-like strings to create symbols (named types and
1633 variables).
1634
1635 The tracepoint provider is a group of tracepoint definitions; its chosen
1636 name should reflect this. A hierarchy like Java packages is recommended,
1637 using underscores instead of dots, for example,
1638 `org_company_project_component`.
1639
1640 Next is `TRACEPOINT_INCLUDE`:
1641
1642 [source,c]
1643 ----
1644 #undef TRACEPOINT_INCLUDE
1645 #define TRACEPOINT_INCLUDE "./tp.h"
1646 ----
1647
This little bit of introspection is needed by LTTng-UST to include
your header at various predefined places.
1650
1651 Include guard follows:
1652
1653 [source,c]
1654 ----
1655 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1656 #define _TP_H
1657 ----
1658
Add these preprocessor conditionals to ensure that the tracepoint event
generation can include this file more than once.
1661
The `TRACEPOINT_EVENT()` macro is defined in an LTTng-UST header file which
must be included:
1664
1665 [source,c]
1666 ----
1667 #include <lttng/tracepoint.h>
1668 ----
1669
1670 This also allows the application to use the `tracepoint()` macro.
1671
1672 Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
1673 actual tracepoint definitions. We skip this for the moment and
1674 come back to how to use `TRACEPOINT_EVENT()`
1675 <<defining-tracepoints,in a later section>>. Just pay attention to
1676 the first argument: it's always the name of the tracepoint provider
1677 being defined in this header file.
1678
1679 End of include guard:
1680
1681 [source,c]
1682 ----
1683 #endif /* _TP_H */
1684 ----
1685
1686 Finally, include `<lttng/tracepoint-event.h>` to expand the macros:
1687
1688 [source,c]
1689 ----
1690 #include <lttng/tracepoint-event.h>
1691 ----
1692
1693 That's it for path:{tp.h}. Of course, this is only a header file; it must be
1694 included in some C source file to actually use it. This is the job of
1695 path:{tp.c}:
1696
1697 [source,c]
1698 ----
1699 #define TRACEPOINT_CREATE_PROBES
1700
1701 #include "tp.h"
1702 ----
1703
1704 When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in path:{tp.h},
1705 which is included just after, actually create the source code for
1706 LTTng-UST probes (global data structures and functions) out of your
1707 tracepoint definitions. How exactly this is done is out of this text's scope.
1708 `TRACEPOINT_CREATE_PROBES` is discussed further
1709 in
1710 <<building-tracepoint-providers-and-user-application,Building/linking
1711 tracepoint providers and the user application>>.
1712
1713 You could include other header files like path:{tp.h} here to create the probes
1714 of different tracepoint providers, for example:
1715
1716 [source,c]
1717 ----
1718 #define TRACEPOINT_CREATE_PROBES
1719
1720 #include "tp1.h"
1721 #include "tp2.h"
1722 ----
1723
1724 The rule is: probes of a given tracepoint provider
1725 must be created in exactly one source file. This source file could be one
1726 of your project's; it doesn't have to be on its own like
1727 path:{tp.c}, although
1728 <<building-tracepoint-providers-and-user-application,a later section>>
shows that doing so allows packaging the tracepoint providers
independently and keeping them out of your application, also making it
possible to reuse them between projects.
1732
1733 The following sections explain how to define tracepoints, how to use the
1734 `tracepoint()` macro to instrument your user space C application and how
1735 to build/link tracepoint providers and your application with LTTng-UST
1736 support.
1737
1738
1739 [[lttng-gen-tp]]
1740 ===== Using `lttng-gen-tp`
1741
1742 LTTng-UST ships with `lttng-gen-tp`, a handy command line utility for
1743 generating most of the stuff discussed above. It takes a _template file_,
1744 with a name usually ending with the `.tp` extension, containing only
1745 tracepoint definitions, and outputs a tracepoint provider (either a C
1746 source file or a precompiled object file) with its header file.
1747
1748 `lttng-gen-tp` should suffice in <<static-linking,static linking>>
1749 situations. When using it, write a template file containing a list of
`TRACEPOINT_EVENT()` macro calls. The tool finds the provider names
used and generates the appropriate files, which look a lot
like path:{tp.h} and path:{tp.c} above.
1753
1754 Just call `lttng-gen-tp` like this:
1755
1756 [role="term"]
1757 ----
1758 lttng-gen-tp my-template.tp
1759 ----
1760
1761 path:{my-template.c}, path:{my-template.o} and path:{my-template.h}
1762 are created in the same directory.
1763
1764 You may specify custom C flags passed to the compiler invoked by
1765 `lttng-gen-tp` using the `CFLAGS` environment variable:
1766
1767 [role="term"]
1768 ----
1769 CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp
1770 ----
1771
1772 For more information on `lttng-gen-tp`, see man:lttng-gen-tp(1).
1773
1774
1775 [[defining-tracepoints]]
1776 ===== Defining tracepoints
1777
1778 As written in <<tracepoint-provider,Tracepoint provider>>,
1779 tracepoints are defined using the
1780 `TRACEPOINT_EVENT()` macro. Each tracepoint, when called using the
1781 `tracepoint()` macro in the actual application's source code, generates
1782 a specific event type with its own fields.
1783
1784 Let's have another look at the example above, with a few added comments:
1785
1786 [source,c]
1787 ----
1788 TRACEPOINT_EVENT(
1789 /* tracepoint provider name */
1790 my_provider,
1791
1792 /* tracepoint/event name */
1793 my_first_tracepoint,
1794
1795 /* list of tracepoint arguments */
1796 TP_ARGS(
1797 int, my_integer_arg,
1798 char*, my_string_arg
1799 ),
1800
1801 /* list of fields of eventual event */
1802 TP_FIELDS(
1803 ctf_string(my_string_field, my_string_arg)
1804 ctf_integer(int, my_integer_field, my_integer_arg)
1805 )
1806 )
1807 ----
1808
1809 The tracepoint provider name must match the name of the tracepoint
1810 provider in which this tracepoint is defined
1811 (see <<tracepoint-provider,Tracepoint provider>>). In other words,
1812 always use the same string as the value of `TRACEPOINT_PROVIDER` above.
1813
1814 The tracepoint name becomes the event name once events are recorded
1815 by the LTTng-UST tracer. It must follow the tracepoint provider name
1816 syntax: start with a letter and contain either letters, numbers or
1817 underscores. Two tracepoints under the same provider cannot have the
1818 same name. In other words, you cannot overload a tracepoint like you
1819 would overload functions and methods in $$C++$$/Java.
1820
1821 NOTE: The concatenation of the tracepoint
1822 provider name and the tracepoint name cannot exceed 254 characters. If
1823 it does, the instrumented application compiles and runs, but LTTng
1824 issues multiple warnings and you could experience serious problems.
1825
1826 The list of tracepoint arguments gives this tracepoint its signature:
1827 see it like the declaration of a C function. The format of `TP_ARGS()`
1828 arguments is: C type, then argument name; repeat as needed, up to ten
1829 times. For example, if we were to replicate the signature of C standard
1830 library's `fseek()`, the `TP_ARGS()` part would look like:
1831
1832 [source,c]
1833 ----
1834 TP_ARGS(
1835 FILE*, stream,
1836 long int, offset,
1837 int, origin
1838 ),
1839 ----
1840
1841 Of course, you need to include appropriate header files before
1842 the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.
1843
1844 `TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
1845 also accepted.
1846
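For example, here is a minimal sketch of a tracepoint definition taking no
arguments (the provider and tracepoint names are hypothetical; the empty
`TP_ARGS()` form could equally be written `TP_ARGS(void)`):

[source,c]
----
TRACEPOINT_EVENT(
    my_provider,
    my_no_arg_tracepoint,
    TP_ARGS(),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23)
    )
)
----
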
1847 The list of fields is where the fun really begins. The fields defined
1848 in this list are the fields of the events generated by the execution
1849 of this tracepoint. Each tracepoint field definition has a C
1850 _argument expression_ which is evaluated when the execution reaches
the tracepoint. Tracepoint arguments _may be_ used freely in those
argument expressions, but they _don't_ have to be.
1853
1854 There are several types of tracepoint fields available. The macros to
1855 define them are given and explained in the
1856 <<liblttng-ust-tp-fields,LTTng-UST library reference>> section.
1857
1858 Field names must follow the standard C identifier syntax: letter, then
1859 optional sequence of letters, numbers or underscores. Each field must have
1860 a different name.
1861
1862 Those `ctf_*()` macros are added to the `TP_FIELDS()` part of
1863 `TRACEPOINT_EVENT()`. Note that they are not delimited by commas.
1864 `TP_FIELDS()` may be empty, but the `TP_FIELDS(void)` form is _not_
1865 accepted.
1866
1867 The following snippet shows how argument expressions may be used in
1868 tracepoint fields and how they may refer freely to tracepoint arguments.
1869
1870 [source,c]
1871 ----
1872 /* for struct stat */
1873 #include <sys/types.h>
1874 #include <sys/stat.h>
1875 #include <unistd.h>
1876
1877 TRACEPOINT_EVENT(
1878 my_provider,
1879 my_tracepoint,
1880 TP_ARGS(
1881 int, my_int_arg,
1882 char*, my_str_arg,
1883 struct stat*, st
1884 ),
1885 TP_FIELDS(
1886 /* simple integer field with constant value */
1887 ctf_integer(
1888 int, /* field C type */
1889 my_constant_field, /* field name */
1890 23 + 17 /* argument expression */
1891 )
1892
1893 /* my_int_arg tracepoint argument */
1894 ctf_integer(
1895 int,
1896 my_int_arg_field,
1897 my_int_arg
1898 )
1899
1900 /* my_int_arg squared */
1901 ctf_integer(
1902 int,
1903 my_int_arg_field2,
1904 my_int_arg * my_int_arg
1905 )
1906
1907 /* sum of first 4 characters of my_str_arg */
1908 ctf_integer(
1909 int,
1910 sum4,
1911 my_str_arg[0] + my_str_arg[1] +
1912 my_str_arg[2] + my_str_arg[3]
1913 )
1914
1915 /* my_str_arg as string field */
1916 ctf_string(
1917 my_str_arg_field, /* field name */
1918 my_str_arg /* argument expression */
1919 )
1920
1921 /* st_size member of st tracepoint argument, hexadecimal */
1922 ctf_integer_hex(
1923 off_t, /* field C type */
1924 size_field, /* field name */
1925 st->st_size /* argument expression */
1926 )
1927
1928 /* st_size member of st tracepoint argument, as double */
1929 ctf_float(
1930 double, /* field C type */
1931 size_dbl_field, /* field name */
1932 (double) st->st_size /* argument expression */
1933 )
1934
1935 /* half of my_str_arg string as text sequence */
1936 ctf_sequence_text(
1937 char, /* element C type */
1938 half_my_str_arg_field, /* field name */
1939 my_str_arg, /* argument expression */
1940 size_t, /* length expression C type */
1941 strlen(my_str_arg) / 2 /* length expression */
1942 )
1943 )
1944 )
1945 ----
1946
1947 As you can see, having a custom argument expression for each field
1948 makes tracepoints very flexible for tracing a user space C application.
1949 This tracepoint definition is reused later in this guide, when
1950 actually using tracepoints in a user space application.
1951
1952
1953 [[using-tracepoint-classes]]
1954 ===== Using tracepoint classes
1955
1956 In LTTng-UST, a _tracepoint class_ is a class of tracepoints sharing the
1957 same field types and names. A _tracepoint instance_ is one instance of
1958 such a declared tracepoint class, with its own event name and tracepoint
1959 provider name.
1960
1961 What is documented in <<defining-tracepoints,Defining tracepoints>>
1962 is actually how to declare a _tracepoint class_ and define a
1963 _tracepoint instance_ at the same time. Without revealing the internals
1964 of LTTng-UST too much, it has to be noted that one serialization
1965 function is created for each tracepoint class. A serialization
1966 function is responsible for serializing the fields of a tracepoint
1967 into a sub-buffer when tracing. For various performance reasons, when
1968 your situation requires multiple tracepoints with different names, but
with the same field layout, the best practice is to manually create
1970 a tracepoint class and instantiate as many tracepoint instances as
1971 needed. One positive effect of such a design, amongst other advantages,
1972 is that all tracepoint instances of the same tracepoint class
1973 reuse the same serialization function, thus reducing cache pollution.
1974
1975 As an example, here are three tracepoint definitions as we know them:
1976
1977 [source,c]
1978 ----
1979 TRACEPOINT_EVENT(
1980 my_app,
1981 get_account,
1982 TP_ARGS(
1983 int, userid,
1984 size_t, len
1985 ),
1986 TP_FIELDS(
1987 ctf_integer(int, userid, userid)
1988 ctf_integer(size_t, len, len)
1989 )
1990 )
1991
1992 TRACEPOINT_EVENT(
1993 my_app,
1994 get_settings,
1995 TP_ARGS(
1996 int, userid,
1997 size_t, len
1998 ),
1999 TP_FIELDS(
2000 ctf_integer(int, userid, userid)
2001 ctf_integer(size_t, len, len)
2002 )
2003 )
2004
2005 TRACEPOINT_EVENT(
2006 my_app,
2007 get_transaction,
2008 TP_ARGS(
2009 int, userid,
2010 size_t, len
2011 ),
2012 TP_FIELDS(
2013 ctf_integer(int, userid, userid)
2014 ctf_integer(size_t, len, len)
2015 )
2016 )
2017 ----
2018
2019 In this case, three tracepoint classes are created, with one tracepoint
2020 instance for each of them: `get_account`, `get_settings` and
2021 `get_transaction`. However, they all share the same field names and
2022 types. Declaring one tracepoint class and three tracepoint instances of
2023 the latter is a better design choice:
2024
2025 [source,c]
2026 ----
2027 /* the tracepoint class */
2028 TRACEPOINT_EVENT_CLASS(
2029 /* tracepoint provider name */
2030 my_app,
2031
2032 /* tracepoint class name */
2033 my_class,
2034
2035 /* arguments */
2036 TP_ARGS(
2037 int, userid,
2038 size_t, len
2039 ),
2040
2041 /* fields */
2042 TP_FIELDS(
2043 ctf_integer(int, userid, userid)
2044 ctf_integer(size_t, len, len)
2045 )
2046 )
2047
2048 /* the tracepoint instances */
2049 TRACEPOINT_EVENT_INSTANCE(
2050 /* tracepoint provider name */
2051 my_app,
2052
2053 /* tracepoint class name */
2054 my_class,
2055
2056 /* tracepoint/event name */
2057 get_account,
2058
2059 /* arguments */
2060 TP_ARGS(
2061 int, userid,
2062 size_t, len
2063 )
2064 )
2065 TRACEPOINT_EVENT_INSTANCE(
2066 my_app,
2067 my_class,
2068 get_settings,
2069 TP_ARGS(
2070 int, userid,
2071 size_t, len
2072 )
2073 )
2074 TRACEPOINT_EVENT_INSTANCE(
2075 my_app,
2076 my_class,
2077 get_transaction,
2078 TP_ARGS(
2079 int, userid,
2080 size_t, len
2081 )
2082 )
2083 ----
2084
2085 Of course, all those names and `TP_ARGS()` invocations are redundant,
2086 but some C preprocessor magic can solve this:
2087
2088 [source,c]
2089 ----
2090 #define MY_TRACEPOINT_ARGS \
2091 TP_ARGS( \
2092 int, userid, \
2093 size_t, len \
2094 )
2095
2096 TRACEPOINT_EVENT_CLASS(
2097 my_app,
2098 my_class,
2099 MY_TRACEPOINT_ARGS,
2100 TP_FIELDS(
2101 ctf_integer(int, userid, userid)
2102 ctf_integer(size_t, len, len)
2103 )
2104 )
2105
2106 #define MY_APP_TRACEPOINT_INSTANCE(name) \
2107 TRACEPOINT_EVENT_INSTANCE( \
2108 my_app, \
2109 my_class, \
2110 name, \
2111 MY_TRACEPOINT_ARGS \
2112 )
2113
2114 MY_APP_TRACEPOINT_INSTANCE(get_account)
2115 MY_APP_TRACEPOINT_INSTANCE(get_settings)
2116 MY_APP_TRACEPOINT_INSTANCE(get_transaction)
2117 ----
2118
2119
2120 [[assigning-log-levels]]
2121 ===== Assigning log levels to tracepoints
2122
2123 Optionally, a log level can be assigned to a defined tracepoint.
2124 Assigning different levels of importance to tracepoints can be useful;
2125 when controlling tracing sessions,
2126 <<controlling-tracing,you can choose>> to only enable tracepoints
2127 falling into a specific log level range.
2128
2129 Log levels are assigned to defined tracepoints using the
2130 `TRACEPOINT_LOGLEVEL()` macro. The latter must be used _after_ having
2131 used `TRACEPOINT_EVENT()` for a given tracepoint. The
2132 `TRACEPOINT_LOGLEVEL()` macro has the following construct:
2133
2134 [source,c]
2135 ----
2136 TRACEPOINT_LOGLEVEL(PROVIDER_NAME, TRACEPOINT_NAME, LOG_LEVEL)
2137 ----
2138
2139 where the first two arguments are the same as the first two arguments
2140 of `TRACEPOINT_EVENT()` and `LOG_LEVEL` is one
2141 of the values given in the
2142 <<liblttng-ust-tracepoint-loglevel,LTTng-UST library reference>>
2143 section.
2144
2145 As an example, let's assign a `TRACE_DEBUG_UNIT` log level to our
2146 previous tracepoint definition:
2147
2148 [source,c]
2149 ----
2150 TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
2151 ----
2152
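When controlling tracing, you could then, for example, enable only the
events of this provider having a log level at least as severe as
`TRACE_DEBUG_UNIT` (a sketch; see
<<controlling-tracing,Controlling tracing>>):

[role="term"]
----
lttng enable-event --userspace 'my_provider:*' --loglevel TRACE_DEBUG_UNIT
----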
2153
2154 [[probing-the-application-source-code]]
2155 ===== Probing the application's source code
2156
2157 Once tracepoints are properly defined within a tracepoint provider,
2158 they may be inserted into the user application to be instrumented
2159 using the `tracepoint()` macro. Its first argument is the tracepoint
2160 provider name and its second is the tracepoint name. The next, optional
2161 arguments are defined by the `TP_ARGS()` part of the definition of
2162 the tracepoint to use.
2163
2164 As an example, let us again take the following tracepoint definition:
2165
2166 [source,c]
2167 ----
2168 TRACEPOINT_EVENT(
2169 /* tracepoint provider name */
2170 my_provider,
2171
2172 /* tracepoint/event name */
2173 my_first_tracepoint,
2174
2175 /* list of tracepoint arguments */
2176 TP_ARGS(
2177 int, my_integer_arg,
2178 char*, my_string_arg
2179 ),
2180
2181 /* list of fields of eventual event */
2182 TP_FIELDS(
2183 ctf_string(my_string_field, my_string_arg)
2184 ctf_integer(int, my_integer_field, my_integer_arg)
2185 )
2186 )
2187 ----
2188
2189 Assuming this is part of a file named path:{tp.h} which defines the tracepoint
2190 provider and which is included by path:{tp.c}, here's a complete C application
2191 calling this tracepoint (multiple times):
2192
2193 [source,c]
2194 ----
2195 #define TRACEPOINT_DEFINE
2196 #include "tp.h"
2197
2198 int main(int argc, char* argv[])
2199 {
2200 int i;
2201
2202 tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");
2203
2204 for (i = 0; i < argc; ++i) {
2205 tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
2206 }
2207
2208 return 0;
2209 }
2210 ----
2211
2212 For each tracepoint provider, `TRACEPOINT_DEFINE` must be defined into
2213 exactly one translation unit (C source file) of the user application,
2214 before including the tracepoint provider header file. In other words,
2215 for a given tracepoint provider, you cannot define `TRACEPOINT_DEFINE`,
2216 and then include its header file in two separate C source files of
2217 the same application. `TRACEPOINT_DEFINE` is discussed further in
2218 <<building-tracepoint-providers-and-user-application,Building/linking
2219 tracepoint providers and the user application>>.
2220
2221 As another example, remember this definition we wrote in a previous
2222 section (comments are stripped):
2223
2224 [source,c]
2225 ----
2226 /* for struct stat */
2227 #include <sys/types.h>
2228 #include <sys/stat.h>
2229 #include <unistd.h>
2230
2231 TRACEPOINT_EVENT(
2232 my_provider,
2233 my_tracepoint,
2234 TP_ARGS(
2235 int, my_int_arg,
2236 char*, my_str_arg,
2237 struct stat*, st
2238 ),
2239 TP_FIELDS(
2240 ctf_integer(int, my_constant_field, 23 + 17)
2241 ctf_integer(int, my_int_arg_field, my_int_arg)
2242 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2243 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2244 my_str_arg[2] + my_str_arg[3])
2245 ctf_string(my_str_arg_field, my_str_arg)
2246 ctf_integer_hex(off_t, size_field, st->st_size)
2247 ctf_float(double, size_dbl_field, (double) st->st_size)
2248 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2249 size_t, strlen(my_str_arg) / 2)
2250 )
2251 )
2252 ----
2253
2254 Here's an example of calling it:
2255
2256 [source,c]
2257 ----
2258 #define TRACEPOINT_DEFINE
2259 #include "tp.h"
2260
2261 int main(void)
2262 {
2263 struct stat s;
2264
2265 stat("/etc/fstab", &s);
2266
2267 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2268
2269 return 0;
2270 }
2271 ----
2272
2273 When viewing the trace, assuming the file size of path:{/etc/fstab} is
2274 301{nbsp}bytes, the event generated by the execution of this tracepoint
2275 should have the following fields, in this order:
2276
2277 ----
2278 my_constant_field 40
2279 my_int_arg_field 23
2280 my_int_arg_field2 529
2281 sum4_field 389
2282 my_str_arg_field "Hello, World!"
2283 size_field 0x12d
2284 size_dbl_field 301.0
2285 half_my_str_arg_field "Hello,"
2286 ----
2287
2288
2289 [[building-tracepoint-providers-and-user-application]]
2290 ===== Building/linking tracepoint providers and the user application
2291
2292 The final step of using LTTng-UST for tracing a user space C application
(besides running the application) is building and linking tracepoint
2294 providers and the application itself.
2295
As discussed above, the macros used by the user-written tracepoint provider
header file are useless until actually used to create the probe code
(global data structures and functions) in a translation unit (C source file).
This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
unit and then including the tracepoint provider header file.
When `TRACEPOINT_CREATE_PROBES` is defined, the macros used and included by
the tracepoint provider header produce the actual source code needed by any
application using the defined tracepoints, including the code which
registers the tracepoint providers when the tracepoint provider package
loads.
2306
2307 The other important definition is `TRACEPOINT_DEFINE`. This one creates
global, per-tracepoint structures referencing the tracepoint provider
data. Those structures are required by the actual functions inserted
2310 where `tracepoint()` macros are placed and need to be defined by the
2311 instrumented application.
2312
2313 Both `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` need to be defined
2314 at some places in order to trace a user space C application using LTTng.
2315 Although explaining their exact mechanism is beyond the scope of this
document, the reason they both exist separately is to allow the tracepoint
providers to be packaged as a shared object (dynamically loaded library).
2318
2319 There are two ways to compile and link the tracepoint providers
2320 with the application: _<<static-linking,statically>>_ or
2321 _<<dynamic-linking,dynamically>>_. Both methods are covered in the
2322 following subsections.
2323
2324
2325 [[static-linking]]
2326 ===== Static linking the tracepoint providers to the application
2327
2328 With the static linking method, compiled tracepoint providers are copied
2329 into the target application. There are three ways to do this:
2330
2331 . Use one of your **existing C source files** to create probes.
2332 . Create probes in a separate C source file and build it as an
2333 **object file** to be linked with the application (more decoupled).
2334 . Create probes in a separate C source file, build it as an
2335 object file and archive it to create a **static library**
2336 (more decoupled, more portable).
2337
2338 The first approach is to define `TRACEPOINT_CREATE_PROBES` and include
2339 your tracepoint provider(s) header file(s) directly into an existing C
2340 source file. Here's an example:
2341
2342 [source,c]
2343 ----
2344 #include <stdlib.h>
2345 #include <stdio.h>
2346 /* ... */
2347
2348 #define TRACEPOINT_CREATE_PROBES
2349 #define TRACEPOINT_DEFINE
2350 #include "tp.h"
2351
2352 /* ... */
2353
2354 int my_func(int a, const char* b)
2355 {
2356 /* ... */
2357
    tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt);
2359
2360 /* ... */
2361 }
2362
2363 /* ... */
2364 ----
2365
2366 Again, before including a given tracepoint provider header file,
2367 `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` must be defined in
2368 one, **and only one**, translation unit. Other C source files of the
2369 same application may include path:{tp.h} to use tracepoints with
2370 the `tracepoint()` macro, but must not define
2371 `TRACEPOINT_CREATE_PROBES`/`TRACEPOINT_DEFINE` again.
2372
2373 This translation unit may be built as an object file by making sure to
2374 add `.` to the include path:
2375
2376 [role="term"]
2377 ----
2378 gcc -c -I. file.c
2379 ----
2380
2381 The second approach is to isolate the tracepoint provider code into a
2382 separate object file by using a dedicated C source file to create probes:
2383
2384 [source,c]
2385 ----
2386 #define TRACEPOINT_CREATE_PROBES
2387
2388 #include "tp.h"
2389 ----
2390
2391 `TRACEPOINT_DEFINE` must be defined by a translation unit of the
2392 application. Since we're talking about static linking here, it could as
2393 well be defined directly in the file above, before `#include "tp.h"`:
2394
2395 [source,c]
2396 ----
2397 #define TRACEPOINT_CREATE_PROBES
2398 #define TRACEPOINT_DEFINE
2399
2400 #include "tp.h"
2401 ----
2402
2403 This is actually what <<lttng-gen-tp,`lttng-gen-tp`>> does, and is
2404 the recommended practice.
2405
2406 Build the tracepoint provider:
2407
2408 [role="term"]
2409 ----
2410 gcc -c -I. tp.c
2411 ----
2412
2413 Finally, the resulting object file may be archived to create a
2414 more portable tracepoint provider static library:
2415
2416 [role="term"]
2417 ----
2418 ar rc tp.a tp.o
2419 ----
2420
Using a static library does have the advantage of centralising the
tracepoint provider objects so that they can be shared between multiple
applications. This way, when the tracepoint provider is modified, the
source code changes don't have to be patched into each application's source
code tree. The applications need to be relinked after each change, but need
not be otherwise recompiled (unless the tracepoint provider's API
changes).
2428
Regardless of which method you choose, you end up with an object file
(potentially archived) containing the compiled code of the tracepoint providers.
2431 To link this code with the rest of your application, you must also link
2432 with `liblttng-ust` and `libdl`:
2433
2434 [role="term"]
2435 ----
2436 gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl
2437 ----
2438
2439 or
2440
2441 [role="term"]
2442 ----
2443 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl
2444 ----
2445
2446 If you're using a BSD
2447 system, replace `-ldl` with `-lc`:
2448
2449 [role="term"]
2450 ----
2451 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc
2452 ----
2453
2454 The application can be started as usual, for example:
2455
2456 [role="term"]
2457 ----
2458 ./app
2459 ----
2460
2461 The `lttng` command line tool can be used to
2462 <<controlling-tracing,control tracing>>.
2463
2464
2465 [[dynamic-linking]]
2466 ===== Dynamic linking the tracepoint providers to the application
2467
2468 The second approach to package the tracepoint providers is to use
2469 dynamic linking: the library and its member functions are explicitly
2470 sought, loaded and unloaded at runtime using `libdl`.
2471
It has to be noted that, for a variety of reasons, the created shared
library is dynamically _loaded_, as opposed to dynamically
_linked_. The tracepoint provider shared object is, however, linked
2475 with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
2476 as soon as the tracepoint provider is. If the tracepoint provider is
2477 not loaded, since the application itself is not linked with
2478 `liblttng-ust`, the latter is not loaded at all and the tracepoint calls
2479 become inert.
2480
2481 The process to create the tracepoint provider shared object is pretty
2482 much the same as the static library method, except that:
2483
2484 * since the tracepoint provider is not part of the application
2485 anymore, `TRACEPOINT_DEFINE` _must_ be defined, for each tracepoint
2486 provider, in exactly one translation unit (C source file) of the
2487 _application_;
2488 * `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` must be defined next to
2489 `TRACEPOINT_DEFINE`.
2490
2491 Regarding `TRACEPOINT_DEFINE` and `TRACEPOINT_PROBE_DYNAMIC_LINKAGE`,
2492 the recommended practice is to use a separate C source file in your
2493 application to define them, then include the tracepoint provider
2494 header files afterwards. For example:
2495
2496 [source,c]
2497 ----
2498 #define TRACEPOINT_DEFINE
2499 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2500
2501 /* include the header files of one or more tracepoint providers below */
2502 #include "tp1.h"
2503 #include "tp2.h"
2504 #include "tp3.h"
2505 ----
2506
2507 `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` makes the macros included afterwards
2508 (by including the tracepoint provider header, which itself includes
2509 LTTng-UST headers) aware that the tracepoint provider is to be loaded
2510 dynamically and not part of the application's executable.
2511
The tracepoint provider object file used to create the shared library
is built as in the static library method, only with the
`-fpic` option added:
2515
2516 [role="term"]
2517 ----
2518 gcc -c -fpic -I. tp.c
2519 ----
2520
2521 It is then linked as a shared library like this:
2522
2523 [role="term"]
2524 ----
2525 gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o
2526 ----
2527
2528 As previously stated, this tracepoint provider shared object isn't
2529 linked with the user application: it's loaded manually. This is
2530 why the application is built with no mention of this tracepoint
2531 provider, but still needs `libdl`:
2532
2533 [role="term"]
2534 ----
2535 gcc -o app other.o files.o of.o your.o app.o -ldl
2536 ----
2537
2538 Now, to make LTTng-UST tracing available to the application, the
2539 `LD_PRELOAD` environment variable is used to preload the tracepoint
2540 provider shared library _before_ the application actually starts:
2541
2542 [role="term"]
2543 ----
2544 LD_PRELOAD=/path/to/tp.so ./app
2545 ----
2546
2547 [NOTE]
2548 ====
2549 It is not safe to use
2550 `dlclose()` on a tracepoint provider shared object that
2551 is being actively used for tracing, due to a lack of reference
2552 counting from LTTng-UST to the shared object.
2553
2554 For example, statically linking a tracepoint provider to a
2555 shared object which is to be dynamically loaded by an application
2556 (a plugin, for example) is not safe: the shared object, which
2557 contains the tracepoint provider, could be dynamically closed
2558 (`dlclose()`) at any time by the application.
2559
2560 To instrument a shared object, either:
2561
2562 * Statically link the tracepoint provider to the _application_, or
2563 * Build the tracepoint provider as a shared object (following
2564 the procedure shown in this section), and preload it when
2565 tracing is needed using the `LD_PRELOAD`
2566 environment variable.
2567 ====
2568
2569 Your application will still work without this preloading, albeit without
2570 LTTng-UST tracing support:
2571
2572 [role="term"]
2573 ----
2574 ./app
2575 ----
2576
2577
2578 [[using-lttng-ust-with-daemons]]
2579 ===== Using LTTng-UST with daemons
2580
2581 Some extra care is needed when using `liblttng-ust` with daemon
2582 applications that call `fork()`, `clone()` or BSD's `rfork()` without
2583 a following `exec()` family system call. The `liblttng-ust-fork`
2584 library must be preloaded for the application.
2585
2586 Example:
2587
2588 [role="term"]
2589 ----
2590 LD_PRELOAD=liblttng-ust-fork.so ./app
2591 ----
2592
2593 Or, if you're using a tracepoint provider shared library:
2594
2595 [role="term"]
2596 ----
2597 LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app
2598 ----
2599
2600
2601 [[lttng-ust-pkg-config]]
2602 ===== Using pkg-config
2603
2604 On some distributions, LTTng-UST is shipped with a pkg-config metadata
2605 file, so that you may use the `pkg-config` tool:
2606
2607 [role="term"]
2608 ----
2609 pkg-config --libs lttng-ust
2610 ----
2611
2612 This prints `-llttng-ust -ldl` on Linux systems.
2613
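For example, the final link command of the
<<static-linking,static linking>> method could be written like this
instead (a sketch, reusing the same hypothetical object files):

[role="term"]
----
gcc -o app tp.o other.o files.o of.o your.o app.o $(pkg-config --libs lttng-ust)
----
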
2614 You may also check the LTTng-UST version using `pkg-config`:
2615
2616 [role="term"]
2617 ----
2618 pkg-config --modversion lttng-ust
2619 ----
2620
2621 For more information about pkg-config, see
2622 http://linux.die.net/man/1/pkg-config[its manpage].
2623
2624
2625 [role="since-2.5"]
2626 [[tracef]]
2627 ===== Using `tracef()`
2628
2629 `tracef()` is a small LTTng-UST API to avoid defining your own
2630 tracepoints and tracepoint providers. The signature of `tracef()` is
2631 the same as `printf()`'s.
2632
2633 The `tracef()` utility function was developed to make user space tracing
2634 super simple, albeit with notable disadvantages compared to custom,
2635 full-fledged tracepoint providers:
2636
2637 * All generated events have the same provider/event names, respectively
2638 `lttng_ust_tracef` and `event`.
2639 * There's no static type checking.
2640 * The only event field you actually get, named `msg`, is a string
2641 potentially containing the values you passed to the function
  using your own format. This also means that you cannot filter events
  using a custom expression at runtime because there are no isolated
  fields.
2645 * Since `tracef()` uses C standard library's `vasprintf()` function
2646 in the background to format the strings at runtime, its
2647 expected performance is lower than using custom tracepoint providers
2648 with typed fields, which do not require a conversion to a string.
2649
2650 Thus, `tracef()` is useful for quick prototyping and debugging, but
2651 should not be considered for any permanent/serious application
2652 instrumentation.
2653
2654 To use `tracef()`, first include `<lttng/tracef.h>` in the C source file
2655 where you need to insert probes:
2656
2657 [source,c]
2658 ----
2659 #include <lttng/tracef.h>
2660 ----
2661
2662 Use `tracef()` like you would use `printf()` in your source code, for
2663 example:
2664
2665 [source,c]
2666 ----
2667 /* ... */
2668
2669 tracef("my message, my integer: %d", my_integer);
2670
2671 /* ... */
2672 ----
2673
2674 Link your application with `liblttng-ust`:
2675
2676 [role="term"]
2677 ----
2678 gcc -o app app.c -llttng-ust
2679 ----
2680
2681 Execute the application as usual:
2682
2683 [role="term"]
2684 ----
2685 ./app
2686 ----
2687
2688 Voilà! Use the `lttng` command line tool to
2689 <<controlling-tracing,control tracing>>. You can enable `tracef()`
2690 events like this:
2691
2692 [role="term"]
2693 ----
2694 lttng enable-event --userspace 'lttng_ust_tracef:*'
2695 ----
2696
2697
2698 [[lttng-ust-environment-variables-compiler-flags]]
2699 ===== LTTng-UST environment variables and special compilation flags
2700
2701 A few special environment variables and compile flags may affect the
2702 behavior of LTTng-UST.
2703
2704 LTTng-UST's debugging can be activated by setting the environment
2705 variable `LTTNG_UST_DEBUG` to `1` when launching the application. It
2706 can also be enabled at compile time by defining `LTTNG_UST_DEBUG` when
2707 compiling LTTng-UST (using the `-DLTTNG_UST_DEBUG` compiler option).
2708
2709 The environment variable `LTTNG_UST_REGISTER_TIMEOUT` can be used to
2710 specify how long the application should wait for the
2711 <<lttng-sessiond,session daemon>>'s _registration done_ command
2712 before proceeding to execute the main program. The timeout value is
2713 specified in milliseconds. 0 means _don't wait_. -1 means
_wait forever_. Setting this environment variable to 0 is recommended
for applications with time constraints on the process startup time.
2716
2717 The default value of `LTTNG_UST_REGISTER_TIMEOUT` (when not defined)
2718 is **3000{nbsp}ms**.
2719
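For example, to enable `liblttng-ust` debugging output and skip the
registration delay entirely when launching a hypothetical instrumented
application:

[role="term"]
----
LTTNG_UST_DEBUG=1 LTTNG_UST_REGISTER_TIMEOUT=0 ./app
----
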
2720 The compilation definition `LTTNG_UST_DEBUG_VALGRIND` should be enabled
2721 at build time (`-DLTTNG_UST_DEBUG_VALGRIND`) to allow `liblttng-ust`
2722 to be used with http://valgrind.org/[Valgrind].
2723 The side effect of defining `LTTNG_UST_DEBUG_VALGRIND` is that per-CPU
2724 buffering is disabled.
2725
2726
2727 [[cxx-application]]
2728 ==== $$C++$$ application
2729
2730 Because of $$C++$$'s cross-compatibility with the C language, $$C++$$
2731 applications can be readily instrumented with the LTTng-UST C API.
2732
2733 Follow the <<c-application,C application>> user guide above. It
2734 should be noted that, in this case, tracepoint providers should have
2735 the typical `.cpp`, `.cxx` or `.cc` extension and be built with `g++`
2736 instead of `gcc`. This is the easiest way of avoiding linking errors
2737 due to symbol name mangling incompatibilities between both languages.
2738
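For example, assuming a tracepoint provider written in path:{tp.cpp} and
an application object file path:{app.o}, a static linking sketch mirroring
the C commands above could look like this:

[role="term"]
----
g++ -c -I. tp.cpp
g++ -o app tp.o app.o -llttng-ust -ldl
----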
2739
2740 [[prebuilt-ust-helpers]]
2741 ==== Prebuilt user space tracing helpers
2742
2743 The LTTng-UST package provides a few helpers that one may find
2744 useful in some situations. They all work the same way: you must
2745 preload the appropriate shared object before running the user
2746 application (using the `LD_PRELOAD` environment variable).
2747
2748 The shared objects are normally found in dir:{/usr/lib}.
2749
2750 The current installed helpers are:
2751
2752 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}::
2753 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
2754 and POSIX threads tracing>>.
2755
2756 path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}::
2757 <<liblttng-ust-cyg-profile,Function tracing>>.
2758
2759 path:{liblttng-ust-dl.so}::
2760 <<liblttng-ust-dl,Dynamic linker tracing>>.
2761
The following subsections document exactly what the helpers instrument
and how to use them.
2764
2765
2766 [role="since-2.3"]
2767 [[liblttng-ust-libc-pthread-wrapper]]
2768 ===== C standard library and POSIX threads tracing
2769
2770 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}
can add instrumentation to some C standard library and
POSIX threads functions, respectively.
2773
2774 The following functions are traceable by path:{liblttng-ust-libc-wrapper.so}:
2775
2776 [role="growable"]
2777 .Functions instrumented by path:{liblttng-ust-libc-wrapper.so}
2778 |====
2779 |TP provider name |TP name |Instrumented function
2780
2781 .6+|`ust_libc` |`malloc` |`malloc()`
2782 |`calloc` |`calloc()`
2783 |`realloc` |`realloc()`
2784 |`free` |`free()`
2785 |`memalign` |`memalign()`
2786 |`posix_memalign` |`posix_memalign()`
2787 |====
2788
2789 The following functions are traceable by
2790 path:{liblttng-ust-pthread-wrapper.so}:
2791
2792 [role="growable"]
2793 .Functions instrumented by path:{liblttng-ust-pthread-wrapper.so}
2794 |====
2795 |TP provider name |TP name |Instrumented function
2796
2797 .4+|`ust_pthread` |`pthread_mutex_lock_req` |`pthread_mutex_lock()` (request time)
2798 |`pthread_mutex_lock_acq` |`pthread_mutex_lock()` (acquire time)
2799 |`pthread_mutex_trylock` |`pthread_mutex_trylock()`
2800 |`pthread_mutex_unlock` |`pthread_mutex_unlock()`
2801 |====
2802
2803 All tracepoints have fields corresponding to the arguments of the
2804 function they instrument.
2805
2806 To use one or the other with any user application, independently of
2807 how the latter is built, do:
2808
2809 [role="term"]
2810 ----
2811 LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
2812 ----
2813
2814 or
2815
2816 [role="term"]
2817 ----
2818 LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app
2819 ----
2820
2821 To use both, do:
2822
2823 [role="term"]
2824 ----
2825 LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app
2826 ----
2827
2828 When the shared object is preloaded, it effectively replaces the
2829 functions listed in the above tables by wrappers which add tracepoints
2830 and call the replaced functions.
2831
2832 Of course, like any other tracepoint, the ones above need to be enabled
2833 in order for LTTng-UST to generate events. This is done using the
2834 `lttng` command line tool
2835 (see <<controlling-tracing,Controlling tracing>>).
2836
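For example, the following sketch enables all the events of the
`ust_libc` tracepoint provider in the current tracing session:

[role="term"]
----
lttng enable-event --userspace 'ust_libc:*'
----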
2837
2838 [[liblttng-ust-cyg-profile]]
2839 ===== Function tracing
2840
2841 Function tracing is the recording of which functions are entered and
2842 left during the execution of an application. Like with any LTTng event,
2843 the precise time at which this happens is also kept.
2844
2845 GCC and clang have an option named
2846 https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/Code-Gen-Options.html[`-finstrument-functions`]
2847 which generates instrumentation calls for entry and exit to functions.
2848 The LTTng-UST function tracing helpers, path:{liblttng-ust-cyg-profile.so}
2849 and path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
2850 to add instrumentation to the two generated functions (which contain
2851 `cyg_profile` in their names, hence the shared object's name).
2852
2853 In order to use LTTng-UST function tracing, the translation units to
2854 instrument must be built using the `-finstrument-functions` compiler
2855 flag.
2856
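For example, a hypothetical translation unit, path:{app.c}, could be
compiled with instrumentation calls like this:

[role="term"]
----
gcc -c -finstrument-functions app.c
----
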
2857 LTTng-UST function tracing comes in two flavors, each providing
2858 different trade-offs: path:{liblttng-ust-cyg-profile-fast.so} and
2859 path:{liblttng-ust-cyg-profile.so}.
2860
2861 **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that
2862 should only be used where it can be _guaranteed_ that the complete event
2863 stream is recorded without any missing events. Any kind of duplicate
2864 information is left out. This version registers the following
2865 tracepoints:
2866
2867 [role="growable",options="header,autowidth"]
2868 .Functions instrumented by path:{liblttng-ust-cyg-profile-fast.so}
2869 |====
2870 |TP provider name |TP name |Instrumented function
2871
2872 .2+|`lttng_ust_cyg_profile_fast`
2873
2874 |`func_entry`
2875 a|Function entry
2876
2877 `addr`::
2878 Address of called function.
2879
2880 |`func_exit`
2881 |Function exit
2882 |====
2883
2884 Assuming no event is lost, having only the function addresses on entry
2885 is enough for creating a call graph (remember that a recorded event
2886 always contains the ID of the CPU that generated it). A tool like
2887 https://sourceware.org/binutils/docs/binutils/addr2line.html[`addr2line`]
may be used to convert function addresses back to source file names
2889 and line numbers.
2890
2891 The other helper,
2892 **path:{liblttng-ust-cyg-profile.so}**,
2893 is a more robust variant which also works for use cases where
2894 events might get discarded or not recorded from application startup.
2895 In these cases, the trace analyzer needs extra information to be
2896 able to reconstruct the program flow. This version registers the
2897 following tracepoints:
2898
2899 [role="growable",options="header,autowidth"]
2900 .Functions instrumented by path:{liblttng-ust-cyg-profile.so}
2901 |====
2902 |TP provider name |TP name |Instrumented function
2903
2904 .2+|`lttng_ust_cyg_profile`
2905
2906 |`func_entry`
2907 a|Function entry
2908
2909 `addr`::
2910 Address of called function.
2911
2912 `call_site`::
2913 Call site address.
2914
2915 |`func_exit`
2916 a|Function exit
2917
2918 `addr`::
2919 Address of called function.
2920
2921 `call_site`::
2922 Call site address.
2923 |====
2924
2925 To use one or the other variant with any user application, assuming at
2926 least one translation unit of the latter is compiled with the
2927 `-finstrument-functions` option, do:
2928
2929 [role="term"]
2930 ----
2931 LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
2932 ----
2933
2934 or
2935
2936 [role="term"]
2937 ----
2938 LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
2939 ----
2940
It might be necessary to limit the number of source files where
`-finstrument-functions` is used to prevent an excessive amount of trace
data from being generated at runtime.
2944
2945 TIP: When using GCC, at least, you can use
2946 the `-finstrument-functions-exclude-function-list`
2947 option to avoid instrumenting entries and exits of specific
2948 symbol names.
2949
2950 All events generated from LTTng-UST function tracing are provided on
2951 log level `TRACE_DEBUG_FUNCTION`, which is useful to easily enable
2952 function tracing events in your tracing session using the
2953 `--loglevel-only` option of `lttng enable-event`
2954 (see <<controlling-tracing,Controlling tracing>>).
2955
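For example, the following sketch enables all user space events at the
`TRACE_DEBUG_FUNCTION` log level only:

[role="term"]
----
lttng enable-event --userspace --all --loglevel-only TRACE_DEBUG_FUNCTION
----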
2956
2957 [role="since-2.4"]
2958 [[liblttng-ust-dl]]
2959 ===== Dynamic linker tracing
2960
2961 This LTTng-UST helper causes all calls to `dlopen()` and `dlclose()`
2962 in the target application to be traced with LTTng.
2963
2964 The helper's shared object, path:{liblttng-ust-dl.so}, registers the
2965 following tracepoints when preloaded:
2966
2967 [role="growable",options="header,autowidth"]
2968 .Functions instrumented by path:{liblttng-ust-dl.so}
2969 |====
2970 |TP provider name |TP name |Instrumented function
2971
2972 .2+|`ust_baddr`
2973
2974 |`push`
2975 a|`dlopen()` call
2976
2977 `baddr`::
2978 Memory base address (where the dynamic linker placed the shared
2979 object).
2980
2981 `sopath`::
2982 File system path to the loaded shared object.
2983
2984 `size`::
File size of the loaded shared object.
2986
2987 `mtime`::
2988 Last modification time (seconds since Epoch time) of the loaded shared
2989 object.
2990
2991 |`pop`
a|`dlclose()` call
2993
2994 `baddr`::
2995 Memory base address (where the dynamic linker placed the shared
2996 object).
2997 |====
2998
2999 To use this LTTng-UST helper with any user application, independently of
3000 how the latter is built, do:
3001
3002 [role="term"]
3003 ----
3004 LD_PRELOAD=liblttng-ust-dl.so my-app
3005 ----
3006
3007 Of course, like any other tracepoint, the ones above need to be enabled
3008 in order for LTTng-UST to generate events. This is done using the
3009 `lttng` command line tool
3010 (see <<controlling-tracing,Controlling tracing>>).
3011
3012
3013 [role="since-2.4"]
3014 [[java-application]]
3015 ==== Java application
3016
3017 LTTng-UST provides a _logging_ back-end for Java applications using either
3018 http://docs.oracle.com/javase/7/docs/api/java/util/logging/Logger.html[`java.util.logging`]
3019 (JUL) or
http://logging.apache.org/log4j/1.2/[Apache log4j 1.2].
This back-end is called the _LTTng-UST Java agent_, and it is responsible
for communicating with an LTTng session daemon.
3023
3024 From the user's point of view, once the LTTng-UST Java agent has been
3025 initialized, JUL and log4j loggers may be created and used as usual.
3026 The agent adds its own handler to the _root logger_, so that all
3027 loggers may generate LTTng events with no effort.
3028
3029 Common JUL/log4j features are supported using the `lttng` tool
3030 (see <<controlling-tracing,Controlling tracing>>):
3031
3032 * listing all logger names
3033 * enabling/disabling events per logger name
3034 * JUL/log4j log levels
3035
3036
3037 [role="since-2.1"]
3038 [[jul]]
3039 ===== `java.util.logging`
3040
3041 Here's an example of tracing a Java application which is using
3042 **`java.util.logging`**:
3043
3044 [source,java]
3045 ----
3046 import java.util.logging.Logger;
3047 import org.lttng.ust.agent.LTTngAgent;
3048
3049 public class Test
3050 {
3051 private static final int answer = 42;
3052
3053 public static void main(String[] argv) throws Exception
3054 {
3055 // create a logger
3056 Logger logger = Logger.getLogger("jello");
3057
3058 // call this as soon as possible (before logging)
3059 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3060
3061 // log at will!
3062 logger.info("some info");
3063 logger.warning("some warning");
3064 Thread.sleep(500);
3065 logger.finer("finer information; the answer is " + answer);
3066 Thread.sleep(123);
3067 logger.severe("error!");
3068
3069 // not mandatory, but cleaner
3070 lttngAgent.dispose();
3071 }
3072 }
3073 ----
3074
3075 The LTTng-UST Java agent is packaged in a JAR file named
`liblttng-ust-agent.jar`. It is typically located in
3077 dir:{/usr/lib/lttng/java}. To compile the snippet above
3078 (saved as `Test.java`), do:
3079
3080 [role="term"]
3081 ----
3082 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar Test.java
3083 ----
3084
3085 You can run the resulting compiled class like this:
3086
3087 [role="term"]
3088 ----
3089 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
3090 ----
3091
3092 NOTE: http://openjdk.java.net/[OpenJDK] 7 is used for development and
3093 continuous integration, thus this version is directly supported.
3094 However, the LTTng-UST Java agent has also been tested with OpenJDK 6.
3095
3096
3097 [role="since-2.6"]
3098 [[log4j]]
3099 ===== Apache log4j 1.2
3100
3101 LTTng features an Apache log4j 1.2 agent, which means your existing
3102 Java applications using log4j 1.2 for logging can record events to
3103 LTTng traces with just a minor source code modification.
3104
3105 NOTE: This version of LTTng does not support Log4j 2.
3106
3107 Here's an example:
3108
3109 [source,java]
3110 ----
3111 import org.apache.log4j.Logger;
3112 import org.apache.log4j.BasicConfigurator;
3113 import org.lttng.ust.agent.LTTngAgent;
3114
3115 public class Test
3116 {
3117 private static final int answer = 42;
3118
3119 public static void main(String[] argv) throws Exception
3120 {
3121 // create and configure a logger
3122 Logger logger = Logger.getLogger(Test.class);
3123 BasicConfigurator.configure();
3124
3125 // call this as soon as possible (before logging)
3126 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3127
3128 // log at will!
3129 logger.info("some info");
3130 logger.warn("some warning");
3131 Thread.sleep(500);
3132 logger.debug("debug information; the answer is " + answer);
3133 Thread.sleep(123);
3134 logger.error("error!");
3135 logger.fatal("fatal error!");
3136
3137 // not mandatory, but cleaner
3138 lttngAgent.dispose();
3139 }
3140 }
3141 ----
3142
3143 To compile the snippet above, do:
3144
3145 [role="term"]
3146 ----
3147 javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP Test.java
3148 ----
3149
3150 where `$LOG4JCP` is the log4j 1.2 JAR file path.
3151
3152 You can run the resulting compiled class like this:
3153
3154 [role="term"]
3155 ----
3156 java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
3157 ----
3158
3159
3160 [[instrumenting-linux-kernel]]
3161 ==== Linux kernel
3162
The Linux kernel can be instrumented for LTTng tracing, either in its
core source code or in a kernel module. Note that Linux is readily
traceable using LTTng, since many parts of its source code are
already instrumented: this is the job of the upstream
http://git.lttng.org/?p=lttng-modules.git[LTTng-modules]
package. This section presents how to add LTTng instrumentation where it
does not currently exist and how to instrument custom kernel modules.
3170
3171 All LTTng instrumentation in the Linux kernel is based on an existing
3172 infrastructure which bears the name of its main macro, `TRACE_EVENT()`.
3173 This macro is used to define tracepoints,
3174 each tracepoint having a name, usually with the
3175 +__subsys__&#95;__name__+ format,
3176 +_subsys_+ being the subsystem name and
3177 +_name_+ the specific event name.
3178
Tracepoints defined with `TRACE_EVENT()` may be inserted anywhere in
the Linux kernel source code, after which callbacks, called _probes_,
may be registered to execute some action when a tracepoint is
hit. This mechanism is used directly by ftrace and perf,
but cannot be used as is by LTTng: an adaptation layer is added to
satisfy LTTng's specific needs.
3185
3186 With that in mind, this documentation does not cover the `TRACE_EVENT()`
3187 format and how to use it, but it is mandatory to understand it and use
it to instrument Linux for LTTng. A series of
LWN articles explains
`TRACE_EVENT()` in detail:
3191 http://lwn.net/Articles/379903/[part 1],
3192 http://lwn.net/Articles/381064/[part 2], and
3193 http://lwn.net/Articles/383362/[part 3].
3194 Once you master `TRACE_EVENT()` enough for your use case, continue
3195 reading this section so that you can add the LTTng adaptation layer of
3196 instrumentation.
3197
3198 This section first discusses the general method of instrumenting the
3199 Linux kernel for LTTng. This method is then reused for the specific
3200 case of instrumenting a kernel module.
3201
3202
3203 [[instrumenting-linux-kernel-itself]]
3204 ===== Instrumenting the Linux kernel for LTTng
3205
3206 The following subsections explain strictly how to add custom LTTng
3207 instrumentation to the Linux kernel. They do not explain how the
3208 macros actually work and the internal mechanics of the tracer.
3209
3210 You should have a Linux kernel source code tree to work with.
3211 Throughout this section, all file paths are relative to the root of
3212 this tree unless otherwise stated.
3213
3214 You need a copy of the LTTng-modules Git repository:
3215
3216 [role="term"]
3217 ----
3218 git clone git://git.lttng.org/lttng-modules.git
3219 ----
3220
The steps to add custom LTTng instrumentation to a Linux kernel
involve defining and using the mainline `TRACE_EVENT()` tracepoints
first, then writing and using the LTTng adaptation layer.
3224
3225
3226 [[mainline-trace-event]]
3227 ===== Defining/using tracepoints with mainline `TRACE_EVENT()` infrastructure
3228
3229 The first step is to define tracepoints using the mainline Linux
3230 `TRACE_EVENT()` macro and insert tracepoints where you want them.
3231 Your tracepoint definitions reside in a header file in
3232 dir:{include/trace/events}. If you're adding tracepoints to an existing
3233 subsystem, edit its appropriate header file.
3234
3235 As an example, the following header file (let's call it
3236 dir:{include/trace/events/hello.h}) defines one tracepoint using
3237 `TRACE_EVENT()`:
3238
3239 [source,c]
3240 ----
3241 /* subsystem name is "hello" */
3242 #undef TRACE_SYSTEM
3243 #define TRACE_SYSTEM hello
3244
3245 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3246 #define _TRACE_HELLO_H
3247
3248 #include <linux/tracepoint.h>
3249
3250 TRACE_EVENT(
3251 /* "hello" is the subsystem name, "world" is the event name */
3252 hello_world,
3253
3254 /* tracepoint function prototype */
3255 TP_PROTO(int foo, const char* bar),
3256
3257 /* arguments for this tracepoint */
3258 TP_ARGS(foo, bar),
3259
3260 /* LTTng doesn't need those */
3261 TP_STRUCT__entry(),
3262 TP_fast_assign(),
3263 TP_printk("", 0)
3264 );
3265
3266 #endif
3267
3268 /* this part must be outside protection */
3269 #include <trace/define_trace.h>
3270 ----
3271
3272 Notice that we don't use any of the last three arguments: they
3273 are left empty here because LTTng doesn't need them. You would only fill
3274 `TP_STRUCT__entry()`, `TP_fast_assign()` and `TP_printk()` if you were
3275 to also use this tracepoint for ftrace/perf.
3276
3277 Once this is done, you may place calls to `trace_hello_world()`
3278 wherever you want in the Linux source code. As an example, let us place
3279 such a tracepoint in the `usb_probe_device()` static function
3280 (path:{drivers/usb/core/driver.c}):
3281
3282 [source,c]
3283 ----
3284 /* called from driver core with dev locked */
3285 static int usb_probe_device(struct device *dev)
3286 {
3287 struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
3288 struct usb_device *udev = to_usb_device(dev);
3289 int error = 0;
3290
3291 trace_hello_world(udev->devnum, udev->product);
3292
3293 /* ... */
3294 }
3295 ----
3296
3297 This tracepoint should fire every time a USB device is plugged in.
3298
3299 At the top of path:{driver.c}, we need to include our actual tracepoint
3300 definition and, in this case (one place per subsystem), define
3301 `CREATE_TRACE_POINTS`, which creates our tracepoint:
3302
3303 [source,c]
3304 ----
3305 /* ... */
3306
3307 #include "usb.h"
3308
3309 #define CREATE_TRACE_POINTS
3310 #include <trace/events/hello.h>
3311
3312 /* ... */
3313 ----
3314
3315 Build your custom Linux kernel. In order to use LTTng, make sure the
3316 following kernel configuration options are enabled:
3317
3318 * `CONFIG_MODULES` (loadable module support)
3319 * `CONFIG_KALLSYMS` (load all symbols for debugging/kksymoops)
3320 * `CONFIG_HIGH_RES_TIMERS` (high resolution timer support)
3321 * `CONFIG_TRACEPOINTS` (kernel tracepoint instrumentation)
3322
3323 Boot the custom kernel. The directory
3324 dir:{/sys/kernel/debug/tracing/events/hello} should exist if everything
3325 went right, with a dir:{hello_world} subdirectory.
3326
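To double-check both points, you could run something like the following
(the kernel configuration file path is an assumption; your distribution
may store it elsewhere, for example in path:{/proc/config.gz}):

[role="term"]
----
grep -E 'CONFIG_(MODULES|KALLSYMS|HIGH_RES_TIMERS|TRACEPOINTS)=' /boot/config-$(uname -r)
sudo ls /sys/kernel/debug/tracing/events/hello
----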
3327
3328 [[lttng-adaptation-layer]]
3329 ===== Adding the LTTng adaptation layer
3330
3331 The steps to write the LTTng adaptation layer are, in your
3332 LTTng-modules copy's source code tree:
3333
3334 . In dir:{instrumentation/events/lttng-module},
3335 add a header +__subsys__.h+ for your custom
3336 subsystem +__subsys__+ and write your
3337 tracepoint definitions using LTTng-modules macros in it.
3338 Those macros look like the mainline kernel equivalents,
3339 but they present subtle, yet important differences.
3340 . In dir:{probes}, create the C source file of the LTTng probe kernel
3341 module for your subsystem. It should be named
3342 +lttng-probe-__subsys__.c+.
3343 . Edit path:{probes/Makefile} so that the LTTng-modules project
3344 builds your custom LTTng probe kernel module.
3345 . Build and install LTTng kernel modules.
3346
3347 Following our `hello_world` event example, here's the content of
3348 path:{instrumentation/events/lttng-module/hello.h}:
3349
3350 [source,c]
3351 ----
3352 #undef TRACE_SYSTEM
3353 #define TRACE_SYSTEM hello
3354
3355 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3356 #define _TRACE_HELLO_H
3357
3358 #include "../../../probes/lttng-tracepoint-event.h"
3359 #include <linux/tracepoint.h>
3360
3361 LTTNG_TRACEPOINT_EVENT(
3362 /* format identical to mainline version for those */
3363 hello_world,
3364 TP_PROTO(int foo, const char* bar),
3365 TP_ARGS(foo, bar),
3366
3367 /* possible differences */
3368 TP_STRUCT__entry(
3369 __field(int, my_int)
3370 __field(char, char0)
3371 __field(char, char1)
3372 __string(product, bar)
3373 ),
3374
3375 /* notice the use of tp_assign()/tp_strcpy() and no semicolons */
3376 TP_fast_assign(
3377 tp_assign(my_int, foo)
3378 tp_assign(char0, bar[0])
3379 tp_assign(char1, bar[1])
3380 tp_strcpy(product, bar)
3381 ),
3382
3383 /* This one is actually not used by LTTng either, but must be
3384 * present for the moment.
3385 */
3386 TP_printk("", 0)
3387
3388 /* no semicolon after this either */
3389 )
3390
3391 #endif
3392
3393 /* other difference: do NOT include <trace/define_trace.h> */
3394 #include "../../../probes/define_trace.h"
3395 ----
3396
3397 Some possible entries for `TP_STRUCT__entry()` and `TP_fast_assign()`,
3398 in the case of LTTng-modules, are shown in the
3399 <<lttng-modules-ref,LTTng-modules reference>> section.
3400
3401 The best way to learn how to use the above macros is to inspect
3402 existing LTTng tracepoint definitions in
3403 dir:{instrumentation/events/lttng-module} header files. Compare
3404 them with the Linux kernel mainline versions in
3405 dir:{include/trace/events}.
3406
3407 The next step is writing the LTTng probe kernel module C source file.
3408 This one is named +lttng-probe-__subsys__.c+
3409 in dir:{probes}. You may always use the following template:
3410
3411 [source,c]
3412 ----
3413 #include <linux/module.h>
3414 #include "../lttng-tracer.h"
3415
3416 /* Build time verification of mismatch between mainline TRACE_EVENT()
3417 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3418 */
3419 #include <trace/events/hello.h>
3420
3421 /* create LTTng tracepoint probes */
3422 #define LTTNG_PACKAGE_BUILD
3423 #define CREATE_TRACE_POINTS
3424 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
3425
3426 #include "../instrumentation/events/lttng-module/hello.h"
3427
3428 MODULE_LICENSE("GPL and additional rights");
3429 MODULE_AUTHOR("Your name <your-email>");
3430 MODULE_DESCRIPTION("LTTng hello probes");
3431 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
3432 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
3433 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
3434 LTTNG_MODULES_EXTRAVERSION);
3435 ----
3436
3437 Just replace `hello` with your subsystem name. In this example,
3438 `<trace/events/hello.h>`, which is the original mainline tracepoint
3439 definition header, is included for verification purposes: the
3440 LTTng-modules build system is able to emit an error at build time when
3441 the arguments of the mainline `TRACE_EVENT()` definitions do not match
3442 the ones of the LTTng-modules adaptation layer
3443 (`LTTNG_TRACEPOINT_EVENT()`).
3444
3445 Edit path:{probes/Makefile} and add your new kernel module object
3446 next to existing ones:
3447
3448 [source,make]
3449 ----
3450 # ...
3451
3452 obj-m += lttng-probe-module.o
3453 obj-m += lttng-probe-power.o
3454
3455 obj-m += lttng-probe-hello.o
3456
3457 # ...
3458 ----
3459
3460 Time to build! Point to your custom Linux kernel source tree using
3461 the `KERNELDIR` variable:
3462
3463 [role="term"]
3464 ----
3465 make KERNELDIR=/path/to/custom/linux
3466 ----
3467
3468 Finally, install modules:
3469
3470 [role="term"]
3471 ----
3472 sudo make modules_install
3473 ----
3474
3475
3476 [[instrumenting-linux-kernel-tracing]]
3477 ===== Tracing
3478
3479 The <<controlling-tracing,Controlling tracing>> section explains
3480 how to use the `lttng` tool to create and control tracing sessions.
3481 Although the `lttng` tool loads the appropriate _known_ LTTng kernel
3482 modules when needed (by launching `root`'s session daemon), it won't
3483 load your custom `lttng-probe-hello` module by default. You need to
3484 manually start an LTTng session daemon as `root` and use the
3485 `--extra-kmod-probes` option to append your custom probe module to the
3486 default list:
3487
3488 [role="term"]
3489 ----
3490 sudo pkill -u root lttng-sessiond
3491 sudo lttng-sessiond --extra-kmod-probes=hello
3492 ----
3493
3494 The first command makes sure any existing instance is killed. If
3495 you're not interested in using the default probes, or if you only
3496 want to use a few of them, you could use `--kmod-probes` instead,
3497 which specifies an absolute list:
3498
3499 [role="term"]
3500 ----
3501 sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched
3502 ----
3503
3504 Confirm the custom probe module is loaded:
3505
3506 [role="term"]
3507 ----
3508 lsmod | grep lttng_probe_hello
3509 ----
3510
3511 The `hello_world` event should appear in the list when doing
3512
3513 [role="term"]
3514 ----
3515 lttng list --kernel | grep hello
3516 ----
3517
3518 You may now create an LTTng tracing session, enable the `hello_world`
3519 kernel event (and others if you wish) and start tracing:
3520
3521 [role="term"]
3522 ----
3523 sudo lttng create my-session
3524 sudo lttng enable-event --kernel hello_world
3525 sudo lttng start
3526 ----
3527
3528 Plug a few USB devices, then stop tracing and inspect the trace (if
3529 http://diamon.org/babeltrace[Babeltrace]
3530 is installed):
3531
3532 [role="term"]
3533 ----
3534 sudo lttng stop
3535 sudo lttng view
3536 ----
3537
3538 Here's a sample output:
3539
3540 ----
3541 [15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3542 [15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
3543 [15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3544 ----
3545
3546 Two USB flash drives were used for this test.
3547
3548 You may change your LTTng custom probe, rebuild it and reload it at
3549 any time when not tracing. Make sure you remove the old module
3550 (either by killing the root LTTng session daemon which loaded the
3551 module in the first place, or by using `modprobe --remove` directly)
3552 before loading the updated one.
3553
3554
3555 [[instrumenting-out-of-tree-linux-kernel]]
3556 ===== Advanced: Instrumenting an out-of-tree Linux kernel module for LTTng
3557
3558 Instrumenting a custom Linux kernel module for LTTng follows the exact
3559 same steps as
3560 <<instrumenting-linux-kernel-itself,adding instrumentation
3561 to the Linux kernel itself>>,
3562 the only difference being that your mainline tracepoint definition
3563 header doesn't reside in the mainline source tree, but in your
3564 kernel module source tree.
3565
3566 The only reference to this mainline header is in the LTTng custom
3567 probe's source code (path:{probes/lttng-probe-hello.c} in our example),
3568 for build time verification:
3569
3570 [source,c]
3571 ----
3572 /* ... */
3573
3574 /* Build time verification of mismatch between mainline TRACE_EVENT()
3575 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3576 */
3577 #include <trace/events/hello.h>
3578
3579 /* ... */
3580 ----
3581
3582 The preferred, flexible way to include your module's mainline
3583 tracepoint definition header is to put it in a specific directory
3584 relative to your module's root (`tracepoints`, for example) and include it
3585 relative to your module's root directory in the LTTng custom probe's
3586 source:
3587
3588 [source,c]
3589 ----
3590 #include <tracepoints/hello.h>
3591 ----
3592
3593 You may then build LTTng-modules by adding your module's root
3594 directory as an include path to the extra C flags:
3595
3596 [role="term"]
3597 ----
3598 make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux
3599 ----
3600
3601 Using `ccflags-y` allows you to move your kernel module to another
3602 directory and rebuild the LTTng-modules project with no change to
3603 source files.
3604
3605
3606 [role="since-2.5"]
3607 [[proc-lttng-logger-abi]]
3608 ==== LTTng logger ABI
3609
3610 The `lttng-tracer` Linux kernel module, installed by the LTTng-modules
3611 package, creates a special LTTng logger ABI file path:{/proc/lttng-logger}
3612 when loaded. Writing text data to this file generates an LTTng kernel
3613 domain event named `lttng_logger`.
3614
3615 Unlike other kernel domain events, `lttng_logger` may be enabled by
3616 any user, not only root users or members of the tracing group.
3617
3618 To use the LTTng logger ABI, simply write a string to
3619 path:{/proc/lttng-logger}:
3620
3621 [role="term"]
3622 ----
3623 echo -n 'Hello, World!' > /proc/lttng-logger
3624 ----
3625
3626 The `msg` field of the `lttng_logger` event contains the recorded
3627 message.
3628
NOTE: Messages are split into chunks of 1024{nbsp}bytes.
3630
The LTTng logger ABI is a quick and easy way to trace some events from
user space through the kernel tracer. However, it is much more basic
than LTTng-UST: it is slower (each write involves a system call
round-trip to the kernel) and it only supports logging strings. The
LTTng logger ABI is particularly useful for recording logs as LTTng
traces from shell scripts, potentially combining them with other Linux
kernel/user space events.
3638
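For example, a shell script could record its own progress markers in
the trace. This is only a sketch: `my-backup-script.sh` stands for
whatever work you want to delimit, and the `lttng_logger` event is
assumed to be enabled in a running tracing session (see
<<controlling-tracing,Controlling tracing>>):

[role="term"]
----
echo -n 'backup: start' > /proc/lttng-logger
./my-backup-script.sh
echo -n 'backup: done' > /proc/lttng-logger
----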
3639
3640 [[instrumenting-32-bit-app-on-64-bit-system]]
3641 ==== Advanced: Instrumenting a 32-bit application on a 64-bit system
3642
3643 [[advanced-instrumenting-techniques]]In order to trace a 32-bit
3644 application running on a 64-bit system,
3645 LTTng must use a dedicated 32-bit
3646 <<lttng-consumerd,consumer daemon>>. This section discusses how to
3647 build that daemon (which is _not_ part of the default 64-bit LTTng
3648 build) and the LTTng 32-bit tracing libraries, and how to instrument
3649 a 32-bit application in that context.
3650
3651 Make sure you install all 32-bit versions of LTTng dependencies.
3652 Their names can be found in the `README.md` files of each LTTng package
3653 source. How to find and install them depends on your target's
3654 Linux distribution. `gcc-multilib` is a common package name for the
3655 multilib version of GCC, which you also need.
3656
3657 The following packages will be built for 32-bit support on a 64-bit
3658 system: http://urcu.so/[Userspace RCU],
3659 LTTng-UST and LTTng-tools.
3660
3661
3662 [[building-32-bit-userspace-rcu]]
3663 ===== Building 32-bit Userspace RCU
3664
3665 Follow this:
3666
3667 [role="term"]
3668 ----
3669 git clone git://git.urcu.so/urcu.git
3670 cd urcu
3671 ./bootstrap
3672 ./configure --libdir=/usr/lib32 CFLAGS=-m32
3673 make
3674 sudo make install
3675 sudo ldconfig
3676 ----
3677
3678 The `-m32` C compiler flag creates 32-bit object files and `--libdir`
3679 indicates where to install the resulting libraries.
3680
3681
3682 [[building-32-bit-lttng-ust]]
3683 ===== Building 32-bit LTTng-UST
3684
3685 Follow this:
3686
3687 [role="term"]
3688 ----
3689 git clone http://git.lttng.org/lttng-ust.git
3690 cd lttng-ust
3691 ./bootstrap
3692 ./configure --prefix=/usr \
3693 --libdir=/usr/lib32 \
3694 CFLAGS=-m32 CXXFLAGS=-m32 \
3695 LDFLAGS=-L/usr/lib32
3696 make
3697 sudo make install
3698 sudo ldconfig
3699 ----
3700
3701 `-L/usr/lib32` is required for the build to find the 32-bit versions
3702 of Userspace RCU and other dependencies.
3703
3704 [NOTE]
3705 ====
3706 Depending on your Linux distribution,
3707 32-bit libraries could be installed at a different location than
3708 dir:{/usr/lib32}. For example, Debian is known to install
3709 some 32-bit libraries in dir:{/usr/lib/i386-linux-gnu}.
3710
3711 In this case, make sure to set `LDFLAGS` to all the
3712 relevant 32-bit library paths, for example,
3713 `LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"`.
3714 ====
3715
3716 NOTE: You may add options to path:{./configure} if you need them, e.g., for
3717 Java and SystemTap support. Look at `./configure --help` for more
3718 information.
3719
3720
3721 [[building-32-bit-lttng-tools]]
3722 ===== Building 32-bit LTTng-tools
3723
3724 Since the host is a 64-bit system, most 32-bit binaries and libraries of
3725 LTTng-tools are not needed; the host uses their 64-bit counterparts.
3726 The required step here is building and installing a 32-bit consumer
3727 daemon.
3728
3729 Follow this:
3730
3731 [role="term"]
3732 ----
3733 git clone http://git.lttng.org/lttng-tools.git
cd lttng-tools
3735 ./bootstrap
3736 ./configure --prefix=/usr \
3737 --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3738 LDFLAGS=-L/usr/lib32
3739 make
3740 cd src/bin/lttng-consumerd
3741 sudo make install
3742 sudo ldconfig
3743 ----
3744
The above commands build the whole LTTng-tools project as 32-bit
applications, but only install the 32-bit consumer daemon.
3747
3748
3749 [[building-64-bit-lttng-tools]]
3750 ===== Building 64-bit LTTng-tools
3751
3752 Finally, you need to build a 64-bit version of LTTng-tools which is
3753 aware of the 32-bit consumer daemon previously built and installed:
3754
3755 [role="term"]
3756 ----
3757 make clean
3758 ./bootstrap
3759 ./configure --prefix=/usr \
3760 --with-consumerd32-libdir=/usr/lib32 \
3761 --with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
3762 make
3763 sudo make install
3764 sudo ldconfig
3765 ----
3766
3767 Henceforth, the 64-bit session daemon automatically finds the
3768 32-bit consumer daemon if required.
3769
3770
3771 [[building-instrumented-32-bit-c-application]]
3772 ===== Building an instrumented 32-bit C application
3773
3774 Let us reuse the _Hello world_ example of
3775 <<tracing-your-own-user-application,Tracing your own user application>>
3776 (<<getting-started,Getting started>> chapter).
3777
3778 The instrumentation process is unaltered.
3779
3780 First, a typical 64-bit build (assuming you're running a 64-bit system):
3781
3782 [role="term"]
3783 ----
3784 gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust
3785 ----
3786
3787 Now, a 32-bit build:
3788
3789 [role="term"]
3790 ----
3791 gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
3792 -ldl -llttng-ust -Wl,-rpath,/usr/lib32
3793 ----
3794
3795 The `-rpath` option, passed to the linker, makes the dynamic loader
3796 check for libraries in dir:{/usr/lib32} before looking in its default paths,
3797 where it should find the 32-bit version of `liblttng-ust`.
3798
3799
3800 [[running-32-bit-and-64-bit-c-applications]]
3801 ===== Running 32-bit and 64-bit versions of an instrumented C application
3802
3803 Now, both 32-bit and 64-bit versions of the _Hello world_ example above
3804 can be traced in the same tracing session. Use the `lttng` tool as usual
3805 to create a tracing session and start tracing:
3806
3807 [role="term"]
3808 ----
3809 lttng create session-3264
3810 lttng enable-event -u -a
3811 ./hello32
3812 ./hello64
3813 lttng stop
3814 ----
3815
3816 Use `lttng view` to verify both processes were
3817 successfully traced.
3818
3819
3820 [[controlling-tracing]]
3821 === Controlling tracing
3822
Once you have software that is properly
<<instrumenting,instrumented>> for LTTng tracing, be it thanks to
the built-in LTTng probes for the Linux kernel, a custom user
application or a custom Linux kernel, all that is left is actually
tracing it. As a user, you control LTTng tracing using a single command
line interface: the `lttng` tool. This tool uses `liblttng-ctl` behind
the scenes to connect to and communicate with session daemons. LTTng
3830 session daemons may either be started manually (`lttng-sessiond`) or
3831 automatically by the `lttng` command when needed. Trace data may
3832 be forwarded to the network and used elsewhere using an LTTng relay
3833 daemon (`lttng-relayd`).
3834
The manpages of `lttng`, `lttng-sessiond` and `lttng-relayd` are quite
complete, so this section is not an online copy of them (we
leave that content to the
<<online-lttng-manpages,Online LTTng manpages>> section).
3839 This section is rather a tour of LTTng
3840 features through practical examples and tips.
3841
3842 If not already done, make sure you understand the core concepts
3843 and how LTTng components connect together by reading the
3844 <<understanding-lttng,Understanding LTTng>> chapter; this section
3845 assumes you are familiar with them.
3846
3847
3848 [[creating-destroying-tracing-sessions]]
3849 ==== Creating and destroying tracing sessions
3850
Whatever you want to do with `lttng`, it has to happen inside a
**tracing session**, created beforehand. A session, in general, is a
per-user container of state. A tracing session is no different; it
keeps track of state such as:
3855
3856 * session name
3857 * enabled/disabled channels with associated parameters
3858 * enabled/disabled events with associated log levels and filters
3859 * context information added to channels
3860 * tracing activity (started or stopped)
3861
3862 and more.
3863
A single user may have many active tracing sessions. LTTng session
daemons are the ultimate owners and managers of tracing sessions. For
user space tracing, each user has their own session daemon. Since Linux
kernel tracing requires root privileges, only `root`'s session daemon
may enable and trace kernel events. However, `lttng` has a `--group`
option (which is passed to `lttng-sessiond` when starting it) to
specify the name of a _tracing group_; users who are members of this
group are allowed to communicate with `root`'s session daemon. By
default, the tracing group name is `tracing`.
3873
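For example, `root`'s session daemon could be started with a custom
tracing group (the group name `mytracers` is hypothetical and must
exist on the system):

[role="term"]
----
sudo lttng-sessiond --daemonize --group mytracers
----

Users who are members of `mytracers` may then use the `lttng` tool to
control kernel tracing without `sudo`.
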
3874 To create a tracing session, do:
3875
3876 [role="term"]
3877 ----
3878 lttng create my-session
3879 ----
3880
This creates a new tracing session named `my-session` and makes it
the current one. If you don't specify a name (running only
3883 `lttng create`), your tracing session is named `auto` followed by the
3884 current date and time. Traces
3885 are written in +\~/lttng-traces/__session__-+ followed
3886 by the tracing session's creation date/time by default, where
3887 +__session__+ is the tracing session name. To save them
3888 at a different location, use the `--output` option:
3889
3890 [role="term"]
3891 ----
3892 lttng create --output /tmp/some-directory my-session
3893 ----
3894
3895 You may create as many tracing sessions as you wish:
3896
3897 [role="term"]
3898 ----
3899 lttng create other-session
3900 lttng create yet-another-session
3901 ----
3902
3903 You may view all existing tracing sessions using the `list` command:
3904
3905 [role="term"]
3906 ----
3907 lttng list
3908 ----
3909
3910 The state of a _current tracing session_ is kept in path:{~/.lttngrc}. Each
3911 invocation of `lttng` reads this file to set its current tracing
3912 session name so that you don't have to specify a session name for each
3913 command. You could edit this file manually, but the preferred way to
3914 set the current tracing session is to use the `set-session` command:
3915
3916 [role="term"]
3917 ----
3918 lttng set-session other-session
3919 ----
3920
3921 Most `lttng` commands accept a `--session` option to specify the name
3922 of the target tracing session.
3923
3924 Any existing tracing session may be destroyed using the `destroy`
3925 command:
3926
3927 [role="term"]
3928 ----
3929 lttng destroy my-session
3930 ----
3931
Providing no argument to `lttng destroy` destroys the current
tracing session. Destroying a tracing session stops any tracing
running within it, and frees resources acquired by the session daemon
and tracer side, making sure to flush all trace data.
3937
3938 You can't do much with LTTng using only the `create`, `set-session`
3939 and `destroy` commands of `lttng`, but it is essential to know them in
order to control LTTng tracing, which always happens within the scope of
3941 a tracing session.
3942
3943
3944 [[enabling-disabling-events]]
3945 ==== Enabling and disabling events
3946
3947 Inside a tracing session, individual events may be enabled or disabled
3948 so that tracing them may or may not generate trace data.
3949
3950 We sometimes use the term _event_ metonymically throughout this text to
3951 refer to a specific condition, or _rule_, that could lead, when
3952 satisfied, to an actual occurring event (a point at a specific position
3953 in source code/binary program, logical processor and time capturing
3954 some payload) being recorded as trace data. This specific condition is
3955 composed of:
3956
3957 . A **domain** (kernel, user space, `java.util.logging`, or log4j)
3958 (required).
3959 . One or many **instrumentation points** in source code or binary
3960 program (tracepoint name, address, symbol name, function name,
3961 logger name, amongst other types of probes) to be executed (required).
3962 . A **log level** (each instrumentation point declares its own log
3963 level) or log level range to match (optional; only valid for user
3964 space domain).
3965 . A **custom user expression**, or **filter**, that must evaluate to
3966 _true_ when a tracepoint is executed (optional; only valid for user
3967 space domain).
3968
3969 All conditions are specified using arguments passed to the
3970 `enable-event` command of the `lttng` tool.
3971
3972 Condition 1 is specified using either `--kernel`/`-k` (kernel),
3973 `--userspace`/`-u` (user space), `--jul`/`-j`
3974 (JUL), or `--log4j`/`-l` (log4j).
3975 Exactly one of those four arguments must be specified.
3976
3977 Condition 2 is specified using one of:
3978
3979 `--tracepoint`::
3980 Tracepoint.
3981
3982 `--probe`::
3983 Dynamic probe (address, symbol name or combination
3984 of both in binary program; only valid for kernel domain).
3985
3986 `--function`::
Function entry/exit (address, symbol name or
3988 combination of both in binary program; only valid for kernel domain).
3989
3990 `--syscall`::
3991 System call entry/exit (only valid for kernel domain).
3992
3993 When none of the above is specified, `enable-event` defaults to
3994 using `--tracepoint`.
3995
3996 Condition 3 is specified using one of:
3997
3998 `--loglevel`::
3999 Log level range from the specified level to the most severe
4000 level.
4001
4002 `--loglevel-only`::
4003 Specific log level.
4004
4005 See `lttng enable-event --help` for the complete list of log level
4006 names.
4007
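For example, assuming two user space tracepoints (`my_app:foo` and
`my_app:bar` are placeholders), the first command below matches the
`TRACE_INFO` level and all more severe levels, while the second matches
the `TRACE_INFO` level only:

[role="term"]
----
lttng enable-event --userspace my_app:foo --loglevel TRACE_INFO
lttng enable-event --userspace my_app:bar --loglevel-only TRACE_INFO
----
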
4008 Condition 4 is specified using the `--filter` option. This filter is
4009 a C-like expression, potentially reading real-time values of event
4010 fields, that has to evaluate to _true_ for the condition to be satisfied.
4011 Event fields are read using plain identifiers while context fields
4012 must be prefixed with `$ctx.`. See `lttng enable-event --help` for
4013 all usage details.
4014
4015 The aforementioned arguments are combined to create and enable events.
4016 Each unique combination of arguments leads to a different
4017 _enabled event_. The log level and filter arguments are optional, their
4018 default values being respectively all log levels and a filter which
4019 always returns _true_.
4020
4021 Here are a few examples (you must
4022 <<creating-destroying-tracing-sessions,create a tracing session>>
4023 first):
4024
4025 [role="term"]
4026 ----
4027 lttng enable-event -u --tracepoint my_app:hello_world
4028 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
4029 lttng enable-event -u --tracepoint 'my_other_app:*'
4030 lttng enable-event -u --tracepoint my_app:foo_bar \
4031 --filter 'some_field <= 23 && !other_field'
4032 lttng enable-event -k --tracepoint sched_switch
4033 lttng enable-event -k --tracepoint gpio_value
4034 lttng enable-event -k --function usb_probe_device usb_probe_device
4035 lttng enable-event -k --syscall --all
4036 ----
4037
4038 The wildcard symbol, `*`, matches _anything_ and may only be used at
4039 the end of the string when specifying a _tracepoint_. Make sure to
4040 use it between single quotes in your favorite shell to avoid
4041 undesired shell expansion.
4042
4043 System call events can be enabled individually, too:
4044
4045 [role="term"]
4046 ----
4047 lttng enable-event -k --syscall open
4048 lttng enable-event -k --syscall read
4049 lttng enable-event -k --syscall fork,chdir,pipe
4050 ----
4051
4052 The complete list of available system call events can be
4053 obtained using
4054
4055 [role="term"]
4056 ----
4057 lttng list --kernel --syscall
4058 ----
4059
4060 You can see a list of events (enabled or disabled) using
4061
4062 [role="term"]
4063 ----
4064 lttng list some-session
4065 ----
4066
4067 where `some-session` is the name of the desired tracing session.
4068
4069 What you're actually doing when enabling events with specific conditions
4070 is creating a **whitelist** of traceable events for a given channel.
4071 Thus, the following case presents redundancy:
4072
4073 [role="term"]
4074 ----
4075 lttng enable-event -u --tracepoint my_app:hello_you
4076 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG
4077 ----
4078
4079 The second command, matching a log level range, is useless since the first
4080 command enables all tracepoints matching the same name,
4081 `my_app:hello_you`.
4082
4083 Disabling an event is simpler: you only need to provide the event
4084 name to the `disable-event` command:
4085
4086 [role="term"]
4087 ----
4088 lttng disable-event --userspace my_app:hello_you
4089 ----
4090
4091 This name has to match a name previously given to `enable-event` (it
4092 has to be listed in the output of `lttng list some-session`).
4093 The `*` wildcard is supported, as long as you also used it in a
4094 previous `enable-event` invocation.
4095
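For example (where `my_app` is a placeholder tracepoint provider name):

[role="term"]
----
lttng enable-event --userspace 'my_app:*'
lttng disable-event --userspace 'my_app:*'
----
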
4096 Disabling an event does not add it to some blacklist: it simply removes
4097 it from its channel's whitelist. This is why you cannot disable an event
4098 which wasn't previously enabled.
4099
4100 A disabled event doesn't generate any trace data, even if all its
4101 specified conditions are met.
4102
Events may be enabled and disabled at will, whether LTTng tracers
are active or not. Events may be enabled before a user space application
is even started.
4106
4107
4108 [[basic-tracing-session-control]]
4109 ==== Basic tracing session control
4110
4111 Once you have
4112 <<creating-destroying-tracing-sessions,created a tracing session>>
4113 and <<enabling-disabling-events,enabled one or more events>>,
4114 you may activate the LTTng tracers for the current tracing session at
4115 any time:
4116
4117 [role="term"]
4118 ----
4119 lttng start
4120 ----
4121
4122 Subsequently, you may stop the tracers:
4123
4124 [role="term"]
4125 ----
4126 lttng stop
4127 ----
4128
4129 LTTng is very flexible: user space applications may be launched before
4130 or after the tracers are started. Events are only recorded if they
4131 are properly enabled and if they occur while tracers are active.
4132
4133 A tracing session name may be passed to both the `start` and `stop`
4134 commands to start/stop tracing a session other than the current one.
4135
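For example, to trace the `other-session` tracing session created
earlier without making it the current one:

[role="term"]
----
lttng start other-session
lttng stop other-session
----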
4136
4137 [[enabling-disabling-channels]]
4138 ==== Enabling and disabling channels
4139
4140 <<event,As mentioned>> in the
4141 <<understanding-lttng,Understanding LTTng>> chapter, enabled
4142 events are contained in a specific channel, itself contained in a
4143 specific tracing session. A channel is a group of events with
4144 tunable parameters (event loss mode, sub-buffer size, number of
4145 sub-buffers, trace file sizes and count, to name a few). A given channel
4146 may only be responsible for enabled events belonging to one domain:
4147 either kernel or user space.
4148
4149 If you only used the `create`, `enable-event` and `start`/`stop`
4150 commands of the `lttng` tool so far, one or two channels were
4151 automatically created for you (one for the kernel domain and/or one
4152 for the user space domain). The default channels are both named
4153 `channel0`; channels from different domains may have the same name.
4154
4155 The current channels of a given tracing session can be viewed with
4156
4157 [role="term"]
4158 ----
4159 lttng list some-session
4160 ----
4161
4162 where `some-session` is the name of the desired tracing session.
4163
4164 To create and enable a channel, use the `enable-channel` command:
4165
4166 [role="term"]
4167 ----
4168 lttng enable-channel --kernel my-channel
4169 ----
4170
4171 This creates a kernel domain channel named `my-channel` with
4172 default parameters in the current tracing session.
4173
4174 [NOTE]
4175 ====
4176 Because of a current limitation, all
4177 channels must be _created_ prior to beginning tracing in a
4178 given tracing session, that is before the first time you do
4179 `lttng start`.
4180
4181 Since a channel is automatically created by
4182 `enable-event` only for the specified domain, you cannot,
4183 for example, enable a kernel domain event, start tracing and then
4184 enable a user space domain event because no user space channel
4185 exists yet and it's too late to create one.
4186
4187 For this reason, make sure to configure your channels properly
4188 before starting the tracers for the first time!
4189 ====
4190
4191 Here's another example:
4192
4193 [role="term"]
4194 ----
4195 lttng enable-channel --userspace --session other-session --overwrite \
4196 --tracefile-size 1048576 1mib-channel
4197 ----
4198
This creates a user space domain channel named `1mib-channel` in
the tracing session named `other-session` which, when its sub-buffers
are full, makes room for new events by overwriting previously recorded
ones (instead of the default mode, which discards the newer events),
and which saves trace files with a maximum size of 1{nbsp}MiB each.
4204
4205 Note that channels may also be created using the `--channel` option of
4206 the `enable-event` command when the provided channel name doesn't exist
4207 for the specified domain:
4208
4209 [role="term"]
4210 ----
4211 lttng enable-event --kernel --channel some-channel sched_switch
4212 ----
4213
4214 If no kernel domain channel named `some-channel` existed before calling
4215 the above command, it would be created with default parameters.
4216
4217 You may enable the same event in two different channels:
4218
4219 [role="term"]
4220 ----
4221 lttng enable-event --userspace --channel my-channel app:tp
4222 lttng enable-event --userspace --channel other-channel app:tp
4223 ----
4224
4225 If both channels are enabled, the occurring `app:tp` event
4226 generates two recorded events, one for each channel.
4227
Disabling a channel is done with the `disable-channel` command:
4229
4230 [role="term"]
4231 ----
lttng disable-channel --kernel some-channel
4233 ----
4234
The state of a channel takes precedence over the individual states of
events within it: events belonging to a disabled channel, even if they
are enabled, won't be recorded.
4238
4239
4240
4241 [[fine-tuning-channels]]
4242 ===== Fine-tuning channels
4243
4244 There are various parameters that may be fine-tuned with the
4245 `enable-channel` command. The latter are well documented in
4246 man:lttng(1) and in the <<channel,Channel>> section of the
4247 <<understanding-lttng,Understanding LTTng>> chapter. For basic
4248 tracing needs, their default values should be just fine, but here are a
4249 few examples to break the ice.
4250
As the frequency of recorded events increases&#8212;either because the
event throughput is actually higher or because you enabled more events
than usual&#8212;__event loss__ might be experienced. Since LTTng never
4254 waits, by design, for sub-buffer space availability (non-blocking
4255 tracer), when a sub-buffer is full and no empty sub-buffers are left,
4256 there are two possible outcomes: either the new events that do not fit
4257 are rejected, or they start replacing the oldest recorded events.
4258 The choice of which algorithm to use is a per-channel parameter, the
4259 default being discarding the newest events until there is some space
4260 left. If your situation always needs the latest events at the expense
4261 of writing over the oldest ones, create a channel with the `--overwrite`
4262 option:
4263
4264 [role="term"]
4265 ----
4266 lttng enable-channel --kernel --overwrite my-channel
4267 ----
4268
4269 When an event is lost, it means no space was available in any
4270 sub-buffer to accommodate it. Thus, if you want to cope with sporadic
4271 high event throughput situations and avoid losing events, you need to
4272 allocate more room for storing them in memory. This can be done by
4273 either increasing the size of sub-buffers or by adding sub-buffers.
4274 The following example creates a user space domain channel with
4275 16{nbsp}sub-buffers of 512{nbsp}kiB each:
4276
4277 [role="term"]
4278 ----
4279 lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel
4280 ----
4281
Both values need to be powers of two; otherwise, they are rounded up
to the next power of two.
4284
Two other interesting parameters of `enable-channel` are
`--tracefile-size` and `--tracefile-count`, which respectively limit
the size of each trace file and their count for a given channel.
4288 When the number of written trace files reaches its limit for a given
4289 channel-CPU pair, the next trace file overwrites the very first
4290 one. The following example creates a kernel domain channel with a
4291 maximum of three trace files of 1{nbsp}MiB each:
4292
4293 [role="term"]
4294 ----
4295 lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel
4296 ----
4297
4298 An efficient way to make sure lots of events are generated is enabling
4299 all kernel events in this channel and starting the tracer:
4300
4301 [role="term"]
4302 ----
4303 lttng enable-event --kernel --all --channel my-channel
4304 lttng start
4305 ----
4306
4307 After a few seconds, look at trace files in your tracing session
4308 output directory. For two CPUs, it should look like:
4309
4310 ----
4311 my-channel_0_0 my-channel_1_0
4312 my-channel_0_1 my-channel_1_1
4313 my-channel_0_2 my-channel_1_2
4314 ----
4315
4316 Amongst the files above, you might see one in each group with a size
4317 lower than 1{nbsp}MiB: they are the files currently being written.
4318
4319 Since all those small files are valid LTTng trace files, LTTng trace
4320 viewers may read them. It is the viewer's responsibility to properly
4321 merge the streams so as to present an ordered list to the user.
4322 http://diamon.org/babeltrace[Babeltrace]
4323 merges LTTng trace files correctly and is fast at doing it.
4324
4325
4326 [[adding-context]]
4327 ==== Adding some context to channels
4328
4329 If you read all the sections of
4330 <<controlling-tracing,Controlling tracing>> so far, you should be
4331 able to create tracing sessions, create and enable channels and events
4332 within them and start/stop the LTTng tracers. Event fields recorded in
4333 trace files provide important information about occurring events, but
4334 sometimes external context may help you solve a problem faster. This
4335 section discusses how to add context information to events of a
4336 specific channel using the `lttng` tool.
4337
4338 There are various available context values which can accompany events
4339 recorded by LTTng, for example:
4340
4341 * **process information**:
4342 ** identifier (PID)
4343 ** name
4344 ** priority
4345 ** scheduling priority (niceness)
4346 ** thread identifier (TID)
4347 * the **hostname** of the system on which the event occurred
4348 * plenty of **performance counters** using perf, for example:
4349 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types
4350 ** cache misses
4351 ** branch instructions, misses, loads
4352 ** CPU faults
4353
4354 The full list is available in the output of `lttng add-context --help`.
4355 Some of them are reserved for a specific domain (kernel or
4356 user space) while others are available for both.
4357
4358 To add context information to one or all channels of a given tracing
4359 session, use the `add-context` command:
4360
4361 [role="term"]
4362 ----
4363 lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles
4364 ----
4365
4366 The above example adds the virtual process identifier and per-thread
4367 CPU cycles count values to all recorded user space domain events of the
4368 current tracing session. Use the `--channel` option to select a specific
4369 channel:
4370
4371 [role="term"]
4372 ----
4373 lttng add-context --kernel --channel my-channel --type tid
4374 ----
4375
4376 adds the thread identifier value to all recorded kernel domain events
4377 in the channel `my-channel` of the current tracing session.
4378
4379 Beware that context information cannot be removed from channels once
4380 it's added for a given tracing session.
4381
4382
4383 [role="since-2.5"]
4384 [[saving-loading-tracing-session]]
4385 ==== Saving and loading tracing session configurations
4386
Configuring a tracing session can be lengthy: creating and enabling
4388 channels with specific parameters, enabling kernel and user space
4389 domain events with specific log levels and filters, and adding context
4390 to some channels are just a few of the many possible operations using
4391 the `lttng` command line tool. If you're going to use LTTng to solve real
4392 world problems, chances are you're going to have to record events using
4393 the same tracing session setup over and over, modifying a few variables
4394 each time in your instrumented program or environment. To avoid
4395 constant tracing session reconfiguration, the `lttng` tool is able to
4396 save and load tracing session configurations to/from XML files.
4397
4398 To save a given tracing session configuration, do:
4399
4400 [role="term"]
4401 ----
4402 lttng save my-session
4403 ----
4404
4405 where `my-session` is the name of the tracing session to save. Tracing
4406 session configurations are saved to dir:{~/.lttng/sessions} by default;
4407 use the `--output-path` option to change this destination directory.
4408
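For example, to save the configuration of `my-session` to a custom
directory (the path below is arbitrary):

[role="term"]
----
lttng save --output-path /path/to/sessions my-session
----
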
4409 All configuration parameters are saved:
4410
4411 * tracing session name
4412 * trace data output path
4413 * channels with their state and all their parameters
4414 * context information added to channels
4415 * events with their state, log level and filter
4416 * tracing activity (started or stopped)
4417
4418 To load a tracing session, simply do:
4419
4420 [role="term"]
4421 ----
4422 lttng load my-session
4423 ----
4424
4425 or, if you used a custom path:
4426
4427 [role="term"]
4428 ----
4429 lttng load --input-path /path/to/my-session.lttng
4430 ----
4431
4432 Your saved tracing session is restored as if you just configured
4433 it manually.
4434
4435
4436 [[sending-trace-data-over-the-network]]
4437 ==== Sending trace data over the network
4438
Sending trace data over the network is a built-in
feature of LTTng-tools. For this to be possible, an LTTng
4441 _relay daemon_ must be executed and listening on the machine where
4442 trace data is to be received, and the user must create a tracing
4443 session using appropriate options to forward trace data to the remote
4444 relay daemon.
4445
4446 The relay daemon listens on two different TCP ports: one for control
4447 information and the other for actual trace data.
4448
4449 Starting the relay daemon on the remote machine is easy:
4450
4451 [role="term"]
4452 ----
4453 lttng-relayd
4454 ----
4455
This makes it listen on its default ports: 5342 for control and
4457 5343 for trace data. The `--control-port` and `--data-port` options may
4458 be used to specify different ports.
4459
4460 Traces written by `lttng-relayd` are written to
4461 +\~/lttng-traces/__hostname__/__session__+ by
4462 default, where +__hostname__+ is the host name of the
4463 traced (monitored) system and +__session__+ is the
4464 tracing session name. Use the `--output` option to write trace data
4465 outside dir:{~/lttng-traces}.
4466
4467 On the sending side, a tracing session must be created using the
4468 `lttng` tool with the `--set-url` option to connect to the distant
4469 relay daemon:
4470
4471 [role="term"]
4472 ----
4473 lttng create my-session --set-url net://distant-host
4474 ----
4475
4476 The URL format is described in the output of `lttng create --help`.
4477 The above example uses the default ports; the `--ctrl-url` and
4478 `--data-url` options may be used to set the control and data URLs
4479 individually.
4480
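For example, here is a possible setup using non-default ports (the host
name and port numbers are arbitrary). On the remote machine:

[role="term"]
----
lttng-relayd --control-port tcp://0.0.0.0:5400 \
             --data-port tcp://0.0.0.0:5401
----

and, on the traced system:

[role="term"]
----
lttng create my-session --ctrl-url tcp://distant-host:5400 \
             --data-url tcp://distant-host:5401
----
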
4481 Once this basic setup is completed and the connection is established,
4482 you may use the `lttng` tool on the target machine as usual; everything
4483 you do is transparently forwarded to the remote machine if needed.
4484 For example, a parameter changing the maximum size of trace files
4485 only has an effect on the distant relay daemon actually writing
4486 the trace.
4487
4488
4489 [role="since-2.4"]
4490 [[lttng-live]]
4491 ==== Viewing events as they arrive
4492
4493 We have seen how trace files may be produced by LTTng out of generated
4494 application and Linux kernel events. We have seen that those trace files
4495 may be either recorded locally by consumer daemons or remotely using
4496 a relay daemon. And we have seen that the maximum size and count of
4497 trace files is configurable for each channel. With all those features,
4498 it's still not possible to read a trace file as it is being written
4499 because it could be incomplete and appear corrupted to the viewer.
4500 There is a way to view events as they arrive, however: using
4501 _LTTng live_.
4502
4503 LTTng live is implemented, in LTTng, solely on the relay daemon side.
4504 As trace data is sent over the network to a relay daemon by a (possibly
4505 remote) consumer daemon, a _tee_ is created: trace data is recorded to
4506 trace files _as well as_ being transmitted to a connected live viewer:
4507
4508 [role="img-90"]
4509 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a live viewer.
4510 image::lttng-live.png[]
4511
In order to use this feature, a tracing session must be created in live
mode on the target system:
4514
4515 [role="term"]
4516 ----
4517 lttng create --live
4518 ----
4519
4520 An optional parameter may be passed to `--live` to set the period
4521 (in microseconds) between flushes to the network
4522 (1{nbsp}second is the default). With:
4523
4524 [role="term"]
4525 ----
4526 lttng create --live 100000
4527 ----
4528
4529 the daemons flush their data every 100{nbsp}ms.
4530
4531 If no network output is specified to the `create` command, a local
4532 relay daemon is spawned. In this very common case, viewing a live
4533 trace is easy: enable events and start tracing as usual, then use
4534 `lttng view` to start the default live viewer:
4535
4536 [role="term"]
4537 ----
4538 lttng view
4539 ----
4540
4541 The correct arguments are passed to the live viewer so that it
4542 may connect to the local relay daemon and start reading live events.
4543
4544 You may also wish to use a live viewer not running on the target
4545 system. In this case, you should specify a network output when using
4546 the `create` command (`--set-url` or `--ctrl-url`/`--data-url` options).
4547 A distant LTTng relay daemon should also be started to receive control
4548 and trace data. By default, `lttng-relayd` listens on 127.0.0.1:5344
for an LTTng live connection. A different listening URL may be
specified using its `--live-port` option.
4551
4552 The
4553 http://diamon.org/babeltrace[`babeltrace`]
4554 viewer supports LTTng live as one of its input formats. `babeltrace` is
4555 the default viewer when using `lttng view`. To use it manually, first
4556 list active tracing sessions by doing the following (assuming the relay
4557 daemon to connect to runs on the same host):
4558
4559 [role="term"]
4560 ----
4561 babeltrace --input-format lttng-live net://localhost
4562 ----
4563
4564 Then, choose a tracing session and start viewing events as they arrive
4565 using LTTng live:
4566
4567 [role="term"]
4568 ----
4569 babeltrace --input-format lttng-live net://localhost/host/hostname/my-session
4570 ----
4571
4572
4573 [role="since-2.3"]
4574 [[taking-a-snapshot]]
4575 ==== Taking a snapshot
4576
4577 The normal behavior of LTTng is to record trace data as trace files.
4578 This is ideal for keeping a long history of events that occurred on
4579 the target system and applications, but may be too much data in some
4580 situations. For example, you may wish to trace your application
4581 continuously until some critical situation happens, in which case you
4582 would only need the latest few recorded events to perform the desired
4583 analysis, not multi-gigabyte trace files.
4584
4585 LTTng has an interesting feature called _snapshots_. When creating
4586 a tracing session in snapshot mode, no trace files are written; the
4587 tracers' sub-buffers are constantly overwriting the oldest recorded
4588 events with the newest. At any time, either when the tracers are started
4589 or stopped, you may take a snapshot of those sub-buffers.
4590
4591 There is no difference between the format of a normal trace file and the
4592 format of a snapshot: viewers of LTTng traces also support LTTng
4593 snapshots. By default, snapshots are written to disk, but they may also
4594 be sent over the network.
4595
4596 To create a tracing session in snapshot mode, do:
4597
4598 [role="term"]
4599 ----
4600 lttng create --snapshot my-snapshot-session
4601 ----
4602
4603 Next, enable channels, events and add context to channels as usual.
4604 Once a tracing session is created in snapshot mode, channels are
4605 forced to use the
4606 <<channel-overwrite-mode-vs-discard-mode,overwrite>> mode
4607 (`--overwrite` option of the `enable-channel` command; also called
4608 _flight recorder mode_) and have an `mmap()` channel type
4609 (`--output mmap`).
4610
4611 Start tracing. When you're ready to take a snapshot, do:
4612
4613 [role="term"]
4614 ----
4615 lttng snapshot record --name my-snapshot
4616 ----
4617
4618 This records a snapshot named `my-snapshot` of all channels of
all domains of the current tracing session. By default, snapshot files
4620 are recorded in the path returned by `lttng snapshot list-output`. You
4621 may change this path or decide to send snapshots over the network
4622 using either:
4623
4624 . an output path/URL specified when creating the tracing session
4625 (`lttng create`)
4626 . an added snapshot output path/URL using
4627 `lttng snapshot add-output`
4628 . an output path/URL provided directly to the
4629 `lttng snapshot record` command
4630
4631 Method 3 overrides method 2 which overrides method 1. When specifying
4632 a URL, a relay daemon must be listening on some machine (see
4633 <<sending-trace-data-over-the-network,Sending trace data over the network>>).
4634
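For example, to add a remote snapshot output once (method 2) and then
record snapshots to it (a relay daemon is assumed to be running on
`distant-host`):

[role="term"]
----
lttng snapshot add-output net://distant-host
lttng snapshot record --name my-snapshot
----
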
4635 If you need to make absolutely sure that the output file won't be
4636 larger than a certain limit, you can set a maximum snapshot size when
4637 taking it with the `--max-size` option:
4638
4639 [role="term"]
4640 ----
4641 lttng snapshot record --name my-snapshot --max-size 2M
4642 ----
4643
4644 Older recorded events are discarded in order to respect this
4645 maximum size.
4646
4647
4648 [role="since-2.6"]
4649 [[mi]]
4650 ==== Machine interface
4651
The `lttng` tool aims at providing command output that is as
human-readable as possible. While this output is easy for a human
being to read, it is hard for a machine to parse.
4655
4656 This is why the `lttng` tool provides the general `--mi` option, which
4657 must specify a machine interface output format. As of the latest
4658 LTTng stable release, only the `xml` format is supported. A schema
4659 definition (XSD) is made
4660 https://github.com/lttng/lttng-tools/blob/master/src/common/mi_lttng.xsd[available]
4661 to ease the integration with external tools as much as possible.
4662
4663 The `--mi` option can be used in conjunction with all `lttng` commands.
4664 Here are some examples:
4665
4666 [role="term"]
4667 ----
4668 lttng --mi xml create some-session
4669 lttng --mi xml list some-session
4670 lttng --mi xml list --kernel
4671 lttng --mi xml enable-event --kernel --syscall open
4672 lttng --mi xml start
4673 ----
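
For example, assuming `xmllint` is installed and the XSD above has been
downloaded as path:{mi_lttng.xsd} in the current directory, you could
validate the machine interface output of a command like this:

[role="term"]
----
lttng --mi xml list --kernel > kernel-events.xml
xmllint --noout --schema mi_lttng.xsd kernel-events.xml
----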
4674
4675
4676 [[reference]]
4677 == Reference
4678
4679 This chapter presents various references for LTTng packages such as links
4680 to online manpages, tables needed by the rest of the text, descriptions
4681 of library functions, and more.
4682
4683
4684 [[online-lttng-manpages]]
4685 === Online LTTng manpages
4686
4687 LTTng packages currently install the following link:/man[man pages],
4688 available online using the links below:
4689
4690 * **LTTng-tools**
4691 ** man:lttng(1)
4692 ** man:lttng-sessiond(8)
4693 ** man:lttng-relayd(8)
4694 * **LTTng-UST**
4695 ** man:lttng-gen-tp(1)
4696 ** man:lttng-ust(3)
4697 ** man:lttng-ust-cyg-profile(3)
4698 ** man:lttng-ust-dl(3)
4699
4700
4701 [[lttng-ust-ref]]
4702 === LTTng-UST
4703
4704 This section presents references of the LTTng-UST package.
4705
4706
4707 [[liblttng-ust]]
4708 ==== LTTng-UST library (+liblttng&#8209;ust+)
4709
4710 The LTTng-UST library, or `liblttng-ust`, is the main shared object
4711 against which user applications are linked to make LTTng user space
4712 tracing possible.
4713
4714 The <<c-application,C application>> guide shows the complete
4715 process to instrument, build and run a C/$$C++$$ application using
4716 LTTng-UST, while this section contains a few important tables.
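
For quick reference, here is a minimal sketch of how an application could
be linked against `liblttng-ust`; the file names (path:{app.c},
path:{tp.c}) are hypothetical, and the full procedure is detailed in the
<<c-application,C application>> guide:

[role="term"]
----
gcc -c -I. tp.c
gcc -c app.c
gcc -o app tp.o app.o -llttng-ust -ldl
----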
4717
4718
4719 [[liblttng-ust-tp-fields]]
4720 ===== Tracepoint fields macros (for `TP_FIELDS()`)
4721
4722 The available macros to define tracepoint fields, which should be listed
4723 within `TP_FIELDS()` in `TRACEPOINT_EVENT()`, are:
4724
4725 [role="growable func-desc",cols="asciidoc,asciidoc"]
4726 .Available macros to define LTTng-UST tracepoint fields
4727 |====
4728 |Macro |Description and parameters
4729
4730 |
4731 +ctf_integer(__t__, __n__, __e__)+
4732
4733 +ctf_integer_nowrite(__t__, __n__, __e__)+
4734 |
4735 Standard integer, displayed in base 10.
4736
4737 +__t__+::
4738 Integer C type (`int`, `long`, `size_t`, ...).
4739
4740 +__n__+::
4741 Field name.
4742
4743 +__e__+::
4744 Argument expression.
4745
4746 |+ctf_integer_hex(__t__, __n__, __e__)+
4747 |
4748 Standard integer, displayed in base 16.
4749
4750 +__t__+::
4751 Integer C type.
4752
4753 +__n__+::
4754 Field name.
4755
4756 +__e__+::
4757 Argument expression.
4758
4759 |+ctf_integer_network(__t__, __n__, __e__)+
4760 |
4761 Integer in network byte order (big endian), displayed in base 10.
4762
4763 +__t__+::
4764 Integer C type.
4765
4766 +__n__+::
4767 Field name.
4768
4769 +__e__+::
4770 Argument expression.
4771
4772 |+ctf_integer_network_hex(__t__, __n__, __e__)+
4773 |
4774 Integer in network byte order, displayed in base 16.
4775
4776 +__t__+::
4777 Integer C type.
4778
4779 +__n__+::
4780 Field name.
4781
4782 +__e__+::
4783 Argument expression.
4784
4785 |
4786 +ctf_float(__t__, __n__, __e__)+
4787
4788 +ctf_float_nowrite(__t__, __n__, __e__)+
4789 |
4790 Floating point number.
4791
4792 +__t__+::
4793 Floating point number C type (`float` or `double`).
4794
4795 +__n__+::
4796 Field name.
4797
4798 +__e__+::
4799 Argument expression.
4800
4801 |
4802 +ctf_string(__n__, __e__)+
4803
4804 +ctf_string_nowrite(__n__, __e__)+
4805 |
4806 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
4807
4808 +__n__+::
4809 Field name.
4810
4811 +__e__+::
4812 Argument expression.
4813
4814 |
4815 +ctf_array(__t__, __n__, __e__, __s__)+
4816
4817 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
4818 |
Statically-sized array of integers.
4820
4821 +__t__+::
4822 Array element C type.
4823
4824 +__n__+::
4825 Field name.
4826
4827 +__e__+::
4828 Argument expression.
4829
4830 +__s__+::
4831 Number of elements.
4832
4833 |
4834 +ctf_array_text(__t__, __n__, __e__, __s__)+
4835
4836 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
4837 |
4838 Statically-sized array, printed as text.
4839
4840 The string does not need to be null-terminated.
4841
4842 +__t__+::
4843 Array element C type (always `char`).
4844
4845 +__n__+::
4846 Field name.
4847
4848 +__e__+::
4849 Argument expression.
4850
4851 +__s__+::
4852 Number of elements.
4853
4854 |
4855 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
4856
4857 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
4858 |
4859 Dynamically-sized array of integers.
4860
4861 The type of +__E__+ needs to be unsigned.
4862
4863 +__t__+::
4864 Array element C type.
4865
4866 +__n__+::
4867 Field name.
4868
4869 +__e__+::
4870 Argument expression.
4871
4872 +__T__+::
4873 Length expression C type.
4874
4875 +__E__+::
4876 Length expression.
4877
4878 |
4879 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
4880
4881 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
4882 |
4883 Dynamically-sized array, displayed as text.
4884
4885 The string does not need to be null-terminated.
4886
4887 The type of +__E__+ needs to be unsigned.
4888
The behavior is undefined if +__e__+ is `NULL`.
4890
4891 +__t__+::
4892 Sequence element C type (always `char`).
4893
4894 +__n__+::
4895 Field name.
4896
4897 +__e__+::
4898 Argument expression.
4899
4900 +__T__+::
4901 Length expression C type.
4902
4903 +__E__+::
4904 Length expression.
4905 |====
4906
The `_nowrite` versions are identical to their counterparts, except that
the fields they define are not written to the recorded trace. Their
primary purpose is to make some of the event context available to the
<<enabling-disabling-events,event filters>> without having to
commit the data to sub-buffers.
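
As an illustration, here is a minimal sketch of a tracepoint definition
combining a few of the macros above. The provider name (`my_provider`),
event name (`my_event`), and field names are hypothetical; such a
definition would normally be placed in a tracepoint provider as shown in
the <<c-application,C application>> guide:

[source,c]
----
TRACEPOINT_EVENT(
    /* Hypothetical provider and event names */
    my_provider,
    my_event,
    TP_ARGS(
        int, request_id,
        const char *, name,
        double, ratio
    ),
    TP_FIELDS(
        /* Integer recorded in base 10 */
        ctf_integer(int, request_id, request_id)

        /* Same value, recorded in base 16 */
        ctf_integer_hex(int, request_id_hex, request_id)

        /* Null-terminated string */
        ctf_string(name, name)

        /* Floating point number */
        ctf_float(double, ratio, ratio)

        /* Available to event filters, but not written to the trace */
        ctf_integer_nowrite(int, request_id_filter, request_id)
    )
)
----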
4913
4914
4915 [[liblttng-ust-tracepoint-loglevel]]
4916 ===== Tracepoint log levels (for `TRACEPOINT_LOGLEVEL()`)
4917
4918 The following table shows the available log level values for the
4919 `TRACEPOINT_LOGLEVEL()` macro:
4920
4921 `TRACE_EMERG`::
4922 System is unusable.
4923
4924 `TRACE_ALERT`::
4925 Action must be taken immediately.
4926
4927 `TRACE_CRIT`::
4928 Critical conditions.
4929
4930 `TRACE_ERR`::
4931 Error conditions.
4932
4933 `TRACE_WARNING`::
4934 Warning conditions.
4935
4936 `TRACE_NOTICE`::
4937 Normal, but significant, condition.
4938
4939 `TRACE_INFO`::
4940 Informational message.
4941
4942 `TRACE_DEBUG_SYSTEM`::
4943 Debug information with system-level scope (set of programs).
4944
4945 `TRACE_DEBUG_PROGRAM`::
4946 Debug information with program-level scope (set of processes).
4947
4948 `TRACE_DEBUG_PROCESS`::
4949 Debug information with process-level scope (set of modules).
4950
4951 `TRACE_DEBUG_MODULE`::
4952 Debug information with module (executable/library) scope (set of units).
4953
4954 `TRACE_DEBUG_UNIT`::
4955 Debug information with compilation unit scope (set of functions).
4956
4957 `TRACE_DEBUG_FUNCTION`::
4958 Debug information with function-level scope.
4959
4960 `TRACE_DEBUG_LINE`::
Debug information with line-level scope (`TRACEPOINT_EVENT()` default).
4962
4963 `TRACE_DEBUG`::
4964 Debug-level message.
4965
4966 Log levels `TRACE_EMERG` through `TRACE_INFO` and `TRACE_DEBUG` match
4967 http://man7.org/linux/man-pages/man3/syslog.3.html[syslog]
4968 level semantics. Log levels `TRACE_DEBUG_SYSTEM` through `TRACE_DEBUG`
4969 offer more fine-grained selection of debug information.
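
For example, to assign the `TRACE_DEBUG_PROCESS` log level to the
hypothetical `my_provider:my_event` tracepoint sketched in the previous
section:

[source,c]
----
/* Assign the TRACE_DEBUG_PROCESS log level to my_provider:my_event */
TRACEPOINT_LOGLEVEL(my_provider, my_event, TRACE_DEBUG_PROCESS)
----

You could then enable this event, along with any event of the same
provider having a more severe log level, using
`lttng enable-event --userspace 'my_provider:*' --loglevel TRACE_DEBUG_PROCESS`.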
4970
4971
4972 [[lttng-modules-ref]]
4973 === LTTng-modules
4974
4975 This section presents references of the LTTng-modules package.
4976
4977
4978 [[lttng-modules-tp-struct-entry]]
4979 ==== Tracepoint fields macros (for `TP_STRUCT__entry()`)
4980
4981 This table describes possible entries for the `TP_STRUCT__entry()` part
4982 of `LTTNG_TRACEPOINT_EVENT()`:
4983
4984 [role="growable func-desc",cols="asciidoc,asciidoc"]
4985 .Available entries for `TP_STRUCT__entry()` (in `LTTNG_TRACEPOINT_EVENT()`)
4986 |====
4987 |Macro |Description and parameters
4988
4989 |+\__field(__t__, __n__)+
4990 |
4991 Standard integer, displayed in base 10.
4992
4993 +__t__+::
4994 Integer C type (`int`, `unsigned char`, `size_t`, ...).
4995
4996 +__n__+::
4997 Field name.
4998
4999 |+\__field_hex(__t__, __n__)+
5000 |
5001 Standard integer, displayed in base 16.
5002
5003 +__t__+::
5004 Integer C type.
5005
5006 +__n__+::
5007 Field name.
5008
5009 |+\__field_oct(__t__, __n__)+
5010 |
5011 Standard integer, displayed in base 8.
5012
5013 +__t__+::
5014 Integer C type.
5015
5016 +__n__+::
5017 Field name.
5018
5019 |+\__field_network(__t__, __n__)+
5020 |
5021 Integer in network byte order (big endian), displayed in base 10.
5022
5023 +__t__+::
5024 Integer C type.
5025
5026 +__n__+::
5027 Field name.
5028
5029 |+\__field_network_hex(__t__, __n__)+
5030 |
5031 Integer in network byte order (big endian), displayed in base 16.
5032
5033 +__t__+::
5034 Integer C type.
5035
5036 +__n__+::
5037 Field name.
5038
5039 |+\__array(__t__, __n__, __s__)+
5040 |
5041 Statically-sized array, elements displayed in base 10.
5042
5043 +__t__+::
5044 Array element C type.
5045
5046 +__n__+::
5047 Field name.
5048
5049 +__s__+::
5050 Number of elements.
5051
|+\__array_hex(__t__, __n__, __s__)+
|
Statically-sized array, elements displayed in base 16.

+__t__+::
Array element C type.

+__n__+::
Field name.

+__s__+::
Number of elements.
5062
5063 |+\__array_text(__t__, __n__, __s__)+
5064 |
5065 Statically-sized array, displayed as text.
5066
5067 +__t__+::
5068 Array element C type (always char).
5069
5070 +__n__+::
5071 Field name.
5072
5073 +__s__+::
5074 Number of elements.
5075
5076 |+\__dynamic_array(__t__, __n__, __s__)+
5077 |
5078 Dynamically-sized array, displayed in base 10.
5079
5080 +__t__+::
5081 Array element C type.
5082
5083 +__n__+::
5084 Field name.
5085
5086 +__s__+::
5087 Length C expression.
5088
5089 |+\__dynamic_array_hex(__t__, __n__, __s__)+
5090 |
5091 Dynamically-sized array, displayed in base 16.
5092
5093 +__t__+::
5094 Array element C type.
5095
5096 +__n__+::
5097 Field name.
5098
5099 +__s__+::
5100 Length C expression.
5101
5102 |+\__dynamic_array_text(__t__, __n__, __s__)+
5103 |
5104 Dynamically-sized array, displayed as text.
5105
5106 +__t__+::
5107 Array element C type (always char).
5108
5109 +__n__+::
5110 Field name.
5111
5112 +__s__+::
5113 Length C expression.
5114
|+\__string(__n__, __s__)+
|
Null-terminated string.

The behavior is undefined if +__s__+ is `NULL`.
5120
5121 +__n__+::
5122 Field name.
5123
5124 +__s__+::
5125 String source (pointer).
5126 |====
5127
5128 The above macros should cover the majority of cases. For advanced items,
5129 see path:{probes/lttng-events.h}.
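
As a hypothetical example (field names, argument names, and sizes are
made up), the `TP_STRUCT__entry()` part of an `LTTNG_TRACEPOINT_EVENT()`
definition could look like the following excerpt; the matching
`TP_fast_assign()` part is shown at the end of the next section:

[source,c]
----
/* Excerpt of a hypothetical LTTNG_TRACEPOINT_EVENT() definition */
TP_STRUCT__entry(
    /* Integer recorded in base 10 */
    __field(int, my_int_field)

    /* Statically-sized array of 16 char elements, displayed as text */
    __array_text(char, my_name_field, 16)

    /* Null-terminated string */
    __string(my_string_field, my_string_arg)
),
----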
5130
5131
5132 [[lttng-modules-tp-fast-assign]]
5133 ==== Tracepoint assignment macros (for `TP_fast_assign()`)
5134
5135 This table describes possible entries for the `TP_fast_assign()` part
5136 of `LTTNG_TRACEPOINT_EVENT()`:
5137
5138 [role="growable func-desc",cols="asciidoc,asciidoc"]
5139 .Available entries for `TP_fast_assign()` (in `LTTNG_TRACEPOINT_EVENT()`)
5140 |====
5141 |Macro |Description and parameters
5142
5143 |+tp_assign(__d__, __s__)+
5144 |
5145 Assignment of C expression +__s__+ to tracepoint field +__d__+.
5146
5147 +__d__+::
5148 Name of destination tracepoint field.
5149
5150 +__s__+::
5151 Source C expression (may refer to tracepoint arguments).
5152
5153 |+tp_memcpy(__d__, __s__, __l__)+
5154 |
5155 Memory copy of +__l__+ bytes from +__s__+ to tracepoint field
5156 +__d__+ (use with array fields).
5157
5158 +__d__+::
5159 Name of destination tracepoint field.
5160
5161 +__s__+::
5162 Source C expression (may refer to tracepoint arguments).
5163
5164 +__l__+::
5165 Number of bytes to copy.
5166
5167 |+tp_memcpy_from_user(__d__, __s__, __l__)+
5168 |
5169 Memory copy of +__l__+ bytes from user space +__s__+ to tracepoint
5170 field +__d__+ (use with array fields).
5171
5172 +__d__+::
5173 Name of destination tracepoint field.
5174
5175 +__s__+::
5176 Source C expression (may refer to tracepoint arguments).
5177
5178 +__l__+::
5179 Number of bytes to copy.
5180
|+tp_memcpy_dyn(__d__, __s__)+
|
Memory copy of a dynamically-sized array from +__s__+ to tracepoint field
+__d__+ (use with dynamically-sized array fields).

The number of bytes to copy is known from the field's length expression.

+__d__+::
Name of destination tracepoint field.

+__s__+::
Source C expression (may refer to tracepoint arguments).
5197
5198 |+tp_strcpy(__d__, __s__)+
5199 |
5200 String copy of +__s__+ to tracepoint field +__d__+ (use with string
5201 fields).
5202
5203 +__d__+::
5204 Name of destination tracepoint field.
5205
5206 +__s__+::
5207 Source C expression (may refer to tracepoint arguments).
5208 |====
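
Continuing the hypothetical `TP_STRUCT__entry()` excerpt shown at the end
of the previous section, the matching `TP_fast_assign()` part could look
like this (`my_int_arg`, `my_name_arg`, and `my_string_arg` are assumed
tracepoint arguments):

[source,c]
----
/* Excerpt of the same hypothetical LTTNG_TRACEPOINT_EVENT() definition */
TP_fast_assign(
    /* Assign the my_int_arg argument to the my_int_field field */
    tp_assign(my_int_field, my_int_arg)

    /* Copy 16 bytes from my_name_arg to the my_name_field array */
    tp_memcpy(my_name_field, my_name_arg, 16)

    /* Copy the null-terminated my_string_arg to my_string_field */
    tp_strcpy(my_string_field, my_string_arg)
),
----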