1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
7 include::../common/copyright.txt[]
10 include::../common/warning-not-maintained.txt[]
16 Welcome to the **LTTng Documentation**!
18 The _Linux Trace Toolkit: next generation_ is an open source software
19 toolkit which you can use to simultaneously trace the Linux kernel, user
20 applications, and user libraries.
24 * Kernel modules to trace the Linux kernel.
25 * Shared libraries to trace user applications written in C or C++.
26 * Java packages to trace Java applications which use
28 * A kernel module to trace shell scripts and other user applications
29 without a dedicated instrumentation mechanism.
30 * Daemons and a command-line tool, cmd:lttng, to control the
34 .Open source documentation
36 This is an **open documentation**: its source is available in a
37 https://github.com/lttng/lttng-docs[public Git repository].
39 Should you find any error in the content of this text, any grammatical
40 mistake, or any dead link, we would be very grateful if you would file a
41 GitHub issue for it or, even better, contribute a patch to this
42 documentation by creating a pull request.
46 include::../common/audience.txt[]
50 === Chapter descriptions
52 What follows is a list of brief descriptions of this documentation's
53 chapters. The latter are ordered in such a way as to make the reading
54 as linear as possible.
56 . <<nuts-and-bolts,Nuts and bolts>> explains the
57 rudiments of software tracing and the rationale behind the
59 . <<installing-lttng,Installing LTTng>> is divided into
60 sections describing the steps needed to get a working installation
61 of LTTng packages for common Linux distributions and from its
63 . <<getting-started,Getting started>> is a very concise guide to
64 get started quickly with LTTng kernel and user space tracing. This
65 chapter is recommended if you're new to LTTng or software tracing
67 . <<understanding-lttng,Understanding LTTng>> deals with some
68 core concepts and components of the LTTng suite. Understanding
69 those is important since the next chapter assumes you're familiar
71 . <<using-lttng,Using LTTng>> is a complete user guide of the
72 LTTng project. It shows in great details how to instrument user
73 applications and the Linux kernel, how to control tracing sessions
74 using the `lttng` command line tool and miscellaneous practical use
76 . <<reference,Reference>> contains references of LTTng components,
77 like links to online manpages and various APIs.
79 We recommend that you read the above chapters in this order, although
80 some of them may be skipped depending on your situation. You may skip
81 <<nuts-and-bolts,Nuts and bolts>> if you're familiar with tracing
82 and LTTng. Also, you may jump over <<installing-lttng,Installing LTTng>>
83 if LTTng is already properly installed on your target system.
86 include::../common/convention.txt[]
89 include::../common/acknowledgements.txt[]
93 == What's new in LTTng {revision}?
95 The **LTTng {revision}** toolchain introduces many interesting features,
96 some of them which have been requested by users many times.
99 <<saving-loading-tracing-session,save and restore tracing sessions>>.
100 Sessions are saved to and loaded from XML files located by default in a
101 subdirectory of the user's home directory. LTTng daemons are also
102 configurable by configuration files as of LTTng-tools {revision}. This version
103 also makes it possible to load user-defined kernel probes with the new
104 session daemon's `--kmod-probes` option (or using the
105 `LTTNG_KMOD_PROBES` environment variable).
107 <<tracef,`tracef()`>> is a new instrumentation facility in LTTng-UST {revision}
108 which makes it possible to insert `printf()`-like tracepoints in C/$$C++$$
109 code for quick debugging. LTTng-UST {revision} also adds support for perf PMU
110 counters in user space on the x86 architecture
111 (see <<adding-context,Adding some context to channels>>).
113 As of LTTng-modules {revision}, a new
114 <<proc-lttng-logger-abi,LTTng logger ABI>>
115 is made available, making tracing Bash scripts, for example, much more
116 easier (just `echo` whatever you need to record to path:{/proc/lttng-logger}
117 while tracing is active). On the kernel side, some tracepoints are
118 added: state dumps of block devices, file descriptors, and file modes,
119 as well as http://en.wikipedia.org/wiki/Video4Linux[V4L2] events. Linux
120 3.15 is now officially supported, and system call tracing is now
121 possible on the MIPS32 architecture.
123 To learn more about the new features of LTTng {revision}, see
124 http://lttng.org/blog/2014/08/04/lttng-toolchain-2-5-0-is-out/[this
125 release announcement].
131 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
132 generation_ is a modern toolkit for tracing Linux systems and
133 applications. So your first question might rather be: **what is
136 As the history of software engineering progressed and led to what
137 we now take for granted--complex, numerous and
138 interdependent software applications running in parallel on
139 sophisticated operating systems like Linux--the authors of such
140 components, or software developers, began feeling a natural
141 urge of having tools to ensure the robustness and good performance
142 of their masterpieces.
144 One major achievement in this field is, inarguably, the
145 https://www.gnu.org/software/gdb/[GNU debugger (GDB)], which is an
146 essential tool for developers to find and fix bugs. But even the best
147 debugger won't help make your software run faster, and nowadays, faster
148 software means either more work done by the same hardware, or cheaper
149 hardware for the same work.
151 A _profiler_ is often the tool of choice to identify performance
152 bottlenecks. Profiling is suitable to identify _where_ performance is
153 lost in a given software; the profiler outputs a profile, a statistical
154 summary of observed events, which you may use to know which functions
155 took the most time to execute. However, a profiler won't report _why_
156 some identified functions are the bottleneck. Also, bottlenecks might
157 only occur when specific conditions are met. For a thorough
158 investigation of software performance issues, a history of execution,
159 with historical values of chosen variables, is essential. This is where
160 tracing comes in handy.
162 _Tracing_ is a technique used to understand what goes on in a running
163 software system. The software used for tracing is called a _tracer_,
164 which is conceptually similar to a tape recorder. When recording,
165 specific points placed in the software source code generate events that
166 are saved on a giant tape: a _trace_ file. Both user applications and
167 the operating system may be traced at the same time, opening the
168 possibility of resolving a wide range of problems that are otherwise
169 extremely challenging.
171 Tracing is often compared to _logging_. However, tracers and loggers are
172 two different types of tools, serving two different purposes. Tracers
173 are designed to record much lower-level events that occur much more
174 frequently than log messages, often in the thousands per second range,
175 with very little execution overhead. Logging is more appropriate for
176 very high-level analysis of less frequent events: user accesses,
177 exceptional conditions (e.g., errors, warnings), database transactions,
178 instant messaging communications, etc. More formally, logging is one of
179 several use cases that can be accomplished with tracing.
181 The list of recorded events inside a trace file may be read manually
182 like a log file for the maximum level of detail, but it is generally
183 much more interesting to perform application-specific analyses to
184 produce reduced statistics and graphs that are useful to resolve a given
185 problem. Trace viewers and analysers are specialized tools which achieve
188 So, in the end, this is what LTTng is: a powerful, open source set of
189 tools to trace the Linux kernel and user applications. LTTng is composed
190 of several components actively maintained and developed by its
191 http://lttng.org/community/#where[community].
193 Excluding proprietary solutions, a few competing software tracers exist
195 https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace] is the
196 de facto function tracer of the Linux kernel.
197 http://linux.die.net/man/1/strace[strace] is able to record all system
198 calls made by a user process.
199 https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and user
200 space tracer which uses custom user scripts to produce plain text
201 traces. http://www.sysdig.org/[sysdig] also uses scripts, written in
202 Lua, to trace and analyze the Linux kernel.
204 The main distinctive features of LTTng is that it produces correlated
205 kernel and user space traces, as well as doing so with the lowest
206 overhead amongst other solutions. It produces trace files in the
207 http://www.efficios.com/ctf[CTF] format, an optimized file format for
208 production and analyses of multi-gigabyte data. LTTng is the result of
209 close to 10 years of active development by a community of passionate
210 developers. It is currently available on some major desktop, server, and
211 embedded Linux distributions.
213 The main interface for tracing control is a single command line tool
214 named `lttng`. The latter can create several tracing sessions,
215 enable/disable events on the fly, filter them efficiently with custom
216 user expressions, start/stop tracing and do much more. Traces can be
217 recorded on disk or sent over the network, kept totally or partially,
218 and viewed once tracing is inactive or in real-time.
220 <<installing-lttng,Install LTTng now>> and start tracing!
226 include::../common/warning-installation-outdated.txt[]
228 **LTTng** is a set of software components which interact to allow
229 instrumenting the Linux kernel and user applications and controlling
230 tracing sessions (starting/stopping tracing, enabling/disabling events,
231 etc.). Those components are bundled into the following packages:
234 Libraries and command line interface to control tracing sessions.
237 Linux kernel modules allowing Linux to be traced using LTTng.
240 User space tracing library.
242 Most distributions mark the LTTng-modules and LTTng-UST packages as
243 optional. In the following sections, we always provide the steps to
244 install all three, but be aware that LTTng-modules is only required if
245 you intend to trace the Linux kernel and LTTng-UST is only required if
246 you intend to trace user space applications.
248 This chapter shows how to install the above packages on a Linux system.
249 The easiest way is to use the package manager of the system's
250 distribution (<<desktop-distributions,desktop>> or
251 <<embedded-distributions,embedded>>). Support is also available for
252 <<enterprise-distributions,enterprise distributions>>, such as Red Hat
253 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
255 <<building-from-source,build the LTTng packages from source>>.
258 [[desktop-distributions]]
259 === Desktop distributions
261 Official LTTng {revision} packages are available for <<ubuntu,Ubuntu>> and
264 More recent versions of LTTng are available for Fedora, openSUSE,
265 as well as Arch Linux.
267 Should any issue arise when following the procedures below, please
268 inform the http://lttng.org/community[community] about it.
274 LTTng {revision} is packaged in Ubuntu 15.04 _Vivid Vervet_. For other
275 releases of Ubuntu, you need to build and install LTTng
276 <<building-from-source,from source>>. Ubuntu 15.10 _Wily Werewolf_
277 ships with link:/docs/v2.6/[LTTng 2.6].
279 To install LTTng {revision} from the official Ubuntu repositories,
280 simply use `apt-get`:
284 sudo apt-get install lttng-tools
285 sudo apt-get install lttng-modules-dkms
286 sudo apt-get install liblttng-ust-dev
293 Debian "jessie" has official packages of LTTng {revision}:
297 sudo apt-get install lttng-tools
298 sudo apt-get install lttng-modules-dkms
299 sudo apt-get install liblttng-ust-dev
303 [[embedded-distributions]]
304 === Embedded distributions
306 Some developers may be interested in tracing the Linux kernel and user space
307 applications running on embedded systems. LTTng is packaged by two popular
308 embedded Linux distributions: <<buildroot,Buildroot>> and
309 <<oe-yocto,OpenEmbedded/Yocto>>.
315 LTTng {revision} packages in Buildroot 2014.11 and 2015.02 are named
316 `lttng-tools`, `lttng-modules`, and `lttng-libust`.
318 To enable them, start the Buildroot configuration menu as usual:
327 * _Kernel_: make sure _Linux kernel_ is enabled
328 * _Toolchain_: make sure the following options are enabled:
329 ** _Enable large file (files > 2GB) support_
330 ** _Enable WCHAR support_
332 In _Target packages_/_Debugging, profiling and benchmark_, enable
333 _lttng-modules_ and _lttng-tools_. In
334 _Target packages_/_Libraries_/_Other_, enable _lttng-libust_.
338 ==== OpenEmbedded/Yocto
340 LTTng {revision} recipes are available in the `openembedded-core` layer of
341 OpenEmbedded from August 15th, 2014 to February 8th, 2015 under the
348 Using BitBake, the simplest way to include LTTng recipes in your
349 target image is to add them to `IMAGE_INSTALL_append` in
350 path:{conf/local.conf}:
353 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
356 If you're using Hob, click _Edit image recipe_ once you have selected
357 a machine and an image recipe. Then, in the _All recipes_ tab, search
358 for `lttng` and you should find and be able to include the three LTTng
362 [[enterprise-distributions]]
363 === Enterprise distributions (RHEL, SLES)
365 To install LTTng on enterprise Linux distributions
366 (such as RHEL and SLES), please see
367 http://packages.efficios.com/[EfficiOS Enterprise Packages].
370 [[building-from-source]]
371 === Building from source
373 As <<installing-lttng,previously stated>>, LTTng is shipped as three
374 packages: LTTng-tools, LTTng-modules and LTTng-UST. LTTng-tools contains
375 everything needed to control tracing sessions, while LTTng-modules is
376 only needed for Linux kernel tracing and LTTng-UST is only needed for
379 The tarballs are available in the
380 http://lttng.org/download#build-from-source[Download section]
381 of the LTTng website.
383 Please refer to the path:{README.md} files provided by each package to
384 properly build and install them.
386 TIP: The aforementioned path:{README.md} files are rendered as
387 rich text when https://github.com/lttng[viewed on GitHub].
391 == Getting started with LTTng
393 This is a small guide to get started quickly with LTTng kernel and user
394 space tracing. For intermediate to advanced use cases and a more
395 thorough understanding of LTTng, see <<using-lttng,Using LTTng>> and
396 <<understanding-lttng,Understanding LTTng>>.
398 Before reading this guide, make sure LTTng
399 <<installing-lttng,is installed>>. You will at least need LTTng-tools.
400 Also install LTTng-modules for
401 <<tracing-the-linux-kernel,tracing the Linux kernel>>
402 and LTTng-UST for <<tracing-your-own-user-application,tracing your own
403 user space applications>>. When your traces are finally written and
405 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
406 section of this chapter will help you analyze your tracepoint
407 events to investigate.
410 [[tracing-the-linux-kernel]]
411 === Tracing the Linux kernel
413 Make sure LTTng-tools and LTTng-modules packages
414 <<installing-lttng,are installed>>.
416 Since you're about to trace the Linux kernel itself, let's look at the
417 available kernel events using the `lttng` tool, which has a
418 Git-like command line structure:
425 Before tracing, you need to create a session:
429 sudo lttng create my-session
432 TIP: You can avoid using `sudo` in the previous and following commands
433 if your user is a member of the <<lttng-sessiond,tracing group>>.
435 `my-session` is the tracing session name and could be anything you
436 like. `auto` will be used if omitted.
438 Let's now enable some events for this session:
442 sudo lttng enable-event --kernel sched_switch,sched_process_fork
445 or you might want to simply enable all available kernel events (beware
446 that trace files will grow rapidly when doing this):
450 sudo lttng enable-event --kernel --all
460 By default, traces are saved in
461 +\~/lttng-traces/__name__-__date__-__time__+,
462 where +__name__+ is the session name.
464 When you're done tracing:
472 Although `destroy` looks scary here, it doesn't actually destroy the
473 outputted trace files: it only destroys the tracing session.
475 What's next? Have a look at
476 <<viewing-and-analyzing-your-traces,Viewing and analyzing your traces>>
477 to view and analyze the trace you just recorded.
480 [[tracing-your-own-user-application]]
481 === Tracing your own user application
483 The previous section helped you create a trace out of Linux kernel
484 events. This section steps you through a simple example showing you how
485 to trace a _Hello world_ program written in C.
487 Make sure LTTng-tools and LTTng-UST packages
488 <<installing-lttng,are installed>>.
490 Tracing is just like having `printf()` calls at specific locations of
491 your source code, albeit LTTng is much faster and more flexible than
492 `printf()`. In the LTTng realm, **`tracepoint()`** is analogous to
495 Unlike `printf()`, though, `tracepoint()` does not use a format string to
496 know the types of its arguments: the formats of all tracepoints must be
497 defined before using them. So before even writing our _Hello world_ program,
498 we need to define the format of our tracepoint. This is done by writing a
499 **template file**, with a name usually ending
500 with the `.tp` extension (for **t**race**p**oint),
501 which the `lttng-gen-tp` tool (shipped with LTTng-UST) will use to generate
502 an object file (along with a `.c` file) and a header to be
503 included in our application source code.
505 Here's the whole flow:
508 .Build workflow for LTTng application tracing.
509 image::lttng-lttng-gen-tp.png[]
511 The template file format is a list of tracepoint definitions
512 and other optional definition entries which we will skip for
513 this quickstart. Each tracepoint is defined using the
514 `TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:
516 * a **provider name**, which is the "scope" of this tracepoint (this usually
517 includes the company and project names)
518 * a **tracepoint name**
519 * a **list of arguments** for the eventual `tracepoint()` call,
521 ** the argument C type
523 * a **list of fields**, which will be the actual fields of the recorded events
526 Here's a simple tracepoint definition example with two arguments: an integer
539 ctf_string(my_string_field, my_string_arg)
540 ctf_integer(int, my_integer_field, my_integer_arg)
545 The exact syntax is well explained in the
546 <<c-application,C application>> instrumenting guide of the
547 <<using-lttng,Using LTTng>> chapter, as well as in man:lttng-ust(3).
549 Save the above snippet as path:{hello-tp.tp} and run:
553 lttng-gen-tp hello-tp.tp
556 The following files will be created next to path:{hello-tp.tp}:
562 path:{hello-tp.o} is the compiled object file of path:{hello-tp.c}.
564 Now, by including path:{hello-tp.h} in your own application, you may use the
565 tracepoint defined above by properly refering to it when calling
571 #include "hello-tp.h"
573 int main(int argc, char* argv[])
577 puts("Hello, World!\nPress Enter to continue...");
579 /* The following getchar() call is only placed here for the purpose
580 * of this demonstration, for pausing the application in order for
581 * you to have time to list its events. It's not needed otherwise.
585 /* A tracepoint() call. Arguments, as defined in hello-tp.tp:
587 * 1st: provider name (always)
588 * 2nd: tracepoint name (always)
589 * 3rd: my_integer_arg (first user-defined argument)
590 * 4th: my_string_arg (second user-defined argument)
592 * Notice the provider and tracepoint names are NOT strings;
593 * they are in fact parts of variables created by macros in
596 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
598 for (x = 0; x < argc; ++x) {
599 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
602 puts("Quitting now!");
604 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
610 Save this as path:{hello.c}, next to path:{hello-tp.tp}.
612 Notice path:{hello-tp.h}, the header file generated by path:{lttng-gen-tp} from
613 our template file path:{hello-tp.tp}, is included by path:{hello.c}.
615 You are now ready to compile the application with LTTng-UST support:
619 gcc -o hello hello.c hello-tp.o -llttng-ust -ldl
623 <<tracing-the-linux-kernel,Tracing the Linux kernel>> section, the
624 following steps will look familiar.
626 First, run the application with a few arguments:
630 ./hello world and beyond
637 Press Enter to continue...
640 Use the `lttng` tool to list all available user space events:
644 lttng list --userspace
647 You should see the `hello_world:my_first_tracepoint` tracepoint listed
648 under the `./hello` process.
650 Create a tracing session:
654 lttng create my-userspace-session
657 Enable the `hello_world:my_first_tracepoint` tracepoint:
661 lttng enable-event --userspace hello_world:my_first_tracepoint
671 Go back to the running path:{hello} application and press Enter. All
672 `tracepoint()` calls will be executed and the program will finally exit.
681 Done! You may use `lttng view` to list the recorded events. This command
683 http://www.efficios.com/babeltrace[`babeltrace`]
684 in the background, if it is installed:
691 should output something like:
694 [18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
695 [18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
696 [18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
697 [18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
698 [18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
699 [18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }
702 When you're done, you may destroy the tracing session, which does _not_
703 destroy the generated trace files, leaving them available for further
708 lttng destroy my-userspace-session
711 The next section presents other alternatives to view and analyze your
715 [[viewing-and-analyzing-your-traces]]
716 === Viewing and analyzing your traces
718 This section describes how to visualize the data gathered after tracing
719 the Linux kernel or a user space application.
721 Many ways exist to read your LTTng traces:
723 * **`babeltrace`** is a command line utility which converts trace formats;
724 it supports the format used by LTTng,
725 CTF, as well as a basic
726 text output which may be ++grep++ed. The `babeltrace` command is
727 part of the http://www.efficios.com/babeltrace[Babeltrace] project.
728 * Babeltrace also includes a **Python binding** so that you may
729 easily open and read an LTTng trace with your own script, benefiting
730 from the power of Python.
731 * **http://projects.eclipse.org/projects/tools.tracecompass[Trace Compass]**
732 is an Eclipse plugin used to visualize and analyze various types of
733 traces, including LTTng's. It also comes as a standalone application
734 and can be downloaded from
735 http://projects.eclipse.org/projects/tools.tracecompass/downloads[here].
737 LTTng trace files are usually recorded in the path:{~/lttng-traces} directory.
738 Let's now view the trace and perform a basic analysis using
741 The simplest way to list all the recorded events of a trace is to pass its
742 path to `babeltrace` with no options:
746 babeltrace ~/lttng-traces/my-session
749 `babeltrace` will find all traces within the given path recursively and
750 output all their events, merging them intelligently.
752 Listing all the system calls of a Linux kernel trace with their arguments is
753 easy with `babeltrace` and `grep`:
757 babeltrace ~/lttng-traces/my-kernel-session | grep sys_
760 Counting events is also straightforward:
764 babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines
767 The text output of `babeltrace` is useful for isolating events by simple
768 matching using `grep` and similar utilities. However, more elaborate filters
769 such as keeping only events with a field value falling within a specific range
770 are not trivial to write using a shell. Moreover, reductions and even the
771 most basic computations involving multiple events are virtually impossible
774 Fortunately, Babeltrace ships with a Python 3 binding which makes it
775 really easy to read the events of an LTTng trace sequentially and compute
776 the desired information.
778 Here's a simple example using the Babeltrace Python binding. The following
779 script accepts an LTTng Linux kernel trace path as its first argument and
780 outputs the short names of the top 5 running processes on CPU 0 during the
786 from collections import Counter
791 if len(sys.argv) != 2:
792 msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
793 raise ValueError(msg)
795 # a trace collection holds one to many traces
796 col = babeltrace.TraceCollection()
798 # add the trace provided by the user
799 # (LTTng traces always have the 'ctf' format)
800 if col.add_trace(sys.argv[1], 'ctf') is None:
801 raise RuntimeError('Cannot add trace')
803 # this counter dict will hold execution times:
805 # task command name -> total execution time (ns)
806 exec_times = Counter()
808 # this holds the last `sched_switch` timestamp
812 for event in col.events:
813 # keep only `sched_switch` events
814 if event.name != 'sched_switch':
817 # keep only events which happened on CPU 0
818 if event['cpu_id'] != 0:
822 cur_ts = event.timestamp
828 # previous task command (short) name
829 prev_comm = event['prev_comm']
831 # initialize entry in our dict if not yet done
832 if prev_comm not in exec_times:
833 exec_times[prev_comm] = 0
835 # compute previous command execution time
836 diff = cur_ts - last_ts
838 # update execution time of this command
839 exec_times[prev_comm] += diff
841 # update last timestamp
845 for name, ns in exec_times.most_common(5):
847 print('{:20}{} s'.format(name, s))
850 if __name__ == '__main__':
854 Save this script as path:{top5proc.py} and run it with Python 3, providing the
855 path to an LTTng Linux kernel trace as the first argument:
859 python3 top5proc.py ~/lttng-sessions/my-session-.../kernel
862 Make sure the path you provide is the directory containing actual trace
863 files (path:{channel0_0}, path:{metadata}, etc.): the `babeltrace` utility
864 recurses directories, but the Python binding does not.
866 Here's an example of output:
869 swapper/0 48.607245889 s
870 chromium 7.192738188 s
871 pavucontrol 0.709894415 s
872 Compositor 0.660867933 s
873 Xorg.bin 0.616753786 s
876 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
877 weren't using the CPU that much when tracing, its first position in the list
881 [[understanding-lttng]]
882 == Understanding LTTng
884 If you're going to use LTTng in any serious way, it is fundamental that
885 you become familiar with its core concepts. Technical terms like
886 _tracing sessions_, _domains_, _channels_ and _events_ are used over
887 and over in the <<using-lttng,Using LTTng>> chapter,
888 and it is assumed that you understand what they mean when reading it.
890 LTTng, as you already know, is a _toolkit_. It would be wrong
891 to call it a simple _tool_ since it is composed of multiple interacting
892 components. This chapter also describes the latter, providing details
893 about their respective roles and how they connect together to form
894 the current LTTng ecosystem.
900 This section explains the various elementary concepts a user has to deal
901 with when using LTTng. They are:
903 * <<tracing-session,tracing session>>
905 * <<channel,channel>>
912 A _tracing session_ is--like any session--a container of
913 state. Anything that is done when tracing using LTTng happens in the
914 scope of a tracing session. In this regard, it is analogous to a bank
915 website's session: you can't interact online with your bank account
916 unless you are logged in a session, except for reading a few static
917 webpages (LTTng, too, can report some static information that does not
918 need a created tracing session).
920 A tracing session holds the following attributes and objects (some of
921 which are described in the following sections):
924 * the tracing state (tracing started or stopped)
925 * the trace data output path/URL (local path or sent over the network)
926 * a mode (normal, snapshot or live)
927 * the snapshot output paths/URLs (if applicable)
928 * for each <<domain,domain>>, a list of <<channel,channels>>
931 ** the channel state (enabled or disabled)
932 ** its parameters (event loss mode, sub-buffers size and count,
933 timer periods, output type, trace files size and count, etc.)
934 ** a list of added context information
935 ** a list of <<event,events>>
937 ** its state (enabled or disabled)
938 ** a list of instrumentation points (tracepoints, system calls,
939 dynamic probes, etc.)
940 ** associated log levels
941 ** a filter expression
943 All this information is completely isolated between tracing sessions.
945 Conceptually, a tracing session is a per-user object; the
946 <<plumbing,Plumbing>> section shows how this is actually
947 implemented. Any user may create as many concurrent tracing sessions
948 as desired. As you can see in the list above, even the tracing state
949 is a per-tracing session attribute, so that you may trace your target
950 system/application in a given tracing session with a specific
951 configuration while another one stays inactive.
953 The trace data generated in a tracing session may be either saved
954 to disk, sent over the network or not saved at all (in which case
955 snapshots may still be saved to disk or sent to a remote machine).
961 A tracing _domain_ is the official term the LTTng project uses to
962 designate a tracer category.
964 There are currently three known domains:
968 * `java.util.logging` (JUL)
970 Different tracers expose common features in their own interfaces, but,
971 from a user's perspective, you still need to target a specific type of
972 tracer to perform some actions. For example, since both kernel and user
973 space tracers support named tracepoints (probes manually inserted in
974 source code), you need to specify which one is concerned when enabling
975 an event because both domains could have existing events with the same
978 Some features are not available in all domains. Filtering enabled
979 events using custom expressions, for example, is currently not
980 supported in the kernel domain, but support could be added in the
987 A _channel_ is a set of events with specific parameters and potential
988 added context information. Channels have unique names per domain within
989 a tracing session. A given event is always registered to at least one
990 channel; having an enabled event in two channels will produce a trace
991 with this event recorded twice everytime it occurs.
993 Channels may be individually enabled or disabled. Occurring events of
994 a disabled channel will never make it to recorded events.
996 The fundamental role of a channel is to keep a shared ring buffer, where
997 events are eventually recorded by the tracer and consumed by a consumer
998 daemon. This internal ring buffer is divided into many sub-buffers of
1001 Channels, when created, may be fine-tuned thanks to a few parameters,
1002 many of them related to sub-buffers. The following subsections explain
1003 what those parameters are and in which situations you should manually
1007 [[channel-overwrite-mode-vs-discard-mode]]
1008 ===== Overwrite and discard event loss modes
1010 As previously mentioned, a channel's ring buffer is divided into many
1011 equally sized sub-buffers.
1013 As events occur, they are serialized as trace data into a specific
1014 sub-buffer (yellow arc in the following animation) until it is full:
1015 when this happens, the sub-buffer is marked as consumable (red) and
1016 another, _empty_ (white) sub-buffer starts receiving the following
1017 events. The marked sub-buffer will be consumed eventually by a consumer
1018 daemon (returns to white).
1021 [role="docsvg-channel-subbuf-anim"]
1026 In an ideal world, sub-buffers are consumed faster than filled, like it
1027 is the case above. In the real world, however, all sub-buffers could be
1028 full at some point, leaving no space to record the following events. By
1029 design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer
1030 exists, losing events is acceptable when the alternative would be to
1031 cause substantial delays in the instrumented application's execution.
1032 LTTng privileges performance over integrity, aiming at perturbing the
1033 traced system as little as possible in order to make tracing of subtle
1034 race conditions and rare interrupt cascades possible.
1036 When it comes to losing events because no empty sub-buffer is available,
1037 the channel's _event loss mode_ determines what to do amongst:
1040 Drop the newest events until a sub-buffer is released.
1043 Clear the sub-buffer containing the oldest recorded
1044 events and start recording the newest events there. This mode is
1045 sometimes called _flight recorder mode_ because it behaves like a
1046 flight recorder: always keep a fixed amount of the latest data.
1048 Which mechanism you should choose depends on your context: prioritize
1049 the newest or the oldest events in the ring buffer?
1051 Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon
1052 as a new event doesn't find an empty sub-buffer, whereas in discard
1053 mode, only the event that doesn't fit is discarded.
1055 Also note that a count of lost events will be incremented and saved in
1056 the trace itself when an event is lost in discard mode, whereas no
1057 information is kept when a sub-buffer gets overwritten before being
1060 There are known ways to decrease your probability of losing events. The
1061 next section shows how tuning the sub-buffers count and size can be
1062 used to virtually stop losing events.
1065 [[channel-subbuf-size-vs-subbuf-count]]
1066 ===== Sub-buffers count and size
1068 For each channel, an LTTng user may set its number of sub-buffers and
1071 Note that there is a noticeable tracer's CPU overhead introduced when
1072 switching sub-buffers (marking a full one as consumable and switching
1073 to an empty one for the following events to be recorded). Knowing this,
1074 the following list presents a few practical situations along with how
1075 to configure sub-buffers for them:
1077 High event throughput::
1078 In general, prefer bigger sub-buffers to
1079 lower the risk of losing events. Having bigger sub-buffers will
1080 also ensure a lower sub-buffer switching frequency. The number of
1081 sub-buffers is only meaningful if the channel is in overwrite mode:
1082 in this case, if a sub-buffer overwrite happens, you will still have
1083 the other sub-buffers left unaltered.
1085 Low event throughput::
1086 In general, prefer smaller sub-buffers
1087 since the risk of losing events is already low. Since events
1088 happen less frequently, the sub-buffer switching frequency should
1089 remain low and thus the tracer's overhead should not be a problem.
1092 If your target system has a low memory
1093 limit, prefer fewer first, then smaller sub-buffers. Even if the
1094 system is limited in memory, you want to keep the sub-buffers as
1095 big as possible to avoid a high sub-buffer switching frequency.
1097 You should know that LTTng uses CTF as its trace format, which means
1098 event data is very compact. For example, the average LTTng Linux kernel
1099 event weights about 32{nbsp}bytes. A sub-buffer size of 1{nbsp}MiB is
1100 thus considered big.
1102 The previous situations highlight the major trade-off between a few big
1103 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1104 frequency vs. how much data is lost in overwrite mode. Assuming a
1105 constant event throughput and using the overwrite mode, the two
1106 following configurations have the same ring buffer total size:
1109 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1114 * **2 sub-buffers of 4 MiB each** lead to a very low sub-buffer
1115 switching frequency, but if a sub-buffer overwrite happens, half of
1116 the recorded events so far (4{nbsp}MiB) are definitely lost.
1117 * **8 sub-buffers of 1 MiB each** lead to 4{nbsp}times the tracer's
1118 overhead as the previous configuration, but if a sub-buffer
1119 overwrite happens, only the eighth of events recorded so far are
1122 In discard mode, the sub-buffers count parameter is pointless: use two
1123 sub-buffers and set their size according to the requirements of your
1127 [[channel-switch-timer]]
1130 The _switch timer_ period is another important configurable feature of
1131 channels to ensure periodic sub-buffer flushing.
1133 When the _switch timer_ fires, a sub-buffer switch happens. This timer
1134 may be used to ensure that event data is consumed and committed to
1135 trace files periodically in case of a low event throughput:
1138 [role="docsvg-channel-switch-timer"]
1143 It's also convenient when big sub-buffers are used to cope with
1144 sporadic high event throughput, even if the throughput is normally
1148 [[channel-buffering-schemes]]
1149 ===== Buffering schemes
1151 In the user space tracing domain, two **buffering schemes** are
1152 available when creating a channel:
1155 Keep one ring buffer per process.
1158 Keep one ring buffer for all processes of a single user.
1160 The per-PID buffering scheme will consume more memory than the per-UID
1161 option if more than one process is instrumented for LTTng-UST. However,
1162 per-PID buffering ensures that one process having a high event
1163 throughput won't fill all the shared sub-buffers, only its own.
1165 The Linux kernel tracing domain only has one available buffering scheme
1166 which is to use a single ring buffer for the whole system.
1172 An _event_, in LTTng's realm, is a term often used metonymically,
1173 having multiple definitions depending on the context:
1175 . When tracing, an event is a _point in space-time_. Space, in a
1176 tracing context, is the set of all executable positions of a
1177 compiled application by a logical processor. When a program is
1178 executed by a processor and some instrumentation point, or
1179 _probe_, is encountered, an event occurs. This event is accompanied
1180 by some contextual payload (values of specific variables at this
1181 point of execution) which may or may not be recorded.
1182 . In the context of a recorded trace file, the term _event_ implies
1184 . When configuring a tracing session, _enabled events_ refer to
1185 specific rules which could lead to the transfer of actual
1186 occurring events (1) to recorded events (2).
1188 The whole <<core-concepts,Core concepts>> section focuses on the
1189 third definition. An event is always registered to _one or more_
1190 channels and may be enabled or disabled at will per channel. A disabled
1191 event will never lead to a recorded event, even if its channel
1194 An event (3) is enabled with a few conditions that must _all_ be met
1195 when an event (1) happens in order to generate a recorded event (2):
1197 . A _probe_ or group of probes in the traced application must be
1199 . **Optionally**, the probe must have a log level matching a
1200 log level range specified when enabling the event.
1201 . **Optionally**, the occurring event must satisfy a custom
1202 expression, or _filter_, specified when enabling the event.
1204 The following illustration summarizes how tracing sessions, domains,
1205 channels and events are related:
1209 image::core-concepts.png[]
1211 This diagram also shows how events may be individually enabled/disabled
1212 (green/grey) and how a given event may be registered to more than one
1219 The previous section described the concepts at the heart of LTTng.
1220 This section summarizes LTTng's implementation: how those objects are
1221 managed by different applications and libraries working together to
1225 [[plumbing-overview]]
1228 As <<installing-lttng,mentioned previously>>, the whole LTTng suite
1229 is made of the following packages: LTTng-tools, LTTng-UST, and
1230 LTTng-modules. Together, they provide different daemons, libraries,
1231 kernel modules and command line interfaces. The following tree shows
1232 which usable component belongs to which package:
1235 ** session daemon (`lttng-sessiond`)
1236 ** consumer daemon (`lttng-consumerd`)
1237 ** relay daemon (`lttng-relayd`)
1238 ** tracing control library (`liblttng-ctl`)
1239 ** tracing control command line tool (`lttng`)
1241 ** user space tracing library (`liblttng-ust`) and its headers
1242 ** preloadable user space tracing helpers
1243 (`liblttng-ust-libc-wrapper`, `liblttng-ust-pthread-wrapper`,
1244 `liblttng-ust-cyg-profile`, `liblttng-ust-cyg-profile-fast`
1245 and `liblttng-ust-dl`)
1246 ** user space tracepoint code generator command line tool
1248 ** `java.util.logging` tracepoint provider (`liblttng-ust-jul-jni`)
1249 and JAR file (path:{liblttng-ust-jul.jar})
1250 * **LTTng-modules**:
1251 ** LTTng Linux kernel tracer module
1252 ** tracing ring buffer kernel modules
1253 ** many LTTng probe kernel modules
1255 The following diagram shows how the most important LTTng components
1256 interact. Plain black arrows represent trace data paths while dashed
1257 red arrows indicate control communications. The LTTng relay daemon is
1258 shown running on a remote system, although it could as well run on the
1259 target (monitored) system.
1263 image::plumbing.png[]
1265 Each component is described in the following subsections.
1271 At the heart of LTTng's plumbing is the _session daemon_, often called
1272 by its command name, `lttng-sessiond`.
1274 The session daemon is responsible for managing tracing sessions and
1275 what they logically contain (channel properties, enabled/disabled
1276 events, etc.). By communicating locally with instrumented applications
1277 (using LTTng-UST) and with the LTTng Linux kernel modules
1278 (LTTng-modules), it oversees all tracing activities.
1280 One of the many things that `lttng-sessiond` does is to keep
1281 track of the available event types. User space applications and
1282 libraries actively connect and register to the session daemon when they
1283 start. By contrast, `lttng-sessiond` seeks out and loads the appropriate
1284 LTTng kernel modules as part of its own initialization. Kernel event
1285 types are _pulled_ by `lttng-sessiond`, whereas user space event types
1286 are _pushed_ to it by the various user space tracepoint providers.
1288 Using a specific inter-process communication protocol with Linux kernel
1289 and user space tracers, the session daemon can send channel information
1290 so that they are initialized, enable/disable specific probes based on
1291 enabled/disabled events by the user, send event filters information to
1292 LTTng tracers so that filtering actually happens at the tracer site,
1293 start/stop tracing a specific application or the Linux kernel, etc.
1295 The session daemon is not useful without some user controlling it,
1296 because it's only a sophisticated control interchange and thus
1297 doesn't make any decision on its own. `lttng-sessiond` opens a local
1298 socket for controlling it, albeit the preferred way to control it is
1299 using `liblttng-ctl`, an installed C library hiding the communication
1300 protocol behind an easy-to-use API. The `lttng` tool makes use of
1301 `liblttng-ctl` to implement a user-friendly command line interface.
1303 `lttng-sessiond` does not receive any trace data from instrumented
1304 applications; the _consumer daemons_ are the programs responsible for
1305 collecting trace data using shared ring buffers. However, the session
1306 daemon is the one that must spawn a consumer daemon and establish
1307 a control communication with it.
1309 Session daemons run on a per-user basis. Knowing this, multiple
1310 instances of `lttng-sessiond` may run simultaneously, each belonging
1311 to a different user and each operating independently of the others.
1312 Only `root`'s session daemon, however, may control LTTng kernel modules
1313 (i.e. the kernel tracer). With that in mind, if a user has no root
1314 access on the target system, he cannot trace the system's kernel, but
1315 should still be able to trace its own instrumented applications.
1317 It has to be noted that, although only `root`'s session daemon may
1318 control the kernel tracer, the `lttng-sessiond` command has a `--group`
1319 option which may be used to specify the name of a special user group
1320 allowed to communicate with `root`'s session daemon and thus record
1321 kernel traces. By default, this group is named `tracing`.
1323 If not done yet, the `lttng` tool, by default, automatically starts a
1324 session daemon. `lttng-sessiond` may also be started manually:
1331 This will start the session daemon in foreground. Use
1335 lttng-sessiond --daemonize
1338 to start it as a true daemon.
1340 To kill the current user's session daemon, `pkill` may be used:
1344 pkill lttng-sessiond
1347 The default `SIGTERM` signal will terminate it cleanly.
1349 Several other options are available and described in
1350 man:lttng-sessiond(8) or by running `lttng-sessiond --help`.
1354 ==== Consumer daemon
1356 The _consumer daemon_, or `lttng-consumerd`, is a program sharing some
1357 ring buffers with user applications or the LTTng kernel modules to
1358 collect trace data and output it at some place (on disk or sent over
1359 the network to an LTTng relay daemon).
1361 Consumer daemons are created by a session daemon as soon as events are
1362 enabled within a tracing session, well before tracing is activated
1363 for the latter. Entirely managed by session daemons,
1364 consumer daemons survive session destruction to be reused later,
1365 should a new tracing session be created. Consumer daemons are always
1366 owned by the same user as their session daemon. When its owner session
1367 daemon is killed, the consumer daemon also exits. This is because
1368 the consumer daemon is always the child process of a session daemon.
1369 Consumer daemons should never be started manually. For this reason,
1370 they are not installed in one of the usual locations listed in the
1371 `PATH` environment variable. `lttng-sessiond` has, however, a
1372 bunch of options (see man:lttng-sessiond(8)) to
1373 specify custom consumer daemon paths if, for some reason, a consumer
1374 daemon other than the default installed one is needed.
1376 There are up to two running consumer daemons per user, whereas only one
1377 session daemon may run per user. This is because each process has
1378 independent bitness: if the target system runs a mixture of 32-bit and
1379 64-bit processes, it is more efficient to have separate corresponding
1380 32-bit and 64-bit consumer daemons. The `root` user is an exception: it
1381 may have up to _three_ running consumer daemons: 32-bit and 64-bit
1382 instances for its user space applications and one more reserved for
1383 collecting kernel trace data.
1385 As new tracing domains are added to LTTng, the development community's
1386 intent is to minimize the need for additionnal consumer daemon instances
1387 dedicated to them. For instance, the `java.util.logging` (JUL) domain
1388 events are in fact mapped to the user space domain, thus tracing this
1389 particular domain is handled by existing user space domain consumer
1396 When a tracing session is configured to send its trace data over the
1397 network, an LTTng _relay daemon_ must be used at the other end to
1398 receive trace packets and serialize them to trace files. This setup
1399 makes it possible to trace a target system without ever committing trace
1400 data to its local storage, a feature which is useful for embedded
1401 systems, amongst others. The command implementing the relay daemon
1404 The basic use case of `lttng-relayd` is to transfer trace data received
1405 over the network to trace files on the local file system. The relay
1406 daemon must listen on two TCP ports to achieve this: one control port,
1407 used by the target session daemon, and one data port, used by the
1408 target consumer daemon. The relay and session daemons agree on common
1409 default ports when custom ones are not specified.
1411 Since the communication transport protocol for both ports is standard
1412 TCP, the relay daemon may be started either remotely or locally (on the
1415 While two instances of consumer daemons (32-bit and 64-bit) may run
1416 concurrently for a given user, `lttng-relayd` needs only be of its
1417 host operating system's bitness.
1419 The other important feature of LTTng's relay daemon is the support of
1420 _LTTng live_. LTTng live is an application protocol to view events as
1421 they arrive. The relay daemon will still record events in trace files,
1422 but a _tee_ may be created to inspect incoming events. Using LTTng live
1423 locally thus requires to run a local relay daemon.
1426 [[liblttng-ctl-lttng]]
1427 ==== [[lttng-cli]]Control library and command line interface
1429 The LTTng control library, `liblttng-ctl`, can be used to communicate
1430 with the session daemon using a C API that hides the underlying
1431 protocol's details. `liblttng-ctl` is part of LTTng-tools.
1433 `liblttng-ctl` may be used by including its "master" header:
1437 #include <lttng/lttng.h>
1440 Some objects are referred by name (C string), such as tracing sessions,
1441 but most of them require creating a handle first using
1442 `lttng_create_handle()`. The best available developer documentation for
1443 `liblttng-ctl` is, for the moment, its installed header files as such.
1444 Every function/structure is thoroughly documented.
1446 The `lttng` program is the _de facto_ standard user interface to
1447 control LTTng tracing sessions. `lttng` uses `liblttng-ctl` to
1448 communicate with session daemons behind the scenes.
1449 Its man page, man:lttng(1), is exhaustive, as well as its command
1450 line help (+lttng _cmd_ --help+, where +_cmd_+ is the command name).
1452 The <<controlling-tracing,Controlling tracing>> section is a feature
1453 tour of the `lttng` tool.
1457 ==== User space tracing library
1459 The user space tracing part of LTTng is possible thanks to the user
1460 space tracing library, `liblttng-ust`, which is part of the LTTng-UST
1463 `liblttng-ust` provides header files containing macros used to define
1464 tracepoints and create tracepoint providers, as well as a shared object
1465 that must be linked to individual applications to connect to and
1466 communicate with a session daemon and a consumer daemon as soon as the
1469 The exact mechanism by which an application is registered to the
1470 session daemon is beyond the scope of this documentation. The only thing
1471 you need to know is that, since the library constructor does this job
1472 automatically, tracepoints may be safely inserted anywhere in the source
1473 code without prior manual initialization of `liblttng-ust`.
1475 The `liblttng-ust`-session daemon collaboration also provides an
1476 interesting feature: user space events may be enabled _before_
1477 applications actually start. By doing this and starting tracing before
1478 launching the instrumented application, you make sure that even the
1479 earliest occurring events can be recorded.
1481 The <<c-application,C application>> instrumenting guide of the
1482 <<using-lttng,Using LTTng>> chapter focuses on using `liblttng-ust`:
1483 instrumenting, building/linking and running a user application.
1487 ==== LTTng kernel modules
1489 The LTTng Linux kernel modules provide everything needed to trace the
1490 Linux kernel: various probes, a ring buffer implementation for a
1491 consumer daemon to read trace data and the tracer itself.
1493 Only in exceptional circumstances should you ever need to load the
1494 LTTng kernel modules manually: it is normally the responsability of
1495 `root`'s session daemon to do so. If you were to develop your own LTTng
1496 probe module, however--for tracing a custom kernel or some kernel
1497 module (this topic is covered in the
1498 <<instrumenting-linux-kernel,Linux kernel>> instrumenting guide of
1499 the <<using-lttng,Using LTTng>> chapter)--you should either
1500 load it manually, or use the `--kmod-probes` option of the session
1501 daemon to load a specific list of kernel probes (beware, however,
1502 that the `--kmod-probes` option specifies an _absolute_ list, which
1503 means you also have to specify the default probes you need). The
1504 session and consumer daemons of regular users do not interact with the
1505 LTTng kernel modules at all.
1507 LTTng kernel modules are installed, by default, in
1508 +/usr/lib/modules/_release_/extra+, where +_release_+ is the
1509 kernel release (see `uname --kernel-release`).
1515 Using LTTng involves two main activities: **instrumenting** and
1516 **controlling tracing**.
1518 _<<instrumenting,Instrumenting>>_ is the process of inserting probes
1519 into some source code. It can be done manually, by writing tracepoint
1520 calls at specific locations in the source code of the program to trace,
1521 or more automatically using dynamic probes (address in assembled code,
1522 symbol name, function entry/return, etc.).
1524 It has to be noted that, as an LTTng user, you may not have to worry
1525 about the instrumentation process. Indeed, you may want to trace a
1526 program already instrumented. As an example, the Linux kernel is
1527 thoroughly instrumented, which is why you can trace it without caring
1528 about adding probes.
1530 _<<controlling-tracing,Controlling tracing>>_ is everything
1531 that can be done by the LTTng session daemon, which is controlled using
1532 `liblttng-ctl` or its command line utility, `lttng`: creating tracing
1533 sessions, listing tracing sessions and events, enabling/disabling
1534 events, starting/stopping the tracers, taking snapshots, etc.
1536 This chapter is a complete user guide of both activities,
1537 with common use cases of LTTng exposed throughout the text. It is
1538 assumed that you are familiar with LTTng's concepts (events, channels,
1539 domains, tracing sessions) and that you understand the roles of its
1540 components (daemons, libraries, command line tools); if not, we invite
1541 you to read the <<understanding-lttng,Understanding LTTng>> chapter
1542 before you begin reading this one.
1544 If you're new to LTTng, we suggest that you rather start with the
1545 <<getting-started,Getting started>> small guide first, then come
1546 back here to broaden your knowledge.
1548 If you're only interested in tracing the Linux kernel with its current
1549 instrumentation, you may skip the
1550 <<instrumenting,Instrumenting>> section.
1556 There are many examples of tracing and monitoring in our everyday life.
1557 You have access to real-time and historical weather reports and forecasts
1558 thanks to weather stations installed around the country. You know your
1559 possibly hospitalized friends' and family's hearts are safe thanks to
1560 electrocardiography. You make sure not to drive your car too fast
1561 and have enough fuel to reach your destination thanks to gauges visible
1564 All the previous examples have something in common: they rely on
1565 **probes**. Without electrodes attached to the surface of a body's
1566 skin, cardiac monitoring would be futile.
1568 LTTng, as a tracer, is no different from the real life examples above.
1569 If you're about to trace a software system, i.e. record its history of
1570 execution, you better have probes in the subject you're
1571 tracing: the actual software. Various ways were developed to do this.
1572 The most straightforward one is to manually place probes, called
1573 _tracepoints_, in the software's source code. The Linux kernel tracing
1574 domain also allows probes added dynamically.
1576 If you're only interested in tracing the Linux kernel, it may very well
1577 be that your tracing needs are already appropriately covered by LTTng's
1578 built-in Linux kernel tracepoints and other probes. Or you may be in
1579 possession of a user space application which has already been
1580 instrumented. In such cases, the work will reside entirely in the design
1581 and execution of tracing sessions, allowing you to jump to
1582 <<controlling-tracing,Controlling tracing>> right now.
1584 This chapter focuses on the following use cases of instrumentation:
1586 * <<c-application,C>> and <<cxx-application,$$C++$$>> applications
1587 * <<prebuilt-ust-helpers,prebuilt user space tracing helpers>>
1588 * <<java-application,Java application>>
1589 * <<instrumenting-linux-kernel,Linux kernel>> module or the
1591 * the <<proc-lttng-logger-abi,path:{/proc/lttng-logger} ABI>>
1593 Some advanced techniques are also presented at the very end of this
1600 Instrumenting a C (or $$C++$$) application, be it an executable program or
1601 a library, implies using LTTng-UST, the
1602 user space tracing component of LTTng. For C/$$C++$$ applications, the
1603 LTTng-UST package includes a dynamically loaded library
1604 (`liblttng-ust`), C headers and the `lttng-gen-tp` command line utility.
1606 Since C and $$C++$$ are the base languages of virtually all other
1607 programming languages
1608 (Java virtual machine, Python, Perl, PHP and Node.js interpreters, etc.),
1609 implementing user space tracing for an unsupported language is just a
1610 matter of using the LTTng-UST C API at the right places.
1612 The usual work flow to instrument a user space C application with
1615 . Define tracepoints (actual probes)
1616 . Write tracepoint providers
1617 . Insert tracepoints into target source code
1618 . Package (build) tracepoint providers
1619 . Build user application and link it with tracepoint providers
1621 The steps above are discussed in greater detail in the following
1625 [[tracepoint-provider]]
1626 ===== Tracepoint provider
1628 Before jumping into defining tracepoints and inserting
1629 them into the application source code, you must understand what a
1630 _tracepoint provider_ is.
1632 For the sake of this guide, consider the following two files:
1637 #undef TRACEPOINT_PROVIDER
1638 #define TRACEPOINT_PROVIDER my_provider
1640 #undef TRACEPOINT_INCLUDE
1641 #define TRACEPOINT_INCLUDE "./tp.h"
1643 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1646 #include <lttng/tracepoint.h>
1650 my_first_tracepoint,
1652 int, my_integer_arg,
1653 char*, my_string_arg
1656 ctf_string(my_string_field, my_string_arg)
1657 ctf_integer(int, my_integer_field, my_integer_arg)
1663 my_other_tracepoint,
1668 ctf_integer(int, some_field, my_int)
1674 #include <lttng/tracepoint-event.h>
1680 #define TRACEPOINT_CREATE_PROBES
1685 The two files above are defining a _tracepoint provider_. A tracepoint
1686 provider is some sort of namespace for _tracepoint definitions_. Tracepoint
1687 definitions are written above with the `TRACEPOINT_EVENT()` macro, and allow
1688 eventual `tracepoint()` calls respecting their definitions to be inserted
1689 into the user application's C source code (we explore this in a
1692 Many tracepoint definitions may be part of the same tracepoint provider
1693 and many tracepoint providers may coexist in a user space application. A
1694 tracepoint provider is packaged either:
1696 * directly into an existing user application's C source file
1698 * as a static library
1699 * as a shared library
1701 The two files above, path:{tp.h} and path:{tp.c}, show a typical template for
1702 writing a tracepoint provider. LTTng-UST was designed so that two
1703 tracepoint providers should not be defined in the same header file.
1705 We will now go through the various parts of the above files and
1706 give them a meaning. As you may have noticed, the LTTng-UST API for
1707 C/$$C++$$ applications is some preprocessor sorcery. The LTTng-UST macros
1708 used in your application and those in the LTTng-UST headers are
1709 combined to produce actual source code needed to make tracing possible
1712 Let's start with the header file, path:{tp.h}. It begins with
1716 #undef TRACEPOINT_PROVIDER
1717 #define TRACEPOINT_PROVIDER my_provider
1720 `TRACEPOINT_PROVIDER` defines the name of the provider to which the
1721 following tracepoint definitions will belong. It is used internally by
1722 LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
1723 could have been defined by another header file also included by the same
1724 C source file, the best practice is to undefine it first.
1726 NOTE: Names in LTTng-UST follow the C
1727 _identifier_ syntax (starting with a letter and containing either
1728 letters, numbers or underscores); they are _not_ C strings
1729 (not surrounded by double quotes). This is because LTTng-UST macros
1730 use those identifier-like strings to create symbols (named types and
1733 The tracepoint provider is a group of tracepoint definitions; its chosen
1734 name should reflect this. A hierarchy like Java packages is recommended,
1735 using underscores instead of dots, e.g., `org_company_project_component`.
1737 Next is `TRACEPOINT_INCLUDE`:
1741 #undef TRACEPOINT_INCLUDE
1742 #define TRACEPOINT_INCLUDE "./tp.h"
1745 This little bit of instrospection is needed by LTTng-UST to include
1746 your header at various predefined places.
1748 Include guard follows:
1752 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
1756 Add these precompiler conditionals to ensure the tracepoint event
1757 generation can include this file more than once.
1759 The `TRACEPOINT_EVENT()` macro is defined in a LTTng-UST header file which
1764 #include <lttng/tracepoint.h>
1767 This will also allow the application to use the `tracepoint()` macro.
1769 Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
1770 actual tracepoint definitions. We will skip this for the moment and
1771 come back to how to use `TRACEPOINT_EVENT()`
1772 <<defining-tracepoints,in a later section>>. Just pay attention to
1773 the first argument: it's always the name of the tracepoint provider
1774 being defined in this header file.
1776 End of include guard:
1783 Finally, include `<lttng/tracepoint-event.h>` to expand the macros:
1787 #include <lttng/tracepoint-event.h>
1790 That's it for path:{tp.h}. Of course, this is only a header file; it must be
1791 included in some C source file to actually use it. This is the job of
1796 #define TRACEPOINT_CREATE_PROBES
1801 When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in path:{tp.h},
1802 which is included just after, will actually create the source code for
1803 LTTng-UST probes (global data structures and functions) out of your
1804 tracepoint definitions. How exactly this is done is out of this text's scope.
1805 `TRACEPOINT_CREATE_PROBES` is discussed further
1807 <<building-tracepoint-providers-and-user-application,Building/linking
1808 tracepoint providers and the user application>>.
1810 You could include other header files like path:{tp.h} here to create the probes
1811 of different tracepoint providers, e.g.:
1815 #define TRACEPOINT_CREATE_PROBES
1821 The rule is: probes of a given tracepoint provider
1822 must be created in exactly one source file. This source file could be one
1823 of your project's; it doesn't have to be on its own like
1824 path:{tp.c}, although
1825 <<building-tracepoint-providers-and-user-application,a later section>>
1826 shows that doing so allows packaging the tracepoint providers
1827 independently and keep them out of your application, also making it
1828 possible to reuse them between projects.
1830 The following sections explain how to define tracepoints, how to use the
1831 `tracepoint()` macro to instrument your user space C application and how
1832 to build/link tracepoint providers and your application with LTTng-UST
1837 ===== Using `lttng-gen-tp`
1839 LTTng-UST ships with `lttng-gen-tp`, a handy command line utility for
1840 generating most of the stuff discussed above. It takes a _template file_,
1841 with a name usually ending with the `.tp` extension, containing only
1842 tracepoint definitions, and outputs a tracepoint provider (either a C
1843 source file or a precompiled object file) with its header file.
1845 `lttng-gen-tp` should suffice in <<static-linking,static linking>>
1846 situations. When using it, write a template file containing a list of
1847 `TRACEPOINT_EVENT()` macro calls. The tool will find the provider names
1848 used and generate the appropriate files which are going to look a lot
1849 like path:{tp.h} and path:{tp.c} above.
1851 Just call `lttng-gen-tp` like this:
1855 lttng-gen-tp my-template.tp
1858 path:{my-template.c}, path:{my-template.o} and path:{my-template.h}
1859 will be created in the same directory.
1861 You may specify custom C flags passed to the compiler invoked by
1862 `lttng-gen-tp` using the `CFLAGS` environment variable:
1866 CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp
1869 For more information on `lttng-gen-tp`, see man:lttng-gen-tp(1).
1872 [[defining-tracepoints]]
1873 ===== Defining tracepoints
1875 As written in <<tracepoint-provider,Tracepoint provider>>,
1876 tracepoints are defined using the
1877 `TRACEPOINT_EVENT()` macro. Each tracepoint, when called using the
1878 `tracepoint()` macro in the actual application's source code, generates
1879 a specific event type with its own fields.
1881 Let's have another look at the example above, with a few added comments:
1886 /* tracepoint provider name */
1889 /* tracepoint/event name */
1890 my_first_tracepoint,
1892 /* list of tracepoint arguments */
1894 int, my_integer_arg,
1895 char*, my_string_arg
1898 /* list of fields of eventual event */
1900 ctf_string(my_string_field, my_string_arg)
1901 ctf_integer(int, my_integer_field, my_integer_arg)
1906 The tracepoint provider name must match the name of the tracepoint
1907 provider in which this tracepoint is defined
1908 (see <<tracepoint-provider,Tracepoint provider>>). In other words,
1909 always use the same string as the value of `TRACEPOINT_PROVIDER` above.
1911 The tracepoint name will become the event name once events are recorded
1912 by the LTTng-UST tracer. It must follow the tracepoint provider name
1913 syntax: start with a letter and contain either letters, numbers or
1914 underscores. Two tracepoints under the same provider cannot have the
1915 same name, i.e. you cannot overload a tracepoint like you would
1916 overload functions and methods in $$C++$$/Java.
1918 NOTE: The concatenation of the tracepoint
1919 provider name and the tracepoint name cannot exceed 254 characters. If
1920 it does, the instrumented application will compile and run, but LTTng
1921 will issue multiple warnings and you could experience serious problems.
1923 The list of tracepoint arguments gives this tracepoint its signature:
1924 see it like the declaration of a C function. The format of `TP_ARGS()`
1925 arguments is: C type, then argument name; repeat as needed, up to ten
1926 times. For example, if we were to replicate the signature of C standard
1927 library's `fseek()`, the `TP_ARGS()` part would look like:
1938 Of course, you will need to include appropriate header files before
1939 the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.
1941 `TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
1944 The list of fields is where the fun really begins. The fields defined
1945 in this list will be the fields of the events generated by the execution
1946 of this tracepoint. Each tracepoint field definition has a C
1947 _argument expression_ which will be evaluated when the execution reaches
1948 the tracepoint. Tracepoint arguments _may be_ used freely in those
1949 argument expressions, but they _don't_ have to.
1951 There are several types of tracepoint fields available. The macros to
1952 define them are given and explained in the
1953 <<liblttng-ust-tp-fields,LTTng-UST library reference>> section.
1955 Field names must follow the standard C identifier syntax: letter, then
1956 optional sequence of letters, numbers or underscores. Each field must have
1959 Those `ctf_*()` macros are added to the `TP_FIELDS()` part of
1960 `TRACEPOINT_EVENT()`. Note that they are not delimited by commas.
1961 `TP_FIELDS()` may be empty, but the `TP_FIELDS(void)` form is _not_
1964 The following snippet shows how argument expressions may be used in
1965 tracepoint fields and how they may refer freely to tracepoint arguments.
1969 /* for struct stat */
1970 #include <sys/types.h>
1971 #include <sys/stat.h>
1983 /* simple integer field with constant value */
1985 int, /* field C type */
1986 my_constant_field, /* field name */
1987 23 + 17 /* argument expression */
1990 /* my_int_arg tracepoint argument */
1997 /* my_int_arg squared */
2001 my_int_arg * my_int_arg
2004 /* sum of first 4 characters of my_str_arg */
2008 my_str_arg[0] + my_str_arg[1] +
2009 my_str_arg[2] + my_str_arg[3]
2012 /* my_str_arg as string field */
2014 my_str_arg_field, /* field name */
2015 my_str_arg /* argument expression */
2018 /* st_size member of st tracepoint argument, hexadecimal */
2020 off_t, /* field C type */
2021 size_field, /* field name */
2022 st->st_size /* argument expression */
2025 /* st_size member of st tracepoint argument, as double */
2027 double, /* field C type */
2028 size_dbl_field, /* field name */
2029 (double) st->st_size /* argument expression */
2032 /* half of my_str_arg string as text sequence */
2034 char, /* element C type */
2035 half_my_str_arg_field, /* field name */
2036 my_str_arg, /* argument expression */
2037 size_t, /* length expression C type */
2038 strlen(my_str_arg) / 2 /* length expression */
2044 As you can see, having a custom argument expression for each field
2045 makes tracepoints very flexible for tracing a user space C application.
2046 This tracepoint definition is reused later in this guide, when
2047 actually using tracepoints in a user space application.
2050 [[using-tracepoint-classes]]
2051 ===== Using tracepoint classes
2053 In LTTng-UST, a _tracepoint class_ is a class of tracepoints sharing the
2054 same field types and names. A _tracepoint instance_ is one instance of
2055 such a declared tracepoint class, with its own event name and tracepoint
2058 What is documented in <<defining-tracepoints,Defining tracepoints>>
2059 is actually how to declare a _tracepoint class_ and define a
2060 _tracepoint instance_ at the same time. Without revealing the internals
2061 of LTTng-UST too much, it has to be noted that one serialization
2062 function is created for each tracepoint class. A serialization
2063 function is responsible for serializing the fields of a tracepoint
2064 into a sub-buffer when tracing. For various performance reasons, when
2065 your situation requires multiple tracepoints with different names, but
2066 with the same fields layout, the best practice is to manually create
2067 a tracepoint class and instantiate as many tracepoint instances as
2068 needed. One positive effect of such a design, amongst other advantages,
2069 is that all tracepoint instances of the same tracepoint class will
2070 reuse the same serialization function, thus reducing cache pollution.
2072 As an example, here are three tracepoint definitions as we know them:
2084 ctf_integer(int, userid, userid)
2085 ctf_integer(size_t, len, len)
2097 ctf_integer(int, userid, userid)
2098 ctf_integer(size_t, len, len)
2110 ctf_integer(int, userid, userid)
2111 ctf_integer(size_t, len, len)
2116 In this case, three tracepoint classes are created, with one tracepoint
2117 instance for each of them: `get_account`, `get_settings` and
2118 `get_transaction`. However, they all share the same field names and
2119 types. Declaring one tracepoint class and three tracepoint instances of
2120 the latter is a better design choice:
2124 /* the tracepoint class */
2125 TRACEPOINT_EVENT_CLASS(
2126 /* tracepoint provider name */
2129 /* tracepoint class name */
2140 ctf_integer(int, userid, userid)
2141 ctf_integer(size_t, len, len)
2145 /* the tracepoint instances */
2146 TRACEPOINT_EVENT_INSTANCE(
2147 /* tracepoint provider name */
2150 /* tracepoint class name */
2153 /* tracepoint/event name */
2162 TRACEPOINT_EVENT_INSTANCE(
2171 TRACEPOINT_EVENT_INSTANCE(
2182 Of course, all those names and `TP_ARGS()` invocations are redundant,
2183 but some C preprocessor magic can solve this:
2187 #define MY_TRACEPOINT_ARGS \
2193 TRACEPOINT_EVENT_CLASS(
2198 ctf_integer(int, userid, userid)
2199 ctf_integer(size_t, len, len)
2203 #define MY_APP_TRACEPOINT_INSTANCE(name) \
2204 TRACEPOINT_EVENT_INSTANCE( \
2208 MY_TRACEPOINT_ARGS \
2211 MY_APP_TRACEPOINT_INSTANCE(get_account)
2212 MY_APP_TRACEPOINT_INSTANCE(get_settings)
2213 MY_APP_TRACEPOINT_INSTANCE(get_transaction)
2217 [[assigning-log-levels]]
2218 ===== Assigning log levels to tracepoints
2220 Optionally, a log level can be assigned to a defined tracepoint.
2221 Assigning different levels of importance to tracepoints can be useful;
2222 when controlling tracing sessions,
2223 <<controlling-tracing,you can choose>> to only enable tracepoints
2224 falling into a specific log level range.
2226 Log levels are assigned to defined tracepoints using the
2227 `TRACEPOINT_LOGLEVEL()` macro. The latter must be used _after_ having
2228 used `TRACEPOINT_EVENT()` for a given tracepoint. The
2229 `TRACEPOINT_LOGLEVEL()` macro has the following construct:
2233 TRACEPOINT_LOGLEVEL(PROVIDER_NAME, TRACEPOINT_NAME, LOG_LEVEL)
2236 where the first two arguments are the same as the first two arguments
2237 of `TRACEPOINT_EVENT()` and `LOG_LEVEL` is one
2238 of the values given in the
2239 <<liblttng-ust-tracepoint-loglevel,LTTng-UST library reference>>
2242 As an example, let's assign a `TRACE_DEBUG_UNIT` log level to our
2243 previous tracepoint definition:
2247 TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)
2251 [[probing-the-application-source-code]]
2252 ===== Probing the application's source code
2254 Once tracepoints are properly defined within a tracepoint provider,
2255 they may be inserted into the user application to be instrumented
2256 using the `tracepoint()` macro. Its first argument is the tracepoint
2257 provider name and its second is the tracepoint name. The next, optional
2258 arguments are defined by the `TP_ARGS()` part of the definition of
2259 the tracepoint to use.
2261 As an example, let us again take the following tracepoint definition:
2266 /* tracepoint provider name */
2269 /* tracepoint/event name */
2270 my_first_tracepoint,
2272 /* list of tracepoint arguments */
2274 int, my_integer_arg,
2275 char*, my_string_arg
2278 /* list of fields of eventual event */
2280 ctf_string(my_string_field, my_string_arg)
2281 ctf_integer(int, my_integer_field, my_integer_arg)
2286 Assuming this is part of a file named path:{tp.h} which defines the tracepoint
2287 provider and which is included by path:{tp.c}, here's a complete C application
2288 calling this tracepoint (multiple times):
2292 #define TRACEPOINT_DEFINE
2295 int main(int argc, char* argv[])
2299 tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");
2301 for (i = 0; i < argc; ++i) {
2302 tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
2309 For each tracepoint provider, `TRACEPOINT_DEFINE` must be defined into
2310 exactly one translation unit (C source file) of the user application,
2311 before including the tracepoint provider header file. In other words,
2312 for a given tracepoint provider, you cannot define `TRACEPOINT_DEFINE`,
2313 and then include its header file in two separate C source files of
2314 the same application. `TRACEPOINT_DEFINE` is discussed further in
2315 <<building-tracepoint-providers-and-user-application,Building/linking
2316 tracepoint providers and the user application>>.
2318 As another example, remember this definition we wrote in a previous
2319 section (comments are stripped):
2323 /* for struct stat */
2324 #include <sys/types.h>
2325 #include <sys/stat.h>
2337 ctf_integer(int, my_constant_field, 23 + 17)
2338 ctf_integer(int, my_int_arg_field, my_int_arg)
2339 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2340 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2341 my_str_arg[2] + my_str_arg[3])
2342 ctf_string(my_str_arg_field, my_str_arg)
2343 ctf_integer_hex(off_t, size_field, st->st_size)
2344 ctf_float(double, size_dbl_field, (double) st->st_size)
2345 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2346 size_t, strlen(my_str_arg) / 2)
2351 Here's an example of calling it:
2355 #define TRACEPOINT_DEFINE
2362 stat("/etc/fstab", &s);
2364 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2370 When viewing the trace, assuming the file size of path:{/etc/fstab} is
2371 301{nbsp}bytes, the event generated by the execution of this tracepoint
2372 should have the following fields, in this order:
2375 my_constant_field 40
2377 my_int_arg_field2 529
2379 my_str_arg_field "Hello, World!"
2381 size_dbl_field 301.0
2382 half_my_str_arg_field "Hello,"
2386 [[building-tracepoint-providers-and-user-application]]
2387 ===== Building/linking tracepoint providers and the user application
2389 The final step of using LTTng-UST for tracing a user space C application
2390 (beside running the application) is building and linking tracepoint
2391 providers and the application itself.
2393 As discussed above, the macros used by the user-written tracepoint provider
2394 header file are useless until actually used to create probes code
2395 (global data structures and functions) in a translation unit (C source file).
2396 This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
2397 unit and then including the tracepoint provider header file.
2398 When `TRACEPOINT_CREATE_PROBES` is defined, macros used and included by
2399 the tracepoint provider header will output actual source code needed by any
2400 application using the defined tracepoints. Defining
2401 `TRACEPOINT_CREATE_PROBES` produces code used when registering
2402 tracepoint providers when the tracepoint provider package loads.
2404 The other important definition is `TRACEPOINT_DEFINE`. This one creates
2405 global, per-tracepoint structures referencing the tracepoint providers
2406 data. Those structures are required by the actual functions inserted
2407 where `tracepoint()` macros are placed and need to be defined by the
2408 instrumented application.
2410 Both `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` need to be defined
2411 at some places in order to trace a user space C application using LTTng.
2412 Although explaining their exact mechanism is beyond the scope of this
2413 document, the reason they both exist separately is to allow the trace
2414 providers to be packaged as a shared object (dynamically loaded library).
2416 There are two ways to compile and link the tracepoint providers
2417 with the application: _<<static-linking,statically>>_ or
2418 _<<dynamic-linking,dynamically>>_. Both methods are covered in the
2419 following subsections.
2423 ===== Static linking the tracepoint providers to the application
2425 With the static linking method, compiled tracepoint providers are copied
2426 into the target application. There are three ways to do this:
2428 . Use one of your **existing C source files** to create probes.
2429 . Create probes in a separate C source file and build it as an
2430 **object file** to be linked with the application (more decoupled).
2431 . Create probes in a separate C source file, build it as an
2432 object file and archive it to create a **static library**
2433 (more decoupled, more portable).
2435 The first approach is to define `TRACEPOINT_CREATE_PROBES` and include
2436 your tracepoint provider(s) header file(s) directly into an existing C
2437 source file. Here's an example:
2445 #define TRACEPOINT_CREATE_PROBES
2446 #define TRACEPOINT_DEFINE
2451 int my_func(int a, const char* b)
2455 tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt)
2463 Again, before including a given tracepoint provider header file,
2464 `TRACEPOINT_CREATE_PROBES` and `TRACEPOINT_DEFINE` must be defined in
2465 one, **and only one**, translation unit. Other C source files of the
2466 same application may include path:{tp.h} to use tracepoints with
2467 the `tracepoint()` macro, but must not define
2468 `TRACEPOINT_CREATE_PROBES`/`TRACEPOINT_DEFINE` again.
2470 This translation unit may be built as an object file by making sure to
2471 add `.` to the include path:
2478 The second approach is to isolate the tracepoint provider code into a
2479 separate object file by using a dedicated C source file to create probes:
2483 #define TRACEPOINT_CREATE_PROBES
2488 `TRACEPOINT_DEFINE` must be defined by a translation unit of the
2489 application. Since we're talking about static linking here, it could as
2490 well be defined directly in the file above, before `#include "tp.h"`:
2494 #define TRACEPOINT_CREATE_PROBES
2495 #define TRACEPOINT_DEFINE
2500 This is actually what <<lttng-gen-tp,`lttng-gen-tp`>> does, and is
2501 the recommended practice.
2503 Build the tracepoint provider:
2510 Finally, the resulting object file may be archived to create a
2511 more portable tracepoint provider static library:
2518 Using a static library does have the advantage of centralising the
2519 tracepoint providers objects so they can be shared between multiple
2520 applications. This way, when the tracepoint provider is modified, the
2521 source code changes don't have to be patched into each application's source
2522 code tree. The applications need to be relinked after each change, but need
2523 not to be otherwise recompiled (unless the tracepoint provider's API
2526 Regardless of which method you choose, you end up with an object file
2527 (potentially archived) containing the trace providers assembled code.
2528 To link this code with the rest of your application, you must also link
2529 with `liblttng-ust` and `libdl`:
2533 gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl
2540 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl
2543 If you're using a BSD
2544 system, replace `-ldl` with `-lc`:
2548 gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc
2551 The application can be started as usual, e.g.:
2558 The `lttng` command line tool can be used to
2559 <<controlling-tracing,control tracing>>.
2563 ===== Dynamic linking the tracepoint providers to the application
2565 The second approach to package the tracepoint providers is to use
2566 dynamic linking: the library and its member functions are explicitly
2567 sought, loaded and unloaded at runtime using `libdl`.
2569 It has to be noted that, for a variety of reasons, the created shared
2570 library will be dynamically _loaded_, as opposed to dynamically
2571 _linked_. The tracepoint provider shared object is, however, linked
2572 with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
2573 as soon as the tracepoint provider is. If the tracepoint provider is
2574 not loaded, since the application itself is not linked with
2575 `liblttng-ust`, the latter is not loaded at all and the tracepoint calls
2578 The process to create the tracepoint provider shared object is pretty
2579 much the same as the static library method, except that:
2581 * since the tracepoint provider is not part of the application
2582 anymore, `TRACEPOINT_DEFINE` _must_ be defined, for each tracepoint
2583 provider, in exactly one translation unit (C source file) of the
2585 * `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` must be defined next to
2586 `TRACEPOINT_DEFINE`.
2588 Regarding `TRACEPOINT_DEFINE` and `TRACEPOINT_PROBE_DYNAMIC_LINKAGE`,
2589 the recommended practice is to use a separate C source file in your
2590 application to define them, and then include the tracepoint provider
2591 header files afterwards, e.g.:
2595 #define TRACEPOINT_DEFINE
2596 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
2598 /* include the header files of one or more tracepoint providers below */
2604 `TRACEPOINT_PROBE_DYNAMIC_LINKAGE` makes the macros included afterwards
2605 (by including the tracepoint provider header, which itself includes
2606 LTTng-UST headers) aware that the tracepoint provider is to be loaded
2607 dynamically and not part of the application's executable.
2609 The tracepoint provider object file used to create the shared library
2610 is built like it is using the static library method, only with the
2611 `-fpic` option added:
2615 gcc -c -fpic -I. tp.c
2618 It is then linked as a shared library like this:
2622 gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o
2625 As previously stated, this tracepoint provider shared object isn't
2626 linked with the user application: it will be loaded manually. This is
2627 why the application is built with no mention of this tracepoint
2628 provider, but still needs `libdl`:
2632 gcc -o app other.o files.o of.o your.o app.o -ldl
2635 Now, to make LTTng-UST tracing available to the application, the
2636 `LD_PRELOAD` environment variable is used to preload the tracepoint
2637 provider shared library _before_ the application actually starts:
2641 LD_PRELOAD=/path/to/tp.so ./app
2646 It is not safe to use
2647 `dlclose()` on a tracepoint provider shared object that
2648 is being actively used for tracing, due to a lack of reference
2649 counting from LTTng-UST to the shared object.
2651 For example, statically linking a tracepoint provider to a
2652 shared object which is to be dynamically loaded by an application
2653 (e.g., a plugin) is not safe: the shared object, which contains the
2654 tracepoint provider, could be dynamically closed
2655 (`dlclose()`) at any time by the application.
2657 To instrument a shared object, either:
2659 * Statically link the tracepoint provider to the _application_, or
2660 * Build the tracepoint provider as a shared object (following
2661 the procedure shown in this section), and preload it when
2662 tracing is needed using the `LD_PRELOAD`
2663 environment variable.
2666 Your application will still work without this preloading, albeit without
2667 LTTng-UST tracing support:
2675 [[using-lttng-ust-with-daemons]]
2676 ===== Using LTTng-UST with daemons
2678 Some extra care is needed when using `liblttng-ust` with daemon
2679 applications that call `fork()`, `clone()` or BSD's `rfork()` without
2680 a following `exec()` family system call. The `liblttng-ust-fork`
2681 library must be preloaded for the application.
2687 LD_PRELOAD=liblttng-ust-fork.so ./app
2690 Or, if you're using a tracepoint provider shared library:
2694 LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app
2698 [[lttng-ust-pkg-config]]
2699 ===== Using pkg-config
2701 On some distributions, LTTng-UST is shipped with a pkg-config metadata
2702 file, so that you may use the `pkg-config` tool:
2706 pkg-config --libs lttng-ust
2709 This will return `-llttng-ust -ldl` on Linux systems.
2711 You may also check the LTTng-UST version using `pkg-config`:
2715 pkg-config --modversion lttng-ust
2718 For more information about pkg-config, see
2719 http://linux.die.net/man/1/pkg-config[its manpage].
2723 ===== Using `tracef()`
2725 `tracef()` is a small LTTng-UST API to avoid defining your own
2726 tracepoints and tracepoint providers. The signature of `tracef()` is
2727 the same as `printf()`'s.
2729 The `tracef()` utility function was developed to make user space tracing
2730 super simple, albeit with notable disadvantages compared to custom,
2731 full-fledged tracepoint providers:
2733 * All generated events have the same provider/event names, respectively
2734 `lttng_ust_tracef` and `event`.
2735 * There's no static type checking.
2736 * The only event field you actually get, named `msg`, is a string
2737 potentially containing the values you passed to the function
2738 using your own format. This also means that you cannot use filtering
2739 using a custom expression at runtime because there are no isolated
2741 * Since `tracef()` uses C standard library's `vasprintf()` function
2742 in the background to format the strings at runtime, its
2743 expected performance is lower than using custom tracepoint providers
2744 with typed fields, which do not require a conversion to a string.
2746 Thus, `tracef()` is useful for quick prototyping and debugging, but
2747 should not be considered for any permanent/serious application
2750 To use `tracef()`, first include `<lttng/tracef.h>` in the C source file
2751 where you need to insert probes:
2755 #include <lttng/tracef.h>
2758 Use `tracef()` like you would use `printf()` in your source code, e.g.:
2764 tracef("my message, my integer: %d", my_integer);
2769 Link your application with `liblttng-ust`:
2773 gcc -o app app.c -llttng-ust
2776 Execute the application as usual:
2783 Voilà ! Use the `lttng` command line tool to
2784 <<controlling-tracing,control tracing>>. You can enable `tracef()`
2789 lttng enable-event --userspace 'lttng_ust_tracef:*'
2793 [[lttng-ust-environment-variables-compiler-flags]]
2794 ===== LTTng-UST environment variables and special compilation flags
2796 A few special environment variables and compile flags may affect the
2797 behavior of LTTng-UST.
2799 LTTng-UST's debugging can be activated by setting the environment
2800 variable `LTTNG_UST_DEBUG` to `1` when launching the application. It
2801 can also be enabled at compile time by defining `LTTNG_UST_DEBUG` when
2802 compiling LTTng-UST (using the `-DLTTNG_UST_DEBUG` compiler option).
2804 The environment variable `LTTNG_UST_REGISTER_TIMEOUT` can be used to
2805 specify how long the application should wait for the
2806 <<lttng-sessiond,session daemon>>'s _registration done_ command
2807 before proceeding to execute the main program. The timeout value is
2808 specified in milliseconds. 0 means _don't wait_. -1 means
2809 _wait forever_. Setting this environment variable to 0 is recommended
2810 for applications with time contraints on the process startup time.
2812 The default value of `LTTNG_UST_REGISTER_TIMEOUT` (when not defined)
2813 is **3000{nbsp}ms**.
2815 The compilation definition `LTTNG_UST_DEBUG_VALGRIND` should be enabled
2816 at build time (`-DLTTNG_UST_DEBUG_VALGRIND`) to allow `liblttng-ust`
2817 to be used with http://valgrind.org/[Valgrind].
2818 The side effect of defining `LTTNG_UST_DEBUG_VALGRIND` is that per-CPU
2819 buffering is disabled.
2823 ==== $$C++$$ application
2825 Because of $$C++$$'s cross-compatibility with the C language, $$C++$$
2826 applications can be readily instrumented with the LTTng-UST C API.
2828 Follow the <<c-application,C application>> user guide above. It
2829 should be noted that, in this case, tracepoint providers should have
2830 the typical `.cpp`, `.cxx` or `.cc` extension and be built with `g++`
2831 instead of `gcc`. This is the easiest way of avoiding linking errors
2832 due to symbol name mangling incompatibilities between both languages.
2835 [[prebuilt-ust-helpers]]
2836 ==== Prebuilt user space tracing helpers
2838 The LTTng-UST package provides a few helpers that one may find
2839 useful in some situations. They all work the same way: you must
2840 preload the appropriate shared object before running the user
2841 application (using the `LD_PRELOAD` environment variable).
2843 The shared objects are normally found in dir:{/usr/lib}.
2845 The current installed helpers are:
2847 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}::
2848 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
2849 and POSIX threads tracing>>.
2851 path:{liblttng-ust-cyg-profile.so} and path:{liblttng-ust-cyg-profile-fast.so}::
2852 <<liblttng-ust-cyg-profile,Function tracing>>.
2854 path:{liblttng-ust-dl.so}::
2855 <<liblttng-ust-dl,Dynamic linker tracing>>.
2857 The following subsections document what helpers instrument exactly
2858 and how to use them.
2861 [[liblttng-ust-libc-pthread-wrapper]]
2862 ===== C standard library and POSIX threads tracing
2864 path:{liblttng-ust-libc-wrapper.so} and path:{liblttng-ust-pthread-wrapper.so}
2865 can add instrumentation to respectively some C standard library and
2866 POSIX threads functions.
2868 The following functions are traceable by path:{liblttng-ust-libc-wrapper.so}:
2871 .Functions instrumented by path:{liblttng-ust-libc-wrapper.so}
2873 |TP provider name |TP name |Instrumented function
2875 .6+|`ust_libc` |`malloc` |`malloc()`
2876 |`calloc` |`calloc()`
2877 |`realloc` |`realloc()`
2879 |`memalign` |`memalign()`
2880 |`posix_memalign` |`posix_memalign()`
2883 The following functions are traceable by
2884 path:{liblttng-ust-pthread-wrapper.so}:
2887 .Functions instrumented by path:{liblttng-ust-pthread-wrapper.so}
2889 |TP provider name |TP name |Instrumented function
2891 .4+|`ust_pthread` |`pthread_mutex_lock_req` |`pthread_mutex_lock()` (request time)
2892 |`pthread_mutex_lock_acq` |`pthread_mutex_lock()` (acquire time)
2893 |`pthread_mutex_trylock` |`pthread_mutex_trylock()`
2894 |`pthread_mutex_unlock` |`pthread_mutex_unlock()`
2897 All tracepoints have fields corresponding to the arguments of the
2898 function they instrument.
2900 To use one or the other with any user application, independently of
2901 how the latter is built, do:
2905 LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
2912 LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app
2919 LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app
2922 When the shared object is preloaded, it effectively replaces the
2923 functions listed in the above tables by wrappers which add tracepoints
2924 and call the replaced functions.
2926 Of course, like any other tracepoint, the ones above need to be enabled
2927 in order for LTTng-UST to generate events. This is done using the
2928 `lttng` command line tool
2929 (see <<controlling-tracing,Controlling tracing>>).
2932 [[liblttng-ust-cyg-profile]]
2933 ===== Function tracing
2935 Function tracing is the recording of which functions are entered and
2936 left during the execution of an application. Like with any LTTng event,
2937 the precise time at which this happens is also kept.
2939 GCC and clang have an option named
2940 https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/Code-Gen-Options.html[`-finstrument-functions`]
2941 which generates instrumentation calls for entry and exit to functions.
2942 The LTTng-UST function tracing helpers, path:{liblttng-ust-cyg-profile.so}
2943 and path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
2944 to add instrumentation to the two generated functions (which contain
2945 `cyg_profile` in their names, hence the shared object's name).
2947 In order to use LTTng-UST function tracing, the translation units to
2948 instrument must be built using the `-finstrument-functions` compiler
2951 LTTng-UST function tracing comes in two flavors, each providing
2952 different trade-offs: path:{liblttng-ust-cyg-profile-fast.so} and
2953 path:{liblttng-ust-cyg-profile.so}.
2955 **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant that
2956 should only be used where it can be _guaranteed_ that the complete event
2957 stream is recorded without any missing events. Any kind of duplicate
2958 information is left out. This version registers the following
2961 [role="growable",options="header,autowidth"]
2962 .Functions instrumented by path:{liblttng-ust-cyg-profile-fast.so}
2964 |TP provider name |TP name |Instrumented function
2966 .2+|`lttng_ust_cyg_profile_fast`
2972 Address of called function.
2978 Assuming no event is lost, having only the function addresses on entry
2979 is enough for creating a call graph (remember that a recorded event
2980 always contains the ID of the CPU that generated it). A tool like
2981 https://sourceware.org/binutils/docs/binutils/addr2line.html[`addr2line`]
2982 may be used to convert function addresses back to source files names
2986 **path:{liblttng-ust-cyg-profile.so}**,
2987 is a more robust variant which also works for use cases where
2988 events might get discarded or not recorded from application startup.
2989 In these cases, the trace analyzer needs extra information to be
2990 able to reconstruct the program flow. This version registers the
2991 following tracepoints:
2993 [role="growable",options="header,autowidth"]
2994 .Functions instrumented by path:{liblttng-ust-cyg-profile.so}
2996 |TP provider name |TP name |Instrumented function
2998 .2+|`lttng_ust_cyg_profile`
3004 Address of called function.
3013 Address of called function.
3019 To use one or the other variant with any user application, assuming at
3020 least one translation unit of the latter is compiled with the
3021 `-finstrument-functions` option, do:
3025 LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app
3032 LD_PRELOAD=liblttng-ust-cyg-profile.so my-app
3035 It might be necessary to limit the number of source files where
3036 `-finstrument-functions` is used to prevent excessive amount of trace
3037 data to be generated at runtime.
3039 TIP: When using GCC, at least, you can use
3040 the `-finstrument-functions-exclude-function-list`
3041 option to avoid instrumenting entries and exits of specific
3044 All events generated from LTTng-UST function tracing are provided on
3045 log level `TRACE_DEBUG_FUNCTION`, which is useful to easily enable
3046 function tracing events in your tracing session using the
3047 `--loglevel-only` option of `lttng enable-event`
3048 (see <<controlling-tracing,Controlling tracing>>).
3052 ===== Dynamic linker tracing
3054 This LTTng-UST helper causes all calls to `dlopen()` and `dlclose()`
3055 in the target application to be traced with LTTng.
3057 The helper's shared object, path:{liblttng-ust-dl.so}, registers the
3058 following tracepoints when preloaded:
3060 [role="growable",options="header,autowidth"]
3061 .Functions instrumented by path:{liblttng-ust-dl.so}
3063 |TP provider name |TP name |Instrumented function
3071 Memory base address (where the dynamic linker placed the shared
3075 File system path to the loaded shared object.
3078 File size of the the loaded shared object.
3081 Last modification time (seconds since Epoch time) of the loaded shared
3088 Memory base address (where the dynamic linker placed the shared
3092 To use this LTTng-UST helper with any user application, independently of
3093 how the latter is built, do:
3097 LD_PRELOAD=liblttng-ust-dl.so my-app
3100 Of course, like any other tracepoint, the ones above need to be enabled
3101 in order for LTTng-UST to generate events. This is done using the
3102 `lttng` command line tool
3103 (see <<controlling-tracing,Controlling tracing>>).
3106 [[java-application]]
3107 ==== Java application
3109 LTTng-UST provides a _logging_ back-end for Java applications using
3110 http://docs.oracle.com/javase/7/docs/api/java/util/logging/Logger.html[`java.util.logging`]
3111 (JUL). This back-end is called the _LTTng-UST JUL agent_ and is
3112 responsible for communications with an LTTng session daemon.
3114 From the user's point of view, once the LTTng-UST JUL agent has been
3115 initialized, JUL loggers may be created and used as usual. The agent
3116 adds its own handler to the _root logger_, so that all loggers may
3117 generate LTTng events with no effort.
3119 Common JUL features are supported using the `lttng` tool
3120 (see <<controlling-tracing,Controlling tracing>>):
3122 * listing all logger names
3123 * enabling/disabling events per logger name
3130 import java.util.logging.Logger;
3131 import org.lttng.ust.jul.LTTngAgent;
3135 public static void main(String[] argv) throws Exception
3138 Logger logger = Logger.getLogger("jello");
3140 // call this as soon as possible (before logging)
3141 LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();
3144 logger.info("some info");
3145 logger.warning("some warning");
3147 logger.finer("finer information...");
3149 logger.severe("error!");
3151 // not mandatory, but cleaner
3152 lttngAgent.dispose();
3157 The LTTng-UST JUL agent Java classes are packaged in a JAR file named
3158 path:{liblttng-ust-jul.jar}. It is typically located in
3159 dir:{/usr/lib/lttng/java}. To compile the snippet above
3160 (saved as path:{Test.java}), do:
3164 javac -cp /usr/lib/lttng/java/liblttng-ust-jul.jar Test.java
3167 You can run the resulting compiled class:
3171 java -cp /usr/lib/lttng/java/liblttng-ust-jul.jar:. Test
3174 NOTE: http://openjdk.java.net/[OpenJDK] 7 is used for development and
3175 continuous integration, thus this version is directly supported.
3176 However, the LTTng-UST JUL agent has also been tested with OpenJDK 6.
3179 [[instrumenting-linux-kernel]]
3182 The Linux kernel can be instrumented for LTTng tracing, either its core
3183 source code or a kernel module. It has to be noted that Linux is
3184 readily traceable using LTTng since many parts of its source code are
3185 already instrumented: this is the job of the upstream
3186 http://git.lttng.org/?p=lttng-modules.git[LTTng-modules]
3187 package. This section presents how to add LTTng instrumentation where it
3188 does not currently exist and how to instrument custom kernel modules.
3190 All LTTng instrumentation in the Linux kernel is based on an existing
3191 infrastructure which bears the name of its main macro, `TRACE_EVENT()`.
3192 This macro is used to define tracepoints,
3193 each tracepoint having a name, usually with the
3194 +__subsys_____name__+ format,
3195 +_subsys_+ being the subsystem name and
3196 +_name_+ the specific event name.
3198 Tracepoints defined with `TRACE_EVENT()` may be inserted anywhere in
3199 the Linux kernel source code, after what callbacks, called _probes_,
3200 may be registered to execute some action when a tracepoint is
3201 executed. This mechanism is directly used by ftrace and perf,
3202 but cannot be used as is by LTTng: an adaptation layer is added to
3203 satisfy LTTng's specific needs.
3205 With that in mind, this documentation does not cover the `TRACE_EVENT()`
3206 format and how to use it, but it is mandatory to understand it and use
3207 it to instrument Linux for LTTng. A series of
3208 LWN articles explain
3209 `TRACE_EVENT()` in details:
3210 http://lwn.net/Articles/379903/[part 1],
3211 http://lwn.net/Articles/381064/[part 2], and
3212 http://lwn.net/Articles/383362/[part 3].
3213 Once you master `TRACE_EVENT()` enough for your use case, continue
3214 reading this section so that you can add the LTTng adaptation layer of
3217 This section first discusses the general method of instrumenting the
3218 Linux kernel for LTTng. This method is then reused for the specific
3219 case of instrumenting a kernel module.
3222 [[instrumenting-linux-kernel-itself]]
3223 ===== Instrumenting the Linux kernel for LTTng
3225 The following subsections explain strictly how to add custom LTTng
3226 instrumentation to the Linux kernel. They do not explain how the
3227 macros actually work and the internal mechanics of the tracer.
3229 You should have a Linux kernel source code tree to work with.
3230 Throughout this section, all file paths are relative to the root of
3231 this tree unless otherwise stated.
3233 You will need a copy of the LTTng-modules Git repository:
3237 git clone git://git.lttng.org/lttng-modules.git
3240 The steps to add custom LTTng instrumentation to a Linux kernel
3241 involves defining and using the mainline `TRACE_EVENT()` tracepoints
3242 first, then writing and using the LTTng adaptation layer.
3245 [[mainline-trace-event]]
3246 ===== Defining/using tracepoints with mainline `TRACE_EVENT()` infrastructure
3248 The first step is to define tracepoints using the mainline Linux
3249 `TRACE_EVENT()` macro and insert tracepoints where you want them.
3250 Your tracepoint definitions reside in a header file in
3251 dir:{include/trace/events}. If you're adding tracepoints to an existing
3252 subsystem, edit its appropriate header file.
3254 As an example, the following header file (let's call it
3255 path:{include/trace/events/hello.h}) defines one tracepoint using
3260 /* subsystem name is "hello" */
3262 #define TRACE_SYSTEM hello
3264 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3265 #define _TRACE_HELLO_H
3267 #include <linux/tracepoint.h>
3270 /* "hello" is the subsystem name, "world" is the event name */
3273 /* tracepoint function prototype */
3274 TP_PROTO(int foo, const char* bar),
3276 /* arguments for this tracepoint */
3279 /* LTTng doesn't need those */
3287 /* this part must be outside protection */
3288 #include <trace/define_trace.h>
3291 Notice that we don't use any of the last three arguments: they
3292 are left empty here because LTTng doesn't need them. You would only fill
3293 `TP_STRUCT__entry()`, `TP_fast_assign()` and `TP_printk()` if you were
3294 to also use this tracepoint for ftrace/perf.
3296 Once this is done, you may place calls to `trace_hello_world()`
3297 wherever you want in the Linux source code. As an example, let us place
3298 such a tracepoint in the `usb_probe_device()` static function
3299 (path:{drivers/usb/core/driver.c}):
3303 /* called from driver core with dev locked */
3304 static int usb_probe_device(struct device *dev)
3306 struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
3307 struct usb_device *udev = to_usb_device(dev);
3310 trace_hello_world(udev->devnum, udev->product);
3316 This tracepoint should fire every time a USB device is plugged in.
3318 At the top of path:{driver.c}, we need to include our actual tracepoint
3319 definition and, in this case (one place per subsystem), define
3320 `CREATE_TRACE_POINTS`, which will create our tracepoint:
3328 #define CREATE_TRACE_POINTS
3329 #include <trace/events/hello.h>
3334 Build your custom Linux kernel. In order to use LTTng, make sure the
3335 following kernel configuration options are enabled:
3337 * `CONFIG_MODULES` (loadable module support)
3338 * `CONFIG_KALLSYMS` (load all symbols for debugging/kksymoops)
3339 * `CONFIG_HIGH_RES_TIMERS` (high resolution timer support)
3340 * `CONFIG_TRACEPOINTS` (kernel tracepoint instrumentation)
3342 Boot the custom kernel. The directory
3343 dir:{/sys/kernel/debug/tracing/events/hello} should exist if everything
3344 went right, with a dir:{hello_world} subdirectory.
3347 [[lttng-adaptation-layer]]
3348 ===== Adding the LTTng adaptation layer
3350 The steps to write the LTTng adaptation layer are, in your
3351 LTTng-modules copy's source code tree:
3353 . In dir:{instrumentation/events/lttng-module},
3354 add a header +__subsys__.h+ for your custom
3355 subsystem +__subsys__+ and write your
3356 tracepoint definitions using LTTng-modules macros in it.
3357 Those macros look like the mainline kernel equivalents,
3358 but they present subtle, yet important differences.
3359 . In dir:{probes}, create the C source file of the LTTng probe kernel
3360 module for your subsystem. It should be named
3361 +lttng-probe-__subsys__.c+.
3362 . Edit path:{probes/Makefile} so that the LTTng-modules project
3363 builds your custom LTTng probe kernel module.
3364 . Build and install LTTng kernel modules.
3366 Following our `hello_world` event example, here's the content of
3367 path:{instrumentation/events/lttng-module/hello.h}:
3372 #define TRACE_SYSTEM hello
3374 #if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
3375 #define _TRACE_HELLO_H
3377 #include <linux/tracepoint.h>
3379 LTTNG_TRACEPOINT_EVENT(
3380 /* format identical to mainline version for those */
3382 TP_PROTO(int foo, const char* bar),
3385 /* possible differences */
3387 __field(int, my_int)
3388 __field(char, char0)
3389 __field(char, char1)
3390 __string(product, bar)
3393 /* notice the use of tp_assign()/tp_strcpy() and no semicolons */
3395 tp_assign(my_int, foo)
3396 tp_assign(char0, bar[0])
3397 tp_assign(char1, bar[1])
3398 tp_strcpy(product, bar)
3401 /* This one is actually not used by LTTng either, but must be
3402 * present for the moment.
3406 /* no semicolon after this either */
3411 /* other difference: do NOT include <trace/define_trace.h> */
3412 #include "../../../probes/define_trace.h"
3415 Some possible entries for `TP_STRUCT__entry()` and `TP_fast_assign()`,
3416 in the case of LTTng-modules, are shown in the
3417 <<lttng-modules-ref,LTTng-modules reference>> section.
3419 The best way to learn how to use the above macros is to inspect
3420 existing LTTng tracepoint definitions in
3421 dir:{instrumentation/events/lttng-module} header files. Compare
3422 them with the Linux kernel mainline versions in
3423 dir:{include/trace/events}.
3425 The next step is writing the LTTng probe kernel module C source file.
3426 This one is named +lttng-probe-__subsys__.c+
3427 in dir:{probes}. You may always use the following template:
3431 #include <linux/module.h>
3432 #include "../lttng-tracer.h"
3434 /* Build time verification of mismatch between mainline TRACE_EVENT()
3435 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3437 #include <trace/events/hello.h>
3439 /* create LTTng tracepoint probes */
3440 #define LTTNG_PACKAGE_BUILD
3441 #define CREATE_TRACE_POINTS
3442 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
3444 #include "../instrumentation/events/lttng-module/hello.h"
3446 MODULE_LICENSE("GPL and additional rights");
3447 MODULE_AUTHOR("Your name <your-email>");
3448 MODULE_DESCRIPTION("LTTng hello probes");
3449 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
3450 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
3451 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
3452 LTTNG_MODULES_EXTRAVERSION);
3455 Just replace `hello` with your subsystem name. In this example,
3456 `<trace/events/hello.h>`, which is the original mainline tracepoint
3457 definition header, is included for verification purposes: the
3458 LTTng-modules build system is able to emit an error at build time when
3459 the arguments of the mainline `TRACE_EVENT()` definitions do not match
3460 the ones of the LTTng-modules adaptation layer
3461 (`LTTNG_TRACEPOINT_EVENT()`).
3463 Edit path:{probes/Makefile} and add your new kernel module object
3464 next to existing ones:
3470 obj-m += lttng-probe-module.o
3471 obj-m += lttng-probe-power.o
3473 obj-m += lttng-probe-hello.o
3478 Time to build! Point to your custom Linux kernel source tree using
3479 the `KERNELDIR` variable:
3483 make KERNELDIR=/path/to/custom/linux
3486 Finally, install modules:
3490 sudo make modules_install
3494 [[instrumenting-linux-kernel-tracing]]
3497 The <<controlling-tracing,Controlling tracing>> section explains
3498 how to use the `lttng` tool to create and control tracing sessions.
3499 Although the `lttng` tool will load the appropriate _known_ LTTng kernel
3500 modules when needed (by launching `root`'s session daemon), it won't
3501 load your custom `lttng-probe-hello` module by default. You need to
3502 manually load the `lttng-probe-hello` module, and start an LTTng session
3507 sudo pkill -u root lttng-sessiond
3508 sudo modprobe lttng_probe_hello
3512 The first command makes sure any existing instance is killed. If
3513 you're not interested in using the default probes, or if you only
3514 want to use a few of them, you can use the `--kmod-probes` option
3515 of `lttng-sessiond` instead, which specifies an absolute list of
3516 probes to load (without the `lttng-probe-` prefix):
3520 sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched
3523 Confirm the custom probe module is loaded:
3527 lsmod | grep lttng_probe_hello
3530 The `hello_world` event should appear in the list when doing
3534 lttng list --kernel | grep hello
3537 You may now create an LTTng tracing session, enable the `hello_world`
3538 kernel event (and others if you wish) and start tracing:
3542 sudo lttng create my-session
3543 sudo lttng enable-event --kernel hello_world
3547 Plug a few USB devices, then stop tracing and inspect the trace (if
3548 http://diamon.org/babeltrace[Babeltrace]
3557 Here's a sample output:
3560 [15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3561 [15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
3562 [15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
3565 Two USB flash drives were used for this test.
3567 You may change your LTTng custom probe, rebuild it and reload it at
3568 any time when not tracing. Make sure you remove the old module
3569 (either by killing the root LTTng session daemon which loaded the
3570 module in the first place (if you used `--kmod-probes`), or by
3571 using `modprobe --remove` directly) before loading the updated one.
3574 [[instrumenting-out-of-tree-linux-kernel]]
3575 ===== Advanced: Instrumenting an out-of-tree Linux kernel module for LTTng
3577 Instrumenting a custom Linux kernel module for LTTng follows the exact
3579 <<instrumenting-linux-kernel-itself,adding instrumentation
3580 to the Linux kernel itself>>,
3581 the only difference being that your mainline tracepoint definition
3582 header doesn't reside in the mainline source tree, but in your
3583 kernel module source tree.
3585 The only reference to this mainline header is in the LTTng custom
3586 probe's source code (path:{probes/lttng-probe-hello.c} in our example),
3587 for build time verification:
3593 /* Build time verification of mismatch between mainline TRACE_EVENT()
3594 * arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
3596 #include <trace/events/hello.h>
3601 The preferred, flexible way to include your module's mainline
3602 tracepoint definition header is to put it in a specific directory
3603 relative to your module's root, e.g., dir:{tracepoints}, and include it
3604 relative to your module's root directory in the LTTng custom probe's
3609 #include <tracepoints/hello.h>
3612 You may then build LTTng-modules by adding your module's root
3613 directory as an include path to the extra C flags:
3617 make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux
3620 Using `ccflags-y` allows you to move your kernel module to another
3621 directory and rebuild the LTTng-modules project with no change to
3625 [[proc-lttng-logger-abi]]
3626 ==== LTTng logger ABI
3628 The `lttng-tracer` Linux kernel module, installed by the LTTng-modules
3629 package, creates a special LTTng logger ABI file path:{/proc/lttng-logger}
3630 when loaded. Writing text data to this file generates an LTTng kernel
3631 domain event named `lttng_logger`.
3633 Unlike other kernel domain events, `lttng_logger` may be enabled by
3634 any user, not only root users or members of the tracing group.
3636 To use the LTTng logger ABI, simply write a string to
3637 path:{/proc/lttng-logger}:
3641 echo -n 'Hello, World!' > /proc/lttng-logger
3644 The `msg` field of the `lttng_logger` event contains the recorded
3647 NOTE: Messages are split in chunks of 1024{nbsp}bytes.
3649 The LTTng logger ABI is a quick and easy way to trace some events from
3650 user space through the kernel tracer. However, it is much more basic
3651 than LTTng-UST: it's slower (involves system call round-trip to the
3652 kernel and only supports logging strings). The LTTng logger ABI is
3653 particularly useful for recording logs as LTTng traces from shell
3654 scripts, potentially combining them with other Linux kernel/user space
3658 [[instrumenting-32-bit-app-on-64-bit-system]]
3659 ==== Advanced: Instrumenting a 32-bit application on a 64-bit system
3661 [[advanced-instrumenting-techniques]]In order to trace a 32-bit
3662 application running on a 64-bit system,
3663 LTTng must use a dedicated 32-bit
3664 <<lttng-consumerd,consumer daemon>>. This section discusses how to
3665 build that daemon (which is _not_ part of the default 64-bit LTTng
3666 build) and the LTTng 32-bit tracing libraries, and how to instrument
3667 a 32-bit application in that context.
3669 Make sure you install all 32-bit versions of LTTng dependencies.
3670 Their names can be found in the path:{README.md} files of each LTTng package
3671 source. How to find and install them will vary depending on your target
3672 Linux distribution. `gcc-multilib` is a common package name for the
3673 multilib version of GCC, which you will also need.
3675 The following packages will be built for 32-bit support on a 64-bit
3676 system: http://urcu.so/[Userspace RCU],
3677 LTTng-UST and LTTng-tools.
3680 [[building-32-bit-userspace-rcu]]
3681 ===== Building 32-bit Userspace RCU
3687 git clone git://git.urcu.so/urcu.git
3690 ./configure --libdir=/usr/lib32 CFLAGS=-m32
3696 The `-m32` C compiler flag creates 32-bit object files and `--libdir`
3697 indicates where to install the resulting libraries.
3700 [[building-32-bit-lttng-ust]]
3701 ===== Building 32-bit LTTng-UST
3707 git clone http://git.lttng.org/lttng-ust.git
3710 ./configure --prefix=/usr \
3711 --libdir=/usr/lib32 \
3712 CFLAGS=-m32 CXXFLAGS=-m32 \
3713 LDFLAGS=-L/usr/lib32
3719 `-L/usr/lib32` is required for the build to find the 32-bit versions
3720 of Userspace RCU and other dependencies.
3724 Depending on your Linux distribution,
3725 32-bit libraries could be installed at a different location than
3726 dir:{/usr/lib32}. For example, Debian is known to install
3727 some 32-bit libraries in dir:{/usr/lib/i386-linux-gnu}.
3729 In this case, make sure to set `LDFLAGS` to all the
3730 relevant 32-bit library paths, e.g.,
3731 `LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"`.
3734 NOTE: You may add options to path:{./configure} if you need them, e.g., for
3735 Java and SystemTap support. Look at `./configure --help` for more
3739 [[building-32-bit-lttng-tools]]
3740 ===== Building 32-bit LTTng-tools
3742 Since the host is a 64-bit system, most 32-bit binaries and libraries of
3743 LTTng-tools are not needed; the host will use their 64-bit counterparts.
3744 The required step here is building and installing a 32-bit consumer
3751 git clone http://git.lttng.org/lttng-tools.git
3754 ./configure --prefix=/usr \
3755 --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3756 LDFLAGS=-L/usr/lib32
3758 cd src/bin/lttng-consumerd
3763 The above commands build all the LTTng-tools project as 32-bit
3764 applications, but only installs the 32-bit consumer daemon.
3767 [[building-64-bit-lttng-tools]]
3768 ===== Building 64-bit LTTng-tools
3770 Finally, you need to build a 64-bit version of LTTng-tools which is
3771 aware of the 32-bit consumer daemon previously built and installed:
3777 ./configure --prefix=/usr \
3778 --with-consumerd32-libdir=/usr/lib32 \
3779 --with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
3785 Henceforth, the 64-bit session daemon will automatically find the
3786 32-bit consumer daemon if required.
3789 [[building-instrumented-32-bit-c-application]]
3790 ===== Building an instrumented 32-bit C application
3792 Let us reuse the _Hello world_ example of
3793 <<tracing-your-own-user-application,Tracing your own user application>>
3794 (<<getting-started,Getting started>> chapter).
3796 The instrumentation process is unaltered.
3798 First, a typical 64-bit build (assuming you're running a 64-bit system):
3802 gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust
3805 Now, a 32-bit build:
3809 gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
3810 -ldl -llttng-ust -Wl,-rpath,/usr/lib32
3813 The `-rpath` option, passed to the linker, will make the dynamic loader
3814 check for libraries in dir:{/usr/lib32} before looking in its default paths,
3815 where it should find the 32-bit version of `liblttng-ust`.
3818 [[running-32-bit-and-64-bit-c-applications]]
3819 ===== Running 32-bit and 64-bit versions of an instrumented C application
3821 Now, both 32-bit and 64-bit versions of the _Hello world_ example above
3822 can be traced in the same tracing session. Use the `lttng` tool as usual
3823 to create a tracing session and start tracing:
3827 lttng create session-3264
3828 lttng enable-event -u -a
3834 Use `lttng view` to verify both processes were
3835 successfully traced.
3838 [[controlling-tracing]]
3839 === Controlling tracing
3841 Once you're in possession of a software that is properly
3842 <<instrumenting,instrumented>> for LTTng tracing, be it thanks to
3843 the built-in LTTng probes for the Linux kernel, a custom user
3844 application or a custom Linux kernel, all that is left is actually
3845 tracing it. As a user, you control LTTng tracing using a single command
3846 line interface: the `lttng` tool. This tool uses `liblttng-ctl` behind
3847 the scene to connect to and communicate with session daemons. LTTng
3848 session daemons may either be started manually (`lttng-sessiond`) or
3849 automatically by the `lttng` command when needed. Trace data may
3850 be forwarded to the network and used elsewhere using an LTTng relay
3851 daemon (`lttng-relayd`).
3853 The manpages of `lttng`, `lttng-sessiond` and `lttng-relayd` are pretty
3854 complete, thus this section is not an online copy of the latter (we
3855 leave this contents for the
3856 <<online-lttng-manpages,Online LTTng manpages>> section).
3857 This section is rather a tour of LTTng
3858 features through practical examples and tips.
3860 If not already done, make sure you understand the core concepts
3861 and how LTTng components connect together by reading the
3862 <<understanding-lttng,Understanding LTTng>> chapter; this section
3863 assumes you are familiar with them.
3866 [[creating-destroying-tracing-sessions]]
3867 ==== Creating and destroying tracing sessions
3869 Whatever you want to do with `lttng`, it has to happen inside a
3870 **tracing session**, created beforehand. A session, in general, is a
3871 per-user container of state. A tracing session is no different; it
3872 keeps a specific state of stuff like:
3875 * enabled/disabled channels with associated parameters
3876 * enabled/disabled events with associated log levels and filters
3877 * context information added to channels
3878 * tracing activity (started or stopped)
3882 A single user may have many active tracing sessions. LTTng session
3883 daemons are the ultimate owners and managers of tracing sessions. For
3884 user space tracing, each user has its own session daemon. Since Linux
3885 kernel tracing requires root privileges, only `root`'s session daemon
3886 may enable and trace kernel events. However, `lttng` has a `--group`
3887 option (which is passed to `lttng-sessiond` when starting it) to
3888 specify the name of a _tracing group_ which selected users may be part
3889 of to be allowed to communicate with `root`'s session daemon. By
3890 default, the tracing group name is `tracing`.
3892 To create a tracing session, do:
3896 lttng create my-session
3899 This will create a new tracing session named `my-session` and make it
3900 the current one. If you don't specify any name (calling only
3901 `lttng create`), your tracing session will be named `auto`. Traces
3902 are written in +\~/lttng-traces/__session__-+ followed
3903 by the tracing session's creation date/time by default, where
3904 +__session__+ is the tracing session name. To save them
3905 at a different location, use the `--output` option:
3909 lttng create --output /tmp/some-directory my-session
3912 You may create as many tracing sessions as you wish:
3916 lttng create other-session
3917 lttng create yet-another-session
3920 You may view all existing tracing sessions using the `list` command:
3927 The state of a _current tracing session_ is kept in path:{~/.lttngrc}. Each
3928 invocation of `lttng` reads this file to set its current tracing
3929 session name so that you don't have to specify a session name for each
3930 command. You could edit this file manually, but the preferred way to
3931 set the current tracing session is to use the `set-session` command:
3935 lttng set-session other-session
3938 Most `lttng` commands accept a `--session` option to specify the name
3939 of the target tracing session.
3941 Any existing tracing session may be destroyed using the `destroy`
3946 lttng destroy my-session
3949 Providing no argument to `lttng destroy` will destroy the current
3950 tracing session. Destroying a tracing session will stop any tracing
3951 running within the latter. Destroying a tracing session frees resources
3952 acquired by the session daemon and tracer side, making sure to flush
3955 You can't do much with LTTng using only the `create`, `set-session`
3956 and `destroy` commands of `lttng`, but it is essential to know them in
3957 order to control LTTng tracing, which always happen within the scope of
3961 [[enabling-disabling-events]]
3962 ==== Enabling and disabling events
3964 Inside a tracing session, individual events may be enabled or disabled
3965 so that tracing them may or may not generate trace data.
3967 We sometimes use the term _event_ metonymically throughout this text to
3968 refer to a specific condition, or _rule_, that could lead, when
3969 satisfied, to an actual occurring event (a point at a specific position
3970 in source code/binary program, logical processor and time capturing
3971 some payload) being recorded as trace data. This specific condition is
3974 . A **domain** (kernel, user space or `java.util.logging`) (required).
3975 . One or many **instrumentation points** in source code or binary
3976 program (tracepoint name, address, symbol name, function name,
3977 logger name, etc.) to be executed (required).
3978 . A **log level** (each instrumentation point declares its own log
3979 level) or log level range to match (optional; only valid for user
3981 . A **custom user expression**, or **filter**, that must evaluate to
3982 _true_ when a tracepoint is executed (optional; only valid for user
3985 All conditions are specified using arguments passed to the
3986 `enable-event` command of the `lttng` tool.
3988 Condition 1 is specified using either `--kernel/-k` (kernel),
3989 `--userspace/-u` (user space) or `--jul/-j`
3990 (JUL). Exactly one of those
3991 three arguments must be specified.
3993 Condition 2 is specified using one of:
3999 Dynamic probe (address, symbol name or combination
4000 of both in binary program; only valid for kernel domain).
4003 function entry/exit (address, symbol name or
4004 combination of both in binary program; only valid for kernel domain).
4007 System call entry/exit (only valid for kernel domain).
4009 When none of the above is specified, `enable-event` defaults to
4010 using `--tracepoint`.
4012 Condition 3 is specified using one of:
4015 Log level range from the specified level to the most severe
4021 See `lttng enable-event --help` for the complete list of log level
4024 Condition 4 is specified using the `--filter` option. This filter is
4025 a C-like expression, potentially reading real-time values of event
4026 fields, that has to evaluate to _true_ for the condition to be satisfied.
4027 Event fields are read using plain identifiers while context fields
4028 must be prefixed with `$ctx.`. See `lttng enable-event --help` for
4031 The aforementioned arguments are combined to create and enable events.
4032 Each unique combination of arguments leads to a different
4033 _enabled event_. The log level and filter arguments are optional, their
4034 default values being respectively all log levels and a filter which
4035 always returns _true_.
4037 Here are a few examples (you must
4038 <<creating-destroying-tracing-sessions,create a tracing session>>
4043 lttng enable-event -u --tracepoint my_app:hello_world
4044 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
4045 lttng enable-event -u --tracepoint 'my_other_app:*'
4046 lttng enable-event -u --tracepoint my_app:foo_bar \
4047 --filter 'some_field <= 23 && !other_field'
4048 lttng enable-event -k --tracepoint sched_switch
4049 lttng enable-event -k --tracepoint gpio_value
4050 lttng enable-event -k --function usb_probe_device usb_probe_device
4051 lttng enable-event -k --syscall --all
4054 The wildcard symbol, `*`, matches _anything_ and may only be used at
4055 the end of the string when specifying a _tracepoint_. Make sure to
4056 use it between single quotes in your favorite shell to avoid
4057 undesired shell expansion.
4059 You can see a list of events (enabled or disabled) using
4063 lttng list some-session
4066 where `some-session` is the name of the desired tracing session.
4068 What you're actually doing when enabling events with specific conditions
4069 is creating a **whitelist** of traceable events for a given channel.
4070 Thus, the following case presents redundancy:
4074 lttng enable-event -u --tracepoint my_app:hello_you
4075 lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG
4078 The second command, matching a log level range, is useless since the first
4079 command enables all tracepoints matching the same name,
4082 Disabling an event is simpler: you only need to provide the event
4083 name to the `disable-event` command:
4087 lttng disable-event --userspace my_app:hello_you
4090 This name has to match a name previously given to `enable-event` (it
4091 has to be listed in the output of `lttng list some-session`).
4092 The `*` wildcard is supported, as long as you also used it in a
4093 previous `enable-event` invocation.
4095 Disabling an event does not add it to some blacklist: it simply removes
4096 it from its channel's whitelist. This is why you cannot disable an event
4097 which wasn't previously enabled.
4099 A disabled event will not generate any trace data, even if all its
4100 specified conditions are met.
4102 Events may be enabled and disabled at will, either when LTTng tracers
4103 are active or not. Events may be enabled before a user space application
4107 [[basic-tracing-session-control]]
4108 ==== Basic tracing session control
4111 <<creating-destroying-tracing-sessions,created a tracing session>>
4112 and <<enabling-disabling-events,enabled one or more events>>,
4113 you may activate the LTTng tracers for the current tracing session at
4121 Subsequently, you may stop the tracers:
4128 LTTng is very flexible: user space applications may be launched before
4129 or after the tracers are started. Events will only be recorded if they
4130 are properly enabled and if they occur while tracers are started.
4132 A tracing session name may be passed to both the `start` and `stop`
4133 commands to start/stop tracing a session other than the current one.
4136 [[enabling-disabling-channels]]
4137 ==== Enabling and disabling channels
4139 <<event,As mentioned>> in the
4140 <<understanding-lttng,Understanding LTTng>> chapter, enabled
4141 events are contained in a specific channel, itself contained in a
4142 specific tracing session. A channel is a group of events with
4143 tunable parameters (event loss mode, sub-buffer size, number of
4144 sub-buffers, trace file sizes and count, etc.). A given channel may
4145 only be responsible for enabled events belonging to one domain: either
4146 kernel or user space.
4148 If you only used the `create`, `enable-event` and `start`/`stop`
4149 commands of the `lttng` tool so far, one or two channels were
4150 automatically created for you (one for the kernel domain and/or one
4151 for the user space domain). The default channels are both named
4152 `channel0`; channels from different domains may have the same name.
4154 The current channels of a given tracing session can be viewed with
4158 lttng list some-session
4161 where `some-session` is the name of the desired tracing session.
4163 To create and enable a channel, use the `enable-channel` command:
4167 lttng enable-channel --kernel my-channel
4170 This will create a kernel domain channel named `my-channel` with
4171 default parameters in the current tracing session.
4175 Because of a current limitation, all
4176 channels must be _created_ prior to beginning tracing in a
4177 given tracing session, i.e. before the first time you do
4180 Since a channel is automatically created by
4181 `enable-event` only for the specified domain, you cannot,
4182 for example, enable a kernel domain event, start tracing and then
4183 enable a user space domain event because no user space channel
4184 exists yet and it's too late to create one.
4186 For this reason, make sure to configure your channels properly
4187 before starting the tracers for the first time!
4190 Here's another example:
4194 lttng enable-channel --userspace --session other-session --overwrite \
4195 --tracefile-size 1048576 1mib-channel
4198 This will create a user space domain channel named `1mib-channel` in
4199 the tracing session named `other-session` that loses new events by
4200 overwriting previously recorded events (instead of the default mode of
4201 discarding newer ones) and saves trace files with a maximum size of
4204 Note that channels may also be created using the `--channel` option of
4205 the `enable-event` command when the provided channel name doesn't exist
4206 for the specified domain:
4210 lttng enable-event --kernel --channel some-channel sched_switch
4213 If no kernel domain channel named `some-channel` existed before calling
4214 the above command, it would be created with default parameters.
4216 You may enable the same event in two different channels:
4220 lttng enable-event --userspace --channel my-channel app:tp
4221 lttng enable-event --userspace --channel other-channel app:tp
4224 If both channels are enabled, the occurring `app:tp` event will
4225 generate two recorded events, one for each channel.
4227 Disabling a channel is done with the `disable-event` command:
4231 lttng disable-event --kernel some-channel
4234 The state of a channel precedes the individual states of events within
4235 it: events belonging to a disabled channel, even if they are
4236 enabled, won't be recorded.
4240 [[fine-tuning-channels]]
4241 ===== Fine-tuning channels
4243 There are various parameters that may be fine-tuned with the
4244 `enable-channel` command. The latter are well documented in
4245 man:lttng(1) and in the <<channel,Channel>> section of the
4246 <<understanding-lttng,Understanding LTTng>> chapter. For basic
4247 tracing needs, their default values should be just fine, but here are a
4248 few examples to break the ice.
4250 As the frequency of recorded events increases--either because the
4251 event throughput is actually higher or because you enabled more events
4252 than usual—__event loss__ might be experienced. Since LTTng never
4253 waits, by design, for sub-buffer space availability (non-blocking
4254 tracer), when a sub-buffer is full and no empty sub-buffers are left,
4255 there are two possible outcomes: either the new events that do not fit
4256 are rejected, or they start replacing the oldest recorded events.
4257 The choice of which algorithm to use is a per-channel parameter, the
4258 default being discarding the newest events until there is some space
4259 left. If your situation always needs the latest events at the expense
4260 of writing over the oldest ones, create a channel with the `--overwrite`
4265 lttng enable-channel --kernel --overwrite my-channel
4268 When an event is lost, it means no space was available in any
4269 sub-buffer to accommodate it. Thus, if you want to cope with sporadic
4270 high event throughput situations and avoid losing events, you need to
4271 allocate more room for storing them in memory. This can be done by
4272 either increasing the size of sub-buffers or by adding sub-buffers.
4273 The following example creates a user space domain channel with
4274 16{nbsp}sub-buffers of 512{nbsp}kiB each:
4278 lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel
4281 Both values need to be powers of two, otherwise they are rounded up
4284 Two other interesting available parameters of `enable-channel` are
4285 `--tracefile-size` and `--tracefile-count`, which respectively limit
4286 the size of each trace file and the their count for a given channel.
4287 When the number of written trace files reaches its limit for a given
4288 channel-CPU pair, the next trace file will overwrite the very first
4289 one. The following example creates a kernel domain channel with a
4290 maximum of three trace files of 1{nbsp}MiB each:
4294 lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel
4297 An efficient way to make sure lots of events are generated is enabling
4298 all kernel events in this channel and starting the tracer:
4302 lttng enable-event --kernel --all --channel my-channel
4306 After a few seconds, look at trace files in your tracing session
4307 output directory. For two CPUs, it should look like:
4310 my-channel_0_0 my-channel_1_0
4311 my-channel_0_1 my-channel_1_1
4312 my-channel_0_2 my-channel_1_2
4315 Amongst the files above, you might see one in each group with a size
4316 lower than 1{nbsp}MiB: they are the files currently being written.
4318 Since all those small files are valid LTTng trace files, LTTng trace
4319 viewers may read them. It is the viewer's responsibility to properly
4320 merge the streams so as to present an ordered list to the user.
4321 http://diamon.org/babeltrace[Babeltrace]
4322 merges LTTng trace files correctly and is fast at doing it.
4326 ==== Adding some context to channels
4328 If you read all the sections of
4329 <<controlling-tracing,Controlling tracing>> so far, you should be
4330 able to create tracing sessions, create and enable channels and events
4331 within them and start/stop the LTTng tracers. Event fields recorded in
4332 trace files provide important information about occurring events, but
4333 sometimes external context may help you solve a problem faster. This
4334 section discusses how to add context information to events of a
4335 specific channel using the `lttng` tool.
4337 There are various available context values which can accompany events
4338 recorded by LTTng, for example:
4340 * **process information**:
4344 ** scheduling priority (niceness)
4345 ** thread identifier (TID)
4346 * the **hostname** of the system on which the event occurred
4347 * plenty of **performance counters** using perf:
4348 ** CPU cycles, stalled cycles, idle cycles, etc.
4350 ** branch instructions, misses, loads, etc.
4354 The full list is available in the output of `lttng add-context --help`.
4355 Some of them are reserved for a specific domain (kernel or
4356 user space) while others are available for both.
4358 To add context information to one or all channels of a given tracing
4359 session, use the `add-context` command:
4363 lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles
4366 The above example adds the virtual process identifier and per-thread
4367 CPU cycles count values to all recorded user space domain events of the
4368 current tracing session. Use the `--channel` option to select a specific
4373 lttng add-context --kernel --channel my-channel --type tid
4376 adds the thread identifier value to all recorded kernel domain events
4377 in the channel `my-channel` of the current tracing session.
4379 Beware that context information cannot be removed from channels once
4380 it's added for a given tracing session.
4383 [[saving-loading-tracing-session]]
4384 ==== Saving and loading tracing session configurations
4386 Configuring a tracing session may be long: creating and enabling
4387 channels with specific parameters, enabling kernel and user space
4388 domain events with specific log levels and filters, adding context
4389 to some channels, etc. If you're going to use LTTng to solve real
4390 world problems, chances are you're going to have to record events using
4391 the same tracing session setup over and over, modifying a few variables
4392 each time in your instrumented program or environment. To avoid
4393 constant tracing session reconfiguration, the `lttng` tool is able to
4394 save and load tracing session configurations to/from XML files.
4396 To save a given tracing session configuration, do:
4400 lttng save my-session
4403 where `my-session` is the name of the tracing session to save. Tracing
4404 session configurations are saved to dir:{~/.lttng/sessions} by default;
4405 use the `--output-path` option to change this destination directory.
4407 All configuration parameters are saved:
4409 * tracing session name
4410 * trace data output path
4411 * channels with their state and all their parameters
4412 * context information added to channels
4413 * events with their state, log level and filter
4414 * tracing activity (started or stopped)
4416 To load a tracing session, simply do:
4420 lttng load my-session
4423 or, if you used a custom path:
4427 lttng load --input-path /path/to/my-session.lttng
4430 Your saved tracing session will be restored as if you just configured
4434 [[sending-trace-data-over-the-network]]
4435 ==== Sending trace data over the network
4437 The possibility of sending trace data over the network comes as a
4438 built-in feature of LTTng-tools. For this to be possible, an LTTng
4439 _relay daemon_ must be executed and listening on the machine where
4440 trace data is to be received, and the user must create a tracing
4441 session using appropriate options to forward trace data to the remote
4444 The relay daemon listens on two different TCP ports: one for control
4445 information and the other for actual trace data.
4447 Starting the relay daemon on the remote machine is as easy as:
4454 This will make it listen to its default ports: 5342 for control and
4455 5343 for trace data. The `--control-port` and `--data-port` options may
4456 be used to specify different ports.
4458 Traces written by `lttng-relayd` are written to
4459 +\~/lttng-traces/__hostname__/__session__+ by
4460 default, where +__hostname__+ is the host name of the
4461 traced (monitored) system and +__session__+ is the
4462 tracing session name. Use the `--output` option to write trace data
4463 outside dir:{~/lttng-traces}.
4465 On the sending side, a tracing session must be created using the
4466 `lttng` tool with the `--set-url` option to connect to the distant
4471 lttng create my-session --set-url net://distant-host
4474 The URL format is described in the output of `lttng create --help`.
4475 The above example will use the default ports; the `--ctrl-url` and
4476 `--data-url` options may be used to set the control and data URLs
4479 Once this basic setup is completed and the connection is established,
4480 you may use the `lttng` tool on the target machine as usual; everything
4481 you do will be transparently forwarded to the remote machine if needed.
4482 For example, a parameter changing the maximum size of trace files will
4483 have an effect on the distant relay daemon actually writing the trace.
4487 ==== Viewing events as they arrive
4489 We have seen how trace files may be produced by LTTng out of generated
4490 application and Linux kernel events. We have seen that those trace files
4491 may be either recorded locally by consumer daemons or remotely using
4492 a relay daemon. And we have seen that the maximum size and count of
4493 trace files is configurable for each channel. With all those features,
4494 it's still not possible to read a trace file as it is being written
4495 because it could be incomplete and appear corrupted to the viewer.
4496 There is a way to view events as they arrive, however: using
4499 LTTng live is implemented, in LTTng, solely on the relay daemon side.
4500 As trace data is sent over the network to a relay daemon by a (possibly
4501 remote) consumer daemon, a _tee_ may be created: trace data will be
4502 recorded to trace files _as well as_ being transmitted to a
4503 connected live viewer:
4506 .LTTng live and the relay daemon.
4507 image::lttng-live-relayd.png[]
4509 In order to use this feature, a tracing session must created in live
4510 mode on the target system:
4517 An optional parameter may be passed to `--live` to set the interval
4518 of time (in microseconds) between flushes to the network
4519 (1{nbsp}second is the default):
4523 lttng create --live 100000
4526 will flush every 100{nbsp}ms.
4528 If no network output is specified to the `create` command, a local
4529 relay daemon will be spawned. In this very common case, viewing a live
4530 trace is easy: enable events and start tracing as usual, then use
4531 `lttng view` to start the default live viewer:
4538 The correct arguments will be passed to the live viewer so that it
4539 may connect to the local relay daemon and start reading live events.
4541 You may also wish to use a live viewer not running on the target
4542 system. In this case, you should specify a network output when using
4543 the `create` command (`--set-url` or `--ctrl-url`/`--data-url` options).
4544 A distant LTTng relay daemon should also be started to receive control
4545 and trace data. By default, `lttng-relayd` listens on 127.0.0.1:5344
4546 for an LTTng live connection. Otherwise, the desired URL may be
4547 specified using its `--live-port` option.
4550 http://diamon.org/babeltrace[`babeltrace`]
4551 viewer supports LTTng live as one of its input formats. `babeltrace` is
4552 the default viewer when using `lttng view`. To use it manually, first
4553 list active tracing sessions by doing the following (assuming the relay
4554 daemon to connect to runs on the same host):
4558 babeltrace --input-format lttng-live net://localhost
4561 Then, choose a tracing session and start viewing events as they arrive
4562 using LTTng live, e.g.:
4566 babeltrace --input-format lttng-live net://localhost/host/hostname/my-session
4570 [[taking-a-snapshot]]
4571 ==== Taking a snapshot
4573 The normal behavior of LTTng is to record trace data as trace files.
4574 This is ideal for keeping a long history of events that occurred on
4575 the target system and applications, but may be too much data in some
4576 situations. For example, you may wish to trace your application
4577 continuously until some critical situation happens, in which case you
4578 would only need the latest few recorded events to perform the desired
4579 analysis, not multi-gigabyte trace files.
4581 LTTng has an interesting feature called _snapshots_. When creating
4582 a tracing session in snapshot mode, no trace files are written; the
4583 tracers' sub-buffers are constantly overwriting the oldest recorded
4584 events with the newest. At any time, either when the tracers are started
4585 or stopped, you may take a snapshot of those sub-buffers.
4587 There is no difference between the format of a normal trace file and the
4588 format of a snapshot: viewers of LTTng traces will also support LTTng
4589 snapshots. By default, snapshots are written to disk, but they may also
4590 be sent over the network.
4592 To create a tracing session in snapshot mode, do:
4596 lttng create --snapshot my-snapshot-session
4599 Next, enable channels, events and add context to channels as usual.
4600 Once a tracing session is created in snapshot mode, channels will be
4602 <<channel-overwrite-mode-vs-discard-mode,overwrite>> mode
4603 (`--overwrite` option of the `enable-channel` command; also called
4604 _flight recorder mode_) and have an `mmap()` channel type
4607 Start tracing. When you're ready to take a snapshot, do:
4611 lttng snapshot record --name my-snapshot
4614 This will record a snapshot named `my-snapshot` of all channels of
4615 all domains of the current tracing session. By default, snapshots files
4616 are recorded in the path returned by `lttng snapshot list-output`. You
4617 may change this path or decide to send snapshots over the network
4620 . an output path/URL specified when creating the tracing session
4622 . an added snapshot output path/URL using
4623 `lttng snapshot add-output`
4624 . an output path/URL provided directly to the
4625 `lttng snapshot record` command
4627 Method 3 overrides method 2 which overrides method 1. When specifying
4628 a URL, a relay daemon must be listening on some machine (see
4629 <<sending-trace-data-over-the-network,Sending trace data over the network>>).
4631 If you need to make absolutely sure that the output file won't be
4632 larger than a certain limit, you can set a maximum snapshot size when
4633 taking it with the `--max-size` option:
4637 lttng snapshot record --name my-snapshot --max-size 2M
4640 Older recorded events will be discarded in order to respect this
4647 This chapter presents various references for LTTng packages such as links
4648 to online manpages, tables needed by the rest of the text, descriptions
4649 of library functions, etc.
4652 [[online-lttng-manpages]]
4653 === Online LTTng manpages
4655 LTTng packages currently install the following manpages, available
4656 online using the links below:
4660 ** man:lttng-sessiond(8)
4661 ** man:lttng-relayd(8)
4663 ** man:lttng-gen-tp(1)
4665 ** man:lttng-ust-cyg-profile(3)
4666 ** man:lttng-ust-dl(3)
4672 This section presents references of the LTTng-UST package.
4676 ==== LTTng-UST library (+liblttng‑ust+)
4678 The LTTng-UST library, or `liblttng-ust`, is the main shared object
4679 against which user applications are linked to make LTTng user space
4682 The <<c-application,C application>> guide shows the complete
4683 process to instrument, build and run a C/$$C++$$ application using
4684 LTTng-UST, while this section contains a few important tables.
4687 [[liblttng-ust-tp-fields]]
4688 ===== Tracepoint fields macros (for `TP_FIELDS()`)
4690 The available macros to define tracepoint fields, which should be listed
4691 within `TP_FIELDS()` in `TRACEPOINT_EVENT()`, are:
4693 [role="growable func-desc",cols="asciidoc,asciidoc"]
4694 .Available macros to define LTTng-UST tracepoint fields
4696 |Macro |Description and parameters
4699 +ctf_integer(__t__, __n__, __e__)+
4701 +ctf_integer_nowrite(__t__, __n__, __e__)+
4703 Standard integer, displayed in base 10.
4706 Integer C type (`int`, `long`, `size_t`, etc.).
4712 Argument expression.
4714 |+ctf_integer_hex(__t__, __n__, __e__)+
4716 Standard integer, displayed in base 16.
4725 Argument expression.
4727 |+ctf_integer_network(__t__, __n__, __e__)+
4729 Integer in network byte order (big endian), displayed in base 10.
4738 Argument expression.
4740 |+ctf_integer_network_hex(__t__, __n__, __e__)+
4742 Integer in network byte order, displayed in base 16.
4751 Argument expression.
4754 +ctf_float(__t__, __n__, __e__)+
4756 +ctf_float_nowrite(__t__, __n__, __e__)+
4758 Floating point number.
4761 Floating point number C type (`float` or `double`).
4767 Argument expression.
4770 +ctf_string(__n__, __e__)+
4772 +ctf_string_nowrite(__n__, __e__)+
4774 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
4780 Argument expression.
4783 +ctf_array(__t__, __n__, __e__, __s__)+
4785 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
4787 Statically-sized array of integers
4790 Array element C type.
4796 Argument expression.
4802 +ctf_array_text(__t__, __n__, __e__, __s__)+
4804 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
4806 Statically-sized array, printed as text.
4808 The string does not need to be null-terminated.
4811 Array element C type (always `char`).
4817 Argument expression.
4823 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
4825 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
4827 Dynamically-sized array of integers.
4829 The type of +__E__+ needs to be unsigned.
4832 Array element C type.
4838 Argument expression.
4841 Length expression C type.
4847 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
4849 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
4851 Dynamically-sized array, displayed as text.
4853 The string does not need to be null-terminated.
4855 The type of +__E__+ needs to be unsigned.
4857 The behaviour is undefined if +__e__+ is `NULL`.
4860 Sequence element C type (always `char`).
4866 Argument expression.
4869 Length expression C type.
4875 The `_nowrite` versions omit themselves from the session trace, but are
4876 otherwise identical. This means the `_nowrite` fields won't be written
4877 in the recorded trace. Their primary purpose is to make some
4878 of the event context available to the
4879 <<enabling-disabling-events,event filters>> without having to
4880 commit the data to sub-buffers.
4883 [[liblttng-ust-tracepoint-loglevel]]
4884 ===== Tracepoint log levels (for `TRACEPOINT_LOGLEVEL()`)
4886 The following table shows the available log level values for the
4887 `TRACEPOINT_LOGLEVEL()` macro:
4893 Action must be taken immediately.
4896 Critical conditions.
4905 Normal, but significant, condition.
4908 Informational message.
4910 `TRACE_DEBUG_SYSTEM`::
4911 Debug information with system-level scope (set of programs).
4913 `TRACE_DEBUG_PROGRAM`::
4914 Debug information with program-level scope (set of processes).
4916 `TRACE_DEBUG_PROCESS`::
4917 Debug information with process-level scope (set of modules).
4919 `TRACE_DEBUG_MODULE`::
4920 Debug information with module (executable/library) scope (set of units).
4922 `TRACE_DEBUG_UNIT`::
4923 Debug information with compilation unit scope (set of functions).
4925 `TRACE_DEBUG_FUNCTION`::
4926 Debug information with function-level scope.
4928 `TRACE_DEBUG_LINE`::
4929 Debug information with line-level scope (TRACEPOINT_EVENT default).
4932 Debug-level message.
4934 Log levels `TRACE_EMERG` through `TRACE_INFO` and `TRACE_DEBUG` match
4935 http://man7.org/linux/man-pages/man3/syslog.3.html[syslog]
4936 level semantics. Log levels `TRACE_DEBUG_SYSTEM` through `TRACE_DEBUG`
4937 offer more fine-grained selection of debug information.
4940 [[lttng-modules-ref]]
4943 This section presents references of the LTTng-modules package.
4946 [[lttng-modules-tp-struct-entry]]
4947 ==== Tracepoint fields macros (for `TP_STRUCT__entry()`)
4949 This table describes possible entries for the `TP_STRUCT__entry()` part
4950 of `LTTNG_TRACEPOINT_EVENT()`:
4952 [role="growable func-desc",cols="asciidoc,asciidoc"]
4953 .Available entries for `TP_STRUCT__entry()` (in `LTTNG_TRACEPOINT_EVENT()`)
4955 |Macro |Description and parameters
4957 |+\__field(__t__, __n__)+
4959 Standard integer, displayed in base 10.
4962 Integer C type (`int`, `unsigned char`, `size_t`, etc.).
4967 |+\__field_hex(__t__, __n__)+
4969 Standard integer, displayed in base 16.
4977 |+\__field_oct(__t__, __n__)+
4979 Standard integer, displayed in base 8.
4987 |+\__field_network(__t__, __n__)+
4989 Integer in network byte order (big endian), displayed in base 10.
4997 |+\__field_network_hex(__t__, __n__)+
4999 Integer in network byte order (big endian), displayed in base 16.
5007 |+\__array(__t__, __n__, __s__)+
5009 Statically-sized array, elements displayed in base 10.
5012 Array element C type.
5020 |+\__array_hex(__t__, __n__, __s__)+
5022 Statically-sized array, elements displayed in base 16.
5025 array element C type.
5031 |+\__array_text(__t__, __n__, __s__)+
5033 Statically-sized array, displayed as text.
5036 Array element C type (always char).
5044 |+\__dynamic_array(__t__, __n__, __s__)+
5046 Dynamically-sized array, displayed in base 10.
5049 Array element C type.
5055 Length C expression.
5057 |+\__dynamic_array_hex(__t__, __n__, __s__)+
5059 Dynamically-sized array, displayed in base 16.
5062 Array element C type.
5068 Length C expression.
5070 |+\__dynamic_array_text(__t__, __n__, __s__)+
5072 Dynamically-sized array, displayed as text.
5075 Array element C type (always char).
5081 Length C expression.
5083 |+\__string(n, __s__)+
5085 Null-terminated string.
5087 The behaviour is undefined behavior if +__s__+ is `NULL`.
5093 String source (pointer).
5096 The above macros should cover the majority of cases. For advanced items,
5097 see path:{probes/lttng-events.h}.
5100 [[lttng-modules-tp-fast-assign]]
5101 ==== Tracepoint assignment macros (for `TP_fast_assign()`)
5103 This table describes possible entries for the `TP_fast_assign()` part
5104 of `LTTNG_TRACEPOINT_EVENT()`:
5106 .Available entries for `TP_fast_assign()` (in `LTTNG_TRACEPOINT_EVENT()`)
5107 [role="growable func-desc",cols="asciidoc,asciidoc"]
5109 |Macro |Description and parameters
5111 |+tp_assign(__d__, __s__)+
5113 Assignment of C expression +__s__+ to tracepoint field +__d__+.
5116 Name of destination tracepoint field.
5119 Source C expression (may refer to tracepoint arguments).
5121 |+tp_memcpy(__d__, __s__, __l__)+
5123 Memory copy of +__l__+ bytes from +__s__+ to tracepoint field
5124 +__d__+ (use with array fields).
5127 Name of destination tracepoint field.
5130 Source C expression (may refer to tracepoint arguments).
5133 Number of bytes to copy.
5135 |+tp_memcpy_from_user(__d__, __s__, __l__)+
5137 Memory copy of +__l__+ bytes from user space +__s__+ to tracepoint
5138 field +__d__+ (use with array fields).
5141 Name of destination tracepoint field.
5144 Source C expression (may refer to tracepoint arguments).
5147 Number of bytes to copy.
5149 |+tp_memcpy_dyn(__d__, __s__)+
5151 Memory copy of dynamically-sized array from +__s__+ to tracepoint field
5154 The number of bytes is known from the field's length expression
5155 (use with dynamically-sized array fields).
5158 Name of destination tracepoint field.
5161 Source C expression (may refer to tracepoint arguments).
5164 Number of bytes to copy.
5166 |+tp_strcpy(__d__, __s__)+
5168 String copy of +__s__+ to tracepoint field +__d__+ (use with string
5172 Name of destination tracepoint field.
5175 Source C expression (may refer to tracepoint arguments).