Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Then, a framework where the event requests are handled by the main window with
fine granularity will be described. We will then discuss its advantages and
drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a maximum number of events to be delivered, which is in fact not a
strict maximum: the number of events to read only gives a stop point, from
where only events with the same timestamp will still be delivered.

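To illustrate this stop-point behaviour, here is a small self-contained sketch
in C (toy types, not the real LTTV interfaces): once num_events events have
been delivered, delivery continues only while events share the last delivered
timestamp.

#include <stddef.h>

/* Toy event: only the field relevant here, a timestamp. */
struct toy_event { double timestamp; };

/* Deliver at least num_events events starting at index start, then keep
 * delivering while the timestamp equals the last delivered one.
 * Returns the index one past the last delivered event. */
static size_t deliver_with_stop_point(const struct toy_event *ev, size_t len,
                                      size_t start, size_t num_events)
{
    size_t i = start;
    size_t delivered = 0;

    while (i < len && delivered < num_events) {
        /* deliver(ev[i]) would happen here */
        delivered++;
        i++;
    }
    /* Stop point reached: flush the events sharing the last timestamp. */
    while (i < len && i > start && ev[i].timestamp == ev[i - 1].timestamp) {
        /* deliver(ev[i]) */
        i++;
    }
    return i;
}
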
Viewers register hooks themselves in the traceset context. When merging read
requests in the main window, all hooks registered by viewers will be called for
the union of all the read requests, because the main window has no control over
hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not duplicate a read of the same time
interval: filtering out the duplicates would be very hard for the viewers. So,
in order to achieve this, time requests are sorted by start time, and
process_traceset is called for each time request. We keep the last event time
between each read: if the start time of the next read is lower than the time
already reached, we continue the reading from the current position, as
sketched below.

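Here is a minimal, self-contained sketch of that merging loop (toy types and a
hypothetical process_events() helper stand in for the real process_traceset
interface):

#include <stddef.h>

struct toy_request { double start_time; double end_time; };

/* Hypothetical reader: processes events in [from, to] and returns the
 * timestamp actually reached. */
extern double process_events(double from, double to);

/* Requests must be sorted by start_time.  Each requested interval is read
 * exactly once: overlapping requests continue from the time already
 * reached instead of seeking backwards and re-reading. */
static void service_sorted_requests(const struct toy_request *req, size_t n)
{
    double time_reached = -1.0;  /* before any event */

    for (size_t i = 0; i < n; i++) {
        /* Seek only if this request starts beyond the time reached. */
        double from = (req[i].start_time > time_reached)
                          ? req[i].start_time : time_reached;
        if (req[i].end_time > time_reached)
            time_reached = process_events(from, req[i].end_time);
    }
}
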
We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we can't do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.

The viewers have to filter the events delivered by traceset reading, because
these events may have been requested by another viewer for a totally (or
partially) different time interval.


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at
least_ that quantity of events, plus the ones that have the same timestamp as
the last one.

- Border effects

Viewer writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the
information they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer's writer will have to take into account all the border effects
caused by the interaction with other modules. This means that even if a viewer
works well alone or with another viewer, it's possible that new bugs arise
when a new viewer comes around. So, even if a perfect testbench works well for
a viewer, it does not confirm that no new bug will arise when another viewer
is loaded at the same moment, asking for different time intervals.


- Duplication of work

Time-based filters and event counters will have to be implemented on the
viewer's side, which is a duplication of the functionality that would normally
be expected from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for a
different time interval, the detailed events list will have to filter all the
events originally delivered to the control flow view. This case can occur
quite often.



Strengths

- Simple concatenation of time intervals at the main window level.

Being allowed to deliver more events than necessary to the viewers means that
we can concatenate time intervals and number-of-events requests fairly easily.
On the other hand, it is hard to determine whether some specific cases will
behave incorrectly, and in-depth testing is impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks,
removing a layer of encapsulation.




2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide the viewers with exactly the events
they requested: no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with the viewers'
hooks. Viewers will not be allowed to add them directly in the context; they
will give them to the lttvwindow API, along with the time interval or the
position and number of events. The lttvwindow API will have to take care of
adding and removing hooks for the different time intervals requested. That
means that hook insertion and removal will be done between each traceset
processing, based on the time intervals and event positions related to each
hook. We must therefore provide a simple interface for passing hooks between
the viewers and the main window, making them easier to manage from the main
window. A modification to the LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API:


- lttvwindow_events_request
    ( MainWindow *main_win,
      LttTime start_time,
      LttvTracesetPosition start_position,
      LttTime end_time,
      guint num_events,
      LttvTracesetPosition end_position,
      LttvHooksById before_traceset,
      LttvHooksById before_trace,
      LttvHooksById before_tracefile,
      LttvHooksById middle,
      LttvHooksById after_tracefile,
      LttvHooksById after_trace,
      LttvHooksById after_traceset )


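As an illustration, a viewer asking for at most 1000 events from a given start
time could call the function as sketched below. This is only a sketch: how
unused boundaries are expressed (NULL positions, an infinite end time) is an
assumption that this proposal would still have to define.

/* Hypothetical viewer-side request: at most 1000 events from start_time,
 * with only a middle (per-event) hook list.  Assumes LttvTracesetPosition
 * and LttvHooksById are pointer types, so NULL means "unused"/"no hooks". */
void viewer_request_events(MainWindow *main_win, LttTime start_time,
                           LttvHooksById middle_hooks)
{
  LttTime end_time = ltt_time_infinite;   /* assumed "no end time" value */

  lttvwindow_events_request(main_win,
                            start_time,
                            NULL,          /* start_position: unused */
                            end_time,
                            1000,          /* num_events             */
                            NULL,          /* end_position: unused   */
                            NULL,          /* before_traceset        */
                            NULL,          /* before_trace           */
                            NULL,          /* before_tracefile       */
                            middle_hooks,  /* middle                 */
                            NULL,          /* after_tracefile        */
                            NULL,          /* after_trace            */
                            NULL);         /* after_traceset         */
}
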
Internal functions:

- lttvwindow_process_pending_requests


Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate it. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset. A sketch of this hook
passing follows.

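Here is a minimal sketch of how a viewer could build such a hook list,
assuming the hook.h interface provides lttv_hooks_new() and a priority-aware
lttv_hooks_add() as proposed in hook_prio.txt (the exact names and signatures
are assumptions):

#include <lttv/hook.h>

/* Hypothetical per-event hook: hook_data is the viewer's private data,
 * call_data the event context provided by process traceset. */
static gboolean my_viewer_event_hook(void *hook_data, void *call_data)
{
  /* ... update the viewer's model from the event ... */
  return FALSE;  /* FALSE usually means "continue processing" */
}

/* Build the middle hook list to hand over to the main window. */
LttvHooks *make_middle_hooks(void *viewer_data)
{
  LttvHooks *middle = lttv_hooks_new();
  /* The priority argument is assumed from the hook_prio.txt proposal. */
  lttv_hooks_add(middle, my_viewer_event_hook, viewer_data,
                 LTTV_PRIO_DEFAULT);
  return middle;
}

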
- lttvwindow_events_request

It adds the EventsRequest struct to the array of pending requests and
registers a pending request for the next g_idle if none is already registered.

typedef struct _EventsRequest {
  LttTime start_time;
  LttvTracesetPosition start_position;
  LttTime end_time;
  guint num_events;
  LttvTracesetPosition end_position;
  LttvHooksById before_traceset;
  LttvHooksById before_trace;
  LttvHooksById before_tracefile;
  LttvHooksById middle;
  LttvHooksById after_tracefile;
  LttvHooksById after_trace;
  LttvHooksById after_traceset;
} EventsRequest;

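A minimal sketch of this registration logic, assuming hypothetical MainWindow
fields (a GSList of pending requests and a flag for the registered idle
handler) and glib's g_idle_add():

#include <glib.h>

/* Servicing function (see below); as a GSourceFunc it returns FALSE when
 * all requests are serviced, TRUE to be called again on the next idle. */
static gboolean lttvwindow_process_pending_requests(gpointer data);

void lttvwindow_events_request(MainWindow *main_win,
                               LttTime start_time,
                               LttvTracesetPosition start_position,
                               LttTime end_time,
                               guint num_events,
                               LttvTracesetPosition end_position,
                               LttvHooksById before_traceset,
                               LttvHooksById before_trace,
                               LttvHooksById before_tracefile,
                               LttvHooksById middle,
                               LttvHooksById after_tracefile,
                               LttvHooksById after_trace,
                               LttvHooksById after_traceset)
{
  EventsRequest *req = g_new(EventsRequest, 1);

  req->start_time       = start_time;
  req->start_position   = start_position;
  req->end_time         = end_time;
  req->num_events       = num_events;
  req->end_position     = end_position;
  req->before_traceset  = before_traceset;
  req->before_trace     = before_trace;
  req->before_tracefile = before_tracefile;
  req->middle           = middle;
  req->after_tracefile  = after_tracefile;
  req->after_trace      = after_trace;
  req->after_traceset   = after_traceset;

  /* Hypothetical MainWindow fields: pending request list and idle flag. */
  main_win->events_requests =
      g_slist_append(main_win->events_requests, req);

  if (!main_win->events_request_pending) {
    main_win->events_request_pending = TRUE;
    g_idle_add(lttvwindow_process_pending_requests, main_win);
  }
}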

- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time intervals and
position requests. It does so with the following algorithm, which organizes
the process traceset calls. Here is a detailed description of the way it
works:


- Events Requests Servicing Algorithm

Necessary data structures:

List of requests added to the context     : list_in
List of requests not added to the context : list_out

Initial state:

list_in  : empty
list_out : many event requests


While list_in or list_out is not empty
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the lowest start time in list_out
            (ltime)
      1.1.2 Find all position requests with the lowest position in list_out
            (lpos)
      1.1.3 If lpos.start time < ltime
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else (lpos.start time >= ltime)
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If the first request in list_in is a time request
        1.2.1.1 Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        1.2.2.1 Seek to that position
    1.3 Call begin for all list_in members
      (1.3.1 begin hooks called)
      (1.3.2 middle hooks added)
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current time
        - Add to list_in, remove from list_out
        - Call begin
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - Call begin

  3. Find end criteria
    3.1 End time
      3.1.1 Find the lowest end time in list_in
      3.1.2 Find the lowest start time in list_out
      3.1.3 Use the lowest of both as end time
    3.2 Number of events
      3.2.1 Find the lowest number of events in list_in
    3.3 End position
      3.3.1 Find the lowest end position in list_in
      3.3.2 Find the lowest start position in list_out
      3.3.3 Use the lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (use the end criteria found in 3)
  5. After process traceset middle
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - remove req from list_in
      - if req.end time == current time
        - Call end for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - remove req from list_in


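As an illustration of step 3, here is a small self-contained sketch of the end
time computation over both lists (toy types with doubles standing in for
LttTime; the same scheme applies to end positions):

#include <stddef.h>

/* Toy request: times reduced to doubles, -1.0 meaning "unset". */
struct toy_req { double start_time, end_time; unsigned num_events; };

/* Step 3.1: the end time of the middle call is the lowest of the end
 * times in list_in and the start times in list_out, so the read stops
 * both at the earliest finished request and at the earliest request
 * waiting to be added. */
static double lowest_end_time(const struct toy_req *in, size_t n_in,
                              const struct toy_req *out, size_t n_out)
{
    double end = -1.0;  /* -1.0: no bound found yet */

    for (size_t i = 0; i < n_in; i++)
        if (in[i].end_time >= 0.0 && (end < 0.0 || in[i].end_time < end))
            end = in[i].end_time;
    for (size_t i = 0; i < n_out; i++)
        if (out[i].start_time >= 0.0 && (end < 0.0 || out[i].start_time < end))
            end = out[i].start_time;
    return end;
}
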
Notes:
End criteria for process traceset middle:
If a criterion is reached, the event is out of boundaries and we return.
Current time > End time
Event count > Number of events
Current position >= End position

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.


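Here is a sketch of this boundary check as it could appear around the middle
loop, using ltt_time_compare() from ltt/time.h and a hypothetical
position_compare() helper; note the >= on positions, as explained above.

/* Returns TRUE when the current event falls outside the requested
 * boundaries, so the middle loop must return.  position_compare() is a
 * hypothetical stand-in for a traceset position comparator returning
 * <0, 0 or >0 like ltt_time_compare(). */
static gboolean end_criterion_reached(LttTime current_time, LttTime end_time,
                                      guint event_count, guint num_events,
                                      LttvTracesetPosition current_pos,
                                      LttvTracesetPosition end_pos)
{
  if (ltt_time_compare(current_time, end_time) > 0)  /* time > end     */
    return TRUE;
  if (event_count > num_events)                      /* count exceeded */
    return TRUE;
  if (position_compare(current_pos, end_pos) >= 0)   /* pos >= end:    */
    return TRUE;                                     /* see note above */
  return FALSE;
}
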
Weaknesses

- None (nearly?) :)


Strengths

- Removes the need for filtering of information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
reading.