Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Second, we will describe a framework where event requests are handled by the
main window with fine granularity. We will then discuss its advantages and
drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window.
They also specify a (soft) maximum number of events to be delivered. In fact,
the number of events to read only gives a stop point : from there, only events
with the same timestamp will still be delivered.

Viewers register hooks themselves in the traceset context. When merging read
requests in the main window, all hooks registered by viewers will be called
for the union of all the read requests, because the main window has no control
over hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not duplicate a read of the same time
interval : filtering by viewers would be very hard. So, in order to achieve
this, time requests are sorted by start time, and process_traceset is called
for each time request. We keep the last event time between each read : if the
start time of the next read is lower than the time already reached, we
continue reading from the current position.

We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we can't do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer such a
request.

The viewers have to filter the events delivered by traceset reading, because
the reading may have been requested by another viewer for a totally (or
partially) different time interval. A sketch of this filtering follows.
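
As an illustration, here is a minimal sketch of such a filtering event hook.
It assumes that the hook receives the LttvTracefileContext as call_data and
that the context exposes the current event as tfc->e ; the ViewerData type and
its time fields are hypothetical.

#include <ltt/time.h>
#include <ltt/event.h>
#include <lttv/tracecontext.h>

typedef struct _ViewerData {  /* hypothetical per-viewer state */
  LttTime start_time;
  LttTime end_time;
} ViewerData;

static gboolean viewer_event_hook(void *hook_data, void *call_data)
{
  ViewerData *vd = (ViewerData *)hook_data;
  LttvTracefileContext *tfc = (LttvTracefileContext *)call_data;
  LttTime t = ltt_event_time(tfc->e);

  /* Drop events that were actually requested by another viewer. */
  if(ltt_time_compare(t, vd->start_time) < 0
     || ltt_time_compare(t, vd->end_time) > 0)
    return FALSE;  /* not ours : ignore and keep reading */

  /* ... process the event ... */
  return FALSE;    /* returning TRUE would stop the traceset reading */
}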


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at
least_ that quantity of events, plus any events that share the timestamp of
the last one.

- Border effects

Viewer writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the
information they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer writer will have to take into account all the border effects caused
by the interaction with other modules. This means that even if a viewer works
well alone or with another viewer, it is possible that new bugs arise when a
new viewer comes around. So, even if a perfect testbench works well for a
viewer, it does not confirm that no new bug will arise when another viewer is
loaded at the same moment, asking for different time intervals.


- Duplication of the work

Time based filters and counters of events will have to be implemented on the
viewer's side, which is a duplication of the functionality that would normally
be expected from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for different
time intervals, the detailed events list will have to filter out all the
events delivered originally for the control flow view. This case can occur
quite often.


Strengths

- Simple concatenation of time intervals at the main window level.

Having the opportunity of delivering more events than necessary to the viewers
means that we can concatenate time intervals and requests for a number of
events fairly easily, even though it is hard to determine whether some
specific cases will misbehave, in-depth testing being impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks,
removing a layer of encapsulation.



2. (Proposed) Strict-boundaries event reading

The idea behind this method is to provide exactly the events requested by the
viewers to them, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt ; the primitives it relies on are
sketched below.
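
For reference, the primitives proposed in that document look like the
following (signatures reproduced from the proposal ; the final implementation
may differ) :

void lttv_process_traceset_seek_time(LttvTracesetContext *self,
                                     LttTime start);

void lttv_process_traceset_seek_position(LttvTracesetContext *self,
                            const LttvTracesetContextPosition *pos);

guint lttv_process_traceset_middle(LttvTracesetContext *self,
                            LttTime end,
                            guint nb_events,
                            const LttvTracesetContextPosition *end_position);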

It also means that the lttvwindow API will have to deal with the viewers'
hooks. Viewers will not be allowed to add hooks directly to the context ; they
will give them to the lttvwindow API, along with the time interval or the
position and number of events. The lttvwindow API will have to take care of
adding and removing hooks for the different time intervals requested. That
means that hook insertion and removal will be done between each traceset
processing, based on the time intervals and event positions related to each
hook. We must therefore provide a simple interface for passing hooks between
the viewers and the main window, making them easier to manage from the main
window. A modification to the LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API :


void lttvwindow_events_request
               ( MainWindow *main_win,
                 EventsRequest *events_request);

void lttvwindow_events_request_remove_all
               ( MainWindow *main_win,
                 gpointer    viewer);

Internal functions :

- lttvwindow_process_pending_requests


Events Requests Removal

A new API function will be necessary to let viewers remove all the events
requests they have made previously. By allowing this, no more out-of-bound
requests will be serviced : a viewer that sees its time interval changed
before the first servicing is completed can clear its previous events requests
and make a new one for the newly needed interval, considering the finished
chunks as a completed area.

It is also very useful for dealing with the viewer destruction case : the
viewer just has to remove its events requests from the main window before it
gets destroyed, as sketched below.
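
For example, a viewer could hook this removal on its widget's "destroy"
signal. A minimal sketch ; the ViewerData type and its main_win field are
hypothetical :

#include <gtk/gtk.h>

static void viewer_destroy_cb(GtkWidget *widget, gpointer data)
{
  ViewerData *vd = (ViewerData *)data;

  /* The viewer pointer must be the same one that was used when the
     events requests were made. */
  lttvwindow_events_request_remove_all(vd->main_win, vd);
}

/* At viewer creation :
   g_signal_connect(G_OBJECT(widget), "destroy",
                    G_CALLBACK(viewer_destroy_cb), viewer_data);  */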


Permitted GTK Events Between Chunks

All GTK events will be enabled between chunks. This is due to the fact that
the background processing and a high priority request are seen as the same
case. While a background processing is in progress, the whole graphical
interface must stay enabled.

We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request and any GTK event, as the sketch below illustrates.
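
A minimal sketch of this interleaving ; requests_pending() and
process_one_chunk() are hypothetical helpers, while gtk_events_pending() is
the standard GTK call to detect queued events :

#include <gtk/gtk.h>

static void service_until_gtk_event(Tab *tab)
{
  /* Stop as soon as GTK has something queued : the idle function then
     returns, GTK dispatches its events, and servicing resumes at the
     next idle invocation. */
  while(requests_pending(tab) && !gtk_events_pending())
    process_one_chunk(tab);
}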


Background Computation Request

The types of background computation that can be requested by a viewer are :
state computation (main window scope) or viewer specific background
computation.

A background computation request is made via lttvwindow_events_request, with
the priority field set to a low priority, as in the sketch below.

If a lttvwindow_events_request_remove_all is done on the viewer pointer, it
will not affect the state computation, as no viewer pointer will have been
passed in the initial request. This is the expected result. Background
processings that call a viewer's hooks will, however, be removed.
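
A sketch of such a request, using the EventsRequest structure detailed in the
Implementation section below. The BACKGROUND_PRIO constant (denoting a low
priority) and the state computation hook list are hypothetical :

static void request_state_computation(MainWindow *main_win,
                                      LttvHooks *state_computation_hooks)
{
  EventsRequest *req = g_new0(EventsRequest, 1);

  req->viewer_data = NULL;            /* no viewer : survives remove_all */
  req->prio        = BACKGROUND_PRIO; /* hypothetical low priority value */
  req->start_time  = ltt_time_zero;   /* read the whole trace            */
  req->end_time    = ltt_time_infinite;
  req->num_events  = G_MAXUINT;
  req->event       = state_computation_hooks;

  lttvwindow_events_request(main_win, req);
}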


A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.

A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.


Tab change

If a tab change occurs, we still want to do background processing. Events
requests must be stored in a list located in the same scope as the traceset
context. Right now, this is tab scope. All functions called from the request
servicing function must _not_ use the current_tab concept, as it may change.
The idle function must then take a tab, and not the main window, as parameter.

If a tab is removed, its associated idle events requests servicing function
must also be removed (see the sketch below).

It now looks a lot more useful to give a Tab* to the viewer instead of a
MainWindow*, as all the information needed by the viewer is located at the tab
level. It will diminish the dependence upon the current tab concept.
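
Removing the idle function when its tab goes away can use the standard GLib
call g_idle_remove_by_data(), since the tab pointer is the data the servicing
function was registered with (a sketch) :

static void on_tab_removed(Tab *tab)
{
  /* The servicing function was registered with the tab as its data :
     g_idle_add(lttvwindow_process_pending_requests, tab);  */
  g_idle_remove_by_data(tab);
}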


Idle function (lttvwindow_process_pending_requests)

The idle function must return FALSE to be removed from the idle functions when
no more events requests are pending. Otherwise, it returns TRUE. It will
service requests until none are left. A skeleton follows.
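
A skeleton of this contract, registered with the standard GLib g_idle_add() ;
the request list field and the chunk servicing helper are hypothetical :

#include <glib.h>

static gboolean lttvwindow_process_pending_requests(gpointer data)
{
  Tab *tab = (Tab *)data;

  if(tab->events_requests == NULL)  /* hypothetical request list field */
    return FALSE;  /* no request left : removed from the idle sources */

  service_one_round_of_chunks(tab); /* hypothetical : algorithm below  */
  return TRUE;     /* still pending : called again when idle           */
}

/* Registered when the first events request arrives :
   g_idle_add(lttvwindow_process_pending_requests, tab);  */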


Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate it, as in the sketch below. Then, the
main window will add them to and remove them from the context to deliver
exactly the events requested by each viewer through process traceset.
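
For example, a viewer could build its hook list like this (a sketch assuming
the priority-aware lttv_hooks_add() and the LTTV_PRIO_DEFAULT constant
described in hook_prio.txt) :

#include <lttv/hook.h>

static LttvHooks *build_event_hooks(gpointer viewer_data)
{
  LttvHooks *event_hooks = lttv_hooks_new();

  /* viewer_event_hook is the viewer's event callback. */
  lttv_hooks_add(event_hooks, viewer_event_hook, viewer_data,
                 LTTV_PRIO_DEFAULT);
  return event_hooks;
}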


- lttvwindow_events_request

It adds an EventsRequest struct to the array of pending events requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer viewer_data;
  gboolean servicing;         /* service in progress: TRUE      */
  LttvEventsRequestPrio prio; /* Ev. Req. priority              */
  LttTime start_time;         /* Unset : { 0, 0 }               */
  LttvTracesetContextPosition *start_position; /* Unset : num_traces = 0 */
  gboolean stop_flag;         /* Continue:FALSE Stop:TRUE       */
  LttTime end_time;           /* Unset : { 0, 0 }               */
  guint num_events;           /* Unset : G_MAXUINT              */
  LttvTracesetContextPosition *end_position;   /* Unset : num_traces = 0 */
  LttvHooks *before_traceset; /* Unset : NULL                   */
  LttvHooks *before_trace;    /* Unset : NULL                   */
  LttvHooks *before_tracefile;/* Unset : NULL                   */
  LttvHooks *event;           /* Unset : NULL                   */
  LttvHooksById *event_by_id; /* Unset : NULL                   */
  LttvHooks *after_tracefile; /* Unset : NULL                   */
  LttvHooks *after_trace;     /* Unset : NULL                   */
  LttvHooks *after_traceset;  /* Unset : NULL                   */
  LttvHooks *before_chunk;    /* Unset : NULL                   */
  LttvHooks *after_chunk;     /* Unset : NULL                   */
} EventsRequest;
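
As an example, here is how a viewer might fill this structure to request all
the events in a time interval. This is a sketch : the HIGH_PRIO constant and
the assumption that NULL marks unset positions are ours.

static void request_time_interval(MainWindow *main_win,
                                  gpointer viewer_data,
                                  LttTime start, LttTime end,
                                  LttvHooks *event_hooks)
{
  EventsRequest *req = g_new0(EventsRequest, 1);

  req->viewer_data = viewer_data;
  req->servicing   = FALSE;
  req->prio        = HIGH_PRIO;  /* hypothetical high priority value */
  req->start_time  = start;
  req->stop_flag   = FALSE;
  req->end_time    = end;
  req->num_events  = G_MAXUINT;  /* unset : bounded by end_time only */
  req->event       = event_hooks;
  /* all other pointers left NULL (unset) by g_new0() */

  lttvwindow_events_request(main_win, req);
}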


- lttvwindow_events_request_remove_all

It removes from the pool all the events requests that have their viewer_data
field matching the viewer pointer given in argument.

It calls the traceset/trace/tracefile end hooks for each request removed.


- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time intervals and
position requests. It does so with the following algorithm organizing the
process traceset calls. Here is a detailed description of the way it works :


- Revised Events Requests Servicing Algorithm (v2)

The reads are split in chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If that happens, we want to start
over the algorithm from the beginning.

Two levels of priority exist : high priority and low priority. High prio
requests are serviced first, even if lower priority requests have a lower
start time or position.


Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :

list_in : empty
list_out : many events requests


A. While list_in !empty and list_out !empty and !GTK Event pending
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with the highest priority and lowest start
            time in list_out (ltime)
      1.1.2 Find all position requests with the highest priority and lowest
            position in list_out (lpos)
      1.1.3 If lpos.prio > ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time < ltime)
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.prio < ltime.prio
            || (lpos.prio == ltime.prio && lpos.start time >= ltime))
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If the first request in list_in is a time request
        - If first req in list_in start time != current time
          - Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        - If first req in list_in pos != current pos
          - If the position is the same as the saved state's, restore state
          - Else, seek to that position
    1.3 Add hooks and call begin for all list_in members
      1.3.1 If !servicing
        - begin hooks called
        - servicing = TRUE
      1.3.2 call before_chunk
      1.3.3 events hooks added
  2. Else, list_in is not empty, we continue a read
    2.1 For each req of list_out
      - if req.start time == current context time
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added
      - if req.start position == current position
        - Add to list_in, remove from list_out
        - If !servicing
          - Call begin
          - servicing = TRUE
        - Call before_chunk
        - events hooks added

  3. Find end criteria (a C sketch of this step follows the algorithm)
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out (>= current time*)
            * To eliminate lower prio requests
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
      3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out (>= current position)
      3.3.3 Use lowest of both as end position

  4. Call process traceset middle
    4.1 Call process traceset middle (use the end criteria found in 3)
      * note : an end criterion can also be a viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
    5.1 For each req in list_in
      - req.num -= count
      - if req.num == 0
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if current context time > req.end time
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.end pos == current pos
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if req.stop_flag == TRUE
        - Call end for req
        - Remove events hooks for req
        - remove req from list_in
      - if there exists an events request in list_out that has a
        higher priority and a time != current time
        - Use current position as start position for req
        - Remove start time from req
        - Call after_chunk for req
        - Remove event hooks for req
        - Put req back in list_out, remove from list_in
        - Save current state into saved_state.

B. When interrupted
  1. for each request in list_in
    1.1 Use current position as start position
    1.2 Remove start time
    1.3 Call after_chunk
    1.4 Remove event hooks
    1.5 Put it back in list_out
  2. Save current state into saved_state.
    2.1 Free old saved state.
    2.2 Save current state.


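Here is a C sketch of step 3 above, restricted to the end time and number of
events criteria (the end position criterion follows the same pattern). It
assumes GList based request lists and the field names of the EventsRequest
structure ; CHUNK_NUM_EVENTS is the chunk size constant from step 3.2.2, and
requests with an unset end time are assumed to have been normalized to
ltt_time_infinite beforehand.

#include <glib.h>
#include <ltt/time.h>

static void find_end_criteria(GList *list_in, GList *list_out,
                              LttTime current_time,
                              LttTime *end_time, guint *num_events)
{
  GList *iter;
  LttTime end = ltt_time_infinite;
  guint num = G_MAXUINT;

  for(iter = list_in ; iter != NULL ; iter = g_list_next(iter)) {
    EventsRequest *req = (EventsRequest *)iter->data;
    if(ltt_time_compare(req->end_time, end) < 0)             /* 3.1.1 */
      end = req->end_time;
    if(req->num_events < num)                                /* 3.2.1 */
      num = req->num_events;
  }

  for(iter = list_out ; iter != NULL ; iter = g_list_next(iter)) {
    EventsRequest *req = (EventsRequest *)iter->data;
    if(ltt_time_compare(req->start_time, current_time) >= 0  /* 3.1.2 */
       && ltt_time_compare(req->start_time, end) < 0)
      end = req->start_time;
  }

  *end_time = end;                                           /* 3.1.3 */
  *num_events = MIN(CHUNK_NUM_EVENTS, num);                  /* 3.2.2 */
}
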
Notes :

End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
  Current time >= End time
  Event count > Number of events
  Current position >= End position
  Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.

We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted and a high priority one is serviced, the low priority
servicing will then use the saved state to start back where it was, instead of
seeking to the time. In the very specific case where a low priority servicing
is interrupted, and then a high priority servicing on top of it is also
interrupted, the low priority servicing will lose its state and will have to
seek back. It should not occur often. The solution to this would be to save
one state per priority.


Weaknesses

- There is a possibility that we must use seek if more than one interruption
  occurs, i.e. a low priority servicing interrupted by the addition of a high
  priority request, and then the high priority servicing interrupted too. The
  seek will be necessary for the low priority servicing. It could be a good
  idea to keep one saved_state per priority.


Strengths

- Removes the need for filtering of the information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
  reading.

- Background processing available.