Redesign of the GUI detailed event list

Mathieu Desnoyers 08/2005

The basic problem with this list is that it uses the number of events, not the
time, as the vertical axis (the Y axis scrollbar value).

Seeking in the traces is done by time. We have no idea of the number of events
between two times without doing preparsing.

If we want to fully reuse textDump, it is better if we depend upon state
computation. It would be good, though, to make the viewer work when this
information is missing.

textDump's print_field should be put in a lttv/lttv core file, so we can use it
as is in the detailed event list module without depending upon batchAnalysis.


* With preparsing only:

The problem then becomes simpler:

We can precompute the event number while doing state computation and save it
periodically with the saved states. We can then use the event number in the
trace as the scrollbar value which, when scrolled, results in a search in the
saved states by event number.

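As an illustration, here is a minimal sketch of how a scrollbar value (an event
number) could be mapped to the closest preceding saved state. SavedState and
find_saved_state_by_event() are hypothetical names for this example; the real
LTTV saved-state storage differs.

#include <glib.h>

typedef struct {
  guint64 event_count;   /* number of events seen when this state was saved */
  /* ... rest of the saved state ... */
} SavedState;

/* Return the last saved state whose event_count is <= wanted.
 * Assumes nb >= 1 and states[0].event_count == 0 (state at trace start). */
static const SavedState *
find_saved_state_by_event(const SavedState *states, guint nb, guint64 wanted)
{
  guint low = 0, high = nb;

  while (high - low > 1) {
    guint mid = (low + high) / 2;
    if (states[mid].event_count <= wanted)
      low = mid;
    else
      high = mid;
  }
  return &states[low];   /* seek forward from here to the wanted event */
}

From the returned state, the viewer would then process at most one saved-state
interval (50000 events, roughly the latency estimated below) to reach the
wanted event.
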
How much time would it take to seek back to the wanted position from the last
saved state?

compudj@dijkstra:~/local/bin$ ./lttv -m batchtest -1 -2 -t
/home/compudj/traces/200MB
** Message: Processing trace while counting events (12447572 events in 14.0173
seconds)
** Message: Processing trace while updating state (9.46535 seconds)

9.46535 s / 12447572 events * 50000 events = 0.038 s

38 ms latency shouldn't be too noticeable by a user when scrolling.

(Note: the event-counting batchtest also verifies time flow integrity and gets
the position of each event, which is not optimal; that is why it takes 14 s.)

As an optimisation, we could use a backing text buffer (an array of strings)
where we would save the 50000 computed events between two consecutive saved
states.

Memory required: 50000 * 20 bytes/event = 1 MB

This seems OK, but costly. It would be better, in fact, not to depend on the
saved states interval for such a value: we could keep a 1000-event array, for
instance (for a 20 KB cost, which is much better).

The backing text buffer would, by itself, make sure it holds a sufficient
number of events so that a scroll up/down of one page can be answered directly.
That implies that a scroll up/down would first update the shown fields, and
only afterwards make the backing buffer resync its events in the background. In
the case where the events are not directly available, it would have to update
the buffer in the foreground and only then show the requested events.

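A minimal sketch of such a backing buffer, assuming a fixed window of
pre-rendered event strings; the struct and function names are illustrative,
not an existing LTTV API.

#include <glib.h>

#define BACKING_BUF_SIZE 1000

typedef struct {
  gchar   *lines[BACKING_BUF_SIZE]; /* rendered event text, owned strings */
  guint64  first_event;             /* event number of lines[0] */
  guint    nb_lines;                /* number of valid entries */
} BackingBuffer;

/* Return the cached text for an event, or NULL on a miss.  On a miss the
 * viewer reads the events in the foreground and asks the buffer to re-center
 * itself on event_num in the background. */
static const gchar *
backing_buffer_get(BackingBuffer *buf, guint64 event_num)
{
  if (event_num < buf->first_event ||
      event_num >= buf->first_event + buf->nb_lines)
    return NULL;
  return buf->lines[event_num - buf->first_event];
}

Keeping the window refill asynchronous is what makes a one-page scroll appear
instantaneous to the user.

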
* If we want the viewer to be able to show information without preparsing:

This is the hardest form of the problem. We have to seek by time (even the
scrollbar must seek by time), but increment/decrement event by event when using
the scrollbar up/down and page up/page down. Let's call these "far scroll" and
"near scroll", respectively.

71
72A far scroll must resync the trace to the time requested by the scrollbar value.
73
74A near scroll must sync the trace to a time that is prior to the requested
75event, show the events requested, and then sync the scrollbar value (without
76event updating) to the shown event time.
77
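As a sketch of how the two cases could be told apart in the scrollbar
callback; the EventViewer struct, the callback name and the seek helpers are
assumptions for illustration, not existing widget code.

#include <glib.h>

typedef struct {
  gdouble last_value;      /* previous scrollbar value (a scaled time) */
  gdouble page_increment;  /* one page worth of scrollbar units */
} EventViewer;

static void
scrollbar_value_changed(EventViewer *ev, gdouble new_value)
{
  gdouble delta = new_value - ev->last_value;

  if (ABS(delta) <= ev->page_increment) {
    /* Near scroll: move event by event (or page by page) from the shown
     * position, e.g. with seek_n_events_forward()/_backward() below, then
     * resync the scrollbar value to the time of the newly shown events. */
  } else {
    /* Far scroll: convert the scrollbar value back to a trace time and
     * seek the traceset to that time. */
  }
  ev->last_value = new_value;
}
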
* seek n events backward

We have no information about how far back we must request events in the trace.

The algorithm would look like:

seek_n_events_backward(current time, current position, time_offset, filter)
Returns: a TracesetPosition
 - If the current time < beginning of trace, it means we cannot get any more
   events; inform the requester that a list of fewer than n events is ready.
 - Else, request a read starting time_offset backward from the current time,
   calling the per event hook, and calling the after_traceset hook when
   finished. The end position would be the position of the current first
   event.

per_event_hook
 - if filter returns true
   - Append the traceset position to a list of maximum size n, removing the
     oldest entries when the list overflows.

after_traceset_hook
 - if the list has a size less than n, invoke a subsequent
   seek_n_events_backward iteration to complete the list. The new time_offset
   is the last time_offset used multiplied by 2. (This can be done by tail
   recursion, if we want to split this operation into multiple segments, or by
   an iterative algorithm: seek_n_events_backward would be a while() calling
   its own process_traceset_middle().)
 - if the list has a size of n, it is complete: call the viewer's
   get_print_events hook.

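The iterative form could look roughly like the following sketch, which uses
seconds as a plain gdouble and a GQueue for the position list.
read_time_window(), EventPos and EventFilter are placeholders standing in for
the tracecontext event-request machinery and LttvTracesetPosition; they are
assumptions for illustration only.

#include <glib.h>

typedef struct { gdouble time; } EventPos;             /* placeholder position */
typedef gboolean (*EventFilter)(const EventPos *pos);

/* Assumed helper: iterate all events in [start, end), calling hook(pos, data)
 * for each one.  Stands in for a tracecontext events request. */
extern void read_time_window(gdouble start, gdouble end,
                             void (*hook)(const EventPos *, gpointer),
                             gpointer data);

typedef struct {
  GQueue     *positions;   /* EventPos *, keeps at most n entries */
  guint       n;
  EventFilter filter;
} BackwardCtx;

static void per_event_hook(const EventPos *pos, gpointer data)
{
  BackwardCtx *ctx = data;

  if (ctx->filter && !ctx->filter(pos))
    return;
  g_queue_push_tail(ctx->positions, g_memdup(pos, sizeof *pos));
  if (g_queue_get_length(ctx->positions) > ctx->n)
    g_free(g_queue_pop_head(ctx->positions));   /* keep only the last n */
}

/* Returns a queue of at most n positions ending just before 'current'.
 * It may hold fewer than n entries if the trace begins within the window. */
static GQueue *seek_n_events_backward(gdouble current, gdouble trace_start,
                                      guint n, EventFilter filter)
{
  BackwardCtx ctx = { g_queue_new(), n, filter };
  gdouble offset = 1.0;                  /* initial backward window, seconds */
  gdouble start;

  do {
    start = MAX(current - offset, trace_start);
    /* Not enough events yet: re-read the (larger) window from scratch. */
    while (!g_queue_is_empty(ctx.positions))
      g_free(g_queue_pop_head(ctx.positions));
    read_time_window(start, current, per_event_hook, &ctx);
    offset *= 2.0;                       /* double the window and retry */
  } while (g_queue_get_length(ctx.positions) < n && start > trace_start);

  return ctx.positions;
}

In the real implementation the hooks would of course receive the tracecontext
event and position objects rather than a bare time value.
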
* seek n events forward

seek_n_events_forward(current position, filter)
 - Simple: seek to the current position, and request a read of the trace
   calling an event counting hook (the count starts at 0).

event_counting_hook
 - if filter returns true
   - increment the event count.
   - if the event count > requested count, inform that the current position is
     the wanted position. Return TRUE, so the read will stop.

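A sketch of the forward case, reusing the EventPos and EventFilter placeholders
from the previous sketch; read_from_position() is an assumed helper standing in
for a tracecontext read that stops as soon as the hook returns TRUE.

typedef struct {
  guint       count;     /* matching events seen so far */
  guint       wanted;    /* n */
  EventFilter filter;
  EventPos    found;     /* position of the wanted event, valid when done */
  gboolean    done;
} ForwardCtx;

/* Assumed helper: read events starting at 'start'; stop the read as soon as
 * the hook returns TRUE. */
extern void read_from_position(const EventPos *start,
                               gboolean (*hook)(const EventPos *, gpointer),
                               gpointer data);

static gboolean event_counting_hook(const EventPos *pos, gpointer data)
{
  ForwardCtx *ctx = data;

  if (ctx->filter && !ctx->filter(pos))
    return FALSE;
  if (++ctx->count > ctx->wanted) {
    ctx->found = *pos;     /* the current position is the wanted position */
    ctx->done = TRUE;
    return TRUE;           /* stop the read */
  }
  return FALSE;
}

static gboolean seek_n_events_forward(const EventPos *current, guint n,
                                      EventFilter filter, EventPos *result)
{
  ForwardCtx ctx = { 0, n, filter, *current, FALSE };

  read_from_position(current, event_counting_hook, &ctx);
  if (ctx.done)
    *result = ctx.found;
  return ctx.done;         /* FALSE if the trace ended before n events */
}
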
* Printing events

get_print_events
 - Seek to the position at the beginning of the list. The end position is the
   current one (not in the list: the one currently shown). Call an events
   request between these positions, printing the fields to the strings shown
   in the viewer.

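A sketch of that request, again on the placeholders above;
read_between_positions() stands in for a tracecontext read bounded by two
positions, and print_field() for textDump's field printer once it has been
moved into lttv core as proposed above.

/* Assumed helpers for this sketch. */
extern void read_between_positions(const EventPos *begin, const EventPos *end,
                                   void (*hook)(const EventPos *, gpointer),
                                   gpointer data);
extern void print_field(const EventPos *pos, GString *out);

static void print_event_hook(const EventPos *pos, gpointer data)
{
  GPtrArray *lines = data;                /* one GString per shown event */
  GString *s = g_string_new(NULL);

  print_field(pos, s);
  g_ptr_array_add(lines, s);
}

static GPtrArray *get_print_events(const EventPos *begin, const EventPos *end)
{
  GPtrArray *lines = g_ptr_array_new();

  read_between_positions(begin, end, print_event_hook, lines);
  return lines;                           /* strings shown in the viewer */
}
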
seek_n_events_backward and seek_n_events_forward seem to be interesting
algorithms that should be implemented in the tracecontext library. With those
helpers, it would become simpler to implement a detailed event list that does
not depend on state computation.