Linux Trace Toolkit

Requests Servicing Schedulers


Mathieu Desnoyers, 07/06/2004


In the LTT graphical interface, two main types of events requests may occur:

- events requests made by a viewer concerning a traceset, for an ad hoc
  computation.
- events requests made by a viewer concerning a trace, for a precomputation.


Ad Hoc Computation

Ad hoc computations must be serviced immediately: they respond directly to
events requests that must be serviced to complete the graphical widgets' data.
This kind of computation may give incomplete results as long as the
precomputations are not finished. Once the precomputation is over, the widgets
are redrawn if they needed such information. An ad hoc computation is done on a
traceset: the workspace of a tab.

Precomputation

Traces are global objects. Only one instance of a trace is opened for the whole
program. A precomputation appends data to the trace's attributes (states,
statistics). It must inform the widgets that asked for such states or
statistics when they become available. Only one precomputation must be launched
for each trace, and precomputations must not be duplicated.


Schedulers

There is one traceset context per traceset. Each reference to a trace by a
traceset also has its own trace context. Each trace, by itself, has its own
trace context.

Let's define a scheduler as a g_idle events request servicing function.

There is one scheduler per traceset context (registered when there are requests
to answer). There is also one scheduler per autonomous trace context (not
related to any traceset context).

A scheduler processes the requests for a specific traceset or trace by
combining the time intervals of the requests. It is interruptible by any GTK
event. A precomputation scheduler has a lower priority than an ad hoc
computation scheduler, which means that no precomputation is performed while
ad hoc computations are pending. When a scheduler is interrupted, it makes no
assumption about the presence or absence of the current requests in its pool
when it resumes.
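
The two schedulers can be seen as GLib idle functions registered at different
priorities. The sketch below is illustrative only: the function and parameter
names are assumptions, not the lttvwindow API; it simply shows how the priority
relationship described above could be expressed with g_idle_add_full.

#include <glib.h>

/* Illustrative idle-function skeletons.  In GLib, a larger priority value
 * means a lower priority, so the background (precomputation) scheduler only
 * runs when no ad hoc request is pending and the GUI is otherwise idle. */

static gboolean foreground_scheduler(gpointer tab)
{
  /* service a chunk of the ad hoc events requests for this tab's traceset */
  gboolean work_left = TRUE;   /* placeholder */
  return work_left;            /* returning FALSE unregisters the function */
}

static gboolean background_scheduler(gpointer trace)
{
  /* service a chunk of the precomputation requests for this trace */
  gboolean work_left = TRUE;   /* placeholder */
  return work_left;
}

void register_schedulers(gpointer tab, gpointer trace)
{
  /* Ad hoc computation: higher idle priority. */
  g_idle_add_full(G_PRIORITY_HIGH_IDLE, foreground_scheduler, tab, NULL);
  /* Precomputation: lower idle priority, preempted by the scheduler above. */
  g_idle_add_full(G_PRIORITY_DEFAULT_IDLE, background_scheduler, trace, NULL);
}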


Foreground Scheduler

There can be one foreground scheduler per traceset (one traceset per tab). It
simply calls the hooks given by the events requests of the viewers for the
specified time intervals.


Background Scheduler

Right now, to simplify the problem of the background scheduler, we assume that
the module that provides the extended statistics hooks has been loaded before
the data is requested and that it is not unloaded until the program stops. We
will eventually have to deal with request removal based on module load/unload,
but that complicates the problem quite a bit.

A background scheduler adds hooks located under a global attributes path
(specified by the viewer who makes the request) to the trace's traceset
context (the trace is specified by the viewer). Then, it processes the whole
trace with this context (and hooks).

Typically, a module that extends statistics registers its hooks in the global
attributes tree under /TraceState/Statistics/ModuleName/hook_name. A viewer
that needs these statistics for a set of traces makes a background computation
request through a call to the main window API function. It must specify all
the types of hooks that must be called for the specified trace.

The background computation requests for a trace are queued. When the idle
function kicks in to answer these requests, it adds the hooks of all the
requests together to the context and starts the read. It also keeps a list of
the background requests currently being serviced.

The read is done from the start to the end of the trace, calling all the hooks
present in the context. Only when the read is over are the after_request hooks
of the currently serviced requests called and the requests destroyed.

If there are requests in the waiting queue, they are all added to the current
pool and processed. It is important to understand that, while a read is being
done, no requests are added to the pool: they wait for their turn in the
queue.

Every hook added to the context by the scheduler comes from the global
attributes, i.e.
/traces/trace_path/TraceState/Statistics/ModuleName/hook_name

They come with a flag indicating either in_progress or ready. If the ready
flag is set, a viewer knows that the data it needs is already available and it
does not have to make a request.

If the in_progress flag is set, the data it needs is currently being serviced,
and it must wait for the current servicing to be finished. It tells the
lttvwindow API to call a hook when the actual servicing is over (there is a
special function for this, as it requires modifying the pool of requests
currently being serviced: we must make sure that no new reading hooks are
added!).



New Global Attributes

When a hook is added to the trace context, the variable
/traces/trace_path/TraceState/Statistics/ModuleName/hook_name is set.

When a processing is fired, the variable
/traces/trace_path/TraceState/Statistics/ModuleName/in_progress is set.

When a processing is finished, the variable
/traces/trace_path/TraceState/Statistics/ModuleName/in_progress is unset and
/traces/trace_path/TraceState/Statistics/ModuleName/ready is set.
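
The following is a minimal sketch of these transitions, assuming the two flags
can be modelled as a simple boolean pair; the real flags live under the
attributes paths listed above, and the type and function names here are
illustrative, not the lttv attributes API.

#include <glib.h>

/* Hypothetical stand-in for the per-module flag pair stored under
 * /traces/trace_path/TraceState/Statistics/ModuleName/ */
typedef struct {
  gboolean in_progress;
  gboolean ready;
} BackgroundFlags;

/* called when the background processing for the module is fired */
static void processing_fired(BackgroundFlags *flags)
{
  flags->in_progress = TRUE;
}

/* called when the whole trace has been read for this processing */
static void processing_finished(BackgroundFlags *flags)
{
  flags->in_progress = FALSE;
  flags->ready = TRUE;
}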



Typical Use For a Viewer

When a viewer wants extended information, it must first check whether it is
ready. If not, before making a request, it must check the in_progress status
of the hooks.

If in_progress is unset, it makes the request.

If in_progress is set, it makes a special request asking to be informed of the
end of the current request.
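
A minimal sketch of this decision, again using the illustrative BackgroundFlags
pair (redeclared here so the sketch stands alone); the three helper functions
are assumptions standing in for "use the data", "make a background computation
request through the main window API" and "make the special end-of-servicing
notification request".

#include <glib.h>

typedef struct {
  gboolean in_progress;
  gboolean ready;
} BackgroundFlags;

/* Illustrative helpers; not part of the lttvwindow API. */
static void use_statistics(gpointer viewer) { (void)viewer; /* redraw */ }
static void request_background_computation(gpointer viewer) { (void)viewer; }
static void request_notify_when_ready(gpointer viewer) { (void)viewer; }

void viewer_wants_extended_info(gpointer viewer, const BackgroundFlags *flags)
{
  if (flags->ready)
    use_statistics(viewer);                 /* data already computed */
  else if (flags->in_progress)
    request_notify_when_ready(viewer);      /* wait for current servicing */
  else
    request_background_computation(viewer); /* launch the precomputation */
}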



Hooks Lists

In order to answer the problems of background processing, we need to add a
reference counter to each hook of a hook list. If the same hook is added
twice, it will be called only once, but it will need two "remove" operations
to really be removed from the list. Two hooks are identical if they have the
same function pointer and hook_data.




Implementation

Ad Hoc Computation

see lttvwindow_events_delivery.txt


Hooks Lists

A new ref_count field is needed with each hook.

lttv_hook_add and lttv_hook_add_list must compare the added hook with the ones
already present and increment the reference counter if it is already there.

lttv_hook_remove and remove_with_data must decrement ref_count if it is > 1,
or remove the element otherwise (== 1).
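
Below is a minimal, self-contained sketch of the reference-counted hook list
described above. The types and function names are simplified stand-ins, not
the real lttv hook API; only the ref_count logic is the point.

#include <glib.h>

typedef gboolean (*SketchHook)(void *hook_data, void *call_data);

typedef struct {
  SketchHook  hook;
  void       *hook_data;
  guint       ref_count;  /* how many times this (hook, hook_data) was added */
} HookEntry;

typedef GArray HookList;  /* array of HookEntry */

/* Two hooks are identical if both the function pointer and hook_data match. */
void hook_list_add(HookList *list, SketchHook f, void *hook_data)
{
  guint i;
  for (i = 0; i < list->len; i++) {
    HookEntry *e = &g_array_index(list, HookEntry, i);
    if (e->hook == f && e->hook_data == hook_data) {
      e->ref_count++;          /* already present: take one more reference */
      return;
    }
  }
  {
    HookEntry new_entry = { f, hook_data, 1 };
    g_array_append_val(list, new_entry);
  }
}

/* Removal only drops the entry when the last reference is released. */
void hook_list_remove(HookList *list, SketchHook f, void *hook_data)
{
  guint i;
  for (i = 0; i < list->len; i++) {
    HookEntry *e = &g_array_index(list, HookEntry, i);
    if (e->hook == f && e->hook_data == hook_data) {
      if (e->ref_count > 1)
        e->ref_count--;
      else
        g_array_remove_index(list, i);
      return;
    }
  }
}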



Background Scheduler

Global traces

Two global attributes per trace:
/traces/path_to_trace/LttvTrace
  It is a pointer to the LttvTrace structure.
/traces/path_to_trace/LttvBackgroundComputation
/traces/path_to_trace/TraceState/...
  hooks to add to the background computation, in_progress and ready flags.

struct _LttvBackgroundComputation {
  GSList *events_requests;
  /* A GSList * to the first events request of background computation for a
   * trace. */
  LttvTraceset *ts;
  /* An artificial traceset that contains just this one trace */
  LttvTracesetContext *tsc;
  /* The traceset context that reads this trace */
};



Modify Traceset
Points to the global traces. Opens a new one only when no instance of the
pathname exists.

Modify LttvTrace ?

Modify trace opening / closing to make them create and destroy the
LttvBackgroundComputation (and call the end requests hooks for the requests
being serviced ?)

EventsRequest Structure

This structure is the element of the events requests pools. The viewer field
is used as an ownership identifier as well as a pointer to the data structure
upon which the action applies. Typically, this is a pointer to the viewer's
data structure.

In an ad hoc events request, a pointer to this structure is used as the
hook_data in the hook lists.
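
The complete field list of EventsRequest is given in
lttvwindow_events_delivery.txt; the sketch below is a hypothetical, reduced
version that only illustrates the two points above: the viewer field
identifies the owner, and in an ad hoc request the structure itself travels as
hook_data.

#include <glib.h>

/* Hypothetical, reduced sketch -- not the real EventsRequest definition. */
typedef struct {
  gpointer viewer;   /* ownership identifier; the viewer's data structure */
  /* ... time interval, event hooks, servicing state, etc. ... */
} EventsRequestSketch;

/* An events hook registered for an ad hoc request receives the request as
 * hook_data and can reach the owning viewer through it. */
static gboolean event_hook(void *hook_data, void *call_data)
{
  EventsRequestSketch *request = (EventsRequestSketch *)hook_data;
  gpointer viewer = request->viewer;   /* e.g. update this viewer's drawing */
  (void)viewer;
  (void)call_data;
  return FALSE;  /* return convention of the real hooks is defined elsewhere */
}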



Background Requests Servicing Algorithm (v1)


Request pools:
list_in  : currently serviced requests
list_out : queue of requests waiting for processing

Notification lists:
notify_in  : currently checked notifications
notify_out : queue of notifications that come along with the next processing.


1. Before processing
   if list_in is empty
     - Add all requests in list_out to list_in, empty list_out
     - for each request in list_in
       - add hooks to context
       - set hooks' in_progress flag to TRUE
     - seek trace to start
     - Move all notifications from notify_out to notify_in.

2. Call process traceset middle for a chunk
   (assert list_in is not empty! : it should not even be called in that case)

3. After the chunk
   3.1 call after_chunk hooks from list_in
   3.2 for each notification in notify_in
       - if current time >= notify time, call notify and remove from notify_in
       - if current position >= notify position, call notify and remove from
         notify_in
   3.3 if end of trace reached
       - for each request in list_in
         - set hooks' in_progress flag to FALSE
         - set hooks' ready flag to TRUE
         - remove hooks from context
         - remove request
       - for each notification in notify_in
         - call notify and remove from notify_in
       - return FALSE (scheduler stopped)
   3.4 else
       - return TRUE (scheduler still registered)
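
The pseudo-code above maps naturally onto a single g_idle servicing function.
The sketch below is illustrative: every helper and the BgComputation fields
are stand-ins for the real lttv calls (adding hooks to the context, processing
a traceset chunk, setting the global attribute flags, checking notifications);
only the control flow of steps 1 to 3 is meant to follow the algorithm.

#include <glib.h>

typedef struct {
  GSList  *list_in;      /* currently serviced requests */
  GSList  *list_out;     /* requests waiting for processing */
  GSList  *notify_in;    /* notifications checked against time/position */
  GSList  *notify_out;   /* notifications for the next processing */
  gboolean end_of_trace; /* set by chunk processing when the end is reached */
} BgComputation;

/* Hypothetical helpers; they stand for the real context/attribute operations. */
static void add_hooks_to_context(gpointer request) { (void)request; }
static void remove_hooks_from_context(gpointer request) { (void)request; }
static void set_in_progress(gpointer request, gboolean v) { (void)request; (void)v; }
static void set_ready(gpointer request, gboolean v) { (void)request; (void)v; }
static void seek_trace_to_start(BgComputation *bg) { (void)bg; }
static void process_traceset_chunk(BgComputation *bg) { (void)bg; }
static void call_after_chunk_hooks(GSList *list_in) { (void)list_in; }
static GSList *check_notifications(GSList *notify_in) { return notify_in; }
static void call_notification(gpointer notify) { (void)notify; }

gboolean background_scheduler(gpointer data)
{
  BgComputation *bg = (BgComputation *)data;
  GSList *iter;

  /* 1. Before processing */
  if (bg->list_in == NULL) {
    bg->list_in = bg->list_out;                 /* move the waiting queue */
    bg->list_out = NULL;
    for (iter = bg->list_in; iter != NULL; iter = iter->next) {
      add_hooks_to_context(iter->data);
      set_in_progress(iter->data, TRUE);
    }
    seek_trace_to_start(bg);
    bg->notify_in = g_slist_concat(bg->notify_in, bg->notify_out);
    bg->notify_out = NULL;
  }

  /* 2. Process one traceset chunk (list_in must not be empty here) */
  g_assert(bg->list_in != NULL);
  process_traceset_chunk(bg);

  /* 3. After the chunk */
  call_after_chunk_hooks(bg->list_in);                 /* 3.1 */
  bg->notify_in = check_notifications(bg->notify_in);  /* 3.2 */

  if (bg->end_of_trace) {                              /* 3.3 */
    for (iter = bg->list_in; iter != NULL; iter = iter->next) {
      set_in_progress(iter->data, FALSE);
      set_ready(iter->data, TRUE);
      remove_hooks_from_context(iter->data);
    }
    g_slist_free(bg->list_in);
    bg->list_in = NULL;
    for (iter = bg->notify_in; iter != NULL; iter = iter->next)
      call_notification(iter->data);
    g_slist_free(bg->notify_in);
    bg->notify_in = NULL;
    return FALSE;     /* scheduler stopped: the idle function is removed */
  }
  return TRUE;        /* 3.4: scheduler still registered */
}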