Linux Trace Toolkit

Requests Servicing Schedulers


Mathieu Desnoyers, 07/06/2004


In the LTT graphical interface, two main types of events requests may occur:

- events requests made by a viewer concerning a traceset, for an ad hoc
  computation.
- events requests made by a viewer concerning a trace, for a precomputation.

Ad Hoc Computation

Ad hoc computations must be serviced immediately: they directly answer events
requests that must be serviced to complete the graphical widgets' data. This
kind of computation may yield incomplete results as long as precomputations
are not finished. Once a precomputation is over, the widgets that needed such
information will be redrawn. An ad hoc computation is done on a traceset: the
workspace of a tab.

Precomputation

Traces are global objects. Only one instance of a trace is opened for the
whole program. A precomputation appends data to the trace's attributes
(states, statistics). It must inform the widgets that asked for such states or
statistics when they become available. Only one precomputation must be
launched per trace, and precomputations must never be duplicated.


Schedulers

There is one tracesetcontext per traceset. Each reference to a trace by a
traceset also has its own tracecontext. Each trace, by itself, has its own
tracecontext.

Let's define a scheduler as a g_idle events request servicing function.

There is one scheduler per traceset context (registered when there are
requests to answer). There is also one scheduler per autonomous trace context
(not related to any traceset context).

A scheduler processes requests for a specific traceset or trace by combining
the time intervals of the requests. It is interruptible by any GTK event. A
precomputation scheduler has a lower priority than an ad hoc computation
scheduler: no precomputation will be performed until there are no more ad hoc
computations pending. When a scheduler is interrupted, it makes no assumption
about the presence or absence of the current requests in its pool when it
resumes.


Foreground Scheduler

There can be one foreground scheduler per traceset (one traceset per tab). It
simply calls the hooks given by the events requests of the viewers for the
specified time intervals.


Background Scheduler

Right now, to simplify the problem of the background scheduler, we assume that
the module that loads the extended statistics hooks has been loaded before the
data is requested and that it is not unloaded until the program stops. We will
eventually have to deal with request removal based on module load/unload, but
it complicates the problem quite a bit.

A background scheduler adds hooks located under a global attributes path
(specified by the viewer who makes the request) to the trace's traceset
context (the trace is specified by the viewer). Then, it processes the whole
trace with this context (and hooks).

Typically, a module that extends statistics will register hooks in the global
attributes tree under /TraceState/Statistics/ModuleName/hook_name . A viewer
that needs these statistics for a set of traces makes a background computation
request through a call to the main window API function. It must specify all
the types of hooks that must be called for the specified trace.

The background computation requests for a trace are queued. When the idle
function kicks in to answer these requests, it adds the hooks of all the
requests together in the context and starts the read. It also keeps a list of
the background requests currently being serviced.

The read is done from the start to the end of the trace, calling all the hooks
present in the context. Only when the read is over are the after_request hooks
of the currently serviced requests called, and the requests destroyed.

If there are requests in the waiting queue, they are all added to the current
pool and processed. It is important to understand that, while processing is
being done, no requests are added to the pool: they wait for their turn in
the queue.

Every hook that is added to the context by the scheduler comes from global
attributes, i.e.
/traces/trace_path/TraceState/Statistics/ModuleName/hook_name

They come with a flag indicating either in_progress or ready. If the ready
flag is set, a viewer knows that the data it needs is already available and it
doesn't have to make a request.

If the in_progress flag is set, that means that the data it needs is currently
being computed, and it must wait for the current servicing to be finished. It
tells the lttvwindow API to call a hook when the actual servicing is over
(there is a special function for this, as it requires modifying the pool of
requests actually being serviced: we must make sure that no new reading hooks
are added!).



New Global Attributes

When a hook is added to the trace context, the variable
/traces/trace_path/TraceState/Statistics/ModuleName/hook_name is set.

When processing is started, the variable
/traces/trace_path/TraceState/Statistics/ModuleName/in_progress is set.

When processing finishes:
/traces/trace_path/TraceState/Statistics/ModuleName/in_progress is unset
/traces/trace_path/TraceState/Statistics/ModuleName/ready is set




Typical Use For a Viewer

When a viewer wants extended information, it must first check whether the data
is ready. If it is not, before making a request, the viewer must check the
in_progress status of the hooks.

If in_progress is unset, it makes the request.

If in_progress is set, it makes a special request to be informed of the end of
the current request.
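This decision sequence can be sketched in a few lines of C. The enum and
function names below are hypothetical illustrations of the logic, not part of
the real lttvwindow API:

```c
#include <assert.h>

/* Hypothetical sketch of the viewer decision logic described above;
 * these names are illustrative, not the real lttvwindow API. */

typedef enum { FLAG_NONE, FLAG_IN_PROGRESS, FLAG_READY } HookFlag;

typedef enum {
  USE_DATA,        /* data already computed: just read it */
  MAKE_REQUEST,    /* fire a new background computation request */
  WAIT_FOR_NOTIFY  /* register a hook called at end of current servicing */
} ViewerAction;

/* Decide what a viewer must do, given the flag found in the global
 * attributes under .../ModuleName/ for the data it needs. */
ViewerAction viewer_decide(HookFlag flag)
{
  if (flag == FLAG_READY)
    return USE_DATA;
  if (flag == FLAG_IN_PROGRESS)
    return WAIT_FOR_NOTIFY;
  return MAKE_REQUEST;
}
```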


Hooks Lists

In order to answer the problems of background processing, we need to add a
reference counter to each hook of a hook list. If the same hook is added
twice, it will be called only once, but it will need two "remove" operations
to be really removed from the list. Two hooks are identical if they have the
same function pointer and hook_data.



Implementation

Ad Hoc Computation

See lttvwindow_events_delivery.txt.


Hooks Lists

A new ref_count field is needed with each hook.
lttv_hook_add and lttv_hook_add_list must compare each addition with the hooks
already present and increment the ref counter if the hook is already there.

lttv_hook_remove and remove_with_data must decrement ref_count if it is > 1,
or remove the element otherwise (== 1).
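A minimal sketch of this reference-counting scheme follows. It uses a
fixed-size array instead of the real lttv hook list structure, and the names
are simplified stand-ins for the lttv_hook API, not the actual LTTV code:

```c
#include <assert.h>

typedef void (*HookFunc)(void *hook_data, void *call_data);

typedef struct {
  HookFunc func;
  void *hook_data;
  int ref_count;
} Hook;

typedef struct {
  Hook items[16];  /* fixed capacity, enough for the sketch */
  int len;
} HookList;

/* Add a hook: if an identical (func, hook_data) pair is already present,
 * just increment its reference counter; otherwise append it with count 1. */
void hook_list_add(HookList *l, HookFunc func, void *hook_data)
{
  for (int i = 0; i < l->len; i++) {
    if (l->items[i].func == func && l->items[i].hook_data == hook_data) {
      l->items[i].ref_count++;
      return;
    }
  }
  l->items[l->len++] = (Hook){ func, hook_data, 1 };
}

/* Remove a hook: decrement ref_count while it is > 1, and only remove the
 * element when this was the last reference (ref_count == 1). */
void hook_list_remove(HookList *l, HookFunc func, void *hook_data)
{
  for (int i = 0; i < l->len; i++) {
    if (l->items[i].func == func && l->items[i].hook_data == hook_data) {
      if (l->items[i].ref_count > 1)
        l->items[i].ref_count--;
      else
        l->items[i] = l->items[--l->len];  /* swap-remove last element */
      return;
    }
  }
}

/* Call each distinct hook exactly once, regardless of its ref_count. */
void hook_list_call(HookList *l, void *call_data)
{
  for (int i = 0; i < l->len; i++)
    l->items[i].func(l->items[i].hook_data, call_data);
}
```

Note that a hook added twice is still called only once per traversal; the
counter only governs how many removals are needed before it disappears.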


Background Scheduler

Global traces

Two global attributes per trace:
/traces/path_to_trace/LttvTrace
  A pointer to the LttvTrace structure.
/traces/path_to_trace/LttvBackgroundComputation
/traces/path_to_trace/TraceState/... : hooks to add to the background
  computation, plus the in_progress and ready flags.

struct _LttvBackgroundComputation {
  GSList *events_requests;
  /* A GSList * to the first events request of background computation for a
   * trace. */
  LttvTraceset *ts;
  /* An artificial traceset that contains just this one trace */
  LttvTracesetContext *tsc;
  /* The traceset context that reads this trace */
};




Modify Traceset
Points to the global traces. Opens a new one only when no instance of the
pathname exists.
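The pathname-based sharing rule can be sketched as a simple lookup table. The
structure and function names here are hypothetical illustrations of the idea,
not the real LTTV implementation:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of global trace sharing: a traceset that references a
 * trace by pathname reuses the already-open instance instead of opening a
 * new one. Fixed-size table for illustration only. */

typedef struct {
  const char *path;  /* pathname identifying the trace */
  int ref_count;     /* number of tracesets referencing it */
} GlobalTrace;

static GlobalTrace g_traces[16];
static int g_n_traces = 0;

/* Return the already-open trace for this pathname, or open a new one. */
GlobalTrace *global_trace_get(const char *path)
{
  for (int i = 0; i < g_n_traces; i++) {
    if (strcmp(g_traces[i].path, path) == 0) {
      g_traces[i].ref_count++;                      /* share the instance */
      return &g_traces[i];
    }
  }
  g_traces[g_n_traces] = (GlobalTrace){ path, 1 };  /* open a new instance */
  return &g_traces[g_n_traces++];
}
```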

Modify LttvTrace ?

Modify trace opening / closing to make them create and destroy the
LttvBackgroundComputation (and call the end requests hooks for the requests
being serviced ?)

EventsRequest Structure

This structure is the element of the events requests pools. The viewer field
is used as an ownership identifier as well as a pointer to the data structure
upon which the action applies. Typically, it is a pointer to the viewer's
data structure.

In an ad hoc events request, a pointer to this structure is used as the
hook_data in the hook lists.



Background Requests Servicing Algorithm (v1)


list_in : currently serviced requests
list_out : queue of requests waiting for processing

notification lists :
notify_in : currently checked notifications
notify_out : queue of notifications that come along with the next processing.


1. Before processing
   if list_in is empty
   - Add all requests in list_out to list_in, empty list_out
   - for each request in list_in
     - add hooks to context
     - set hooks' in_progress flag to TRUE
   - seek trace to start
   - Move all notifications from notify_out to notify_in.

2. Call process traceset middle for a chunk
   (assert list_in is not empty! : it should not even be called in that case)

3. After the chunk
   3.1 call the after_chunk hooks from list_in
   3.2 for each notification in notify_in
       - if current time >= notify time, call notify and remove from notify_in
       - if current position >= notify position, call notify and remove from
         notify_in
   3.3 if end of trace reached
       - for each request in list_in
         - set hooks' in_progress flag to FALSE
         - set hooks' ready flag to TRUE
         - remove hooks from context
         - remove request
       - for each notification in notify_in
         - call notify and remove from notify_in
       - return FALSE (scheduler stopped)
   3.4 else
       - return TRUE (scheduler still registered)

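The v1 algorithm can be modeled in miniature. The sketch below is a simplified,
hypothetical illustration (fixed-size pools, a bare event counter instead of a
real traceset context, notifications omitted), not the actual LTTV code; as
with GLib idle sources, returning FALSE unregisters the function:

```c
#include <assert.h>
#include <stdbool.h>

#define CHUNK 100  /* events processed per scheduler invocation */

typedef struct { bool in_progress; bool ready; } Request;

typedef struct {
  Request *list_in[8];  int n_in;    /* currently serviced requests */
  Request *list_out[8]; int n_out;   /* requests waiting for processing */
  long pos, end;                     /* read cursor and trace length */
} Scheduler;

/* One invocation of the background scheduler, following steps 1-3 above.
 * Returns true while it must stay registered, false once it stops. */
bool scheduler_idle(Scheduler *s)
{
  /* 1. Before processing: refill list_in from list_out if it is empty. */
  if (s->n_in == 0) {
    for (int i = 0; i < s->n_out; i++) {
      s->list_in[s->n_in++] = s->list_out[i];
      s->list_out[i]->in_progress = true;  /* hooks added to context here */
    }
    s->n_out = 0;
    if (s->n_in == 0)
      return false;                        /* nothing to service */
    s->pos = 0;                            /* seek trace to start */
  }

  /* 2. Process traceset middle for one chunk (interruptible by GTK). */
  s->pos += CHUNK;

  /* 3. After the chunk: check for end of trace. */
  if (s->pos >= s->end) {
    for (int i = 0; i < s->n_in; i++) {
      s->list_in[i]->in_progress = false;  /* remove hooks from context */
      s->list_in[i]->ready = true;
    }
    s->n_in = 0;
    return false;                          /* scheduler stopped */
  }
  return true;                             /* still registered */
}
```

Requests queued in list_out while a read is under way are only picked up once
list_in has been emptied, which mirrors the queueing rule stated earlier.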