insertion and removal will be done between each traceset processing based on
the time intervals and event positions related to each hook. We must therefore
provide a simple interface for hooks passing between the viewers and the main
window, making them easier to manage from the main window. A modification to the
LttvHooks type solves this problem.
void lttvwindow_events_request
    ( Tab *tab,
      const EventsRequest *events_request);

void lttvwindow_events_request_remove_all
    ( Tab *tab,
      gconstpointer viewer);
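As an illustration of the calling pattern (the Tab and EventsRequest definitions below are simplified stand-ins, not the real lttvwindow types), a viewer could queue a request like this:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the real lttvwindow types. */
typedef struct EventsRequest { void *owner; } EventsRequest;
typedef struct Tab { EventsRequest queued; int num_pending; } Tab;

/* Sketch of lttvwindow_events_request: the caller's struct is copied into
 * the tab's pending pool, matching the const parameter of the real API. */
void lttvwindow_events_request(Tab *tab, const EventsRequest *events_request)
{
  tab->queued = *events_request;
  tab->num_pending++;
}
```

The viewer only needs to keep its `owner` pointer around to identify its requests later, e.g. for lttvwindow_events_request_remove_all.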
Internal functions :
Permitted GTK Events Between Chunks
All GTK Events will be enabled between chunks. A viewer could ask for a
long computation that has no impact on the display : in that case, it is
necessary to keep the graphical interface active. While a processing is in
progress, the whole graphical interface must be enabled.
We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request.
Background Computation Request
A background computation has a trace scope, and is therefore not linked to a
main window. It is not detailed in this document.
See requests_servicing_schedulers.txt.

A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.

A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.


Tab change
If a tab change occurs, we still want to do background processing.
Events requests must be stored in a list located in the same scope as the
traceset context. Right now, this is tab scope. All functions called from the
request servicing function must _not_ use the current_tab concept, as it may
change. The idle function must take a Tab, and not the main window, as a
parameter.
If a tab is removed, its associated idle events requests servicing function
must also be removed.

It now looks a lot more useful to give a Tab* to the viewer instead of a
MainWindow*, as all the information needed by the viewer is located at the tab
level. It will diminish the dependence upon the current tab concept.


Idle function (lttvwindow_process_pending_requests)

The idle function must return FALSE to be removed from the idle functions when
no more events requests are pending. Otherwise, it returns TRUE. It will
service requests until there are no more requests left.
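The return convention can be modelled with a plain C sketch (the request list is reduced to a counter here; assume the real code registers this function with the GLib idle mechanism):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the Tab, holding only the number of pending requests. */
typedef struct Tab { int pending_requests; } Tab;

/* Model of the idle function contract: service one chunk, then return
 * true while requests remain (stay registered as an idle function) and
 * false once the request list is empty (be removed). */
bool process_pending_requests(Tab *tab)
{
  if (tab->pending_requests == 0)
    return false;              /* nothing to service: remove idle function */
  tab->pending_requests--;     /* service one chunk of one request */
  return tab->pending_requests > 0;
}
```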
- lttvwindow_events_request
It adds an EventsRequest struct to the list of events requests
pending and registers a pending request for the next g_idle if none is
registered. The viewer can access this structure during the read as its
hook_data. Only the stop_flag can be changed by the viewer through the
event hooks.
typedef struct _EventsRequest {
  gpointer owner;       /* Owner of the request */
  gpointer viewer_data; /* Unset : NULL */
  gboolean servicing;   /* service in progress: TRUE */
  LttTime start_time;   /* Unset : { G_MAXUINT, G_MAXUINT } */
  LttvTracesetContextPosition *start_position; /* Unset : NULL */
  gboolean stop_flag;   /* Continue:TRUE Stop:FALSE */
  LttTime end_time;     /* Unset : { G_MAXUINT, G_MAXUINT } */
  guint num_events;     /* Unset : G_MAXUINT */
  LttvTracesetContextPosition *end_position;   /* Unset : NULL */
  LttvHooks *before_chunk_traceset;  /* Unset : NULL */
  LttvHooks *before_chunk_trace;     /* Unset : NULL */
  LttvHooks *before_chunk_tracefile; /* Unset : NULL */
  LttvHooks *event;                  /* Unset : NULL */
  LttvHooksById *event_by_id;        /* Unset : NULL */
  LttvHooks *after_chunk_tracefile;  /* Unset : NULL */
  LttvHooks *after_chunk_trace;      /* Unset : NULL */
  LttvHooks *after_chunk_traceset;   /* Unset : NULL */
  LttvHooks *before_request;         /* Unset : NULL */
  LttvHooks *after_request;          /* Unset : NULL */
} EventsRequest;
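For illustration, here is an initializer applying the documented "unset" sentinel values, with the GLib/LTT types stubbed out so the fragment stands alone (the real code would use the actual LttTime, G_MAXUINT and the full field set):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Stand-ins for the GLib/LTT types used by EventsRequest. */
typedef unsigned int guint;
#define G_MAXUINT UINT_MAX
typedef struct { guint tv_sec; guint tv_nsec; } LttTime;

/* Reduced field set for the example. */
typedef struct SimpleEventsRequest {
  LttTime start_time;   /* Unset : { G_MAXUINT, G_MAXUINT } */
  LttTime end_time;     /* Unset : { G_MAXUINT, G_MAXUINT } */
  guint num_events;     /* Unset : G_MAXUINT */
  void *start_position; /* Unset : NULL */
  void *end_position;   /* Unset : NULL */
} SimpleEventsRequest;

/* Fill a request with its documented "unset" sentinel values. */
void events_request_init(SimpleEventsRequest *req)
{
  req->start_time.tv_sec  = G_MAXUINT;
  req->start_time.tv_nsec = G_MAXUINT;
  req->end_time.tv_sec    = G_MAXUINT;
  req->end_time.tv_nsec   = G_MAXUINT;
  req->num_events = G_MAXUINT;
  req->start_position = NULL;
  req->end_position   = NULL;
}
```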

- lttvwindow_events_request_remove_all
It removes all the events requests from the pool whose "owner" field matches
the owner pointer given as argument.

It calls the traceset/trace/tracefile end hooks for each removed request that
is currently being serviced.
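The owner-matching removal can be sketched over a singly linked list (simplified: the real function also runs the end hooks for requests currently being serviced):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal request node for the example. */
typedef struct Req { void *owner; struct Req *next; } Req;

/* Remove every request whose owner field matches `owner`; returns the new
 * head of the list. */
Req *events_request_remove_all(Req *head, const void *owner)
{
  Req **link = &head;
  while (*link) {
    if ((*link)->owner == owner)
      *link = (*link)->next;   /* unlink the matching request */
    else
      link = &(*link)->next;   /* keep it, advance */
  }
  return head;
}
```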
- lttvwindow_process_pending_requests
The reads are split in chunks. After a chunk is over, we want to check if
there is a GTK Event pending and execute it. It can add or remove events
requests from the events requests list. If this happens, we want to start over
the algorithm from the beginning. The after traceset/trace/tracefile hooks are
called after each chunk, and the before traceset/trace/tracefile hooks are
called when the request processing resumes. Before and after request hooks are
called respectively before and after the request processing.
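The hook ordering for a single request serviced in several chunks can be sketched as a small simulation (hook names are illustrative strings, not the real LttvHooks machinery):

```c
#include <assert.h>
#include <string.h>

/* Records the hook call sequence for inspection. */
char order[256];

void call(const char *hook) { strcat(order, hook); strcat(order, " "); }

/* Service one request split across `chunks` chunks, following the ordering
 * described above: before request once, before/after chunk around every
 * chunk (with the event hooks in between), after request once at the end. */
void service_request(int chunks)
{
  call("before_request");
  for (int i = 0; i < chunks; i++) {
    call("before_chunk");
    call("events");            /* event hooks run for this chunk */
    call("after_chunk");
  }
  call("after_request");
}
```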
Data structures necessary :
list_in : events requests being serviced
list_out : many events requests
0.1 Lock the traces
0.2 Seek traces positions to current context position.

A. While (list_in !empty or list_out !empty)
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with lowest start time in list_out (ltime)
      1.1.2 Find all position requests with lowest position in list_out (lpos)
      1.1.3 If lpos.start time < ltime
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.start time >= ltime)
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If first request in list_in is a time request
        - Seek to that time
      1.2.2 Else, the first request in list_in is a position request
        - If first req in list_in pos != current pos
          - Seek to that position
    1.3 Add hooks and call before request for all list_in members
      1.3.1 If !servicing
        - begin request hooks called
        - servicing = TRUE
      1.3.2 call before chunk
      1.3.3 events hooks added
  2. Else, list_in is not empty, we continue a read
    2.0 For each req of list_in
      - Call before chunk
      - events hooks added
    2.1 For each req of list_out
      - if req.start time == current context time
        or req.start position == current position
        - Add to list_in, remove from list_out
        - If !servicing
          - Call before request
          - servicing = TRUE
        - Call before chunk
        - events hooks added
  3. Find end criteria
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out (>= current time*)
            * To eliminate lower prio requests (not used)
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out (>= current
            position) (* not used)
      3.3.3 Use lowest of both as end position
  4. Call process traceset middle
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Remove events hooks for req
        - Call end chunk for req
        - Call end request for req
        - remove req from list_in
    5.1 For each req in list_in
      - Call end chunk for req
      - Remove events hooks for req
      - req.num -= count
      - if req.num == 0
        or current context time >= req.end time
        or req.end pos == current pos
        or req.stop_flag == TRUE
        - Call end request for req
        - remove req from list_in
  If GTK Event pending : break out of loop A

B. When interrupted between chunks
  1. For each request in list_in
    1.1 Use current position as start position
    1.2 Remove start time
    1.3 Move it from list_in to list_out

C. Unlock the traces
requests and position requests that happen to be at the exact same start time
and position.
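Step 3 above (finding the end criteria) amounts to taking minima across both lists; a self-contained sketch with times reduced to unsigned integers (UINT_MAX playing the role of the G_MAXUINT "unset" sentinel):

```c
#include <assert.h>
#include <limits.h>

/* Compute the end time for the next chunk: the lowest end time among
 * serviced requests (list_in) and the lowest start time among pending
 * requests (list_out), whichever comes first. Times are simplified to
 * unsigned ints; UINT_MAX means "unset". */
unsigned int chunk_end_time(const unsigned int *in_end, int n_in,
                            const unsigned int *out_start, int n_out)
{
  unsigned int end = UINT_MAX;
  for (int i = 0; i < n_in; i++)
    if (in_end[i] < end) end = in_end[i];
  for (int i = 0; i < n_out; i++)
    if (out_start[i] < end) end = out_start[i];
  return end;
}
```

A pending request whose start time precedes every current end time thus shortens the chunk, so it gets picked up at step 2.1 as soon as the context reaches it.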
Weaknesses
- ?
Strengths
- Solves all the weaknesses identified in the current boundaryless traceset
  reading.