Time Frame Processing Architecture

This architecture is used in the "data acquisition" case study in chapter 8 of Object Oriented Analysis And Design.

To perform pseudo-multitasking, think of time as a series of "time-frames". Each concurrent task is written as an "activity".

The architecture ensures that all activities are executed in each time frame. Activities must be:

run in the same order within each time frame,

guaranteed to terminate, and

designed to perform small units of work that complete within a predictable (or at least estimable) amount of time.

(Is there any difference between this and "Cooperative Multitasking with a round-robin scheduler"?)

The architecture is appropriate for performing work periodically. In the Booch example, it is used to poll sensors at fixed intervals. For example, if time frames occur every 60th of a second and we have an activity that requires a reading from a device every second, the activity, when initiated, will test whether a reading is due and will read the device only if it hasn't been read in the last 59 frames.
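A minimal sketch of such an activity in C (read_device and the 60 Hz frame tick are my assumptions, not details from the Booch example):

    /* One time frame = 1/60th of a second. The activity runs every
       frame but touches the device only once per second. */
    #include <stdint.h>

    #define FRAMES_PER_SECOND 60

    extern int read_device(void);      /* hypothetical device read    */

    static uint32_t frame_count = 0;

    void sensor_activity(void)
    {
        if (frame_count % FRAMES_PER_SECOND == 0)
        {
            int reading = read_device();
            (void)reading;             /* process the reading here    */
        }
    }

    /* Invoked once per time frame by whatever initiates frames. */
    void on_time_frame(void)
    {
        sensor_activity();
        /* ...other activities, always in the same order...           */
        frame_count++;
    }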

Has anyone seen or used this style of architecture? I'd like to hear about it if you have.

--


I Have This Pattern. In fact I've had it for two separate systems. One was a signal identification system and the other was a real time simulator. You can email me for the details. -- Yonat Sharon


A standard for real-time processing. See Doug Schmidt's home page for information on ACE/TAO, a system that uses this and many other patterns.


I expect that every video game ever written uses this architecture. Certainly the ones that I write do. -- Nat Pryce


For a nice write-up on this technique, see the article "Simple Task Scheduler Prevents Priority Inversion" by Orv Balcom in the June 1995 Embedded Systems Programming. In the world of tiny little embedded processors, this architecture often represents the entirety of the 'operating system'. In a common implementation, the scheduler is the body of a timer interrupt routine. Most embedded systems people are quite familiar with it, if not by name, then by pattern. -- Tim Voght
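In that common implementation the scheduler can be as small as the following C sketch (the names are illustrative; on real hardware the routine would be installed as the timer's interrupt vector):

    typedef void (*activity_fn)(void);

    extern void poll_sensors(void);    /* hypothetical activities,    */
    extern void update_outputs(void);  /* each terminating quickly    */

    static activity_fn activities[] = { poll_sensors, update_outputs };

    #define N_ACTIVITIES (sizeof activities / sizeof activities[0])

    /* Timer interrupt routine: one invocation == one time frame.
       Every activity runs, in the same order, every frame. */
    void timer_isr(void)
    {
        unsigned i;
        for (i = 0; i < N_ACTIVITIES; i++)
            activities[i]();
    }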


OK, then what are the standard problems that arise when you use this architecture? What patterns are often associated with it? --Ralph Johnson

A common problem: You have one high-speed task that takes a very small fraction of CPU time when running alone, but needs to be run very often - perhaps you need to sample the audio output at 44.1 kHz.

Then you have lots and lots of slow tasks that only need to run once a second or so, none of which individually seems to interfere with the fast task; even when all are added together, the CPU seems to be running at less than 50% load.

It's almost too easy to schedule all of the slow tasks at exactly the same instant, causing a glitch in the audio once a second.

If that glitch is imperceptible, then Time Frame Processing works fine. Otherwise you need to switch to Real Time techniques such as rate-monotonic analysis (RMA).
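One mitigation that stays within Time Frame Processing (a sketch of the obvious fix, not something proposed above) is to give each slow task a different phase offset, so that no two of them fire in the same frame:

    #include <stdint.h>

    #define FRAMES_PER_SECOND 64       /* assumed frame rate          */
    #define N_SLOW_TASKS      8        /* once-per-second tasks       */
    #define STRIDE (FRAMES_PER_SECOND / N_SLOW_TASKS)

    typedef void (*task_fn)(void);
    extern task_fn slow_tasks[N_SLOW_TASKS];  /* hypothetical         */
    extern void sample_audio(void);           /* the fast task        */

    void on_time_frame(uint32_t frame)
    {
        sample_audio();                /* fast task runs every frame  */

        /* Slow task i runs only in the frame where slot == i*STRIDE,
           so at most one slow task shares a frame with the fast one. */
        uint32_t slot = frame % FRAMES_PER_SECOND;
        if (slot % STRIDE == 0)
            slow_tasks[slot / STRIDE]();
    }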


My attempt at forces:

A small system with restricted memory. There may well be no operating system. (Scheduling and elaborate runtime systems aren't likely to be practicable.)

Pseudo-multitasking and deterministic scheduling are required.

Since there is no notion of an OS task or "thread", the programmer must create an architecture that performs consistent interleaving of tasks.

"Therefore":

Define a notion of an atomic "activity" which terminates. Each activity is assigned a priority number. (The lower the value, the more frequently it is invoked.)

Partition runtime into discrete time-frames of fixed length.

Each time-frame is initiated by hardware (or the external actor of your choice...)

Code runs in response to the timer interrupt. This code iterates through each activity in turn, deciding whether it should run in this time-frame depending on its priority number.

Ensure that the timer doesn't interrupt running activities by calculating (or estimating) the total amount of time spent in all activities present in the system. The timing granularity of the time-frame is critical to the applicability of this (proto?) pattern.
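A sketch of these steps in C; reading the priority number as a period in frames (lower value = run more often) is my interpretation, and the activity names are placeholders:

    #include <stdint.h>

    typedef struct {
        void (*run)(void);    /* the atomic activity; must terminate  */
        uint32_t period;      /* priority number: lower = more often  */
    } activity_t;

    extern void fast_activity(void);
    extern void slow_activity(void);

    static activity_t table[] = {
        { fast_activity,  1 },   /* every frame                       */
        { slow_activity, 60 },   /* every 60th frame                  */
    };

    static uint32_t frame = 0;

    /* Runs in response to the timer interrupt: iterate the activities
       in a fixed order, deciding per activity whether this frame is
       its turn. */
    void time_frame_handler(void)
    {
        unsigned i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++)
            if (frame % table[i].period == 0)
                table[i].run();
        frame++;
    }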

Partitioning a bubble sort into atomic activities would, for example, involve performing a few index tests, possibly swapping two numbers, and possibly updating some indexes in each activity invocation. (Hence you perform a small chunk of the algorithm in each invocation - but you ensure that you terminate.)

The emphasis is on partitioning non-constant-time execution into constant-time chunks that can be interleaved by an external entity.
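The bubble sort example as a C sketch: each invocation does one comparison (and at most one swap) before returning, so it always terminates in roughly constant time.

    #include <stddef.h>

    #define N_ITEMS 128

    static int    data[N_ITEMS];       /* the array being sorted      */
    static size_t pos = 0;             /* position within this pass   */
    static size_t last = N_ITEMS;      /* length of unsorted prefix   */
    static int    swapped = 0;
    static int    sorted = 0;

    void bubble_sort_activity(void)
    {
        if (sorted)
            return;                    /* constant-time no-op         */

        if (pos + 1 < last)            /* one comparison per call     */
        {
            if (data[pos] > data[pos + 1])
            {
                int t = data[pos];
                data[pos] = data[pos + 1];
                data[pos + 1] = t;
                swapped = 1;
            }
            pos++;
        }
        else                           /* a pass has finished         */
        {
            if (!swapped || --last <= 1)
                sorted = 1;
            pos = 0;
            swapped = 0;
        }
    }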

--

Some comments on the forces...

"A small system with restricted memory. There may well be no operating system. (Scheduling and elaborate runtime systems aren't likely to be practicable.)"

Video games that run on desktop PCs use this pattern. In fact, video games are probably among the few applications commonly run on current PCs that actually require all the CPU, memory, and audio and graphics processing capability provided by today's hardware!

"Pseudo-multitasking and deterministic scheduling are required."

Pseudo-multitasking is the result of using this pattern.

"Since there isn't a notion of an OS task or of "thread" the programmer must create an architecture that performs consistent interleaving of tasks."

PC video games run on desktop operating systems, and even home games consoles have some form of OS. The new Sega Dreamcast uses Windows CE, for example, which implements processes with memory protection and multiple pre-emptive threads.

I think the forces can be described as:

Multitasking is required (but what does deterministic mean in this context?)

The order in which tasks are selected for processing must be deterministic.

You want to minimize the amount of time the system spends scheduling tasks and maximize the amount of time spent performing the tasks. (This one's a bit obvious - is it too obvious to be a valid force? Pre Emptive Multi Tasking spends *more* time scheduling tasks and *less* time performing them than this Cooperative Multitasking does - so no, it's not obvious.)

The overhead of using more convenient forms of multitasking, such as the time and space overhead of transitioning to kernel mode and saving/restoring processor context when using pre-emptive threads or coroutines, is too great, either because of restricted processing capabilities (e.g. embedded systems) or because of the large number of tasks that require scheduling (e.g. games, human-in-the-loop simulations).

Deterministic ordering:

If you had the following activities:

Read in data from sensors (Activity A)

Update domain objects based on sensor data (Activity B)

Redraw GUI objects. (Activity C)

Poll user keypad. (Activity D)

You wish to ensure that each activity happens in a predefined order:

ABC ABC ABCD ABC ABC ABCD ...

Imagine if the system were nondeterministic (i.e. you can't guarantee the activities run in a predefined order). Let's say you got an execution that ran:

BAC DCBB ABC ABCD

...the sensors aren't being polled at the right intervals, and you might also get errors from updating domain objects twice in a row.

A similar case can be made for the video game example above. (There is a causal relationship between each of the activities. The activities must occur in a single predefined order which does not change over time to ensure correct timing behaviour and consistent updates.)
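A sketch in C of the fixed-order dispatch above: A, B and C run every frame and D every third frame, giving ABC ABC ABCD ... (the activity routines are placeholders):

    #include <stdint.h>

    extern void read_sensors(void);    /* Activity A */
    extern void update_domain(void);   /* Activity B */
    extern void redraw_gui(void);      /* Activity C */
    extern void poll_keypad(void);     /* Activity D */

    void on_time_frame(uint32_t frame)
    {
        read_sensors();                /* A: always first             */
        update_domain();               /* B: uses this frame's data   */
        redraw_gui();                  /* C: uses this frame's state  */
        if (frame % 3 == 2)            /* D: every third frame, last  */
            poll_keypad();
    }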

"The overhead of using more convenient forms of multitasking..."

If the tasks to be scheduled are static enough in nature and change very little, you can decide the scheduling statically.

(I'm assuming there isn't any self-modifying code or demand-loaded componentry involved.) David Parnas used a (different) static scheduling strategy in the A-7E aircraft project. See the SEI book Software Architecture In Practice for a description.

--

You also incur a time and space overhead if the scheduler needs to transition to kernel mode or to save and restore processor state when tasks are scheduled. This overhead occurs with pre-emptive threads or coroutines but not in a Time Frame Processing Architecture.


This sounds like a timeslicer model, and a venerable one at that. The problem with such polling loops is CPU thrashing and fragility, which is why interrupt-driven hardware was invented.

-- Richard Henderson


Here are some patterns that immediately come to mind...

Multiple Phases per Time Frame

One often needs to divide each time frame into causally related phases. For example, in a game it is unfair for each player to think and act (e.g. attack their opponent or avoid their opponent's attack) in turn. The first player to be simulated has an unfair advantage if he and his opponent attack each other simultaneously: his attack might kill the opponent before the opponent's attack is simulated.

Therefore: divide each time frame into "phases" in which every object performs the same kind of activity, and the activities of one phase have a causal relationship to those of the next. E.g.: every time frame in the game, all objects decide and perform their next action based on the current public state of other objects; then collision detection is performed and affected objects record the results of collisions as private state; finally, all objects update their public state (whether they are alive or dead, their graphical representation, etc.) ready for the next time frame.
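A C sketch of the phase structure (decide_action, detect_collisions and commit_state are assumed helpers): every object finishes one phase before any object begins the next, so no player acts on an opponent's already-updated state.

    #define N_OBJECTS 32

    typedef struct object object_t;
    extern object_t objects[N_OBJECTS];

    extern void decide_action(object_t *o);  /* read public state     */
    extern void detect_collisions(void);     /* record private state  */
    extern void commit_state(object_t *o);   /* publish new state     */

    void time_frame(void)
    {
        int i;

        for (i = 0; i < N_OBJECTS; i++)      /* phase 1: all decide   */
            decide_action(&objects[i]);

        detect_collisions();                 /* phase 2: interact     */

        for (i = 0; i < N_OBJECTS; i++)      /* phase 3: all publish  */
            commit_state(&objects[i]);
    }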

Variable-Duration Frames

In many programs organised using a Time Frame Processing Architecture one cannot predict in advance how long each frame will last. For example: the program may not be running on a real-time operating system (e.g. a game on Windows) or the program may have to perform more I/O processing during some frames than during others.

Therefore: measure the duration of each frame and feed that duration into the simulation phase of the next frame. Over multiple frames, variations in time are smoothed out. This approach adapts to gradual changes in processor load.

Often the frame rate is close to the accuracy limit of the computer's clock, resulting in inaccurate time measurements and jerky animation. To solve this, "antialias" time measurements by timing multiple frames and using the average duration.
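A C sketch of both ideas: measure each frame's duration, keep a small history, and feed the averaged ("antialiased") duration into the simulation. now_ms is an assumed coarse-resolution clock.

    #include <stdint.h>

    #define HISTORY 8                  /* frames to average over      */

    extern uint32_t now_ms(void);      /* hypothetical clock          */
    extern void simulate(uint32_t dt); /* advance by dt milliseconds  */

    void run_frames(void)
    {
        uint32_t samples[HISTORY] = { 0 };
        uint32_t sum = 0, idx = 0;
        uint32_t last = now_ms();

        for (;;)
        {
            /* ...render, handle input, run activities...             */

            uint32_t t  = now_ms();
            uint32_t dt = t - last;    /* this frame's duration       */
            last = t;

            sum -= samples[idx];       /* drop the oldest sample      */
            samples[idx] = dt;
            sum += dt;
            idx = (idx + 1) % HISTORY;

            simulate(sum / HISTORY);   /* smoothed frame duration     */
        }
    }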


When programming in the languages defined in the IEC 61131 standard, this is the architecture used.


Similar to Real Time.

