Improving timing guarantees

Modern real-time systems are too large for exhaustive timing measurements. Sound yet approximate timing-analysis methods are used instead. These methods analyse the system statically, trying to extract as much information about its behaviour as possible.

Any information that cannot be extracted has to be accounted for by assuming the corresponding penalties. For example, in the determination of upper timing bounds, a memory access that cannot be safely predicted as a cache hit must be assumed to be a cache miss.
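
A minimal sketch in C of how this conservatism enters an upper bound (the latency values and classification names are illustrative assumptions, not taken from any particular analysis tool): every access that cannot be proven to hit is charged the full miss penalty.

    /* Contribution of a single memory access to an upper timing bound.
     * Latency values and classification names are illustrative only. */
    typedef enum { ALWAYS_HIT, ALWAYS_MISS, UNCLASSIFIED } cache_class;

    enum { HIT_LATENCY = 1, MISS_PENALTY = 200 };  /* assumed cycle counts */

    static unsigned access_upper_bound(cache_class c)
    {
        /* Anything not proven to hit must be charged as a miss. */
        return (c == ALWAYS_HIT) ? HIT_LATENCY : HIT_LATENCY + MISS_PENALTY;
    }

The symmetric treatment applies to lower bounds: an access that cannot be proven to miss must be counted as a hit.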

Consequently, the overall gap between the WCET (BCET) and the upper (lower) bound can be viewed as the product Uncertainty × Penalties.
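
An illustrative back-of-the-envelope example (the numbers are assumptions, not taken from the studies cited below): if an analysis leaves 1,000 data accesses on the worst-case path unclassified, and each of them must be charged a 200-cycle miss instead of a 1-cycle hit, the upper bound can exceed the actual WCET by up to 1,000 × 199 = 199,000 cycles. Halving either the number of unclassified accesses (the uncertainty) or the penalty per access halves this gap.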

This is where PREDATOR comes in: the project is concerned with reducing both factors of this product, the uncertainty and the penalties.

The following graphs illustrate the development of a single architectural parameter, the cache-miss penalty, and reported degrees of overestimation obtained by state-of-the-art timing-analysis methods and tools.

[Graphs: cache-miss penalties (left); degrees of WCET overestimation (right)]

Source: studies by Lim et al. (1995), Thesing et al. (2002), and Souyris et al. (2005)

The most recent study reported a cache-miss penalty of 60 cycles for instruction accesses and 200 cycles for data accesses. The breakthroughs in timing-analysis technology achieved during the past 12 years have only just sufficed to keep the degree of overestimation roughly constant in the face of these steeply increasing cache-miss penalties.

In terms of the paradigm of reducing the product of Uncertainty and Penalties, this means that the uncertainty has been greatly reduced while the penalties have grown tremendously.
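
Put in terms of the product: if, over such a period, the miss penalty grows by roughly a factor of ten while the reported overestimation stays about the same, the uncertainty term must have shrunk by roughly the same factor (an illustrative reading of the trend, not a figure reported in the cited studies).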

The existing gap will increase further, not only for technological reasons (the widening speed gap between processor and memory) but also because of the shift in applications towards media processing and towards new applications that combine safety-critical functions with convenience functions, as in the automotive industry.

Traditionally, one tries to give guarantees on worst-case or critical-case behaviour by increasing average-case performance (over-provisioning). However, approaches that improve the average-case behaviour of systems are often disastrous for predictability. Well-known examples in computer architecture are caches, advanced speculation techniques to improve instruction-level parallelism, and dynamic scheduling of tasks and communication, as the sketch below illustrates for caches.
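
To make the cache example concrete, here is an illustrative C fragment (array size and access patterns are arbitrary assumptions): both traversals perform the same number of loads and additions, yet their cache behaviour, and hence their execution time on a cached processor, differs drastically, and an analysis that cannot determine which pattern occurs must assume the worse one.

    #include <stdio.h>

    #define N 1024

    static int a[N][N];

    /* Consecutive addresses: most accesses hit in the cache. */
    static long sum_row_major(void)
    {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Stride of N * sizeof(int): each access touches a different cache line. */
    static long sum_column_major(void)
    {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        printf("%ld %ld\n", sum_row_major(), sum_column_major());
        return 0;
    }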

At the opposite end of the spectrum lies a conservative strategy applied throughout all design decisions: choosing a time-triggered architecture, switching off caches and pipelining, and dispensing with speculation and dynamic scheduling. This approach favours predictability but forgoes the advances in computer-architecture design aimed at average-case performance. For example, an experiment performed by Alfred Roßkopf at EADS showed that switching off the caches of a PowerPC 604 slowed the processor down by a factor of 30.

Embedded system design needs to go through a paradigm shift towards a reconciliation of both design goals, predictability and efficiency, taking into account the multi-objective nature of the underlying problem.

PREDATOR will provide a methodology, a conceptual framework and tools for the design of systems that are both predictable and efficient.