Reconciling efficiency with predictability: the PREDATOR approach

Synergetic development of tools with design

In order to achieve predictability, offline analysis of entire hardware/software systems is needed. All design and implementation methods therefore need to be evaluated with respect to both analysis complexity and analysis precision. For example, random and round-robin cache replacement policies are simple to implement, but their behaviour is hard to predict. It is therefore necessary to treat analysability and design as a single concern.
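The replacement-policy example can be made concrete with a toy simulation (not any analysis tool from the project): for the same access trace, an LRU set always produces the same hit count, whereas a randomly replaced set produces a different count depending on the eviction choices, so no tight guarantee can be given. The trace and parameters below are illustrative.

```python
import random

def cache_hits(trace, ways, policy, rng=None):
    """Hit count of one fully associative cache set under a replacement policy."""
    lines = []                       # for LRU: least recently used first
    hits = 0
    for block in trace:
        if block in lines:
            hits += 1
            if policy == "lru":
                lines.remove(block)  # refresh recency on a hit
                lines.append(block)
        else:
            if len(lines) == ways:
                if policy == "lru":
                    lines.pop(0)     # evict least recently used
                else:                # random replacement
                    lines.pop(rng.randrange(ways))
            lines.append(block)
    return hits

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 4
lru_results = {cache_hits(trace, 4, "lru") for _ in range(50)}
rnd_results = {cache_hits(trace, 4, "random", random.Random(s)) for s in range(50)}
```

Fifty runs of the LRU set collapse to a single hit count; fifty seeds of the random set spread over several counts, which is exactly the analysis-time uncertainty the text refers to.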

Performance prediction methods have been developed on various layers that allow precise guarantees on restricted designs and domains. Our design methodology assumes the existence — and, where lacking, the development — of appropriate methods for predicting system behaviour.

Two orthogonal ways to improve the predictability of embedded systems have already been mentioned:

  1. Reducing uncertainty by improving analysability
  2. Influencing design and implementation concepts to reduce penalties

Basic questions about the relation between analysis and design will be clarified and the right combination of design methods with analysis methods will be identified. Concrete goals are:

Efficiency vs. predictability as a multi-objective optimisation problem

The conflicting goals are efficiency and predictability. At first sight the conflict is surprising: at a sufficiently abstract level there is no mutual dependence, provided one assumes unbounded, non-dedicated resources. Any concrete system, however, has bounded resources, which usually have to be shared: some for logical reasons, because a function is implemented cooperatively and in a distributed fashion, others in a time-shared fashion, because there are not enough of them for an exclusive assignment to tasks.

Viewing the trade-off as a multi-objective optimisation problem opens new avenues for rational design: explicit optimisation and the determination of Pareto-optimal designs. In addition, there is the prospect of including further non-functional objectives in the new design approach.
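A Pareto-optimal design is one that no other candidate beats in every objective at once. A minimal sketch, with hypothetical design points scored by implementation cost and provable WCET bound (lower is better in both; the names and numbers are invented for illustration):

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives and better in one."""
    return (a["cost"] <= b["cost"] and a["wcet"] <= b["wcet"]
            and (a["cost"] < b["cost"] or a["wcet"] < b["wcet"]))

def pareto_front(designs):
    """Keep exactly the designs not dominated by any other candidate."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

designs = [  # hypothetical candidates: hardware cost vs. provable WCET bound
    {"name": "simple core",  "cost": 2, "wcet": 12},
    {"name": "LRU cache",    "cost": 5, "wcet": 9},
    {"name": "scratchpad",   "cost": 6, "wcet": 4},
    {"name": "random cache", "cost": 7, "wcet": 10},
]
front = pareto_front(designs)
```

Here "random cache" drops out because "scratchpad" is both cheaper and has a tighter WCET bound; the remaining three points are the rational choices among which a designer trades cost against predictability.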

Static vs. dynamic techniques

Dynamic (run-time) mechanisms for resolving resource conflicts potentially increase interference and reduce time-predictability. On the other hand, they have been at the centre of advances in computer architecture and design over the last decade. However, either alternatives are available (e.g. scratchpad memory instead of a cache), or the dynamic method is not considered harmful (e.g. caches with an LRU replacement strategy).
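The scratchpad alternative shifts the decision from run time to compile time: the compiler statically selects which data objects reside in fast memory, so every access latency is known. One common way to cast that selection, sketched here under assumed sizes and cycle gains (all numbers hypothetical), is a 0/1 knapsack maximising WCET reduction within the scratchpad capacity:

```python
def allocate_scratchpad(objects, capacity):
    """0/1 knapsack: pick data objects maximising WCET gain within capacity.
    objects: list of (name, size_bytes, wcet_gain_cycles)."""
    dp = [(0, [])] * (capacity + 1)   # dp[c] = (best gain, chosen names)
    for name, size, gain in objects:
        for c in range(capacity, size - 1, -1):   # descending: each object once
            cand = dp[c - size][0] + gain
            if cand > dp[c][0]:
                dp[c] = (cand, dp[c - size][1] + [name])
    return dp[capacity]

objs = [("buf", 4, 40), ("lut", 3, 50), ("stack", 2, 30), ("cfg", 5, 10)]
gain, chosen = allocate_scratchpad(objs, capacity=7)
```

With 7 bytes of capacity the sketch places "buf" and "lut" for a gain of 90 cycles; unlike a cache, this bound holds on every execution because the placement never changes at run time.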

What is necessary is to understand:

The particular role of the compiler has to be stressed, as it is responsible for the generation of predictable and analysable code.

WCET estimation and overrun handling

Taking only worst cases into account will increase predictability, but endangers efficiency, because the actual execution times are always subject to unpredictable variations. These variations depend on the architectural characteristics, the actual input data, and the interrupt and I/O loads.

Moreover, in many real-time applications, the WCETs may occur very infrequently, but may be much higher than the average execution time. In these cases, a worst-case guarantee would waste computational resources. On the other hand, a more relaxed guarantee, although increasing resource efficiency, would make the system prone to transient overloads, which could significantly degrade system performance, if not carefully handled.
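The waste is easy to quantify with utilisation arithmetic. For a hypothetical task set (periods and execution times invented for illustration), admission based on WCETs rejects the set outright even though its average demand is under half the CPU:

```python
# Hypothetical periodic tasks: (period, average execution time, WCET),
# all in the same time unit.
tasks = [
    (10, 2.0, 6.0),
    (20, 3.0, 10.0),
    (50, 5.0, 15.0),
]
u_avg  = sum(c_avg / t for t, c_avg, _ in tasks)   # average-case utilisation
u_wcet = sum(c_max / t for t, _, c_max in tasks)   # worst-case utilisation
```

Here `u_avg` is 0.45 while `u_wcet` is 1.40: a worst-case guarantee would declare the set unschedulable on a processor that, in the typical run, sits idle more than half the time. This is the gap that robust scheduling and overload handling aim to exploit safely.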

PREDATOR investigates robust scheduling algorithms and overload management techniques to increase resource efficiency while tolerating transient overloads. Interrupt handling will be integrated into the scheduling decisions to account for both I/O and task timing requirements. If a task executes for longer than expected, the other activities in the system should not be affected by its misbehaviour; instead, they should continue to run under a "temporal protection" mechanism that isolates the behaviour of tasks and prevents reciprocal interference. Temporal protection can be enforced by the kernel through proper resource reservation schemes.
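The effect of a budget-based reservation can be illustrated with a tiny discrete-time sketch (a simplification, not the project's kernel mechanism): each task may consume at most Q ticks of CPU per period of P ticks, so an overrunning task exhausts its own budget instead of starving the others.

```python
def run_reservation(reservations, wants, horizon):
    """Budget-based temporal protection: at most Q ticks of CPU per P-tick
    period for each task. Dispatch is fixed-priority in dict order."""
    budget = {t: q for t, (q, p) in reservations.items()}
    granted = {t: 0 for t in reservations}
    for tick in range(horizon):
        for t, (q, p) in reservations.items():
            if tick % p == 0:
                budget[t] = q                 # replenish at each period start
        for t in reservations:                # highest-priority ready task runs
            if granted[t] < wants[t] and budget[t] > 0:
                budget[t] -= 1
                granted[t] += 1
                break
    return granted

# A misbehaving high-priority task ("rogue") demands far more CPU than its
# reservation; the budget throttles it, so the lower-priority task still
# receives its full share.
reservations = {"rogue": (2, 5), "victim": (2, 5)}   # (Q ticks, P ticks)
wants = {"rogue": 1000, "victim": 20}
granted = run_reservation(reservations, wants, horizon=50)
```

Over 50 ticks the rogue task is capped at its reserved 20 ticks despite requesting 1000, and the victim's entire demand of 20 ticks is served: the overrun is isolated to the task that caused it.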

Feedback-based resource reservation schemes and adaptive scheduling will be investigated to enable the system to react to unpredictable changes and cope with dynamic environments. An important point will be to identify the inter-dependencies between design choices, both within and between the individual layers. Scheduling in particular gives rise to inter-dependencies on every layer; if these are left uncoordinated, the result is a non-predictable system.
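One simple feedback policy, sketched here purely as an illustration of the idea (the smoothing factor, margin and initial budget are invented, not project parameters), tracks observed execution times with an exponential average and reserves the next budget as the smoothed value plus a safety margin:

```python
def adapt_budgets(observed, alpha=0.3, margin=1.2, q0=6.0):
    """Feedback sketch: exponentially smooth observed execution times and
    set each next budget to the smoothed value times a safety margin."""
    avg = q0 / margin                      # start from the initial budget
    budgets = []
    for c in observed:
        avg = (1 - alpha) * avg + alpha * c   # exponential average of demand
        budgets.append(margin * avg)          # next reservation
    return budgets

# Workload settles at 4 ticks per job; the reservation converges towards
# 4 * 1.2 = 4.8 instead of staying at the pessimistic initial 6.0.
budgets = adapt_budgets([4.0] * 20)
```

The reservation shrinks from its pessimistic starting point towards the actual demand, reclaiming the slack for other activities while the margin absorbs small fluctuations.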

Another point is the identification of programming paradigms and kernel mechanisms that make dynamic voltage scaling (DVS) methods more predictable, so that performance can safely be traded for energy consumption.
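A predictable (if conservative) DVS policy picks one constant speed offline rather than reacting at run time. A minimal sketch, assuming WCETs scale linearly with the inverse of the speed, which is a simplification since memory access times do not scale with the clock:

```python
def min_speed(tasks, bound=1.0):
    """Lowest constant CPU speed (as a fraction of f_max) at which the
    WCET utilisation still fits under the schedulability bound
    (1.0 for EDF). tasks: list of (wcet_at_full_speed, period)."""
    u_full = sum(wcet / period for wcet, period in tasks)
    return min(1.0, u_full / bound)

# Hypothetical task set: WCET utilisation 0.45 at full speed, so the
# processor can run at 45% of f_max and still meet every deadline.
speed = min_speed([(2, 10), (3, 20), (1, 10)])
```

Because the speed never changes, the timing analysis done at that speed remains valid in every execution, which is exactly the kind of predictable performance/energy trade the text asks for.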

Resource-aware abstraction

This design concept would bound the access costs to resources on the one hand and export information about resource usage on the other. Component models are needed that expose resource demands and guarantees instead of abstracting them away. As a result, the distinction between functional and non-functional properties disappears, since both aspects of the system have to be considered on an equal footing.
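What such an interface could look like can be sketched with a hypothetical component contract (the fields and admission rule below are illustrative assumptions, not a model defined by the project): resource demand is part of the component's exported type, so a composition can be checked against a budget without inspecting any implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """A component contract that exposes, rather than hides, resource demand."""
    name: str
    wcet_us: int        # worst-case execution time per activation
    period_us: int      # activation period
    mem_bytes: int      # static memory demand

def admit(components, mem_budget):
    """Composition check decided purely from the exported contracts:
    CPU utilisation must fit, and memory demands must fit the budget."""
    util = sum(c.wcet_us / c.period_us for c in components)
    mem = sum(c.mem_bytes for c in components)
    return util <= 1.0 and mem <= mem_budget

ctrl = Component("controller", wcet_us=200, period_us=1000, mem_bytes=1024)
log  = Component("logger",     wcet_us=300, period_us=1000, mem_bytes=2048)
filt = Component("filter",     wcet_us=600, period_us=1000, mem_bytes=512)
```

Admitting `ctrl` and `log` succeeds (utilisation 0.5, 3072 bytes), while adding `filt` pushes utilisation to 1.1 and is rejected: the non-functional properties are checked with the same rigour as a type signature.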

This new approach can be considered as a paradigm shift in component-based design. Central questions in this topic are whether:

Summary

The threats and challenges in designing predictable and efficient embedded systems and the approaches and methods that will be used in the project to address these challenges can be summarised as follows:

Processor architecture
  Threat/challenge:   Memory-access variability, timing anomalies, domino effects
  Approach:           Analysable caches, compiler-controlled memory management, simple cores with analysable behaviour
  Supporting methods: Simulation, non-preemptive scheduling, sensitivity analysis

OS
  Threat/challenge:   Interference-caused variability in task execution times
  Approach:           Controlling interference through predictable kernel mechanisms
  Supporting methods: Task scheduling and resource management protocols

Software development
  Threat/challenge:   Badly analysable code
  Approach:           Synergetic implementation
  Supporting methods: Model-based design, code synthesis, compiler

Distributed operation
  Threat/challenge:   Communication latency and throughput variability
  Approach:           Predictable communication fabric, end-to-end QoS negotiation protocols
  Supporting methods: Modular performance analysis, hybrid simulation-based formal analysis

Cross-layer dependencies
  Threat/challenge:   Cross-layer interference
  Approach:           Resource-aware abstraction
  Supporting methods: Resource reservation mechanisms

Scheduling
  Threat/challenge:   Cross-layer interference
  Approach:           Integrating task scheduling and interrupt handling
  Supporting methods: Aperiodic servers and resource reservation

Interrupt handling
  Threat/challenge:   Non-linear dependency between performance and CPU speed
  Approach:           Task scheduling and communication mechanisms
  Supporting methods: Overload handling methods and asynchronous buffers