
Chapter 4 Design Patterns for Embedding Concurrency and Resource Management

Chapter Outline

  1. Basic Concurrency Concepts

    1. Identifying Tasks

    2. Concurrency in the UML

    3. Real-Time Operating Systems

  2. Scheduling Patterns

  3. Cyclic Executive Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. AbstractCEThread

      2. CyclicExecutive

      3. CycleTimer

      4. ConcreteCEThread

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  4. Static Priority Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. AbstractStaticThread

      2. ConcreteThread

      3. Mutex

      4. PriorityQueue

      5. SharedResource

      6. Stack

      7. StaticPriorityScheduler

      8. StaticTaskControlBlock

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  5. Task Coordination Patterns

  6. Critical Region Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. CRSharedResource

      2. TaskWithCriticalRegion

      3. TaskWithSharedResource

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  7. Guarded Call Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. GuardedResource

      2. PreemptiveTask

      3. Semaphore

      4. StaticPriorityScheduler

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  8. Queuing Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. Message

      2. MessageQueue

      3. Mutex

      4. Qtask

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  9. Rendezvous Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. Rendezvous

      2. Semaphore

      3. SynchronizingThread

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  10. Deadlock Avoidance Patterns

  11. Simultaneous Locking Pattern

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. MasteredResource

      2. MultimasteredResource

      3. Mutex

      4. QueryMutex

      5. ResourceClient

      6. ResourceMaster

      7. ResourceMasterSimplified

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  12. Ordered Locking

    1. Abstract

    2. Problem

    3. Pattern Structure

    4. Collaboration Roles

      1. Mutex

      2. OrderedResource

      3. ResourceClient

      4. ResourceList

      5. ResourceReference

    5. Consequences

    6. Implementation Strategies

    7. Related Patterns

    8. Example

  13. So, What Have We Learned?

Patterns in this chapter

Cyclic Executive Pattern – Schedule threads in an infinite loop

Static Priority Pattern – Schedule threads by priority

Critical Region Pattern – Protect resources by disabling task switching

Guarded Call Pattern – Protect resources through mutex semaphores

Queuing Pattern – Serialize access through queuing messages

Rendezvous Pattern – Coordinate complex task synchronization

Simultaneous Locking Pattern – Avoid deadlock by locking resources together

Ordered Locking Pattern – Avoid deadlock by locking resources only in specific orders


4.1 Basic Concurrency Concepts

Most embedded systems need to perform several activities simultaneously. The key to doing that is through the definition and management of the concurrency model of the system. In this section, I will provide an overview of the basic terms and concepts of concurrency to support the understanding and usage of the design patterns that come later in this chapter.

The most important terms to understand are given in the sidebar.

Concurrency Term Definitions

  • Action Sequence: a series of actions, possibly with decision branch points, in which the order of execution is fully determined

  • Arrival Pattern: how the event that begins a concurrency unit arrives; it may be periodic or aperiodic

  • Blocking: see priority inversion

  • Blocking time: the length of time a high-priority task is prevented from running because a lower-priority task owns a required resource

  • Concurrency: the simultaneous execution of action sequences

  • Concurrency unit: a set of actions executed sequentially in the same thread of execution. Also known as a task or thread

  • Criticality: the importance of the completion of an action sequence

  • Deadline: a point in time at which the completion of an action or action sequence becomes incorrect or irrelevant

  • Deadlock: a situation in which multiple clients are simultaneously waiting on conditions that cannot, in principle, be satisfied

  • Execution time: the length of time required for an action or action sequence

  • Interarrival time: the time between task initiation for an aperiodic task

  • Jitter: variation around the period of a periodic task

  • Multitasking Preemptive Scheduling: a scheduling schema in which the highest priority task ready to run will do so, preempting any lower priority tasks that are currently running

  • Period: the interval between periodic task invocations

  • Priority: a numeric value used to determine which task, of a set of ready-to-run tasks, will execute preferentially

  • Priority inversion: when a lower priority task is running even though a higher priority task is ready to run. Also known as blocking

  • Pseudoconcurrency: execution of multiple tasks by executing a single task at a time and switching focus between tasks according to some scheduling policy

  • Race condition: a computational situation in which the result depends upon the order of execution of actions, but that order is not inherently knowable

  • Real-Time Action: an action for which timeliness is an aspect of correctness

  • Schedulability: a task set is said to be schedulable if all of its tasks are timely

  • Synchronization Pattern: how tasks will manage synchronization at specific points in their respective action sequences

  • Synchronization Point: a specific action in a sequence at which the task will wait until other tasks reach corresponding synchronization points

  • Task: see concurrency unit

  • Thread: see concurrency unit

  • Timeliness: the ability of a task to predictably complete prior to its deadline, or to meet its time-related performance requirements

  • Urgency: the nearness of an action sequence’s deadline

  • Worst Case Execution Time: the longest time an action sequence execution can take

Figure 4-1 shows the relationship between actions, action sequences, and tasks. Each task (also known as a concurrency unit) consists of a set of actions performed in a specific sequence. These action sequences are executed independently from the others. This means that within a task, the order of action execution is fully known; in the case of Task 1 in the figure, first Action A, then Action B, then Action C or Action D, then Action E and so on. What is not known is the order of action execution between tasks. Which action executes first? Action A? Action Z? Action Beta? The answer is you don’t know and you don’t care – if you care, then concurrency was a poor choice.

Figure 4-1 Concurrency as a set of action sequences

The only points at which you care about the order of execution between tasks are called synchronization points. This is indicated in the activity diagram of Figure 4-1 with a pair of horizontal bars. The first of these is called a join, since it joins together a set of actions from different tasks into, logically, a single thread. The second bar is called a fork, as it forks from the single thread into multiple. The semantics of using joins and forks in this way are that the flow cannot proceed past the synchronization point until all predecessor actions (in this case, Action E, Action Z, and Action Gamma) have completed. Once all three actions have completed, control goes on to subsequent actions (Action F, Action X, and Action Zeta, not necessarily in that order). Synchronization points are common places in which tasks will share data or ensure preconditions are met before proceeding.

Tasks can communicate in two fundamental ways. Synchronous communications are like a phone call – both parties are engaged at the same time and invoke services and exchange data immediately. The sender “sends and waits” for a response. Synchronous communication is usually implemented via function calls. Asynchronous communication is like a postcard. The sender of the message “sends and forgets.” At some later time, the receiver gets and processes the message or data. The message processing is less timely but the sender need not wait until the receiver is ready.
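To make the contrast concrete, here is a minimal sketch (the names and the ring-buffer queue are illustrative, not from the text): a synchronous request is an ordinary function call that returns its answer immediately, while an asynchronous send posts a message into a queue that the receiver drains later.

#include <stdio.h>

/* Synchronous: like a phone call; the caller waits for the answer. */
static int readTemperature(void) { return 42; }   /* stand-in for a real driver */

/* Asynchronous: like a postcard; the sender posts and moves on. */
typedef struct { int id; int data; } Message;
#define Q_SIZE 8
static Message queue[Q_SIZE];
static int head = 0, tail = 0;

static int postMessage(Message m) {               /* "send and forget" */
    int next = (tail + 1) % Q_SIZE;
    if (next == head) return 0;                   /* queue full: message dropped */
    queue[tail] = m;
    tail = next;
    return 1;
}

int main(void) {
    int t = readTemperature();                    /* synchronous: result is immediate */
    Message m = { 1, t };
    postMessage(m);                               /* asynchronous: handled later */
    while (head != tail) {                        /* receiver drains the queue when it runs */
        printf("msg id=%d data=%d\n", queue[head].id, queue[head].data);
        head = (head + 1) % Q_SIZE;
    }
    return 0;
}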

For tasks to execute concurrently, they must run on different CPUs or cores within a multicore CPU. Within a given CPU core, multiple tasks are run using pseudoconcurrency in which only one task executes at any instant in time, but the processor gives the appearance of concurrency by switching among the ready tasks. There are many different policies that can be used to select which task to run, some of which will be provided as patterns later in this chapter.

The basic structure of tasks depends a bit on the scheduling pattern that will invoke them. A task for a cyclic executive is simply sequential code that returns when it is done. A task that runs in a cooperative multitasking environment usually runs in a loop but contains points at which it voluntarily releases control back to the scheduler to allow other tasks to run. Code Listing 4-1 shows a typical cyclic executive. First, the initial global data declarations are followed by system initiation, and then (if initiation succeeds), the infinite cyclic executive task loop.

Code Listing 4-1 Cyclic executive processing loop

void main(void) {
    /* global static and stack data */
    static int nTasks = 3;
    int currentTask;

    /* initialization code */
    currentTask = 0;

    if (POST()) {  /* Power On Self Test succeeds */
        /* scheduling executive */
        while (TRUE) {
            task1();
            task2();
            task3();
        }  /* end cyclic processing loop */
    }
}

The different ways to schedule the tasks have both benefits and costs. Cyclic executives are very simple; the scheduler consists of an infinite loop that executes the tasks one after the other. While simple, it isn’t flexible and can have poor response to incoming events since an event processed by a task must wait until that task is run within the cycle. In addition, task executions must be short if other tasks are going to run in a timely fashion. A variant known as time-triggered cyclic executive starts the cycle at a time boundary; if it completes the cycle before the next time boundary, it sleeps until the start of the next epoch. A cooperative round-robin scheduler is basically a cyclic executive in which the tasks don’t run to completion but instead run to specific points at which they release control back to the scheduling loop. This approach allows long tasks to run while still permitting other tasks to make progress in their processing.
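As a sketch of the cooperative style (the phase names are hypothetical), a long task can be broken into steps with one step executed per invocation; returning is how the task releases control back to the scheduling loop.

/* One step of a long computation per call; the static variable
   remembers where to resume on the next cycle. */
void longComputationTask(void) {
    static int step = 0;
    switch (step) {
    case 0:
        /* phase 1 of the work */
        step = 1;
        break;
    case 1:
        /* phase 2 of the work */
        step = 2;
        break;
    default:
        /* final phase; start over next time */
        step = 0;
        break;
    }
}   /* returning == releasing control back to the scheduling loop */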

While simple, there are a number of problems with both cyclic executives and round-robin schedulers. First, they are not responsive to urgent events; the task that processes the urgent event simply runs in its sequence. Secondly, they are not flexible since the schedule order is determined at design time. Thirdly, they don’t scale well to large numbers of tasks. Lastly, a single misbehaving task stops the entire system from execution. These kinds of schedulers are best-suited to very simple systems or systems that are highly safety critical (since simple systems are far easier to prove correct and certify than complex ones).

The other primary kind of scheduling is preemptive scheduling. In preemptive scheduling, a scheduler stops the currently executing task when it decides to do so and runs the next task. The criterion for preempting a task and the selection of the next task in sequence depends on the particular scheduling policy.

At the high level, tasks in a preemptive scheduler have four significant states. A task is said to be waiting when the conditions for it to be run are not currently in force. When the conditions required for a task to run are true, then the task is said to be ready to run. When the scheduler initiates the task, the task is said to be running. When a task is ready to run but is being prevented because another task owns a necessary resource, the task is said to be blocked.

A time-slicing scheduler (also known as time-driven multitasking executive or TDME) operates on a fairness doctrine; that is, every task gets an equal opportunity to execute. The tasks are preempted after they run for a specific period of time. TDME schedulers are tuned by adjusting the duration of the time slices; too long and the other tasks don’t run often enough to do their jobs, too short and too high a percentage of time is spent switching the task context.

The most common scheduling pattern for larger embedded systems is priority-based scheduling. In this approach, each task is assigned a priority, a scalar value that will be used to determine which task, from the set of ready to run tasks, will be selected for execution.

A task running within either a time-slicing or priority-based preemptive scheduler runs in an infinite loop and will be interrupted by the scheduler when it is appropriate for other tasks to run. The basic structure for this latter task template looks like Code Listing 4-2. You can see in the code listing where the task static and stack data go. Following these data declarations, there is usually some initialization or set-up code followed by an infinite, or at least indefinite, loop where the task cycles endlessly waiting for a signal of interest – this might come from a queue insertion or event occurrence. When such a signal appears, a switch case statement then performs the processing. When complete, the task code loops back to wait for the next signal.

Code Listing 4-2 Basic task structure with a preemptive scheduler

/*

here is where private static data goes

*/

static int taskAInvocationCounter = 0;

void taskA(void) {

/* more private task data */

int stufftoDo;

/* initialization code */

stuffToDo = 1;

while (stuffToDo) {

signal = waitOnSignal();

switch (signal) {

case signal1:

/* signal 1 processing here */

break;

case signal2:

/* signal 2 processing here */

case signal3:

/* signal 3 processing here */

};

}; /* end infinite while loop */

}; /* end task */

Priority can be set according to any criterion imaginable, but the two prevalent types of priority-based scheduling are urgency-based and criticality-based. Many people use these terms imprecisely, but they have precise meanings. Urgency refers to the nearness of a task’s deadline, while criticality refers to how important the execution of the task is to the system. Figure 4-2 shows the difference. The curve itself is known as the utility function; this is the value to the system of the completion of an action or action sequence as a function of time. The criticality is the height of the curve, while the urgency is the time distance to the point at which the criticality falls to zero. It should be noted that a real-time task does not necessarily have a deadline, but deadlines are the most common way to specify timeliness constraints for a real-time task.

Figure 4-2 Criticality vs. urgency

When a task executes, there are several important time-related terms, summarized in Figure 4-3. For a periodic task (one that executes at a more-or-less fixed rate), two important values are the period (the duration of the time interval between task initiations) and the jitter (the variation in the period). Nonperiodic tasks (known commonly as aperiodic tasks) are characterized by the interarrival time (the time between task initiations) and the minimum interarrival time (the smallest such time). All tasks have an execution time (the time required to complete the activities of the task) although for computation of schedulability, the worst-case execution time is usually used. The deadline is the period of time between when the task becomes ready to run and when it must complete.

Figure 4-3 Time-related terms

When you use semaphores to lock a resource, it can happen that a lower priority task locks a resource and then is preempted by a higher priority task that needs that resource. The high-priority task must block to allow the lower priority task to run and release the resource. This is known as priority inversion because, contrary to normal priority scheduling rules, the lower priority task is running even though there is a higher priority task ready to run.

Note that priority inversion cannot be avoided in a multitasking preemptive scheduler that includes the use of semaphores to guard resources. That isn’t necessarily bad. What is almost always bad is unbounded priority inversion. Consider Figure 4-4. In this figure, there are two tasks. Task A is the higher priority task, executing with a period of 50 ms and an execution time of 10 ms, for 5 ms of which it locks resource R. Task Z is the lower priority task, with a period of 1000 ms, needing 500 ms of time to execute, for 10 ms of which it locks resource R. Note that in this example, the deadline for each task occurs at the start of the next period.

Figure 4-4 Blocking

Priority inversion occurs when Task Z is running with the resource R locked and then Task A becomes ready to run. It will immediately preempt Task Z but as soon as it gets to the point at which it needs to access the resource R it is “stuck”; it must block and allow Task Z to run until it releases the resource. Once the resource is released by Task Z, Task A again preempts Task Z and all is as it should be.

If schedulability of tasks is a concern (and it almost always should be), you should examine the effect of blocking on schedulability. In the case in Figure 4-4, what is the worst case? It happens when Task Z has just barely locked R when Task A runs. In this case the total execution time for Task A is its own 10 ms plus the blocking time of 10 ms. Since Task A completes in 20 ms (less than its deadline), and the completion time for Task Z is 500 ms plus 10 runs of Task A (at 10 ms each) = 600 ms, everyone is, as they say, “fat, dumb, and happy.”
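Putting numbers to that check (the ten preemptions counted here are those that fall within Task Z’s 500 ms of execution; a full response-time analysis would iterate, since each preemption stretches the window, and yields a slightly larger but still comfortable figure):

$$R_A = C_A + B_A = 10 + 10 = 20\ \text{ms} \;\le\; D_A = 50\ \text{ms}$$

$$R_Z \approx C_Z + \frac{C_Z}{T_A}\,C_A = 500 + 10 \times 10 = 600\ \text{ms} \;\le\; D_Z = 1000\ \text{ms}$$

Now consider Figure 4-5.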

Figure 4-5 Unbounded blocking

In this example, only tasks A and Z use the resource. Task Y is the second-lowest priority task running every 800 ms for a total of 100 ms, while Task X is the third-lowest priority task with a period of 500 ms with an execution time of 80 ms. What happens to schedulability now?

The worst case is the following: Task Z runs and locks the resource, and then Task A runs, preempts, runs until it needs to access R, and then blocks. At this point Task Z runs again, but almost immediately Task Y becomes ready; since it is higher priority than Z, it preempts Z. As Task Y runs, Task X becomes ready to run and preempts Y. At this point, Task A is being blocked by three lower priority tasks. The resource will not become available for 190 ms (10 + 100 + 80), and Task A now misses its deadline. This is called unbounded priority inversion and is a critical problem with many embedded system designs.

There are a number of design solutions, such as using critical regions and variants of the priority inheritance pattern, that solve this problem. These patterns will be discussed later in this chapter.

4.1.1 Identifying Tasks

The entire previous discussion of what a task is and what its properties are avoids the crucial question of how to identify a good set of tasks for your particular embedded system. Generally speaking, good tasks have independent action sequences that interact with others only at specific points of interaction. However, selecting a good task set involves more than this, because we create task sets to optimize performance and responsiveness. We could, in principle, run every object (or function, if you prefer) in a separate task thread, or we could run them all in a single task thread. If ease of maintenance is the primary concern, then independence should be the primary task identification strategy. If schedulability is the primary concern, then the recurrence properties (e.g., the task period) should guide your choice. Table 4-1 shows common strategies for task identification.

Table 4-1 Task Identification Strategies

  • Single Event Groups: Use a single event type per task. Pros: very simple threading model. Cons: doesn't scale well to many events; suboptimal performance.

  • Interrupt Handler: Use a single event type to raise an interrupt. Pros: simple to implement for handling a small set of incoming event types; highly efficient in terms of handling urgent events quickly. Cons: doesn't scale well to many events; interrupt handlers typically must be very short and atomic; possible to drop events; difficult to share data with other tasks.

  • Event Source: Group all events from a single source to be handled by one task. Pros: simple threading model. Cons: doesn't scale well to many event sources; suboptimal performance.

  • Related Information: Group all processing of information (usually around a resource) together in a task. Pros: useful for "background" tasks that perform low-urgency processing. Cons: same as Event Source.

  • Independent Processing: Identify different sequences of actions as threads when there is no dependency between the sequences. Pros: leads to a simple tasking model. Cons: may result in too many or too few threads for optimality; doesn't address sharing of resources or task synchronization.

  • Interface Device: A kind of event source. Pros: useful for handling device (e.g., bus) interfaces. Cons: same as Event Source.

  • Recurrence Properties: Group together events with similar recurrence properties such as period, minimum interarrival time, or deadline. Pros: best for schedulability; can lead to optimal scheduling in terms of timely response to events. Cons: more complex.

  • Safety Assurance: Group special safety and reliability assurance behaviors together (e.g., watchdogs). Pros: good way to add safety and reliability processing for safety-critical and high-reliability systems. Cons: suboptimal performance.

In practice, the task set you select will use a combination of several of the strategies from Table 4-1.

4.1.2 Concurrency in the UML

In the UML, there are three ways to represent concurrency: «active» objects, forks and joins in activity diagrams, and orthogonal regions in state machines. Of these, the primary means is the «active» object. An «active» object is one that owns a thread of execution. It is usually a structured object, meaning that it contains part objects that execute in the context of the «active» object’s thread. The «active» object has the explicit responsibility of managing the event queue for the set of software elements that execute in its thread’s context. The icon for an «active» object is an object with a heavy border on its left and right sides, although often the stereotype «active» is shown instead. A task diagram is a UML class (or object) diagram whose primary focus is on the concurrency units and their interactions. Figure 4-6 shows an example of an object task diagram containing five tasks («active» objects), one shared resource (with a guarding semaphore), and comments showing concurrency-related metadata (priority, period, and deadline in this case). Note that some tasks use ports (named connection points) to connect directly with objects inside the tasks, while in other cases we just use links (most likely implemented as pointers). The ControlThread and the BuiltInTestThread contain no internal objects; they are objects that run in their own threads.

Figure 4-6 UML task diagram

4.1.3 Real-Time Operating Systems

In many embedded systems, a real-time operating system (RTOS) provides a set of services to the application system including:

  • task services such as the creation, scheduling, and destruction of tasks

  • event services such as pending on an event, posting an event, and management of event queues

  • creation, destruction, and usage of mutex semaphores

  • time services, such as delay for a specified period of time, and the creation and destruction of timers

  • file system services

  • network input/output services

This is no surprise – these services are offered by most desktop operating systems as well. RTOSs differ in a number of ways. First and foremost, they are embeddable; they are designed to be burned into ROM, Flash, or some other nonvolatile memory. They may execute from that memory or copy themselves into volatile memory via a boot loader. Secondly, they are tuned to be both fast and predictable. Thirdly, they are usually designed to be scalable; they are often designed using the Microkernel Architecture Pattern, which enables the memory image of the RTOS to include only those services that the system needs and for which it is willing to allocate space.

Figure 4-7 shows a typical structure for an RTOS. An RTOS must be specialized for the target hardware. This is done via a hardware abstraction layer that most RTOS vendors refer to as the Board Support Package (BSP). Above this are kernel services such as memory, task, and timer services. Optionally, file services and network services may be added or removed as needed by the application. The various applications and their tasks run atop the RTOS, invoking the RTOS services via an application programming interface (API) as necessary.

Figure 4-7 Basic RTOS structure

It is now fairly common to embed such RTOSs, but it wasn’t so long ago that the application software had to provide these services. Indeed, many embedded systems running in smaller memory space and processor target environments cannot afford the time, memory, or recurring cost overhead of commercial RTOSs.

Let’s now look at some patterns useful for managing the concurrency aspects of embedded systems. It should be noted that the Interrupt Pattern of the previous chapter is a concurrency pattern in addition to a hardware access pattern.

Scheduling Patterns

This section contains patterns related to the invocation of tasks. This is a service that an RTOS will provide, if available. If not, then your application software must provide these services. Even if your system uses an RTOS to provide scheduling, understanding the benefits and costs of the different scheduling approaches gives you more information to select an optimal scheduling strategy.

4.2 Cyclic Executive Pattern

As mentioned earlier in this chapter, the Cyclic Executive Pattern has the advantage of being a very simple approach to scheduling multiple tasks but isn’t very flexible or responsive to urgent events. The Cyclic Executive Pattern implements a fairness doctrine for scheduling that allows all tasks an equal opportunity to run. Although suboptimal in terms of responsiveness, the Cyclic Executive Pattern is easy to analyze for schedulability (that is, the ability of the system to predictably meet all task deadlines): the deadline for each task must be greater than or equal to the sum of all the worst-case execution times for all the tasks plus the loop overhead. Assuming that the deadline for each task is the duration of one complete task execution cycle, Equation 4-1 gives the relation that must be true for the task set to be schedulable, where:

  • Dj is the deadline for task j

  • Cj is the worst-case execution time for task j

  • K is the cyclic executive loop overhead, including the overhead of the task invocation and return

for all tasks j,

$$D_j \;\ge\; \sum_{i=1}^{n} C_i \;+\; K$$

Equation 4-1: Schedulability for Cyclic Executive with n tasks
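For instance, with three hypothetical tasks whose worst-case execution times are $C_1 = 10$ ms, $C_2 = 20$ ms, and $C_3 = 30$ ms, and a loop overhead of $K = 1$ ms, every task deadline must satisfy

$$D_j \;\ge\; 10 + 20 + 30 + 1 = 61\ \text{ms}$$

so a task that must respond within, say, 50 ms cannot be scheduled this way.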

4.2.1 Abstract

The Cyclic Executive Pattern is used primarily in two situations. First, for very small embedded applications, it allows multiple tasks to be run pseudoconcurrently without the overhead of a complex scheduler or RTOS. Secondly, in high-safety relevant systems, the Cyclic Executive Pattern is easy to certify, hence its prevalence in avionics and flight management systems. The scheduler is a simple loop that calls each of the tasks in sequence. Each task is simply a function invoked by the scheduler that runs to completion and then returns.

4.2.2 Problem

Many embedded systems are tiny applications that have extremely tight memory and time constraints and cannot possibly host even a scaled-down microkernel RTOS. The Cyclic Executive Pattern provides a low-resource means to accomplish this goal.

4.2.3 Pattern Structure

Figure 4-8 shows the structure of this simple pattern. The CyclicExecutive has a controlLoop() function that iteratively invokes the run() operation of each of the task threads. Each run() operation is relatively short and runs to completion.

4.2.4 Collaboration Roles

This section describes the roles for this pattern.

4.2.4.1 AbstractCEThread

The AbstractCEThread provides a standard interface for the threads by declaring a run() function. It is this function that the Cyclic Executive will invoke to execute the task. When this function completes, the CyclicExecutive runs the next task in the list.

4.2.4.2 CyclicExecutive

As mentioned in Section 4.1, this class contains the infinite loop that runs the tasks, each in turn. In addition, the CyclicExecutive contains global stack and static data needed either by the tasks or by the scheduler itself. The code in Code Listing 4-1 is typical for a cyclic executive.

A variant of the pattern is the time-triggered Cyclic Executive. In this variant, the CyclicExecutive sets up and uses the CycleTimer to initiate each epoch (one execution of the task list). Note that the Interrupt Pattern from Chapter 3 can be used here, with the CyclicExecutive simply waiting for the timer interrupt before starting the next cycle; alternatively, unless there is background processing to perform, it can sit in a so-called “busy loop” and poll for the timer to elapse.

4.2.4.3 CycleTimer

The CycleTimer isn’t used in the most common variant of the CyclicExecutive; the fact that it is optional is indicated by the “0,1” multiplicity on the directed association in Figure 4-8. This timer can invoke an interrupt when it expires or, even more simply, can return TRUE (non-zero) from its hasElapsed() function. The CyclicExecutive then calls start() to begin the timing interval for the next epoch.
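A minimal sketch of such a timer follows; the ISR hookup and the target-specific timer programming are assumptions, since the pattern leaves the mechanism open.

static volatile int elapsed = 0;      /* set to 1 (TRUE) by the timer ISR */

void CycleTimer_ISR(void) {           /* hooked to the hardware timer interrupt */
    elapsed = 1;
}

void CycleTimer_start(unsigned ms) {
    elapsed = 0;
    /* program the hardware timer to expire in ms milliseconds
       (target-specific register writes go here) */
    (void)ms;
}

int CycleTimer_hasElapsed(void) {
    return elapsed;                   /* polled by the CyclicExecutive's busy loop */
}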

Figure 4-8 Cyclic Executive Pattern

4.2.4.4 ConcreteCEThread

The ConcreteCEThread stands for the functions and related data that implement a logical task in the system. Each has its own implementation of the run() function.

4.2.5 Consequences

As mentioned, the primary benefit of this pattern is its simplicity. It’s pretty difficult to get the scheduler wrong. On the other hand, the pattern is demonstrably less responsive to high-urgency events and this makes the pattern inappropriate for applications that have high responsiveness requirements. Another benefit of the pattern is that it is very lightweight in terms of required resources and so is appropriate for very small memory devices.

A downside of using the pattern is that the interaction of threads is made more challenging than other patterns. If an output of one task is needed by another, the data must be retained in global memory or a shared resource until that task runs (See the Task Coordination Patterns section later in this chapter). On the other hand, because of the lack of preemption, unbounded blocking is not a concern. Deadlock can only occur because of a mistake – a single misbehaved task (i.e., one that never returns control to the cyclic executive) shuts the entire system down. In a preemptive scheduling system, unless the misbehaving task has turned off task switching, the other tasks will continue to function even if one task fails.

4.2.6 Implementation Strategies

The implementation of this pattern is almost trivially simple. In most cases, the Cyclic Executive may simply be the application’s main() function and related global data. In other cases, it will be invoked from main().

4.2.7 Related Patterns

Because the pattern’s responsiveness is demonstrably suboptimal, it is often augmented with some interrupt service routines (see the Interrupt Pattern in Chapter 3) for high urgency events. Another use for the Interrupt Pattern with the Cyclic Executive Pattern is to implement the epoch timer response with the time-triggered cyclic executive, although the Polling Pattern (also from Chapter 3) may be used to poll the timer as well.

The Cyclic Executive Pattern does not address data sharing between tasks. The Cyclic Executive does define global data structures that can be used for this purpose but it is possible to mix other patterns for this purpose (see the Task Coordination patterns section in this chapter). Because there is no preemption, global data do not require access serialization with this pattern.

4.2.8 Example

The example shown in Figure 4-9 uses the cyclic executive scheduler GasControlExecutive. Its controlLoop() function calls the run services of each of the threads. Because the tasks are implemented as files of functions, the file name is encoded into the name of each run function to avoid global name clashes. In the example, I implemented the time-triggered variant of the pattern. As you can see in Code Listing 4-3, the controlLoop() function implements a busy wait for the elapse of the epoch time. Note that the startEpochTimer() function resets the elapsed variable to FALSE (0) before it starts the timer; when the timer elapses, this variable is set to TRUE (1).

Code Listing 4-3 GasControlExecutive.c

#include "GasControlExecutive.h"

#include "GasActuatorThread.h"

#include "GasControlEpochTimer.h"

#include "GasDisplayThread.h"

#include "GasSensorThread.h"

void controlLoop(void) {

while (TRUE) {

startEpochTimer();

GasSensorThread_run();

GasActuatorThread_run();

GasDisplayThread_run();

while(!epochTimerHasElapsed()) ;

};

}

Figure 4-9 Cyclic Executive Example

Each of the header files for the other elements simply declares its data and functions, and the related C file provides the implementation. For example, the implementation for the SharedData file is given in Code Listing 4-4 and Code Listing 4-5.

Code Listing 4-4 SharedData.h

#ifndef SharedData_H
#define SharedData_H

extern int commandedGasFlow;
extern int measuredGasFlow;

#endif

Code Listing 4-5 SharedData.c

#include "SharedData.h"

int commandedGasFlow;
int measuredGasFlow;

Code Listing 4-6 GasDisplayThread.c

#include <stdio.h>
#include "SharedData.h"

void GasDisplayThread_run(void) {
    printf("Measured Gas Flow %d\n", measuredGasFlow);
    printf("commandedGasFlow %d\n\n", commandedGasFlow);
}
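GasSensorThread.c and GasActuatorThread.c are not listed in the text; purely as a sketch consistent with SharedData.h, they might look like the following (the rand()-based sensor reading and the valve-adjustment stubs are assumptions for illustration):

#include <stdlib.h>
#include "SharedData.h"

void GasSensorThread_run(void) {
    /* stand-in for reading the flow sensor hardware */
    measuredGasFlow = rand() % 100;
}

void GasActuatorThread_run(void) {
    /* drive the measured flow toward the commanded flow */
    if (measuredGasFlow < commandedGasFlow) {
        /* open the valve slightly (hardware access omitted) */
    } else if (measuredGasFlow > commandedGasFlow) {
        /* close the valve slightly (hardware access omitted) */
    }
}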

4.3 Static Priority Pattern

The Static Priority Pattern is supported by most real-time operating systems. It has good response to high-priority events and scales well to large numbers of tasks. Different schemas can be used to assign priorities, but, as mentioned earlier, priorities are usually assigned based on urgency, criticality, or some combination of the two. When assigned using rate monotonic scheduling (see the Implementation Strategies section below), this scheduling pattern is both optimal (you can’t do better with other scheduling strategies) and stable (in an overload situation, which tasks will fail is predictable – the lowest priority tasks). Assuming that each task deadline occurs at the end of the period, the system is schedulable if the inequality in Equation 4-2 holds, where:

  • Cj is the worst case execution time for task j

  • Tj is the period (deadline occurs at the end of the period) for task j

  • Bj is the worst-case blocking time for task j

  • n is the number of tasks

$$\sum_{j=1}^{n} \frac{C_j}{T_j} \;+\; \sum_{j=1}^{n-1} \frac{B_j}{T_j} \;\le\; n\left(2^{1/n} - 1\right)$$

Equation 4-2: Schedulability for Static Priority Pattern with RMS priorities for n tasks

In the case where the task is not periodic, the minimum interarrival time for task j (the smallest time between successive invocations of the same task) is used instead of the period. Note that the blocking terms extend only to task n-1; this is because, by definition, the lowest priority task cannot be blocked.
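As a worked check of Equation 4-2 with hypothetical numbers: three tasks with $C_1 = 10$, $T_1 = 50$; $C_2 = 20$, $T_2 = 200$; $C_3 = 100$, $T_3 = 1000$ (all in ms), and worst-case blocking $B_1 = B_2 = 5$ ms, give

$$\sum_{j=1}^{3} \frac{C_j}{T_j} + \sum_{j=1}^{2} \frac{B_j}{T_j} = (0.20 + 0.10 + 0.10) + (0.10 + 0.025) = 0.525 \;\le\; 3\left(2^{1/3} - 1\right) \approx 0.780$$

so this task set is schedulable.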

4.3.1 Abstract

The Static Priority Pattern is a very common pattern in embedded systems because of its strong RTOS support, ease of use, and good responsiveness to urgent events. It may be overkill for very small, simple, or highly predictable systems that are not driven by urgent asynchronous events. The pattern is easy to analyze for schedulability but naïve implementation with blocking resource sharing can lead to unbounded priority inversion. Fortunately, there are resource sharing patterns that can avoid this problem or at least bound priority inversion to at most one level. The most common way to assign priorities is on the basis of deadline duration (that is, task urgency); the shorter the deadline, the higher the priority. If the deadline is at the end of the period, this schema is known as rate monotonic scheduling. It assumes that all tasks are periodic, but the minimum interarrival interval can be used for aperiodic tasks, although this often results in overdesigning the system with an impact on system recurring cost.

4.3.2 Problem

The Static Priority Pattern provides a simple approach to scheduling tasks based on priority. It scales to larger task sets than the Cyclic Executive Pattern and emphasizes responsiveness to urgent events over fairness.

4.3.3 Pattern Structure

Figure 4-10 shows the structure for the Static Priority Pattern. The colored classes indicate the elements normally implemented for you by an RTOS. Note that the mapping constraint on the diagram indicates that there is a StaticTaskControlBlock for each AbstractStaticThread; this holds the data about the task that the scheduler needs to support scheduling and preemption.

Figure 4-10 Static Priority Pattern

Figure 4-11 Pattern Example

4.3.4 Collaboration Roles

This section describes the roles for this pattern.

4.3.4.1 AbstractStaticThread

The AbstractStaticThread class is an abstract (i.e., noninstantiable) superclass for ConcreteThread. AbstractStaticThread associates with the StaticPriorityScheduler; specifically, the scheduler executes the run() function of the thread to run it. The AbstractStaticThread specifies the run() interface to be realized by the ConcreteThread.

4.3.4.2 ConcreteThread

The ConcreteThread is an «active» class that serves as a container and thread context for a set of passive “semantic” objects that do the real work of the system. The ConcreteThread class provides a straightforward means of attaching these semantic objects into the concurrency architecture. ConcreteThread represents the threads in the application software.

4.3.4.3 Mutex

The Mutex is a mutual exclusion semaphore class that serializes access to a SharedResource. The guarded functions of the SharedResource invoke the lock() function of the Mutex whenever they are called and release() once the service is complete. Other client threads that attempt to invoke a service when its associated Mutex is locked become blocked until the Mutex is unlocked. The blocking is performed when the Mutex semaphore signals the StaticPriorityScheduler that a call attempt was made by the currently active thread. The mutexID identifies which Mutex is blocking the task. This element is usually provided by the RTOS.

4.3.4.4 PriorityQueue

The PriorityQueue is a queue of StaticTaskControlBlock pointers organized by priority. The pattern uses two distinct instances of PriorityQueue. The first is the ready queue. When a task becomes ready to run, a pointer to its StaticTaskControlBlock is inserted into the queue in order of its priority. If the priority of the task is higher than the currently running task (or becomes so when the current task terminates), then the task is removed from the ready queue and run. The other PriorityQueue is the blocked queue; this queue holds the set of tasks currently blocked because they tried to access SharedResources protected by a locked Mutex. This element is usually provided by the RTOS.

4.3.4.5 SharedResource

A SharedResource is a class whose instances are shared by one or more ConcreteThreads. For the system to operate properly in all cases, all shared resources must either be reentrant (meaning that corruption from simultaneous access cannot occur) or be protected via access serialization; a sketch of a mutex-guarded resource follows the role descriptions below. Protection via mutual exclusion semaphores is discussed in the Guarded Call Pattern later in this chapter.

4.3.4.6 Stack

Each AbstractStaticThread has a Stack for return addresses and passed parameters. This is normally explicit only at the assembly language level within the application thread, but it is an important part of the scheduling infrastructure. This element is usually provided by the RTOS.

4.3.4.7 StaticPriorityScheduler

This class orchestrates the execution of multiple threads based on their priority according to a simple rule: always run the highest priority ready thread. When the «active» thread object is created, it (or its creator) calls the createThread() operation to create a thread for the «active» object. Whenever this thread is executed by the StaticPriorityScheduler, the scheduler calls the function at the StartAddr address (except when the thread has been blocked or preempted, in which case it resumes at the EntryPoint address). This element is usually provided by the RTOS.

4.3.4.8 StaticTaskControlBlock

The StaticTaskControlBlock contains the scheduling information for its corresponding AbstractThread object. This includes the priority of the thread, the default start address and the current entry address, if it was preempted or blocked prior to completion. The StaticPriorityScheduler maintains a StaticTaskControlBlock object for each existing AbstractThread. This element is usually provided by the RTOS.
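Taken together, the Mutex and SharedResource roles imply accessor functions of the following shape. This is only a sketch: the Mutex type and its lock/release functions are self-contained stand-ins (a real RTOS mutex would ask the scheduler to block the calling task rather than spin, and the Guarded Call Pattern later in this chapter treats this in detail).

typedef struct { volatile int locked; } Mutex;

/* Placeholder implementations so the sketch compiles; an RTOS would
   block the caller and notify the scheduler instead of spinning. */
void Mutex_lock(Mutex* m)    { while (m->locked) { /* task would block here */ } m->locked = 1; }
void Mutex_release(Mutex* m) { m->locked = 0; }

static Mutex resourceMutex;
static int resourceValue;            /* the shared data being protected */

void SharedResource_setValue(int v) {
    Mutex_lock(&resourceMutex);      /* competing tasks block until released */
    resourceValue = v;               /* serialized access to the resource */
    Mutex_release(&resourceMutex);
}

int SharedResource_getValue(void) {
    int v;
    Mutex_lock(&resourceMutex);
    v = resourceValue;
    Mutex_release(&resourceMutex);
    return v;
}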

4.3.5 Consequences

As previously mentioned, the Static Priority Pattern runs well with potentially large numbers of tasks and provides responsiveness to incoming events. Typically, tasks spend most of their time in the waiting state and become ready only when an initiating event (such as the elapse of their period) occurs. Priority matters only when more than one task is ready to run; in this case the highest priority task will be run preferentially.

It is common to use blocking to serialize access to resources. While this is highly effective, naïve implementation can lead to unbounded priority inversion in the manner described in the first section of this chapter. See the section on resource sharing patterns later in this chapter for patterns to address this concern.

4.3.6 Implementation Strategies

By far, the most common implementation of this pattern is to use a commercial RTOS to provide the scheduling infrastructure. While this wasn’t true 20 years ago, today there are dozens of popular RTOSs that will fit the constraints of most embedded systems.

Regardless of whether you buy a commercial RTOS or roll your own, the problem of assigning priorities must still be addressed. RMS (rate monotonic scheduling) is the most common means to assign priorities, and it is optimal and stable. By optimal, I mean that if the task set can be scheduled by any means whatsoever, it can also be scheduled by RMS. By stable, I mean that in an overloaded situation in which some tasks will miss their deadlines, it is clear which tasks will fail – those with the lowest priority. RMS simply makes the priority a function of the period: the shorter the period, the higher the priority.
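As a sketch of how RMS priorities might be assigned mechanically (the TaskInfo type and the lower-number-means-higher-priority convention are assumptions; RTOSs differ on the direction of the priority scale):

#include <stdlib.h>

typedef struct {
    const char* name;
    unsigned periodMs;     /* or minimum interarrival time for aperiodic tasks */
    unsigned priority;     /* filled in by the assignment */
} TaskInfo;

static int byPeriod(const void* a, const void* b) {
    const TaskInfo* ta = (const TaskInfo*)a;
    const TaskInfo* tb = (const TaskInfo*)b;
    return (ta->periodMs > tb->periodMs) - (ta->periodMs < tb->periodMs);
}

void assignRmsPriorities(TaskInfo* tasks, size_t n) {
    qsort(tasks, n, sizeof(TaskInfo), byPeriod);   /* shortest period first */
    for (size_t i = 0; i < n; ++i)
        tasks[i].priority = (unsigned)i;           /* shortest period gets priority 0 */
}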

RMS makes a number of assumptions about the tasks but is reasonably robust in the presence of violations to those assumptions:

  • tasks are periodic

  • task deadlines occur at the end of the period

  • tasks are infinitely preemptable

As to the first assumption, if tasks are not periodic, they are assumed to occur at some maximum rate (that is, with a minimum time between task invocations). This value can be used instead of the period to determine the priority.

For the second assumption, it sometimes happens that the deadline is less than the period. In this case, simply use the deadline (Dj ) instead of the period (Tj ) to assign the priority.

The last assumption is that a currently running task can be preempted as soon as a higher priority task becomes ready to run. This isn’t true if the currently running task is in a critical region (a period of time in which task switching is disabled – see the Critical Region Pattern later in this chapter). However, if the critical regions are short, this is usually not a problem. Such situations can be analyzed by treating the duration of the longest critical region of all other tasks as the worst case blocking time for the task under analysis.

4.3.7 Related Patterns

The Static Priority Pattern offers a more responsive alternative to fairness doctrine approaches such as the Cyclic Executive Pattern. When resources must be shared with other tasks, this pattern must be mixed with resource sharing patterns such as those in this chapter. When tasks need to coordinate, this pattern can be mixed with task coordination patterns. The task coordination patterns overlap a bit with the resource sharing patterns since one of the ways that tasks coordinate is by sharing common resources.

A very similar pattern to the Static Priority Pattern is the Dynamic Priority Pattern. In this pattern, the design does not specify the priority of the task; it is computed at run-time. The most common priority assignment strategy is known as Earliest Deadline First (EDF) scheduling. In EDF, the task specifies the duration of its deadline, from which the scheduler computes the absolute time of this task’s next deadline. The task is then placed in the ready queue based on the nearness of its next absolute deadline relative to all ready tasks. That is, if a task A being queued has a nearer next deadline than another task B, it is placed in front of task B in the queue; otherwise, it is placed later in the queue. EDF is optimal (in the same sense that RMS is) but it is not stable; in an overload situation, which tasks will fail to meet their deadlines cannot be determined at design time. If you use EDF scheduling, schedulability can be computed in a manner similar to RMS scheduling, except that the upper utilization bound is 1.00 rather than a function of the number of tasks. This is shown in Equation 4-3.

$$\sum_{j=1}^{n} \frac{C_j}{T_j} \;+\; \sum_{j=1}^{n-1} \frac{B_j}{T_j} \;\le\; 1.00$$

Equation 4-3: Earliest Deadline First schedulability
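A sketch of the EDF ready-queue insertion just described; the fixed-size array queue and all names are illustrative:

#define MAX_READY 16

typedef struct { int taskId; unsigned long absDeadline; } ReadyEntry;

static ReadyEntry readyQueue[MAX_READY];
static int readyCount = 0;

/* Insert a newly ready task in front of every task with a later deadline. */
int edfEnqueue(int taskId, unsigned long absDeadline) {
    if (readyCount == MAX_READY) return 0;          /* queue full */
    int i = readyCount;
    while (i > 0 && readyQueue[i - 1].absDeadline > absDeadline) {
        readyQueue[i] = readyQueue[i - 1];          /* shift later deadlines back */
        --i;
    }
    readyQueue[i].taskId = taskId;
    readyQueue[i].absDeadline = absDeadline;
    ++readyCount;
    return 1;
}
/* The scheduler always runs readyQueue[0] -- the earliest deadline. */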

4.3.8 Example

In this example, we are going to assume the use of a commercial RTOS so we won’t explicitly model or code the infrastructure elements of the pattern. Instead, the example (See Figure 4-11) focuses on concrete threads and shared resources.

In the code listings below, the calls that initialize timers, pend on OS events, and create tasks are all RTOS-dependent.

Code Listing 4-7 Static Priority Example main.c

void main(void) {
    /* initialize the links and start the tasks */
    MotorPositionSensor_initRelations();
    MotorDisplay_initRelations();
    MotorController_initRelations();

    /* now let the tasks do their thing */
    while (TRUE) {
    }
}

The code listings for the MotorPositionSensor, MotorDisplay, and MotorController are very simple, as can be seen in Code Listing 4-8 through Code Listing 4-13. The interesting code is found in the _Init() functions, where the OS event queues are created and the tasks are created with their corresponding _run() functions. A queue size of 10 events is more than enough space if the events are handled in a timely fashion. Note also that each task is created with a 1024-int stack. Following task creation, the corresponding _run() function creates an OS timer (with the RETRIGGER parameter, so that the timer repeats automatically) and then runs an infinite loop. Within the loop, the task pends on the arrival of an event. When an event with the proper id occurs, the OSQPend() function returns and the task is initiated. Once an iteration of the loop completes, the task loops back and goes to sleep waiting for its next event.

The code listings for MotorData (Code Listing 4-14 and Code Listing 4-15) are even simpler since it is just a passive resource with services invoked by the active threads. Again note that the calls to create, initialize, and use threads are RTOS-specific.

Code Listing 4-8 MotorPositionSensor.h

#ifndef MotorPositionSensor_H
#define MotorPositionSensor_H

void MotorPositionSensor_Init(void);
void MotorPositionSensor_Cleanup(void);
void MotorPositionSensor_run(void);
void getPosition(void);
void MotorPositionSensor_initRelations(void);

#endif

Code Listing 4-9 MotorPositionSensor.c

#include "MotorData.h"

#include "MotorPositionSensor.h"

#include <stdlib.h>

static int mps_priority = 5;

static int mps_eventQueue[10];

static int mps_stack[1024];

void MotorPositionSensor_Cleanup(void) {

}

void MotorPositionSensor_Init(void) {

/* create the OS event queue and the task */

/* use a queue of 10 events, and a 1024 element stack */

OSQCreate (&mps_eventQueue[0], 10);

OSTaskCreate (MotorPositionSensor_run, NULL,

(void *)&mps_stack[1024], 5);

/* now run the task */

MotorPositionSensor_run();

}

void MotorPositionSensor_run(void) {

int os_pend_timer_id;

os_pend_timer_id = OSCreateTimer(50, RETRIGGER);

while (TRUE) {

OSQPend(os_pend_timer_id, WAIT_FOREVER);

getPosition();

};

}

void getPosition(void) {

int x;

/* get position by reading the sensor */

x = rand();

setMeasPos(x);

}

void MotorPositionSensor_initRelations(void) {

MotorPositionSensor_Init();

}

Code Listing 4-10 MotorDisplay.h

#ifndef MotorDisplay_H
#define MotorDisplay_H

void MotorDisplay_Init(void);
void MotorDisplay_Cleanup(void);
void MotorDisplay_run(void);
void display(void);
void MotorDisplay_initRelations(void);

#endif

Code Listing 4-11 MotorDisplay.c

#include "MotorData.h"

#include <stdio.h>

#include "MotorDisplay.h"

static int md_priority = 10;

static int md_eventQueue[10];

static int md_stack[1024];

void MotorDisplay_Init(void) {

/* create the OS event queue and the task */

/* use a queue of 10 events, and a 1024 element stack */

OSQCreate (&md_eventQueue[0], 10);

OSTaskCreate (MotorDisplay_run, NULL,

(void *)&md_stack[1024], 5);

/* now run the task */

MotorDisplay_run();

}

void MotorDisplay_Cleanup(void) {

}

void MotorDisplay_run(void) {

int os_pend_timer_id;

os_pend_timer_id = OSCreateTimer(500, RETRIGGER);

while (TRUE) {

OSQPend(os_pend_timer_id, WAIT_FOREVER);

display();

};

}

void display(void) {

printf("Commanded position = %d\n", getCmdPos());

printf("Measured position = %d\n\n", getMeasPos());

}

void MotorDisplay_initRelations(void) {

MotorDisplay_Init();

}

Code Listing 4-12 MotorController.h

#ifndef MotorController_H
#define MotorController_H

void MotorController_Init(void);
void MotorController_Cleanup(void);
void MotorController_run(void);
void move(void);
void zero(void);
void MotorController_initRelations(void);

#endif

Code Listing 4-13 MotorController.c

#include "MotorData.h"

#include <stdlib.h>

#include "MotorController.h"

static int mc_priority = 8;

static int mc_eventQueue[10];

static int mc_stack[1024];

int motor1Pos;

int motor2Pos;

void MotorController_Init(void) {

/* use a queue of 10 events, and a 1024 element stack */

/* create the OS event queue and the task */

OSQCreate (&mc_eventQueue[0], 10);

OSTaskCreate (MotorController_run, NULL,

(void *)&mc_stack[1024], 5);

/* now run the task */

MotorController_run();

}

void MotorController_Cleanup(void) {

}

void MotorController_run(void) {

int os_pend_timer_id, x, y;

os_pend_timer_id = OSCreateTimer(100, RETRIGGER);

while (TRUE) {

OSQPend(os_pend_timer_id, WAIT_FOREVER);;

move();

};

}

void move(void) {

int m1Pos, m2Pos;

/* this function would read the instrument

panel to set the motor movement settings

to set the position of the motors.

Note that if you want to set only one

of the motors, then set a negative value

for the commanded position of the other

*/

m1Pos = rand();

m2Pos = rand();

if (m1Pos >= 0)

motor1Pos = m1Pos;

if (m2Pos >=0)

motor2Pos = m2Pos;

setCmdPos(100*m1Pos + m2Pos);

}

void zero(void) {

motor1Pos = 0;

motor2Pos = 0;

setCmdPos(0);

}

void MotorController_initRelations(void) {

MotorController_Init();

}

Code Listing 4-14 MotorData.h

#ifndef MotorData_H
#define MotorData_H

#define RETRIGGER (2)
#define WAIT_FOREVER (0)

extern int commandedPosition;
extern int measuredPosition;

int getCmdPos(void);
int getMeasPos(void);
void setCmdPos(int x);
void setMeasPos(int x);

#endif

Code Listing 4-15 MotorData.c

#include "MotorData.h"

int commandedPosition = 0;

int measuredPosition = 0;

int getCmdPos(void) {

return commandedPosition;

}

int getMeasPos(void) {

return measuredPosition;

}

void setCmdPos(int x) {

commandedPosition = x;

}

void setMeasPos(int x) {

measuredPosition = x;

}

Task Coordination Patterns

To be honest, when tasks are truly independent, the concurrency design is pretty simple. The problem is that independence is not the normal condition for concurrent threads; in virtually all embedded systems, tasks are “mostly independent.” The four patterns of this section are the Critical Region, Guarded Call, Queuing, and Rendezvous Patterns. The first three of these are often used to serialize access to shared resources as well, and so could reasonably have been placed in that section of this chapter.

4.4 Critical Region Pattern

The Critical Region Pattern is the simplest pattern for task coordination. It consists of disabling task switching during a region in which it is important that the current task executes without being preempted. This may be because the task is accessing a resource that cannot safely be accessed simultaneously, or because it is crucial that the task execute for a portion of time without interruption.

4.4.1 Abstract

In a preemptive multitasking environment (see the Static Priority Pattern, above), there may be periods of time during which a task must not be interrupted or preempted. The Critical Region Pattern addresses these intervals by disabling task switching or even by disabling interrupts (see the Interrupt Pattern, above). This provides uninterrupted execution for the current task. Once beyond the critical region, the task must reenable task switching (or interrupts) or the system will fail.

4.4.2 Problem

There are two primary circumstances under which a task should not be interrupted or preempted. First, it may be accessing a resource that may not be safely accessed by multiple clients simultaneously. Secondly, the task may be performing actions that must be completed within a short period of time or that must be performed in a specific order. This pattern allows the active task to execute without any potential interference from other tasks.

4.4.3 Pattern Structure

Figure 4-12 shows just how structurally simple this pattern is. In a common variant, the protected element is a resource rather than a task; this is shown in Figure 4-13. In either case, within the relevant services, task switching must be disabled before the critical region begins and reenabled before the service ends. Note that unlike the Guarded Call Pattern, the scheduler is not involved in beginning or ending the critical region other than to provide services to disable and enable task switching. If the scheduler does not provide such services, the critical region can be controlled with the use of the C asm directive to turn interrupt processing on or off at the hardware level.

Figure 4-12 Critical Region Pattern

Figure 4-13 Critical Region Pattern with Resource

4.4.4 Collaboration Roles

This section describes the roles for this pattern.

4.4.4.1 CRSharedResource

This element is used in the variant of the Critical Region Pattern that disables task switching to prevent simultaneous access to the resource. In this variant, the protected resource is the value attribute. Note that all the relevant services must implement the critical region to protect the resource; in this case, it means that both the setValue() and getValue() functions must separately implement the critical region.
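
For instance, a minimal sketch of such a resource might look like the following. The struct layout, the int-valued attribute, and the use of the OS task-switching services (discussed under Implementation Strategies, below) are illustrative assumptions, not part of the pattern definition.

typedef struct {
    int value; /* the protected resource */
} CRSharedResource;

int CRSharedResource_getValue(CRSharedResource* const me) {
    int v;
    OS_disable_task_switching(); /* begin critical region */
    v = me->value;
    OS_enable_task_switching();  /* end critical region */
    return v;
}

void CRSharedResource_setValue(CRSharedResource* const me, int v) {
    OS_disable_task_switching(); /* begin critical region */
    me->value = v;
    OS_enable_task_switching();  /* end critical region */
}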

4.4.4.2 TaskWithCriticalRegion

This element is used when the task wants to disable task switching for reasons unrelated to sharing resources (usually related to timing). The criticalService() disables and enables task switching or interrupts during its critical region(s).

4.4.4.3 TaskWithSharedResource

This element stands for the set of all tasks that may want to access the shared resource (hence the ‘*’ multiplicity on the directed association). These tasks have no knowledge about the method for protecting the resource, since that is encapsulated within the shared resource.

4.4.5 Consequences

This pattern enforces the critical region policies well and can be used to turn off scheduled task switching or, more draconianly, to disable interrupts altogether. Care must be taken to reenable task switching once the element is out of its critical region. Additionally, the use of this pattern can affect the timing of other tasks; critical regions are usually short in duration for this reason. There is no problem of unbounded priority inversion because all task switching is disabled during the critical region.

Care must be taken when nesting calls to functions with their own critical regions. If such a function is called within the critical region of another, the nested function call will reenable task switching and prematurely end the critical region of the calling function.
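
One common remedy, sketched below under the assumption that the RTOS provides the OS_disable_task_switching()/OS_enable_task_switching() services used elsewhere in this section, is to funnel all critical regions through a pair of functions that maintain a nesting counter, so that only the outermost exit reenables task switching:

static volatile int crNestingLevel = 0;

void CR_enter(void) {
    OS_disable_task_switching();
    ++crNestingLevel; /* safe to modify: task switching is already off */
}

void CR_exit(void) {
    if (--crNestingLevel == 0)
        OS_enable_task_switching(); /* only the outermost exit reenables */
}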

4.4.6 Implementation Strategies

This pattern is usually implemented either by using scheduler-provided services such as OS_disable_task_switching() and OS_enable_task_switching() (or something similar) or by introducing assembly language instructions to disable or enable interrupt processing at the hardware level.
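
As a minimal sketch, such services or instructions might be wrapped in a macro pair. The RTOS service names follow the text, while the inline assembly fallback assumes an ARM Cortex-M target and a GCC-style compiler:

#ifdef HAVE_RTOS_CRITICAL_SERVICES
#define ENTER_CRITICAL() OS_disable_task_switching()
#define EXIT_CRITICAL()  OS_enable_task_switching()
#else
/* disable/enable interrupt processing at the hardware level */
#define ENTER_CRITICAL() __asm volatile ("cpsid i")
#define EXIT_CRITICAL()  __asm volatile ("cpsie i")
#endif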

4.4.7 Related Patterns

This pattern is only appropriate when multiple tasks can access the same resource or services or when other tasks could interrupt time-critical processing of the current task. For this reason, it is not appropriate with the Cyclic Executive Pattern, but is often used with the Static Priority Pattern.

4.4.8 Example

The example in Figure 4-14 shows a single task CRRobotArmManager with a function moveRobotArm that has a critical region during the actual movement of the arm.

Figure 4-14 Critical Region Pattern

The critical region is encapsulated entirely within the moveRobotArm() and motorZero() services, as can be seen in Code Listing 4-16 and Code Listing 4-17.

Code Listing 4-16 CRRobotArmManager.h

#ifndef CRRobotArmManager_H

#define CRRobotArmManager_H

struct CRDisplay;

struct RobotArm;

struct UserInput;

typedef struct CRRobotArmManager CRRobotArmManager;

struct CRRobotArmManager {

struct CRDisplay* itsCRDisplay;

struct RobotArm* itsRobotArm;

struct UserInput* itsUserInput;

};

/* Constructors and destructors:*/

void CRRobotArmManager_Init(CRRobotArmManager* const me);

void CRRobotArmManager_Cleanup(CRRobotArmManager* const me);

/* Operations */

void CRRobotArmManager_motorZero(CRRobotArmManager* const me);

void CRRobotArmManager_moveRobotArm(CRRobotArmManager* const me);

struct CRDisplay* CRRobotArmManager_getItsCRDisplay(const CRRobotArmManager* const me);

void CRRobotArmManager_setItsCRDisplay(CRRobotArmManager* const me, struct CRDisplay* p_CRDisplay);

struct RobotArm* CRRobotArmManager_getItsRobotArm(const CRRobotArmManager* const me);

void CRRobotArmManager_setItsRobotArm(CRRobotArmManager* const me, struct RobotArm* p_RobotArm);

struct UserInput* CRRobotArmManager_getItsUserInput(const CRRobotArmManager* const me);

void CRRobotArmManager_setItsUserInput(CRRobotArmManager* const me, struct UserInput* p_UserInput);

CRRobotArmManager * CRRobotArmManager_Create(void);

void CRRobotArmManager_Destroy(CRRobotArmManager* const me);

#endif

Code Listing 4-17 CRRobotArmManager.c

#include "CRRobotArmManager.h"

#include "CRDisplay.h"

#include "RobotArm.h"

#include "UserInput.h"

static void cleanUpRelations(CRRobotArmManager* const me);

void CRRobotArmManager_Init(CRRobotArmManager* const me) {

me->itsCRDisplay = NULL;

me->itsRobotArm = NULL;

me->itsUserInput = NULL;

}

void CRRobotArmManager_Cleanup(CRRobotArmManager* const me) {

cleanUpRelations(me);

}

void CRRobotArmManager_motorZero(CRRobotArmManager* const me) {

int success = 1;

OS_disable_task_switching();

/* critical region code */

success = RobotArm_moveTo(me->itsRobotArm,0,0,0);

/* critical region ends */

OS_enable_task_switching();

if (!success)

CRDisplay_printMsg(me->itsCRDisplay,"Cannot zero!");

}

void CRRobotArmManager_moveRobotArm(CRRobotArmManager* const me) {

/* local stack variable declarations */

int x, y, z, success = 1;

/* noncritical region code */

/* note that the function below has its

own critical region and so cannot be

called inside of the critical region

of this function

*/

CRRobotArmManager_motorZero(me);

x = UserInput_getX(me->itsUserInput);

y = UserInput_getY(me->itsUserInput);

z = UserInput_getZ(me->itsUserInput);

/* critical region begins */

OS_disable_task_switching();

/* critical region code */

success = RobotArm_moveTo(me->itsRobotArm,x,y,z);

/* critical region ends */

OS_enable_task_switching();

/* more noncritical region code */

CRDisplay_printInt(me->itsCRDisplay, "Result is ", success);

}

struct CRDisplay* CRRobotArmManager_getItsCRDisplay(const CRRobotArmManager* const me) {

return (struct CRDisplay*)me->itsCRDisplay;

}

void CRRobotArmManager_setItsCRDisplay(CRRobotArmManager* const me, struct CRDisplay* p_CRDisplay) {

me->itsCRDisplay = p_CRDisplay;

}

struct RobotArm* CRRobotArmManager_getItsRobotArm(const CRRobotArmManager* const me) {

return (struct RobotArm*)me->itsRobotArm;

}

void CRRobotArmManager_setItsRobotArm(CRRobotArmManager* const me, struct RobotArm* p_RobotArm) {

me->itsRobotArm = p_RobotArm;

}

struct UserInput* CRRobotArmManager_getItsUserInput(const CRRobotArmManager* const me) {

return (struct UserInput*)me->itsUserInput;

}

void CRRobotArmManager_setItsUserInput(CRRobotArmManager* const me, struct UserInput* p_UserInput) {

me->itsUserInput = p_UserInput;

}

CRRobotArmManager * CRRobotArmManager_Create(void) {

CRRobotArmManager* me = (CRRobotArmManager *) malloc(sizeof(CRRobotArmManager));

if(me!=NULL)

{

CRRobotArmManager_Init(me);

}

return me;

}

void CRRobotArmManager_Destroy(CRRobotArmManager* const me) {

if(me!=NULL)

{

CRRobotArmManager_Cleanup(me);

}

free(me);

}

static void cleanUpRelations(CRRobotArmManager* const me) {

if(me->itsCRDisplay != NULL)

{

me->itsCRDisplay = NULL;

}

if(me->itsRobotArm != NULL)

{

me->itsRobotArm = NULL;

}

if(me->itsUserInput != NULL)

{

me->itsUserInput = NULL;

}

}

A little bit about semaphores

Edsger Dijkstra invented the concept of a software semaphore for the coordination of tasks wanting to share resources. The most common implementation is a counting semaphore, which allows at most n tasks to use a resource; additional tasks attempting to access the resource are blocked until one of the tasks that successfully locked the semaphore (and is said to own it) unlocks the resource. n is said to be the capacity of the semaphore. A binary semaphore is a counting semaphore for which n = 1. A mutual exclusion (mutex) semaphore is a binary semaphore with some extra properties, which are OS dependent. Binary semaphores are used for both mutual exclusion and event notification, while mutexes are used only to enforce mutual exclusion.

Semaphores are initialized with n when they are created, with a call something like

semaPtr = OS_create_semaphore(n);

which initializes an internal thread counter to n.

The semaphore offers two other services, lock() and release() (which, being Dutch, Dijkstra called P (for prolaag, or “test”) and V (for verhoog, or “increase”)). lock() works by decrementing the internal counter; if the counter is already at zero, the calling thread is blocked by the scheduler until the count becomes positive again (as in the sketch below). The release() function increments the semaphore counter, so that if there are threads waiting on that lock, one of them will unblock and run (that is, its call to lock() now returns). Note that you usually can’t test whether a call to lock() will block before the thread calls it; it will either succeed immediately or block the thread and (hopefully) be released later. Put another way, there is usually no way to know whether threads are blocked on that semaphore before you call lock(). However, some RTOSs provide a trylock() function that locks the semaphore if the resource is available or returns an error code if not.

void lock(Semaphore* s) {

    while (s->count <= 0) {

        wait(); /* block until a slot becomes free */

    }

    --s->count; /* this operation must be implemented as a critical region */

}

void release(Semaphore* s) {

    ++s->count; /* this operation must be implemented as a critical region */

}

Other terms for lock include decrement and wait; other terms for release include free, increment, and signal. These are used in symmetric pairs (lock/free, lock/release, wait/signal, decrement/increment), depending on the usage of the semaphore.

Mutexes may be recursive, a useful property in which, once a thread has successfully locked the semaphore, further calls it makes to lock() do not block the thread (that is, the thread cannot block itself through multiple calls to the same mutex). For a recursive mutex, the semaphore is freed only when the number of calls the owning thread makes to release() equals the number of calls it made to lock().
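
A sketch of how a recursive mutex might be layered on top of the counting semaphore just described follows; the OS_current_thread_id() service and the field names are hypothetical, and the bookkeeping itself would need to be protected as a critical region.

typedef struct {
    Semaphore* sem; /* binary semaphore, created with n = 1 */
    int owner;      /* id of the owning thread, or -1 when free */
    int lockCount;  /* nested lock() calls made by the owner */
} RecursiveMutex;

void RecursiveMutex_lock(RecursiveMutex* const me) {
    int self = OS_current_thread_id();
    if (me->owner == self) {
        ++me->lockCount;  /* reentrant call by the owner: don't block */
        return;
    }
    lock(me->sem);        /* may block until the mutex is free */
    me->owner = self;
    me->lockCount = 1;
}

void RecursiveMutex_release(RecursiveMutex* const me) {
    if (--me->lockCount == 0) {
        me->owner = -1;   /* last matching release frees the mutex */
        release(me->sem);
    }
}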

4.5 Guarded Call Pattern

The Guarded Call Pattern serializes access to a set of services that could potentially interfere with each other in some way if called simultaneously by multiple callers. In this pattern, access is serialized by providing a locking mechanism that prevents other threads from invoking the services while locked.

4.5.1 Abstract

The Guarded Call Pattern uses semaphores to protect a related set of services (functions) from simultaneous access by multiple clients in a preemptive multitasking environment, enforcing what is known as mutual exclusion. The Guarded Call Pattern provides timely access to services, provided that they are not in use by other tasks. However, if not mixed with other patterns, the use of this pattern can lead to unbounded priority inversion, such as that shown in Figure 4-5.

4.5.2 Problem

The problem this pattern addresses is the need for a timely synchronization or data exchange between threads. In such cases, it may not be possible to wait for an asynchronous (queuing) rendezvous. A synchronous (function call) rendezvous can be made more timely, but this must be done carefully to avoid data corruption and erroneous computation.

4.5.3 Pattern Structure

The pattern structure for this pattern is shown in Figure 4-15. In this case, multiple PreemptiveTasks access the GuardedResource through its functions. Inside those functions, calls are made to lock and release the resource. The scheduler supports blocking by placing a task on the blocked queue when it invokes a locked semaphore and unblocks it when that semaphore is released. The scheduler must implement the semaphore lock() function as a critical region to eliminate the possibility of race conditions.

Figure 4-15 Guarded Call Pattern

4.5.4 Collaboration Roles

This section describes the roles for this pattern.

4.5.4.1 GuardedResource

The GuardedResource is a shared resource that employs mutex semaphores to enforce mutual exclusion of access to its methods. Inside the relevant functions, and prior to the access of the resource per se, a call is made to the lock() function of the associated Semaphore instance. If the Semaphore is in the unlocked state, then it becomes locked; if it is already in the locked state, then the Semaphore signals the StaticPriorityScheduler to block the currently running task. All the relevant functions of a specific resource instance must share the same instance of the Semaphore class; this ensures that they are protected together as a unit from simultaneous access by multiple PreemptiveTasks.

4.5.4.2 PreemptiveTask

The PreemptiveTask represents an «active» class that is run via a preemptive multitasking scheduler. It accesses the GuardedResource by invoking the latter’s functions, which in turn are protected by the Semaphore.

4.5.4.3 Semaphore

The Semaphore is a mutual exclusion semaphore class that serializes access to a GuardedResource. The guarded functions of the GuardedResource invoke the lock() function of the Semaphore whenever they are called and release() once the service is complete. Other client threads that attempt to invoke a service while its associated Semaphore is locked become blocked until the Semaphore is unlocked. The blocking is performed when the Semaphore signals the StaticPriorityScheduler that a call attempt was made by the currently active thread. The mutexID identifies which Mutex is blocking the task. This element is usually provided by the RTOS. Note this is the same element that appears in the Static Priority Pattern earlier in this chapter.

4.5.4.4 StaticPriorityScheduler

This object orchestrates the execution of multiple threads based on their priority. In this pattern, we are focusing on the responsibility (not shown) of this element to block a task if it attempts to lock an already-locked Semaphore and to unblock all tasks blocked on the mutexID when this lock is released. This element is usually provided by the RTOS. Note this is the same element that appears in the Static Priority Pattern earlier in this chapter.

4.5.5 Consequences

This pattern provides timely access to a resource while preventing the multiple simultaneous accesses that can result in data corruption and system misbehavior. If the resource is not locked, then access proceeds without delay; if the resource is currently locked, then the caller must block until the lock on the resource is released. Naïve use can result in unbounded priority inversion in the manner described in Figure 4-5; see Section 4.5.7, Related Patterns, for a discussion of an RTOS-supported solution.

4.5.6 Implementation Strategies

The tricky part of implementing the pattern lies in the implementation of the Mutex. Usually, the RTOS will provide a Semaphore instance, accessible through operations such as:

  • OSSemaphore OS_create_semaphore()

    creates a semaphore and returns a pointer to it.

  • void OS_destroy_semaphore(OSSemaphore*)

    destroys the semaphore pointed at.

  • void OS_lock_semaphore(OSSemaphore*)

    locks the relevant semaphore. If called with an unlocked OSSemaphore, it becomes locked. If already locked, then it blocks the current task with its internally-stored mutexID. The scheduler must store the task data including the continuation entry point and stack.

  • void OS_release_semaphore(OSSemaphore *)

    releases the lock on the relevant semaphore and unblocks all tasks blocked on its internally-stored MutexID.

Of course, the names and parameters of the services are OS specific.
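.
Tying these services together, the body of each guarded function follows a simple lock/access/release skeleton. A sketch, assuming the resource stores its semaphore pointer in a sema field (as the KinematicData example below does) and exposes an illustrative int value attribute:

typedef struct {
    int value;                /* illustrative protected attribute */
    struct OSSemaphore* sema; /* guards all of this resource's functions */
} GuardedResource;

int GuardedResource_getValue(GuardedResource* const me) {
    int result;
    OS_lock_semaphore(me->sema);    /* blocks if another task holds the lock */
    result = me->value;             /* access the resource per se */
    OS_release_semaphore(me->sema); /* unblocks a waiting task, if any */
    return result;
}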

4.5.7 Related Patterns

The Guarded Call Pattern is used with Static Priority or other preemptive multitasking environments. It is a less draconian policy than is enforced by the Critical Region Pattern (above) because it does not interfere with the execution of higher priority tasks that don’t need access to the resource. It is more responsive than the Queuing Pattern (below) because it doesn’t need to wait for the resource’s thread to become active to check for the request.

There is one very important class of variants of this pattern that solves the problem of unbounded priority inversion using the concept of priority inheritance. The basic idea is that each resource has an additional attribute (variable) known as its priority ceiling, which is equal to the priority of the highest-priority task that can ever access the resource. The underlying problem with a naïve implementation of the Guarded Call Pattern is that intermediate-priority tasks that don’t need the resource can preempt the low-priority task that currently owns the resource needed by the blocked high-priority task. There can be any number of such intermediate-priority tasks, leading to chained blocking of the high-priority task. The priority inheritance solution is to elevate the priority of the low-priority task as soon as the high-priority task attempts to lock the resource owned by the low-priority task. This can be done by creating a semaphore that implements priority inheritance. This is the approach taken, for example, by VxWorks™ from Wind River, which implements a set of mutex semaphore functions such as:


SEM_ID semM;

semM = semMCreate(...);

semTake(semM, FLAG);

semGive(semM);

Values for FLAG are timeouts such as WAIT_FOREVER. The option important to this discussion, SEM_INVERSION_SAFE, is actually passed to semMCreate() when the mutex is created; when that option is used, VxWorks’ mutex semaphores implement priority inheritance.
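
A sketch of how this looks in practice follows; the function names are from the VxWorks semLib API, and error handling is omitted:

#include <semLib.h>

static SEM_ID semM;

void resourceInit(void) {
    /* create a mutex that queues pended tasks by priority
       and is safe against unbounded priority inversion */
    semM = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
}

void resourceAccess(void) {
    semTake(semM, WAIT_FOREVER); /* lock; owner may inherit a higher priority */
    /* ... use the shared resource ... */
    semGive(semM);               /* unlock; owner's priority reverts */
}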

In Figure 4-5, if the locking mutex is implemented in this way, the following sequence of events holds:

  1. Task Z runs and locks Resource R at its nominal priority of 100.

  2. Task A runs at priority 1, and attempts to lock the mutex guarding R and blocks. At this point, the priority of Task Z is elevated to that of Task A (i.e., priority 1).

  3. Task Y becomes ready to run. With a normal (naïve) mutex, it would run and preempt Task Z. However, its priority (99) is lower than the priority at which Task Z is now running (1), so it remains on the Task Ready queue.

  4. Task Z completes its use of the Resource R and unlocks it. At this point, the priority of Task Z is demoted to its nominal priority (100).

  5. Task A unblocks and now uses the resource and completes within its deadline.

  6. Task Y now, being the highest ready task, runs.

  7. Task Y completes, and finally, Task Z is allowed to complete its execution.

You can see in this simple example how priority inheritance prevented the unbounded priority inversion that a normal counting semaphore would allow.

4.5.8 Example

The example shown in Figure 4-16 is a portion of a flight control system. In it, the ship’s kinematic data (position and attitude) are managed by KinematicData as a shared resource for multiple clients. Since the setting and getting of these values is not an atomic process, there is a possibility of displaying invalid data if, for example, the FlightDataDisplay’s showFlightData() function is preempted by the AttitudeController updating the attitude data. The use of semaphores in the methods ensures that this cannot happen.

Figure 4-16 Guarded Call example

The next several code listings (Code Listing 4-18 through Code Listing 4-30) give the code for the elements of the Guarded Call example in Figure 4-16. The first gives the specification for the OS calls used for the semaphores. The next four give the code for the complex data types Position and Attitude.

Code Listing 4-23 and Code Listing 4-24 give the code for the resource (KinematicData) that uses the OSSemaphore to guard its operations to set and get that data. The initializer KinematicData_Init() creates a semaphore via a call to OS_create_semaphore(), while the clean-up function KinematicData_Cleanup() calls OS_destroy_semaphore() to free up its memory. Its accessor and mutator methods to get and set attitude and position data all invoke OS_lock_semaphore(), do their processing, and then invoke OS_release_semaphore().

The code listings after that show the code for the clients: the FlightDataDisplay, AttitudeController, and Navigator. These all share the information stored in the KinematicData object and so must be protected from data corruption.

Code Listing 4-18 GuardedCallExample.h

#ifndef GuardedCallExample_H

#define GuardedCallExample_H

struct Attitude;

struct AttitudeController;

struct FlightDataDisplay;

struct GuardTester;

struct GuardedCallBuilder;

struct KinematicData;

struct Navigator;

struct Position;

/* OS provided types and functions */

struct OSSemaphore;

struct OSSemaphore* OS_create_semaphore(void);

void OS_destroy_semaphore(struct OSSemaphore* sPtr);

void OS_lock_semaphore(struct OSSemaphore* sPtr);

void OS_release_semaphore(struct OSSemaphore* sPtr);

#endif

Code Listing 4-19 Position.h

#ifndef Position_H

#define Position_H

#include "GuardedCallExample.h"

typedef struct Position Position;

struct Position {

double altitude; /*# # attribute altitude */

double latitude; /*# # attribute latitude */

double longitude; /*# # attribute longitude */

};

/* Constructors and destructors:*/

void Position_Init(Position* const me);

void Position_Cleanup(Position* const me);

/* Operations */

double Position_getAltitude(Position* const me);

double Position_getLatitude(Position* const me);

double Position_getLongitude(Position* const me);

void Position_setAltitude(Position* const me, double x);

void Position_setLatitude(Position* const me, double x);

void Position_setLongitude(Position* const me, double x);

Position * Position_Create(void);

void Position_Destroy(Position* const me);

#endif

Code Listing 4-20 Position.c

#include "Position.h"

void Position_Init(Position* const me) {

}

void Position_Cleanup(Position* const me) {

}

double Position_getAltitude(Position* const me) {

return me->altitude;

}

double Position_getLatitude(Position* const me) {

return me->latitude;

}

double Position_getLongitude(Position* const me) {

return me->longitude;

}

void Position_setAltitude(Position* const me, double x) {

me->altitude = x;

}

void Position_setLatitude(Position* const me, double x) {

me->latitude = x;

}

void Position_setLongitude(Position* const me, double x) {

me->longitude = x;

}

Position * Position_Create(void) {

Position* me = (Position *) malloc(sizeof(Position));

if(me!=NULL) {

Position_Init(me);

}

return me;

}

void Position_Destroy(Position* const me) {

if(me!=NULL) {

Position_Cleanup(me);

}

free(me);

}

Code Listing 4-21 Attitude.h

#ifndef Attitude_H

#define Attitude_H

#include "GuardedCallExample.h"

typedef struct Attitude Attitude;

struct Attitude {

double pitch; /*## attribute pitch */

double roll; /*## attribute roll */

double yaw; /*## attribute yaw */

};

/* Constructors and destructors:*/

void Attitude_Init(Attitude* const me);

void Attitude_Cleanup(Attitude* const me);

/* Operations */

double Attitude_getPitch(Attitude* const me);

double Attitude_getRoll(Attitude* const me);

double Attitude_getYaw(Attitude* const me);

void Attitude_setPitch(Attitude* const me, double x);

void Attitude_setRoll(Attitude* const me, double x);

void Attitude_setYaw(Attitude* const me, double x);

Attitude * Attitude_Create(void);

void Attitude_Destroy(Attitude* const me);

#endif

Code Listing 4-22 Attitude.c

#include "Attitude.h"

void Attitude_Init(Attitude* const me) {

}

void Attitude_Cleanup(Attitude* const me) {

}

double Attitude_getPitch(Attitude* const me) {

return me->pitch;

}

double Attitude_getRoll(Attitude* const me) {

return me->roll;

}

double Attitude_getYaw(Attitude* const me) {

return me->yaw;

}

void Attitude_setPitch(Attitude* const me, double x) {

me->pitch = x;

}

void Attitude_setRoll(Attitude* const me, double x) {

me->roll = x;

}

void Attitude_setYaw(Attitude* const me, double x) {

me->yaw = x;

}

Attitude * Attitude_Create(void) {

Attitude* me = (Attitude *) malloc(sizeof(Attitude));

if(me!=NULL)

{

Attitude_Init(me);

}

return me;

}

void Attitude_Destroy(Attitude* const me) {

if(me!=NULL)

{

Attitude_Cleanup(me);

}

free(me);

}

Code Listing 4-23 KinematicData.h

#ifndef KinematicData_H

#define KinematicData_H

#include "GuardedCallExample.h"

#include "Attitude.h"

#include "OSSemaphore.h"

#include "Position.h"

typedef struct KinematicData KinematicData;

struct KinematicData {

struct Attitude attitude;

struct Position position;

OSSemaphore* sema; /*## mutex semaphore */

};

/* Constructors and destructors:*/

void KinematicData_Init(KinematicData* const me);

void KinematicData_Cleanup(KinematicData* const me);

/* Operations */

Attitude KinematicData_getAttitude(KinematicData* const me);

struct Position KinematicData_getPosition(KinematicData* const me);

void KinematicData_setAttitude(KinematicData* const me, Attitude a);

void KinematicData_setPosition(KinematicData* const me, Position p);

KinematicData * KinematicData_Create(void);

void KinematicData_Destroy(KinematicData* const me);

#endif

Code Listing 4-24 KinematicData.c

#include "KinematicData.h"

void KinematicData_Init(KinematicData* const me) {

Attitude_Init(&(me->attitude));

Position_Init(&(me->position));

me->sema = OS_create_semaphore();

}

void KinematicData_Cleanup(KinematicData* const me) {

OS_destroy_semaphore(me->sema);

}

struct Position KinematicData_getPosition(KinematicData* const me) {

Position p;

/* engage the lock */

OS_lock_semaphore(me->sema);

p = me->position;

/* release the lock */

OS_release_semaphore(me->sema);

return p;

}

void KinematicData_setAttitude(KinematicData* const me, Attitude a) {

/* engage the lock */

OS_lock_semaphore(me->sema);

me->attitude = a;

/* release the lock */

OS_release_semaphore(me->sema);

}

void KinematicData_setPosition(KinematicData* const me, Position p) {

/* engage the lock */

OS_lock_semaphore(me->sema);

me->position = p;

/* release the lock */

OS_release_semaphore(me->sema);

}

KinematicData * KinematicData_Create(void) {

KinematicData* me = (KinematicData *) malloc(sizeof(KinematicData));

if(me!=NULL) {

KinematicData_Init(me);

}

return me;

}

void KinematicData_Destroy(KinematicData* const me) {

if(me!=NULL) {

KinematicData_Cleanup(me);

}

free(me);

}

Attitude KinematicData_getAttitude(KinematicData* const me) {

Attitude a;

/* engage the lock */

OS_lock_semaphore(me->sema);

a = me->attitude;

/* release the lock */

OS_release_semaphore(me->sema);

return a;

}

Code Listing 4-25 AttitudeController.h

#ifndef AttitudeController_H

#define AttitudeController_H

#include "GuardedCallExample.h"

#include "Attitude.h"

struct KinematicData;

typedef struct AttitudeController AttitudeController;

struct AttitudeController {

struct Attitude ownShipAttitude;

struct KinematicData* itsKinematicData;

};

/* Constructors and destructors:*/

void AttitudeController_Init(AttitudeController* const me);

void AttitudeController_Cleanup(AttitudeController* const me);

/* Operations */

void AttitudeController_manageAttitude(AttitudeController* const me);

struct KinematicData* AttitudeController_getItsKinematicData(const AttitudeController* const me);

void AttitudeController_setItsKinematicData(AttitudeController* const me, struct KinematicData* p_KinematicData);

AttitudeController * AttitudeController_Create(void);

void AttitudeController_Destroy(AttitudeController* const me);

#endif

Code Listing 4-26 AttitudeController.c

#include "AttitudeController.h"

#include "KinematicData.h"

static void cleanUpRelations(AttitudeController* const me);

void AttitudeController_Init(AttitudeController* const me) {

Attitude_Init(&(me->ownShipAttitude));

me->itsKinematicData = NULL;

}

void AttitudeController_Cleanup(AttitudeController* const me) {

cleanUpRelations(me);

}

void AttitudeController_manageAttitude(AttitudeController* const me) {

KinematicData_setAttitude(me->itsKinematicData, me->ownShipAttitude);

}

struct KinematicData* AttitudeController_getItsKinematicData(const AttitudeController* const me) {

return (struct KinematicData*)me->itsKinematicData;

}

void AttitudeController_setItsKinematicData(AttitudeController* const me, struct KinematicData* p_KinematicData) {

me->itsKinematicData = p_KinematicData;

}

AttitudeController * AttitudeController_Create(void) {

AttitudeController* me = (AttitudeController *) malloc(sizeof(AttitudeController));

if(me!=NULL) {

AttitudeController_Init(me);

}

return me;

}

void AttitudeController_Destroy(AttitudeController* const me) {

if(me!=NULL) {

AttitudeController_Cleanup(me);

}

free(me);

}

static void cleanUpRelations(AttitudeController* const me) {

if(me->itsKinematicData != NULL) {

me->itsKinematicData = NULL;

}

}

Code Listing 4-27 Navigator.h

#ifndef Navigator_H

#define Navigator_H

#include "GuardedCallExample.h"

#include "Position.h"

struct KinematicData;

typedef struct Navigator Navigator;

struct Navigator {

struct Position ownShipPosition;

struct KinematicData* itsKinematicData;

};

/* Constructors and destructors:*/

void Navigator_Init(Navigator* const me);

void Navigator_Cleanup(Navigator* const me);

/* Operations */

void Navigator_updatePosition(Navigator* const me);

struct KinematicData* Navigator_getItsKinematicData(const Navigator* const me);

void Navigator_setItsKinematicData(Navigator* const me, struct KinematicData* p_KinematicData);

Navigator * Navigator_Create(void);

void Navigator_Destroy(Navigator* const me);

#endif

Code Listing 4-28 Navigator.c

#include "Navigator.h"

#include "KinematicData.h"

static void cleanUpRelations(Navigator* const me);

void Navigator_Init(Navigator* const me) {

Position_Init(&(me->ownShipPosition));

me->itsKinematicData = NULL;

}

void Navigator_Cleanup(Navigator* const me) {

cleanUpRelations(me);

}

void Navigator_updatePosition(Navigator* const me) {

KinematicData_setPosition(me->itsKinematicData, me->ownShipPosition);

}

struct KinematicData* Navigator_getItsKinematicData(const Navigator* const me) {

return (struct KinematicData*)me->itsKinematicData;

}

void Navigator_setItsKinematicData(Navigator* const me, struct KinematicData* p_KinematicData) {

me->itsKinematicData = p_KinematicData;

}

Navigator * Navigator_Create(void) {

Navigator* me = (Navigator *) malloc(sizeof(Navigator));

if(me!=NULL) {

Navigator_Init(me);

}

return me;

}

void Navigator_Destroy(Navigator* const me) {

if(me!=NULL) {

Navigator_Cleanup(me);

}

free(me);

}

static void cleanUpRelations(Navigator* const me) {

if(me->itsKinematicData != NULL) {

me->itsKinematicData = NULL;

}

}

Code Listing 4-29 FlightDataDisplay.h

#ifndef FlightDataDisplay_H

#define FlightDataDisplay_H

#include "GuardedCallExample.h"

struct KinematicData;

typedef struct FlightDataDisplay FlightDataDisplay;

struct FlightDataDisplay {

struct KinematicData* itsKinematicData;

};

/* Constructors and destructors:*/

void FlightDataDisplay_Init(FlightDataDisplay* const me);

void FlightDataDisplay_Cleanup(FlightDataDisplay* const me);

/* Operations */

void FlightDataDisplay_showFlightData(FlightDataDisplay* const me);

struct KinematicData* FlightDataDisplay_getItsKinematicData(const FlightDataDisplay* const me);

void FlightDataDisplay_setItsKinematicData(FlightDataDisplay* const me, struct KinematicData* p_KinematicData);

FlightDataDisplay * FlightDataDisplay_Create(void);

void FlightDataDisplay_Destroy(FlightDataDisplay* const me);

#endif

Code Listing 4-30 FlightDataDisplay.c

#include "FlightDataDisplay.h"

#include "KinematicData.h"

#include <stdio.h>

#include <stdlib.h> /* for malloc() and free() */

static void cleanUpRelations(FlightDataDisplay* const me);

void FlightDataDisplay_Init(FlightDataDisplay* const me) {

me->itsKinematicData = NULL;

}

void FlightDataDisplay_Cleanup(FlightDataDisplay* const me) {

cleanUpRelations(me);

}

void FlightDataDisplay_showFlightData(FlightDataDisplay* const me) {

Attitude a;

Position p;

a = KinematicData_getAttitude(me->itsKinematicData);

p = KinematicData_getPosition(me->itsKinematicData);

printf("Roll, pitch, yaw = %f, %f, %f \n", a.roll, a.pitch, a.yaw);

printf("Lat, Long, Altitude = %f, %f, %f\n", p.latitude, p.longitude, p.altitude);

}

struct KinematicData* FlightDataDisplay_getItsKinematicData(const FlightDataDisplay* const me) {

return (struct KinematicData*)me->itsKinematicData;

}

void FlightDataDisplay_setItsKinematicData(FlightDataDisplay* const me, struct KinematicData* p_KinematicData) {

me->itsKinematicData = p_KinematicData;

}

FlightDataDisplay * FlightDataDisplay_Create(void) {

FlightDataDisplay* me = (FlightDataDisplay *) malloc(sizeof(FlightDataDisplay));

if(me!=NULL) {

FlightDataDisplay_Init(me);

}

return me;

}

void FlightDataDisplay_Destroy(FlightDataDisplay* const me) {

if(me!=NULL) {

FlightDataDisplay_Cleanup(me);

}

free(me);

}

static void cleanUpRelations(FlightDataDisplay* const me) {

if(me->itsKinematicData != NULL) {

me->itsKinematicData = NULL;

}

}

4.6 Queuing Pattern

The Queuing Pattern is the most common implementation of asynchronous task communication. It provides a simple means of communication between tasks that are uncoupled in time. The Queuing Pattern accomplishes this communication by storing messages in a queue, a nominally first-in-first-out data structure. The sender deposits a message in the queue and, some time later, the receiver withdraws the message from the queue. The pattern also provides a simple means of serializing access to a shared resource: the access messages are queued and handled at a later time, which avoids the mutual exclusion problems common to shared resources.

4.6.1 Abstract

The Message Queuing Pattern uses asynchronous communications, implemented via queued messages, to synchronize and share information among tasks. This approach has the advantage of simplicity and does not experience mutual exclusion problems because no resource is shared by reference. Any information shared among threads is passed by value to the separate thread. While this limits the complexity of the collaboration among the threads, this approach in its pure form is immune to the standard resource corruption problems that plague concurrent systems that share information passed by reference. In pass-by-value sharing, a copy of the information is made and sent to the receiving thread for processing. The receiving thread fully owns the data it receives and so may modify them freely without concern for corruption due to multiple writers or due to sharing them among a writer and multiple readers. A downside of the pattern is that a message isn’t processed immediately after the sender posts it; processing waits until the receiving task runs and can handle the waiting messages.

4.6.2 Problem

In multithreaded systems, tasks must synchronize and share information with one another. Two primary things must be accomplished. First, the tasks must synchronize to permit sharing of the information. Second, the information must be shared in such a way that there is no chance of corruption or race conditions. This pattern addresses such task interaction.

4.6.3 Pattern Structure

The Queuing Pattern structure is shown in Figure 4-17. The QUEUE_SIZE declaration determines the maximum number of elements that the queue can retain. Care must be taken to ensure that this is large enough to handle the worst-case situation for your system usage, while not being so large as to waste memory.

Figure 4-17 Queuing Pattern

4.6.4 Collaboration Roles

This section describes the roles for this pattern.

4.6.4.1 Message

The Message class is an abstraction of a message of relevance between QTasks. It can be anything from a simple data value to an elaborate datagram structure such as used in TCP/IP messaging.

4.6.4.2 MessageQueue

The MessageQueue is the storage of information being exchanged between QTasks. The MessageQueue can store up to a maximum of QUEUE_SIZE message elements. The MessageQueue provides the following methods:

  • int MessageQueue_getNextIndex(MessageQueue* me, int index)

    A private (static) function that uses modulo arithmetic to compute the next valid index value.

  • int MessageQueue_insert(MessageQueue* me, Message m)

    A public function that, if the MessageQueue isn’t full, inserts a Message into the queue at the head position and updates the head index. It returns 0 if the insert succeeds and 1 if the MessageQueue was full.

  • Message* MessageQueue_remove(MessageQueue* me)

    A public function that, if the MessageQueue isn’t empty, removes the oldest message, creates a new heap variable to hold the data, updates the tail index, and returns that pointer to the caller. It returns a pointer to a Message if successful and a NULL pointer if not.

  • unsigned char MessageQueue_isFull(MessageQueue* me)

    A private function that returns 1 if the MessageQueue is full and 0 otherwise.

  • unsigned char MessageQueue_isEmpty(MessageQueue* me)

A private function that returns 1 if the MessageQueue is empty and 0 otherwise.

4.6.4.3 Mutex

The Mutex is a mutual exclusion semaphore object that serializes access to a MessageQueue. The guarded functions of the MessageQueue invoke the lock() function of the Mutex whenever they are called and release() once the service is complete. Other client threads that attempt to invoke a service when its associated Mutex is locked become blocked until the Mutex is unlocked. This element is usually provided by the RTOS. Note this is the same element that appears in the Static Priority Pattern earlier in this chapter.

4.6.4.4 QTask

The QTask class is a client of the MessageQueue and invokes either the insert() or remove() functions to access the data held there.

4.6.5 Consequences

The Queuing Pattern provides good serialization of access to the data as they are passed among tasks. The use of the mutex ensures that the MessageQueue itself isn’t corrupted by simultaneous access to the MessageQueue’s services. The reception of the data is less timely than the Guarded Call Pattern since the Queuing Pattern implements asynchronous communication.

The number of elements held by the queue can be quite large; this is useful when the producers of the elements run in a “bursty” fashion, allowing the consumer of the elements to catch up later in time. Poorly sized queues can lead to data loss (if too small) or wasted memory (if too large). Very small queues (that hold only one or two elements) can be used for simple asynchronous data exchange when data loss isn’t a problem.

4.6.6 Implementation Strategies

The simplest implementation of a queue is as an array of message elements. This has the advantage of simplicity but lacks the flexibility of a linked list.

Queues are fairly simple to implement, but there are many possible variants. Sometimes some messages are more urgent or important than others and should be processed ahead of lower-priority messages. The MessageQueue can be modified to implement priority either by adding multiple buffers (one for each priority) or by inserting each element into the queue on the basis of its message priority (although this can increase the time necessary for element insertion and deletion). In either case, the message should have a priority attribute to enable the MessageQueue to properly insert and remove the element. The multiple-buffer approach is sketched below.
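
A sketch of the multiple-buffer approach follows, reusing the MessageQueue operations named earlier; the PriorityMessageQueue wrapper and the PRIORITY_LEVELS constant are illustrative:

#define PRIORITY_LEVELS (3)

typedef struct {
    MessageQueue queues[PRIORITY_LEVELS]; /* index 0 = highest priority */
} PriorityMessageQueue;

int PriorityMessageQueue_insert(PriorityMessageQueue* const me,
                                Message m, int priority) {
    return MessageQueue_insert(&me->queues[priority], m);
}

Message* PriorityMessageQueue_remove(PriorityMessageQueue* const me) {
    int p;
    for (p = 0; p < PRIORITY_LEVELS; ++p) {
        Message* msg = MessageQueue_remove(&me->queues[p]);
        if (msg != NULL)
            return msg; /* oldest message at the highest nonempty priority */
    }
    return NULL;        /* every queue is empty */
}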

In complex systems, it may not be feasible to predict an optimal queue size. In such cases, extensible queues can be implemented. When the MessageQueue is full, more memory is allocated. A linked list implementation of the MessageQueue is more appropriate for an extensible queue. It may be difficult to determine under what circumstances it is appropriate to shrink the queue and release unused memory.
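
A linked-list sketch of such an extensible queue follows (the node and queue structs are illustrative; the mutex locking and the remove side are elided for brevity, and <stdlib.h> is assumed for malloc()):

typedef struct MsgNode MsgNode;
struct MsgNode {
    Message msg;
    MsgNode* next;
};

typedef struct {
    MsgNode* head; /* oldest element */
    MsgNode* tail; /* newest element */
} ExtensibleQueue;

int ExtensibleQueue_insert(ExtensibleQueue* const me, Message m) {
    MsgNode* n = (MsgNode*)malloc(sizeof(MsgNode));
    if (n == NULL)
        return 0;           /* allocation failed: queue cannot grow */
    n->msg = m;
    n->next = NULL;
    if (me->tail != NULL)
        me->tail->next = n; /* append after the newest element */
    else
        me->head = n;       /* queue was empty */
    me->tail = n;
    return 1;               /* success */
}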

It sometimes happens that the potential number of elements exceeds memory capacity. In this case, a cached queue can be created. In such a queue, there are essentially three storage locations: a local-memory newest-data buffer, a local-memory oldest-data buffer, and a file that holds data of intermediate age. When the local new-data buffer becomes full, it is emptied to the file system. When the local old-data buffer becomes empty, the data from the file system are loaded into the local old-data buffer and removed from the disk (or copied from the new-data buffer if the file is empty). This allows storage of potentially huge volumes of data, but caching the data can be time intensive.

4.6.7 Related Patterns

The Queuing Pattern serializes access to data by queuing data or commands, allowing the recipient to handle them one at a time. It does this asynchronously, so the time between the sending of the message and its processing is uncoupled. This may not meet the performance needs of the system. The Guarded Call Pattern also serializes access but does so synchronously, so that data and command transfers generally happen closer together in time. However, the Guarded Call Pattern can lead to unbounded priority inversion if not used carefully. In addition, if the receiver isn’t ready to receive the message from the sender, the sender must block or take another action.

4.6.8 Example

Figure 4-18 shows an example of producers and consumers that share data via a queue. The key element for this pattern is the GasDataQueue. It implements a small 10-element queue of GasData. GasData is a struct that contains an enumerated GAS_TYPE (with enumerated values O2_GAS, N2_GAS, HE_GAS, and UNKNOWN_GAS), the gas concentration (a double), and the flow rate in ccs per minute (int). Code Listing 4-31 gives the #defines and general declarations. Code Listing 4-32 and Code Listing 4-33 give the code for the GasData struct.

Code Listing 4-31 QueuingExample.h

#ifndef QueuingExample_H

#define QueuingExample_H

struct GasController;

struct GasData;

struct GasDataQueue;

struct GasDisplay;

struct GasProcessingThread;

struct HeSensor;

struct N2Sensor;

struct O2Sensor;

struct OSSemaphore;

struct SensorThread;

typedef enum GAS_TYPE {

O2_GAS,

N2_GAS,

HE_GAS,

UNKNOWN_GAS

} GAS_TYPE;

/* define the size of the queue */

#define GAS_QUEUE_SIZE (10)

/* OS semaphore services */

struct OSSemaphore* OS_create_semaphore(void);

void OS_destroy_semaphore(struct OSSemaphore* sPtr);

void OS_lock_semaphore(struct OSSemaphore* sPtr);

void OS_release_semaphore(struct OSSemaphore* sPtr);

#endif

Code Listing 4-32 GasData.h

#ifndef GasData_H

#define GasData_H

#include "QueuingExample.h"

typedef struct GasData GasData;

struct GasData {

double conc;

unsigned int flowInCCPerMin;

GAS_TYPE gType;

};

/* Constructors and destructors:*/

void GasData_Init(GasData* const me);

void GasData_Cleanup(GasData* const me);

GasData * GasData_Create(void);

void GasData_Destroy(GasData* const me);

#endif

Code Listing 4-33 GasData.c

#include "GasData.h"

void GasData_Init(GasData* const me) {

me->conc = 0.0;

me->flowInCCPerMin = 0;

me->gType = UNKNOWN_GAS;

}

void GasData_Cleanup(GasData* const me) {

}

GasData * GasData_Create(void) {

GasData* me = (GasData *) malloc(sizeof(GasData));

if(me!=NULL) {

GasData_Init(me);

}

return me;

}

void GasData_Destroy(GasData* const me) {

if(me!=NULL) {

GasData_Cleanup(me);

}

free(me);

}

Figure 4-18 Queuing Example

The GasDataQueue is the center of the pattern. Its two operations, insert() and remove(), enqueue new data and dequeue the oldest data, respectively. The insert() function returns 1 (TRUE) if it succeeded and 0 (FALSE) otherwise. The remove() function returns a pointer to a GasData instance that the client is responsible for deleting; if the remove() function fails, it returns a NULL pointer.

The class also includes some private (static) helper functions, such as getNextIndex(), which computes the next index using modulo arithmetic to wrap around the top of the queue; isEmpty(), which returns TRUE if the queue has no elements; and isFull(), which returns TRUE if there is no space to hold additional elements.

Code Listing 4-34 and Code Listing 4-35 give the C code for the GasDataQueue.

Code Listing 4-34 GasDataQueue.h

#ifndef GasDataQueue_H

#define GasDataQueue_H

#include "QueuingExample.h"

#include "GasData.h"

#include "OSSemaphore.h"

typedef struct GasDataQueue GasDataQueue;

struct GasDataQueue {

int head;

OSSemaphore * sema;

int size;

int tail;

struct GasData itsGasData[GAS_QUEUE_SIZE];

};

/* Constructors and destructors:*/

void GasDataQueue_Init(GasDataQueue* const me);

void GasDataQueue_Cleanup(GasDataQueue* const me);

/* Operations */

int GasDataQueue_insert(GasDataQueue* const me, GasData g);

GasData * GasDataQueue_remove(GasDataQueue* const me);

int GasDataQueue_getItsGasData(const GasDataQueue* const me);

GasDataQueue * GasDataQueue_Create(void);

void GasDataQueue_Destroy(GasDataQueue* const me);

#endif

Code Listing 4-35 GasDataQueue.c

#include "GasDataQueue.h"

#include <stdio.h>

#include <stdlib.h> /* for malloc() and free() */

/* private (static) methods */

static void cleanUpRelations(GasDataQueue* const me);

static int getNextIndex(GasDataQueue* const me, int index);

static unsigned char isEmpty(GasDataQueue* const me);

static unsigned char isFull(GasDataQueue* const me);

static void initRelations(GasDataQueue* const me);

void GasDataQueue_Init(GasDataQueue* const me) {

me->head = 0;

me->sema = NULL;

me->size = 0;

me->tail = 0;

initRelations(me);

me->sema = OS_create_semaphore();

}

void GasDataQueue_Cleanup(GasDataQueue* const me) {

OS_destroy_semaphore(me->sema);

cleanUpRelations(me);

}

/*

Insert puts new gas data elements into the queue

if possible. It returns 1 if successful, 0 otherwise.

*/

int GasDataQueue_insert(GasDataQueue* const me, GasData g) {

OS_lock_semaphore(me->sema);

if (!isFull(me)) {

me->itsGasData[me->head] = g;

me->head = getNextIndex(me, me->head);

++me->size;

/* instrumentation */

/* print stuff out, just to visualize the insertions */

switch (g.gType) {

case O2_GAS:

printf("+++ Oxygen ");

break;

case N2_GAS:

printf("+++ Nitrogen ");

break;

case HE_GAS:

printf("+++ Helium ");

break;

default:

printf("UNKNWON ");

break;

};

printf(" at conc %f, flow %d\n",g.conc,g.flowInCCPerMin);

printf(" Number of elements queued %d, head = %d, tail = %d\n",

me->size, me->head, me->tail);

/* end instrumentation */

OS_release_semaphore(me->sema);

return 1;

}

else { /* return error indication */

OS_release_semaphore(me->sema);

return 0;

}

}

/*

remove creates a new element on the heap, copies

the contents of the oldest data into it, and

returns the pointer. Returns NULL if the queue

is empty

*/

GasData * GasDataQueue_remove(GasDataQueue* const me) {

GasData* gPtr;

OS_lock_semaphore(me->sema);

if (!isEmpty(me)) {

gPtr = (GasData*)malloc(sizeof(GasData));

gPtr->gType = me->itsGasData[me->tail].gType;

gPtr->conc = me->itsGasData[me->tail].conc;

gPtr->flowInCCPerMin = me->itsGasData[me->tail].flowInCCPerMin;

me->tail = getNextIndex(me, me->tail);

--me->size;

/* instrumentation */

switch (gPtr->gType) {

case O2_GAS:

printf("--- Oxygen ");

break;

case N2_GAS:

printf("--- Nitrogen ");

break;

case HE_GAS:

printf("--- Helium ");

break;

default:

printf("--- UNKNWON ");

break;

};

printf(" at conc %f, flow %d\n",gPtr->conc,gPtr->flowInCCPerMin);

printf(" Number of elements queued %d, head = %d, tail = %d\n",

me->size, me->head, me->tail);

/* end instrumentation */

OS_release_semaphore(me->sema);

return gPtr;

}

else { /* if empty return NULL ptr */

OS_release_semaphore(me->sema);

return NULL;

}

}

static int getNextIndex(GasDataQueue* const me, int index) {

/* this operation computes the next index from the

first using modulo arithmetic

*/

return (index+1) % GAS_QUEUE_SIZE;

}

static unsigned char isEmpty(GasDataQueue* const me) {

return (me->size == 0);

}

static unsigned char isFull(GasDataQueue* const me) {

return (me->size == GAS_QUEUE_SIZE);

}

int GasDataQueue_getItsGasData(const GasDataQueue* const me) {

int iter = 0;

return iter;

}

GasDataQueue * GasDataQueue_Create(void) {

GasDataQueue* me = (GasDataQueue *) malloc(sizeof(GasDataQueue));

if(me!=NULL) {

GasDataQueue_Init(me);

}

return me;

}

void GasDataQueue_Destroy(GasDataQueue* const me) {

if(me!=NULL) {

GasDataQueue_Cleanup(me);

}

free(me);

}

static void initRelations(GasDataQueue* const me) {

int iter = 0;

while (iter < GAS_QUEUE_SIZE){

GasData_Init(&((me->itsGasData)[iter]));

iter++;

}

}

static void cleanUpRelations(GasDataQueue* const me) {

int iter = 0;

while (iter < GAS_QUEUE_SIZE){

GasData_Cleanup(&((me->itsGasData)[iter]));

iter++;

}

}

In the example (Figure 4-18), two threads run clients. The SensorThread itself owns three clients: an O2Sensor, an N2Sensor, and an HeSensor. When this thread runs its main function (SensorThread_updateData()), it calls the getXXData() function of each sensor to produce the data, and then marshals the data up into GasData structs and sends them off. The relevant code (less the task initiation and supporting functions) for the SensorThread is given in Code Listing 4-36 and Code Listing 4-37.

Code Listing 4-36 SensorThread.h

/* initial declarations above */

typedef struct SensorThread SensorThread;

struct SensorThread {

struct GasDataQueue* itsGasDataQueue;

struct HeSensor itsHeSensor;

struct N2Sensor itsN2Sensor;

struct O2Sensor itsO2Sensor;

};

/* Operations */

void SensorThread_updateData(SensorThread* const me);

/* other operations declared too */

#endif

Code Listing 4-37 updateData function of SensorThread.c

/*

updateData runs every period of the sensor thread

and calls the getXXData() function of each of

its sensors, then uses a random number generator

to decide which data should be published to the

GasDataQueue.

*/

void SensorThread_updateData(SensorThread* const me) {

unsigned char success;

GasData g;

GasData_Init(&g);

O2Sensor_getO2Data(&(me->itsO2Sensor));

N2Sensor_getN2Data(&(me->itsN2Sensor));

HeSensor_getHeData(&(me->itsHeSensor));

if (rand() > RAND_MAX / 3) {

g.gType = HE_GAS;

g.conc = me->itsHeSensor.conc;

g.flowInCCPerMin = me->itsHeSensor.flow;

success = GasDataQueue_insert(me->itsGasDataQueue, g);

if (!success)

printf("Helium Gas Data queue insertion failed!\n");

};

if (rand() > RAND_MAX / 3) {

g.gType = N2_GAS;

g.conc = me->itsN2Sensor.conc;

g.flowInCCPerMin = me->itsN2Sensor.flow;

success = GasDataQueue_insert(me->itsGasDataQueue, g);

if (!success)

printf("Nitrogen Gas Data queue insertion failed!\n");

};

if (rand() > RAND_MAX / 3) {

g.gType = O2_GAS;

g.conc = me->itsO2Sensor.conc;

g.flowInCCPerMin = me->itsO2Sensor.flow;

success = GasDataQueue_insert(me->itsGasDataQueue, g);

if (!success)

printf("Oxygen Gas Data queue insertion failed! \n");

}

}

The sensors in this example are pretty simple (they just increment their values over time), but they illustrate the division of labor: the sensors merely produce the data, while the SensorThread_updateData() function is responsible for pushing the data out to the queue. Code Listing 4-38 and Code Listing 4-39 show the code for one of these sensors.

Code Listing 4-38 O2Sensor.h

#ifndef O2Sensor_H

#define O2Sensor_H

#include "QueuingExample.h"

typedef struct O2Sensor O2Sensor;

struct O2Sensor {

double conc;

unsigned int flow;

};

/* Constructors and destructors:*/

void O2Sensor_Init(O2Sensor* const me);

void O2Sensor_Cleanup(O2Sensor* const me);

/* Operations */

void O2Sensor_getO2Data(O2Sensor* const me);

O2Sensor * O2Sensor_Create(void);

void O2Sensor_Destroy(O2Sensor* const me);

#endif

Code Listing 4-39 O2Sensor.c

#include "O2Sensor.h"

#include <stdlib.h>

#include <stdio.h>

void O2Sensor_Init(O2Sensor* const me) {

me->conc = 0.0;

me->flow = 0;

}

void O2Sensor_Cleanup(O2Sensor* const me) {

}

/*

getO2Data() is where the sensor class would

actually acquire the data. Here it just

augments it.

*/

void O2Sensor_getO2Data(O2Sensor* const me) {

me->conc += 20;

me->flow += 25;

}

O2Sensor * O2Sensor_Create(void) {

O2Sensor* me = (O2Sensor *) malloc(sizeof(O2Sensor));

if(me!=NULL) {

O2Sensor_Init(me);

}

return me;

}

void O2Sensor_Destroy(O2Sensor* const me) {

if(me!=NULL) {

O2Sensor_Cleanup(me);

}

free(me);

}

On the consumer side, the GasProcessingThread_processData() runs periodically and pulls data out. If there are data available, it calls the function GasController_handleGasData() of its nested instance of GasController. This function just prints out the data (by calling the GasDisplay_printGasData() function) but one can imagine that it might find more interesting things to do with the data, such as maintain a constant gas delivery of appropriate concentrations. Code Listing 4-40 through Code Listing 4-43 show the code for these two elements.

Code Listing 4-40 GasController.h

#ifndef GasController_H

#define GasController_H

#include "QueuingExample.h"

#include "GasData.h"

struct GasDisplay;

typedef struct GasController GasController;

struct GasController {

struct GasDisplay* itsGasDisplay;

};

/* Constructors and destructors:*/

void GasController_Init(GasController* const me);

void GasController_Cleanup(GasController* const me);

/* Operations */

void GasController_handleGasData(GasController* const me, GasData * gPtr);

struct GasDisplay* GasController_getItsGasDisplay(const GasController* const me);

void GasController_setItsGasDisplay(GasController* const me, struct GasDisplay* p_GasDisplay);

GasController * GasController_Create(void);

void GasController_Destroy(GasController* const me);

#endif

Code Listing 4-41 GasController.c

#include "GasController.h"

#include "GasDisplay.h"

#include <stdio.h>

#include <stdlib.h> /* for malloc() and free() */

static void cleanUpRelations(GasController* const me);

void GasController_Init(GasController* const me) {

me->itsGasDisplay = NULL;

}

void GasController_Cleanup(GasController* const me) {

cleanUpRelations(me);

}

void GasController_handleGasData(GasController* const me, GasData * gPtr) {

if (gPtr) {

GasDisplay_printGasData(me->itsGasDisplay, gPtr->gType, gPtr->conc, gPtr->flowInCCPerMin);

free(gPtr);

};

}

struct GasDisplay* GasController_getItsGasDisplay(const GasController* const me) {

return (struct GasDisplay*)me->itsGasDisplay;

}

void GasController_setItsGasDisplay(GasController* const me, struct GasDisplay* p_GasDisplay) {

me->itsGasDisplay = p_GasDisplay;

}

GasController * GasController_Create(void) {

GasController* me = (GasController *) malloc(sizeof(GasController));

if(me!=NULL)

{

GasController_Init(me);

}

return me;

}

void GasController_Destroy(GasController* const me) {

if(me!=NULL)

{

GasController_Cleanup(me);

}

free(me);

}

static void cleanUpRelations(GasController* const me) {

if(me->itsGasDisplay != NULL)

{

me->itsGasDisplay = NULL;

}

}

Code Listing 4-42 GasDisplay.h

#include "QueuingExample.h"

#include "GasData.h"

typedef struct GasDisplay GasDisplay;

struct GasDisplay {

int screenWidth;

};

/* Constructors and destructors:*/

void GasDisplay_Init(GasDisplay* const me);

void GasDisplay_Cleanup(GasDisplay* const me);

/* Operations */

void GasDisplay_printGasData(GasDisplay* const me, const GAS_TYPE gasType, double gas_conc, unsigned int gas_flow);

GasDisplay * GasDisplay_Create(void);

void GasDisplay_Destroy(GasDisplay* const me);

#endif

Code Listing 4-43 GasDisplay.c

#include "GasDisplay.h"

#include <stdio.h>

#include <stdlib.h> /* for malloc() and free() */

void GasDisplay_Init(GasDisplay* const me) {

me->screenWidth = 80;

}

void GasDisplay_Cleanup(GasDisplay* const me) {

}

void GasDisplay_printGasData(GasDisplay* const me, const GAS_TYPE gasType, double gas_conc, unsigned int gas_flow) {

printf("\n");

switch (gasType) {

case O2_GAS:

printf("Oxygen ");

break;

case N2_GAS:

printf("Nitrogen ");

break;

case HE_GAS:

printf("Helium ");

break;

default:

printf("UNKNWON ");

break;

};

printf("Conc %f, Flow in CC/Min %d\n", gas_conc, gas_flow);

printf("\n");

}

GasDisplay * GasDisplay_Create(void) {

GasDisplay* me = (GasDisplay *) malloc(sizeof(GasDisplay));

if(me!=NULL) {

GasDisplay_Init(me);

}

return me;

}

void GasDisplay_Destroy(GasDisplay* const me) {

if(me!=NULL) {

GasDisplay_Cleanup(me);

}

free(me);

}

Figure 4-19 shows a four-second run of the example, in which the GasProcessingThread runs first, at a period of 1000 ms, and the SensorThread runs second, at a period of 500 ms, with a queue size of 10. You can see that even though each of the three gas sensors produces data only probabilistically during an interval, data are put in faster than they are removed and the queue eventually fills up. The insert operation is instrumented with printf() statements to show when data are inserted. The GasController’s handleGasData() is invoked every period of the GasProcessingThread, and it calls the GasDisplay’s printGasData() function.

Figure 4-19 Sample run of the Queuing example

4.7 Rendezvous Pattern

Tasks must synchronize in various ways. The Queuing and Guarded Call Patterns provide mechanisms for synchronization around a simple function call, the sharing of a single resource, or the passing of data. But what if the conditions required for synchronization are more complex? The Rendezvous Pattern reifies the synchronization strategy as an object in its own right and therefore supports even the most complex needs for task synchronization.

4.7.1 Abstract

A precondition is something that is specified to be true prior to an action or activity. A precondition is a type of constraint that is usually generative; that is, it can be used to generate code either to force the precondition to be true or to check that it is true.

The Rendezvous Pattern is concerned with modeling the preconditions for synchronization or rendezvous of threads as a separate explicit object with its own data and functions. It is a general pattern and easy to apply to ensure that arbitrarily complex sets of preconditions can be met at run-time. The basic behavioral model is that as each thread becomes ready to rendezvous, it registers with the Rendezvous class, and then blocks until the Rendezvous class releases it to run. Once the set of preconditions is met, then the registered tasks are released to run using whatever scheduling policy is currently in force.

4.7.2 Problem

The Rendezvous Pattern solves the problems when a set of tasks must synchronize in a complex fashion.

4.7.3 Pattern Structure

Figure 4-20 shows the structure of the Rendezvous Pattern. Its structural simplicity belies its usefulness in managing complex task interactions.

Figure 4-20 Rendezvous Pattern

4.7.4 Collaboration Roles

This section describes the roles for this pattern.

4.7.4.1 Rendezvous

The Rendezvous class manages the synchronization. It does this through its two primary functions:

  • void reset(void)

This function resets the synchronization criteria back to their initial state.

  • void synchronize(void)

This function is called when a task wants to synchronize. If the criteria are not met, then the task is blocked in some way (see Implementation Strategies, below). This is commonly done either through the use of the Observer Pattern (see Chapter 3) or the Guarded Call Pattern.

The complexity of this class resides primarily in the synchronize() function, which evaluates the synchronization preconditions to see whether they are met. These conditions may be internal (such as in the Thread Barrier variant of the pattern used for the example) or external, or any combination of the two.

It is important to note that the Rendezvous object itself doesn't normally identify the threads as they rendezvous; as far as the Rendezvous object is concerned, the tasks are anonymous. If thread identity matters, such as when the strategy calls for tasks to rendezvous in a specific order, then the synchronize() function can accept that information in the form of one or more parameters.

4.7.4.2 Semaphore

This is a normal counting semaphore. The semaphore provides the following operations:

  • Semaphore* create_semaphore(void)

  • void destroy_semaphore(Semaphore*)

  • void lock_semaphore(Semaphore*)

  • void release_semaphore(Semaphore*)

4.7.4.3 SynchronizingThread

This element represents a thread that uses the Rendezvous object for synchronization. It runs until it reaches its synchronization point and then calls the synchronize() function.

There are a number of ways to implement the pattern, resulting in minor changes to the parameters and return types of the pattern elements.

4.7.5 Consequences

In this pattern, two or more tasks can be synchronized by any arbitrarily complex strategy that can be encoded in the Rendezvous class. The pattern itself is simple and flexible and easily lends itself to specialization and adaptation.

The pattern is most appropriate when the tasks must halt at their synchronization point; that is, the pattern implements a blocking rendezvous. If a nonblocking rendezvous is desired, this pattern can be mixed with the Queuing Pattern or with the Observer Pattern, using callbacks to a notify() function within the SynchronizingThread.

4.7.6 Implementation Strategies

This pattern may be implemented with the Observer Pattern from Chapter 3 or with the Guarded Call Pattern from this chapter. If the pattern is implemented with the Observer Pattern, then the tasks must register with the address of a function to call when the synchronization criteria have been met. If the pattern is implemented with the Guarded Call Pattern, then each Rendezvous object owns a single semaphore that is locked when the Rendezvous object is created. Each calling task then calls the synchronize() function when it wants to synchronize. When the synchronization criteria are met, the semaphore is released, and the tasks are then all free to run according to the normal scheduling strategy.
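
To make the Observer-based strategy concrete, here is a minimal sketch. The type and function names are illustrative assumptions, and the mutex that must guard the waiter list is omitted for brevity. Each task passes the address of a function to be called when the criteria are met:

#define MAX_WAITERS 10

typedef void (*NotifyFn)(void);

typedef struct ObserverRendezvous {
    NotifyFn waiters[MAX_WAITERS]; /* registered callbacks */
    int nWaiters;                  /* tasks registered so far */
    int expected;                  /* tasks required to rendezvous */
} ObserverRendezvous;

void ObserverRendezvous_synchronize(ObserverRendezvous* const me, NotifyFn cb) {
    /* NOTE: the mutex protecting this list is omitted for brevity */
    me->waiters[me->nWaiters++] = cb;   /* register; the caller then waits */
    if (me->nWaiters == me->expected) { /* criteria met: release everyone */
        int i;
        for (i = 0; i < me->nWaiters; ++i)
            me->waiters[i]();           /* notify each waiting task */
        me->nWaiters = 0;               /* rearm for the next cycle */
    }
}

Note that in this variant the callbacks run in the context of the last task to synchronize, so each callback should do no more than signal its waiting task to resume.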

4.7.7 Related Patterns

This pattern may be overkill when simpler solutions, such as the Guarded Call or Queuing Patterns, suffice.

The pattern may be implemented in a number of different ways as well, using the Guarded Call or Observer Patterns to manage the blocking and unblocking of the threads.

4.7.8 Example

In this example (Figure 4-21), we have implemented a specific form of the Rendezvous Pattern known as the Thread Barrier Pattern. In this pattern, the Rendezvous object only knows how many tasks are expected to rendezvous. When that number of tasks has called the synchronize() function, the entire set of blocked tasks is released. The example implements the pattern with semaphores (Guarded Call Pattern) and lets the operating system block and unblock the tasks.

Figure 4-21 Rendezvous Example

The Thread Barrier algorithm is a well-understood use of counting semaphores. See the code for the synchronize operation in Code Listing 4-45.

Code Listing 4-44 ThreadBarrier.h

#ifndef ThreadBarrier_H
#define ThreadBarrier_H

/*## auto_generated */
#include <oxf/Ric.h>
/*## auto_generated */
#include "RendezvousExample.h"
/*## auto_generated */
#include <oxf/RiCTask.h>

/*## package RendezvousPattern::RendezvousExample */

/*## class ThreadBarrier */
typedef struct ThreadBarrier ThreadBarrier;

struct ThreadBarrier {
    int currentCount;
    int expectedCount;
    OSSemaphore* barrier;
    OSSemaphore* mutex;
};

/* Constructors and destructors: */
void ThreadBarrier_Init(ThreadBarrier* const me);
void ThreadBarrier_Cleanup(ThreadBarrier* const me);

/* Operations */
void ThreadBarrier_reset(ThreadBarrier* const me, int x);
void ThreadBarrier_synchronize(ThreadBarrier* const me);

ThreadBarrier * ThreadBarrier_Create(void);
void ThreadBarrier_Destroy(ThreadBarrier* const me);

#endif

Code Listing 4-45 ThreadBarrier.c

#include "ThreadBarrier.h"

void ThreadBarrier_Init(ThreadBarrier* const me) {

me->currentCount = 0;

me->expectedCount = 3;

me->mutex = OSSemaphore_Create();

me->barrier = OSSemaphore_Create();

if (me->barrier) {

OSSemaphore_lock(me->barrier);

printf("BARRIER IS LOCKED FIRST TIME\n");

};

}

void ThreadBarrier_Cleanup(ThreadBarrier* const me) {

OSSemaphore_Destroy(me->barrier);

OSSemaphore_Destroy(me->mutex);

}

void ThreadBarrier_reset(ThreadBarrier* const me, int x) {

me->expectedCount = x;

me->currentCount = 0;

}

void ThreadBarrier_synchronize(ThreadBarrier* const me) {

/*

protect the critical region around

the currentCount

*/

OSSemaphore_lock(me->mutex);

++me->currentCount; /* critical region */

OSSemaphore_release(me->mutex);

/*

are conditions for unblocking met?

if so, then release the first blocked

thread or the highest priority blocked

thread (depending on the OS)

*/

if (me->currentCount == me->expectedCount) {

printf("Conditions met\n");

OSSemaphore_release(me->barrier);

};

/*

lock the semaphore and when released

then release it for the next blocked thread

*/

OSSemaphore_lock(me->barrier);

OSSemaphore_release(me->barrier);

}

ThreadBarrier * ThreadBarrier_Create(void) {

ThreadBarrier* me = (ThreadBarrier *) malloc(sizeof(ThreadBarrier));

if(me!=NULL)

ThreadBarrier_Init(me);

return me;

}

void ThreadBarrier_Destroy(ThreadBarrier* const me) {

if(me!=NULL)

ThreadBarrier_Cleanup(me);

free(me);

}
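
The OSSemaphore operations used above are supplied by the underlying RTOS. As an illustration only (this mapping is an assumption, not part of the generated example), the same four operations could be realized on a POSIX system with counting semaphores:

#include <semaphore.h>
#include <stdlib.h>

typedef sem_t OSSemaphore;

/* create the semaphore unlocked (count = 1), as ThreadBarrier_Init expects */
OSSemaphore* OSSemaphore_Create(void) {
    OSSemaphore* s = (OSSemaphore*) malloc(sizeof(OSSemaphore));
    if (s != NULL)
        sem_init(s, 0, 1);
    return s;
}

void OSSemaphore_lock(OSSemaphore* s)    { sem_wait(s); } /* blocks at count 0 */
void OSSemaphore_release(OSSemaphore* s) { sem_post(s); }

void OSSemaphore_Destroy(OSSemaphore* s) {
    sem_destroy(s);
    free(s);
}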

Deadlock Avoidance Patterns

The last problem I want to address in this chapter is deadlock. Deadlock is a situation in which multiple clients are simultaneously waiting for conditions that cannot, in principle, occur.

Figure 4-22 shows an example of deadlock. In this example, the priority of Task 1 is higher than that of Task 2. The figure shows a timing diagram overlaid with a class diagram showing the structure. In addition to the tasks, there are two resources, R1 and R2, shared by both tasks. The problem arises because the tasks lock the resources in opposite orders, and Task 1 preempts Task 2 after Task 2 has locked R1 but before it can lock R2. The following description refers to the named points in Figure 4-22.

  1. At this point, Task 2 runs.

  2. Task 2 locks R1 and is just about to lock R2.

  3. Task 1 runs. Since it is higher priority than Task 2, it preempts Task 2.

  4. Task 1 locks R2.

  5. Task 1 now attempts to lock R1; however, R1 is currently locked by Task 2. Task 1 must block to allow Task 2 to run, but Task 2 then blocks as soon as it tries to lock R2, which is held by Task 1. The system is in deadlock.

Figure 4-22 Deadlock example
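
For concreteness, the scenario of Figure 4-22 can be reproduced with any blocking mutex API. The following POSIX-threads sketch (an illustration, not taken from the example code) shows the opposite-order locking that makes the deadlock possible:

#include <pthread.h>

static pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

/* Task 1 (higher priority): locks R2, then R1 */
void* task1(void* arg) {
    pthread_mutex_lock(&R2);
    pthread_mutex_lock(&R1); /* blocks forever if Task 2 already holds R1 */
    pthread_mutex_unlock(&R1);
    pthread_mutex_unlock(&R2);
    return 0;
}

/* Task 2 (lower priority): locks R1, then R2 - the opposite order */
void* task2(void* arg) {
    pthread_mutex_lock(&R1);
    /* if Task 1 preempts here and locks R2, both tasks block: deadlock */
    pthread_mutex_lock(&R2);
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return 0;
}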

Breaking any one of the four required conditions for deadlock (listed below) is enough to ensure that deadlock cannot appear. The Simultaneous Locking, Ordered Locking, and Critical Region Patterns all avoid deadlock in this way. For example, had Task 2 entered a critical region (i.e., disallowed task switching) before it locked R1, Task 1 would not have been able to preempt it before it locked R2. Had both tasks locked the resources in the same order, deadlock would have been prevented. And had Task 2 locked both resources at the same time, there would have been no problem. Deadlock is easy to prevent, but you must take deliberate steps to do so.

Leaving aside the common problems of software errors, deadlock requires four conditions to be true:

  1. mutual exclusion locking of resources

  2. some resources are locked while others are requested

  3. preemption while resources are locked is allowed

  4. a circular waiting condition exists

Deadlock can be avoided by breaking any of these four conditions. Using the Critical Region Pattern, for example, breaks conditions #1 and #3. The Queuing Pattern, because it uses asynchronous message passing, avoids #1 (other than the lock on the queue access itself). This chapter finishes up with two other patterns that avoid deadlock.

4.8 Simultaneous Locking Pattern

The Simultaneous Locking Pattern is concerned solely with deadlock avoidance. It achieves this by breaking condition #2 (holding some resources while requesting others). The pattern works in an all-or-none fashion: either all needed resources are locked at once or none are.

4.8.1 Abstract

Deadlock can be solved by breaking any of the four required conditions. This pattern breaks the condition of holding some resources while requesting others by allocating them all at once. It has an advantage over the Critical Region Pattern, however, in that higher-priority tasks may still run if they don't need any of the locked resources.

4.8.2 Problem

The problem of deadlock is a serious enough one in highly reliable computing that many systems design in specific mechanisms to detect or avoid it. As previously discussed, deadlock occurs when a task is waiting on a condition that can never, in principle, be satisfied. There are four conditions that must be true for deadlock to occur, and it is sufficient to deny any one of them. The Simultaneous Locking Pattern breaks condition #2 by not allowing any task to hold some resources while waiting for others to become free.

4.8.3 Pattern Structure

The structure of the pattern is shown in Figure 4-23.

Figure 4-23 Simultaneous Locking Pattern general case

In general, a MultimasteredResource can be a part of any number of different sets of resources, where a separate instance of ResourceMaster manages each such set. This potential sharing of resources across multiple ResourceMasters complicates the situation. When this flexibility is not needed, the pattern simplifies to the form shown in Figure 4-24. While the difference may not appear great, the simplified form removes the requirement for a tryLock() function on the Mutex and is algorithmically much simpler.

Figure 4-24 Simplified Simultaneous Locking Pattern in which each resource must be assigned to a single Resource Master

4.8.4 Collaboration Roles

This section describes the roles for this pattern.

4.8.4.1 MasteredResource

This class is an element of the Simplified Simultaneous Locking Pattern. It represents a shared resource that is owned by a single instance of ResourceMasterSimplified. If it can be determined that each MasteredResource can be uniquely assigned to a set controlled by a single instance of ResourceMasterSimplified, life is much simpler; in this case the resource does not need its own Mutex because it executes under the auspices of the ResourceMasterSimplified's Mutex. If the resource must be shared among multiple resource groups, then it becomes a MultimasteredResource, which is a part of the more general pattern.

4.8.4.2 MultimasteredResource

This class is an element of the general Simultaneous Locking Pattern. Because it is managed by multiple ResourceMasters, it requires its own mutex (via its itsPartMutex link). The Mutex it uses must provide the tryLock() function so that the ResourceMaster can determine whether all of the attempted locks will succeed or fail.

4.8.4.3 Mutex

This element is a normal mutex semaphore as used in previous patterns; it offers lock() and release() functions that work in the standard way for a mutex semaphore.

4.8.4.4 QueryMutex

This element is a normal mutex semaphore except that it also provides a tryLock() function for the general pattern case. This function works like the normal lock() function except that, rather than blocking the current thread, it returns with an error code if the lock fails. In the specialized form of the pattern (in which a single resource is owned by a single ResourceMaster), the tryLock() function isn't needed.

4.8.4.5 ResourceClient

This element is a client that wants to access a set of resources all at once to avoid potential deadlock. Although it has direct access (via its itsMultimasteredResource link) to the resources, it doesn't use them until it has successfully received a lock from the ResourceMaster. This requires discipline on the part of the developers of the ResourceClients: they must never attempt to access any of the resource services without first getting the lock, and once a client is done with the resources, it must free the ResourceMaster.

4.8.4.6 ResourceMaster

The ResourceMaster controls the locking of the entire set of resources. It is used in the general case of the pattern and must grant a lock if and only if it can lock all the resources. While it could be implemented solely with blocking mutexes on the parts (the instances of MultimasteredResource), it might then block multiple times while attempting to lock the set, which may yield unacceptable execution times in practice. The solution implemented here is to use the tryLock() function of the mutex instead. If the tryLock() calls on the entire set of MultimasteredResources all succeed, then the ResourceMaster itself is locked and the ResourceClient is allowed to use its links to the MultimasteredResources. If the ResourceMaster cannot lock all of the MultimasteredResources, then it unlocks all of the MultimasteredResources it locked before it found one where it failed, and returns with an error code to the ResourceClient.
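
The following sketch shows one way this all-or-none algorithm might look; the lockAll() name, the QueryMutex_tryLock()/QueryMutex_release() signatures, and the fixed part count are assumptions made for illustration:

#define N_PARTS 3

struct QueryMutex;
extern int  QueryMutex_tryLock(struct QueryMutex* const me); /* assumed: 0 on success */
extern void QueryMutex_release(struct QueryMutex* const me);

typedef struct ResourceMaster {
    struct QueryMutex* itsPartMutex[N_PARTS]; /* mutexes of the managed MultimasteredResources */
} ResourceMaster;

/* returns 0 if every part was locked, -1 (with no locks held) otherwise */
int ResourceMaster_lockAll(ResourceMaster* const me) {
    int i, j;
    for (i = 0; i < N_PARTS; ++i) {
        if (QueryMutex_tryLock(me->itsPartMutex[i]) != 0) {
            /* one lock failed: back out every lock taken so far */
            for (j = 0; j < i; ++j)
                QueryMutex_release(me->itsPartMutex[j]);
            return -1; /* client must not touch the resources */
        }
    }
    return 0; /* all-or-none achieved; client may use its links */
}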

4.8.4.7 ResourceMasterSimplified

The Simplified Simultaneous Locking Pattern doesn't require the additional developer discipline required by the general case. The ResourceClient no longer has direct links to the MasteredResources per se; instead, the ResourceMasterSimplified republishes all the services of its MasteredResources and delegates the functions directly to them. All of these functions share the same mutex.

4.8.5 Consequences

The Simultaneous Locking Pattern prevents deadlock by removing required condition #2 – some resources are locked while others are requested – by locking either all of the needed resources at once or none of them. However, while the pattern removes the possibility of deadlock, it may increase the delay in the execution of other tasks. This delay may occur even in situations in which there is no actual resource conflict, and the problem is far more pronounced for resources that are widely shared than for resources that are shared more narrowly. In addition, the pattern does not address priority inversion, and may in fact make it worse, unless some priority-inversion-bounding pattern is deployed, such as the Guarded Call Pattern with priority inheritance. The general case of the pattern, shown in Figure 4-23, requires developer discipline: developers must avoid using the direct links to the MultimasteredResources until they have successfully been given a lock. In large project teams, this discipline can break down, resulting in subtle and hard-to-identify errors. The simplified form, shown in Figure 4-24, doesn't have this problem because the ResourceMasterSimplified republishes the resource access methods.

Of course, this pattern only applies when deadlock is a possibility, such as when at least two threads share at least two resources in common.

4.8.6 Implementation Strategies

The implementation of this pattern should pose no difficulties other than the need, in the general case, for a mutex semaphore that supports a tryLock() operation, and the need to ensure that the precondition of access (successful locking of the MultimasteredResources) is achieved before accessing the resources directly.

4.8.7 Related Patterns

Deadlock can be prevented by breaking any of the required four conditions. The Critical Region Pattern avoids deadlock by breaking condition #1 – mutual exclusion locking of resources. The Ordered Locking Pattern, described later in this chapter, prevents deadlock by breaking condition #4 – the circular waiting condition.

Because this pattern doesn’t address the concern of unbounded priority inversion, it can be mixed with the Guarded Call Pattern with priority inheritance.

4.8.8 Example

Figure 4-25 shows an example of the simplified form of the pattern from Figure 4-24. In this case, the SensorMaster replicates the functions of the sensors (although the names are munged with the names of the sensors to disambiguate the services). When a client (such as the PositionPredictor) invokes one of these services, the SensorMaster locks its mutex and releases it before returning to the client. The SensorMaster owns a mutex semaphore for this purpose.

Code Listing 4-46 SensorMaster.h

#ifndef SensorMaster_H
#define SensorMaster_H

#include "Position.h"

struct DopplerSpeedSensor;
struct GPSPositionSensor;
struct OpticalSpeedSensor;
struct SimMutex;

typedef struct SensorMaster SensorMaster;

struct SensorMaster {
    struct DopplerSpeedSensor* itsDopplerSpeedSensor;
    struct GPSPositionSensor* itsGPSPositionSensor;
    struct OpticalSpeedSensor* itsOpticalSpeedSensor;
    struct SimMutex* itsSimMutex;
};

/* Constructors and destructors: */
void SensorMaster_Init(SensorMaster* const me);
void SensorMaster_Cleanup(SensorMaster* const me);

/* Operations */
void SensorMaster_doppler_configure(SensorMaster* const me, short sampleRate);
void SensorMaster_doppler_disable(SensorMaster* const me);
void SensorMaster_doppler_enable(SensorMaster* const me);
double SensorMaster_doppler_getSpeed(SensorMaster* const me);
void SensorMaster_gps_activate(SensorMaster* const me);
void SensorMaster_gps_configure(SensorMaster* const me, short reqSatellites, int useFast);
void SensorMaster_gps_deactivate(SensorMaster* const me);
struct Position SensorMaster_gps_getPosition(SensorMaster* const me);
void SensorMaster_optical_configure(SensorMaster* const me, int wheelSize, int sensitivity);
void SensorMaster_optical_disable(SensorMaster* const me);
void SensorMaster_optical_enable(SensorMaster* const me);
double SensorMaster_optical_getSpeed(SensorMaster* const me);

struct DopplerSpeedSensor* SensorMaster_getItsDopplerSpeedSensor(const SensorMaster* const me);
void SensorMaster_setItsDopplerSpeedSensor(SensorMaster* const me, struct DopplerSpeedSensor* p_DopplerSpeedSensor);
struct GPSPositionSensor* SensorMaster_getItsGPSPositionSensor(const SensorMaster* const me);
void SensorMaster_setItsGPSPositionSensor(SensorMaster* const me, struct GPSPositionSensor* p_GPSPositionSensor);
struct OpticalSpeedSensor* SensorMaster_getItsOpticalSpeedSensor(const SensorMaster* const me);
void SensorMaster_setItsOpticalSpeedSensor(SensorMaster* const me, struct OpticalSpeedSensor* p_OpticalSpeedSensor);
struct SimMutex* SensorMaster_getItsSimMutex(const SensorMaster* const me);
void SensorMaster_setItsSimMutex(SensorMaster* const me, struct SimMutex* p_SimMutex);

SensorMaster * SensorMaster_Create(void);
void SensorMaster_Destroy(SensorMaster* const me);

#endif

Code Listing 4-47 SensorMaster.c

#include "SensorMaster.h"

#include "DopplerSpeedSensor.h"

#include "GPSPositionSensor.h"

#include "OpticalSpeedSensor.h"

#include "SimMutex.h"

static void cleanUpRelations(SensorMaster* const me);

void SensorMaster_Init(SensorMaster* const me) {

me->itsDopplerSpeedSensor = NULL;

me->itsGPSPositionSensor = NULL;

me->itsOpticalSpeedSensor = NULL;

me->itsSimMutex = NULL;

}

void SensorMaster_Cleanup(SensorMaster* const me) {

cleanUpRelations(me);

}

void SensorMaster_doppler_configure(SensorMaster* const me, short sampleRate) {

SimMutex_lock(me->itsSimMutex);

DopplerSpeedSensor_configure(me->itsDopplerSpeedSensor,sampleRate);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_doppler_disable(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

DopplerSpeedSensor_disable(me->itsDopplerSpeedSensor);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_doppler_enable(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

DopplerSpeedSensor_enable(me->itsDopplerSpeedSensor);

SimMutex_release(me->itsSimMutex);

}

double SensorMaster_doppler_getSpeed(SensorMaster* const me) {

double speed;

SimMutex_lock(me->itsSimMutex);

speed = DopplerSpeedSensor_getSpeed(me->itsDopplerSpeedSensor);

SimMutex_release(me->itsSimMutex);

return speed;

}

void SensorMaster_gps_activate(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

GPSPositionSensor_activate(me->itsGPSPositionSensor);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_gps_configure(SensorMaster* const me, short reqSatellites, int useFast) {

SimMutex_lock(me->itsSimMutex);

GPSPositionSensor_configure(me->itsGPSPositionSensor,reqSatellites,useFast);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_gps_deactivate(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

GPSPositionSensor_deactivate(me->itsGPSPositionSensor);

SimMutex_release(me->itsSimMutex);

}

struct Position SensorMaster_gps_getPosition(SensorMaster* const me) {

Position p;

SimMutex_lock(me->itsSimMutex);

p = GPSPositionSensor_getPosition(me->itsGPSPositionSensor);

SimMutex_release(me->itsSimMutex);

return p;

}

void SensorMaster_optical_configure(SensorMaster* const me, int wheelSize, int sensitivity) {

SimMutex_lock(me->itsSimMutex);

OpticalSpeedSensor_configure(me->itsOpticalSpeedSensor,wheelSize, sensitivity);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_optical_disable(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

OpticalSpeedSensor_disable(me->itsOpticalSpeedSensor);

SimMutex_release(me->itsSimMutex);

}

void SensorMaster_optical_enable(SensorMaster* const me) {

SimMutex_lock(me->itsSimMutex);

OpticalSpeedSensor_enable(me->itsOpticalSpeedSensor);

SimMutex_release(me->itsSimMutex);

}

double SensorMaster_optical_getSpeed(SensorMaster* const me) {

double speed;

SimMutex_lock(me->itsSimMutex);

speed = OpticalSpeedSensor_getSpeed(me->itsOpticalSpeedSensor);

SimMutex_release(me->itsSimMutex);

return speed;

}

struct DopplerSpeedSensor* SensorMaster_getItsDopplerSpeedSensor(const SensorMaster* const me) {

return (struct DopplerSpeedSensor*)me->itsDopplerSpeedSensor;

}

void SensorMaster_setItsDopplerSpeedSensor(SensorMaster* const me, struct DopplerSpeedSensor* p_DopplerSpeedSensor) {

me->itsDopplerSpeedSensor = p_DopplerSpeedSensor;

}

struct GPSPositionSensor* SensorMaster_getItsGPSPositionSensor(const SensorMaster* const me) {

return (struct GPSPositionSensor*)me->itsGPSPositionSensor;

}

void SensorMaster_setItsGPSPositionSensor(SensorMaster* const me, struct GPSPositionSensor* p_GPSPositionSensor) {

me->itsGPSPositionSensor = p_GPSPositionSensor;

}

struct OpticalSpeedSensor* SensorMaster_getItsOpticalSpeedSensor(const SensorMaster* const me) {

return (struct OpticalSpeedSensor*)me->itsOpticalSpeedSensor;

}

void SensorMaster_setItsOpticalSpeedSensor(SensorMaster* const me, struct OpticalSpeedSensor* p_OpticalSpeedSensor) {

me->itsOpticalSpeedSensor = p_OpticalSpeedSensor;

}

struct SimMutex* SensorMaster_getItsSimMutex(const SensorMaster* const me) {

return (struct SimMutex*)me->itsSimMutex;

}

void SensorMaster_setItsSimMutex(SensorMaster* const me, struct SimMutex* p_SimMutex) {

me->itsSimMutex = p_SimMutex;

}

SensorMaster * SensorMaster_Create(void) {

SensorMaster* me = (SensorMaster *) malloc(sizeof(SensorMaster));

if(me!=NULL)

SensorMaster_Init(me);

return me;

}

void SensorMaster_Destroy(SensorMaster* const me) {

if(me!=NULL)

SensorMaster_Cleanup(me);

free(me);

}

static void cleanUpRelations(SensorMaster* const me) {

if(me->itsDopplerSpeedSensor != NULL)

me->itsDopplerSpeedSensor = NULL;

if(me->itsGPSPositionSensor != NULL)

me->itsGPSPositionSensor = NULL;

if(me->itsOpticalSpeedSensor != NULL)

me->itsOpticalSpeedSensor = NULL;

if(me->itsSimMutex != NULL)

me->itsSimMutex = NULL;

}

Figure 4-25 Simultaneous Locking Pattern example
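
As a usage note, a client such as the PositionPredictor never touches the sensors directly; every access goes through the SensorMaster, which brackets the call with its single mutex. A hypothetical client function (illustrative only, not part of the generated code) might look like this:

#include "SensorMaster.h" /* brings in Position and the republished services */

/* hypothetical client operation - the name and logic are assumptions */
void PositionPredictor_update(SensorMaster* const sm) {
    struct Position p = SensorMaster_gps_getPosition(sm); /* mutex taken and released inside */
    double v = SensorMaster_doppler_getSpeed(sm);         /* likewise locked internally */
    /* ... combine p and v to predict the next position ... */
    (void)p; (void)v; /* placeholder: prediction logic omitted */
}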