1. Basic Model: Single Server, Single Flow – Deterministic Network Calculus

Network calculus is a theory designed to compute upper bounds on flow delays and server backlogs in networks. Before presenting how to compute such bounds, we present in this chapter the modeling process, so that the network calculus model is a faithful abstraction of a real system. This includes modeling the data that circulate in a network (flows) and the processing of these data (servers). The modeling process also justifies the hypotheses made in this book.

1.1. Modeling principles

Formal methods, as described in Figure 1.1, are used to prove properties of real systems (such as the absence of message loss): if a model is built from a system Σ, then any property P of the system must be translated into a formal property Φ of the model. Such a property can, for example, be that there is no buffer overflow.

Figure 1.1. Formal methods for guaranteeing properties on systems

In our context, we want to ensure that if the model states that Φ is satisfied, then P is also satisfied. We thus want a conservative modeling: it can never happen that a good property (i.e. the system satisfies all of the desired requirements) holds in the model but not in the real system. Otherwise, the system's behavior would not be guaranteed.

Hence, we want to emphasize our modeling choices here and show how they fit systems.

1.2. Constant rate server

In this section, we present an example that will serve as an illustration in the whole chapter. Consider a server that transmits exactly R bits of data per unit time. Our aim is to find upper bounds on the transmission delay, assuming that data exit the server in their arrival order, or to bound the amount of data that can be present in the server.

Figure 1.2. Server: a relation between arrival and departure

Let us first introduce some notations. For all t ≥ 0, we denote by A(t) the amount of data that arrive in the system up to time t, and by D(t) the amount of data that departed the system up to time t. We assume that there is no loss and no creation of data in the system and that initially the system is empty: A(0) = D(0) = 0. The function A is called the cumulative arrival function (or process) of the system and D is called the cumulative departure function (or process) of the system. Let us now derive the relation between A and D.

Suppose that during the time interval (u, t], the system is never empty (we also say that it is always backlogged), meaning that during this period of time, exactly R · (t - u) bits of data exit the system:

D(t) − D(u) = R · (t − u).   [1.1]

The assumption that the system is always backlogged is mandatory. Otherwise, we would only have D(t) − D(u) ≤ R · (t − u), which is not a guarantee on the service offered.

As A(0) = D(0), the last instant s0 before t at which the system is empty exists: A(s0) = D(s0) and s0 = sup{s ≤ t | A(s) = D(s)}. Applying equation [1.1] on the interval (s0, t], we obtain D(t) = D(s0) + R · (t − s0) = A(s0) + R · (t − s0).

If s ≥ s0, then D(t) ≤ D(s) + R · (t − s) ≤ A(s) + R · (t − s), as D(s) ≤ A(s).

If s ≤ s0, then A(s) + R · (t − s) ≥ A(s) + D(t) − D(s) ≥ D(t), and we finally obtain

D(t) = min_{0 ≤ s ≤ t} (A(s) + R · (t − s)).   [1.2]

Figure 1.3. Input/output relation for a constant-rate server. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

This formula corresponds to the (min,plus) convolution of A and the function β : t ↦ R · t, which will be introduced in the next chapter. In short, we denote D = A ∗ β.
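To make equation [1.2] concrete, here is a small numerical sketch (not from the book; the helper name `min_plus_conv` and the sample arrival function are our own choices) that computes the output of a constant-rate server as a (min,plus) convolution on a uniform time grid:

```python
import numpy as np

def min_plus_conv(A, beta):
    """(A * beta)(t) = min over 0 <= s <= t of A(s) + beta(t - s), on a grid."""
    return np.array([min(A[s] + beta[t - s] for s in range(t + 1))
                     for t in range(len(A))])

R = 2.0                                  # service rate: R bits per time step
t = np.arange(11)
A = np.minimum(5.0 * t, 20.0)            # sample arrivals: rate 5, capped at 20 bits
beta = R * t                             # constant-rate service curve beta(t) = R * t
D = min_plus_conv(A, beta)               # equation [1.2]

assert np.all(D <= A)                    # departures never exceed arrivals
assert np.all(np.diff(D) <= R)           # the server emits at most R bits per step
```

On this particular example the server is always backlogged until all 20 bits have left, so D(t) = R · t over the whole horizon.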

Before explaining how performance guarantees can be computed, let us focus on the two key elements in the description of this system: the modeling of the flows and the modeling of the relation between the arrival and departure flows (the server).

1.3. Flow model

A data flow is usually a set of information, encoded into bits or bytes, grouped or fragmented into frames or packets through the different network layers. Network calculus focuses only on performance, and part of this information is abstracted away. In particular, we focus only on the amount of information and not on its content, which in general does not affect the performance of the system. Likewise, the size of the packets is not modeled in the basic model, whereas it may have an impact depending on the scheduling policy. Chapter 8 will be devoted to such aspects.

In the basic model, a flow is represented by a cumulative function that models an amount of data.

DEFINITION 1.1 (Cumulative function).– The cumulative function of a flow is a function A, defined on ℝ+, that is non-decreasing, piecewise continuous and left-continuous and such that A(0) = 0. Denote by 𝒞 the set of functions that satisfy these properties. The quantity A(t) represents the amount of data sent by the flow in the interval [0, t).

An example of a cumulative function A is drawn in Figure 1.4. The slope between times u and u′ may represent the fluid arrival of a packet of size 1, as well as the successive arrival of two packets whose sizes sum to 1. The interval [u′, v] could not only be the period between the arrivals of two packets, but may also be some pause inside the arrival of one packet. The discontinuity at w can represent the instantaneous arrival of one or several packets: cumulative functions represent only an amount of data and give no information about packet boundaries or content.

Figure 1.4. Cumulative flow function

After this short illustration, we more precisely discuss the following three points:

– the choice of the time domain;
– the use of a cumulative amount of data instead of throughput;
– the continuity of curves.

Time domain: the time domain is restricted to ℝ+, excluding negative time values. We assume the existence of a starting time in the system: at time 0, all servers are empty and no flow has started sending data in the system.

The choice of a dense time domain might surprise some readers, since a computer is basically a discrete-time system, scheduled by a clock. However, when considering a network, each element has its own clock, and different clocks may drift relative to one another. As a result, the set of natural numbers is excluded as the time domain¹. Still, we could choose ℚ+, but it is much more convenient to work with ℝ+.

Cumulative amount of data: the choice of the cumulative amount of data, whereas the network community often represents a flow by its throughput, is mainly motivated by mathematical requirements. The amount of data sent by the flow from a time origin up to the current time t is always defined, whereas the instantaneous throughput is not defined at every time scale, especially because we assume that a positive amount of data may arrive instantaneously (as in Figure 1.4 at time w).

It is clear that a cumulative function is non-decreasing.

The left-continuity assumption: the point that deserves the longest discussion lies in the informal expression up to time t, and is related to the continuity of A.

There are several interpretations of this sentence: the quantity A(t) represents the amount of data that arrived during [0, t], [0, t) or (0, t].

The last case can be immediately discarded, as A(t) would never take into account the arrival of a packet at time 0 (not even A(0)).

Initially, [0, t] might seem more natural than [0, t): with the right-open interval, a packet arriving at time t is not taken into account in A(t), but only just after (see equation [A.1]); with the right-closed version, a packet arriving at time t is taken into account in A(t).

The key point is that we are not only interested in the amount of data from 0 up to t, but also in any interval of time. Let s and t be two instants with s ≤ t and A(s, t) be the amount of data arriving between times s and t. Intuitively, we want this amount of data to satisfy Chasles's relation:

∀ s ≤ t ≤ u,   A(s, t) + A(t, u) = A(s, u),

and to be written as A(s, t) = A(t) − A(s).

If A(s, t) is the amount of data that arrived in the interval [s, t], then we have A(s, t) + A(t, u) = A(s, u) + A(t, t), where A(t, t) might be positive. Therefore, we have to reject this model.

As has already been pointed out, if A(s, t) is the amount of data that arrived in the interval (s, t], then we have A(s, t) + A(t, u) = A(s, u), but as mentioned above, the data that might arrive at time 0 will never be taken into account.

Therefore, finally, we consider A(s, t) to be the amount of data that arrived in the interval [s, t). We now have A(t, t) = 0 for all t ∈ ℝ+, so Chasles's relation is satisfied, and we can set A(t) = A(0, t):

A(s, t) = A(0, t) − A(0, s) = A(t) − A(s).

This choice thus conforms to our requirements.

This choice implies two mathematical requirements on cumulative functions. First, it implies A(0) = 0 since the amount of data produced in the empty interval [0, 0) is null. Second, it implies left-continuous functions (the arrival of a single packet of size 1 at time 1 is represented by a function A such that A(t) = 0 for any t ∈ [0, 1] and A(t) = 1 for t > 1).
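These two requirements can be illustrated with a small sketch (our own; `make_cumulative` is a hypothetical helper) that builds the cumulative function of instantaneous packet arrivals with the [0, t) convention, then checks left-continuity at an arrival instant and Chasles's relation:

```python
def make_cumulative(arrivals):
    """arrivals: list of (time, size) pairs. Returns A with
    A(t) = total size of the packets arriving strictly before t, i.e. in [0, t)."""
    def A(t):
        return sum(size for (u, size) in arrivals if u < t)
    return A

A = make_cumulative([(1.0, 1)])     # a single packet of size 1 at time 1

assert A(0) == 0                    # empty interval [0, 0)
assert A(1.0) == 0                  # the packet at time 1 is not in [0, 1)
assert A(1.5) == 1                  # but it is counted for every t > 1

# Chasles's relation via A(s, t) = A(t) - A(s), with A(t, t) = 0:
A_int = lambda s, t: A(t) - A(s)
assert A_int(0.5, 1.5) + A_int(1.5, 2.0) == A_int(0.5, 2.0)
assert A_int(1.0, 1.0) == 0
```

The function A is left-continuous at the arrival instant 1: it jumps only strictly after the packet has arrived.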

The last hypothesis, assuming piecewise continuity, is reasonable considering computer-based behavior: we assume that only a finite number of events can happen in a finite interval of time, so there is a finite number of discontinuities in any finite interval.

Nevertheless, after this presentation of the technical reasons why left-continuous cumulative curves are best suited to model cumulative amounts of data, we may wonder what would happen to the theory if right-continuous functions were chosen instead. As already mentioned, this modeling seems more natural to many people, since a packet arriving at time t is then taken into account in the value of A(t).

Section 5.3.3.3 will show that the choice of continuity for cumulative functions has no influence on the values of the delay and backlog, as they will be defined in section 1.5, and will provide further arguments allowing us to consider continuity as a technical choice that can change the way results are obtained, but not the results themselves.

1.4. Server model

In the example in section 1.2, the server was described as transmitting exactly R bits of data per unit of time, and we could derive a relation between the cumulative arrival function A and the cumulative departure function D. In network calculus, a server is a relation between some arrival and departure cumulative functions.

DEFINITION 1.2 (Server).– A server S ⊆ 𝒞 × 𝒞 is a left-total relation between flow cumulative functions that satisfies: (A, D) ∈ S ⟹ D ≤ A. We denote A →S D for (A, D) ∈ S.

We use the pointwise order: A ≤ D means that for all t ≥ 0, A(t) ≤ D(t). The relation represents the fact that if A is the arrival cumulative function (or input) of S, then D is a possible departure cumulative function (or output). If S is a function (i.e. it has a unique output for each input), then S is said to be a deterministic server; otherwise, it is a non-deterministic server.

In the sequel, the notation S will be used as a shortcut for either the server or the relation, depending on the context. We now make some comments about this model.

A server can model a single network element, such as a switch; a sub-part of an element, such as an internal crossbar; or a whole network.

Flow alteration (compression, losses, etc.): the definition states that if (A, D) ∈ S, then D ≤ A. Implicitly, we assume that the server does not alter the data flow: the output flow is made up of the same data as the input flow, and there is no loss, no compression, no transformation of the data (e.g. adding a header), etc. Compression or expansion of data can be modeled by a scaling function. A particular case of scaling, constant scaling, will be used to prove an instability result in cyclic networks in section 12.4.2. More details can be found in [FID 06b]. There are very few results concerning the modeling of losses in the context of deterministic network calculus. One reason is that when guaranteeing bounds, we attempt to avoid losses. Both notions of scaling and losses have been jointly studied under the notion of flow transformation in the context of stochastic network calculus [WAN 14].

Non-determinism of the server: although computers are mainly deterministic systems, modeling non-determinism is a way to account for the fact that the system's behavior is not exactly known: a non-deterministic server models a set of potential behaviors. Moreover, non-determinism is a generalization of the deterministic case.

Let us illustrate this through an example: consider a server that takes exactly one unit of time to serve one packet. Suppose that one packet arrives every unit of time, exactly: A(t) = ⌈t⌉ and D(t) = max(0, ⌈t − 1⌉).

However, clocks are not perfect and there could be a drift in the clock, or some jitter: if we know that there exists (ji)i∈ℕ such that the i-th service occurs at time i + ji, with −1 < ji < 1, without knowing the values of (ji)i∈ℕ exactly, then we can model this system by a non-deterministic server that accepts all of the behaviors modeled by such a sequence (ji). The actual behavior of the system is then one of the behaviors accepted by the server. Figure 1.5 illustrates two possible behaviors: the perfect behavior (left), with no jitter, and a possible behavior with jitter (right).

Figure 1.5. Non-determinism of the server. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

1.5. Delay and memory usage models

Now that we have modeled flows and servers, we can focus on the performance of the system. More precisely, we focus on the delay and the memory usage.

DEFINITION 1.3 (Delay and backlog).– Let S be a server, and let us consider that A and D are cumulative arrival and departure functions, respectively, such that (A, D) ∈ S. For t ≥ 0, d(A, D, t) is the virtual delay associated with flow A at time t and b(A, D, t) is the backlog (i.e. the amount of data that is still in the system²) of the server at time t:

d(A, D, t) = inf{d ≥ 0 | A(t) ≤ D(t + d)},   [1.3]
b(A, D, t) = A(t) − D(t).   [1.4]

In the example of a server transmitting R bits of data per unit time, we have

b(A, D, t) = A(t) − min_{0 ≤ s ≤ t} (A(s) + R · (t − s)) = max_{0 ≤ s ≤ t} (A(t) − A(s) − R · (t − s)),

so the backlog can easily be expressed as depending only on the arrival function and the description of the server. Graphically, it can also be interpreted as the vertical deviation between A and D at time t. The delay at time t is a little more complex to express, but graphically, t + d is the first time when D becomes at least A(t). Therefore, assuming that bits of data are served in their arrival order, which will be discussed below, the bit of data that arrives at time t departs at time t + d, so d is its delay. It can then be interpreted as the horizontal deviation between A and D at height A(t).
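The identity between the two expressions of the backlog can be checked numerically; the sketch below (ours, with an arbitrary sample arrival function) compares both sides on a discrete grid:

```python
import numpy as np

R = 2.0
t = np.arange(11)
A = np.minimum(5.0 * t, 20.0)                    # sample arrivals

for tt in range(11):
    D_tt = min(A[s] + R * (tt - s) for s in range(tt + 1))       # equation [1.2]
    b_direct = A[tt] - D_tt                                      # b = A(t) - D(t)
    b_max = max(A[tt] - A[s] - R * (tt - s) for s in range(tt + 1))
    assert abs(b_direct - b_max) < 1e-9 and b_direct >= 0
```

Both formulas agree at every grid point, and the backlog is always non-negative since D ≤ A.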

In network calculus, we are mainly interested in the worst-case performance bounds, i.e. when the delay or backlog is the largest.

DEFINITION 1.4 (Worst-case delay and backlog).– Let S be a server, and (A, D) ∈ S. The worst-case virtual delay and backlog for these arrival and departure cumulative functions A and D are

d(A, D) = sup_{t ≥ 0} d(A, D, t),   [1.5]
b(A, D) = sup_{t ≥ 0} b(A, D, t).   [1.6]

The worst-case delay and backlog of the arrival cumulative function A in server S are

d(A, S) = sup_{D : (A, D) ∈ S} d(A, D)   and   b(A, S) = sup_{D : (A, D) ∈ S} b(A, D).

The latter definitions are useful when the server is non-deterministic. In the example of the server with jitter described in the previous paragraph, if there is no jitter, then the backlog is exactly 1, but when there is some jitter, the worst-case backlog is 2: whenever a jitter is positive, two packets will simultaneously exist in the server.
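This worst-case backlog of 2 can be checked by simulation; the sketch below (our own helper, with an arbitrarily chosen jitter sequence) counts the packets present in the jittered server of section 1.4:

```python
def backlog(t, jitters):
    """Packets in the system at time t: packet i arrives at time i and
    departs at time i + 1 + j_i; cumulative counts use the [0, t) convention."""
    arrived = sum(1 for i in range(len(jitters)) if i < t)
    departed = sum(1 for i, j in enumerate(jitters) if i + 1 + j < t)
    return arrived - departed

n = 10
no_jitter = [0.0] * n
late_once = [0.0] * n
late_once[3] = 0.5        # packet 3 departs at 4.5, after packet 4 has arrived

ts = [k / 100 for k in range(1, 100 * n)]
assert max(backlog(t, no_jitter) for t in ts) == 1
assert max(backlog(t, late_once) for t in ts) == 2
```

With a perfect clock exactly one packet is in the system at any positive time; a single positive jitter makes two packets coexist, which is the worst case since |j_i| < 1.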

Delays and backlogs at time t can be interpreted as deviations between the functions A and D. Therefore, following the same interpretation, the worst-case delay and backlog can be interpreted as the maximal deviations between those two functions. For example, Figure 1.6 shows arrival and departure cumulative functions and the worst-case delay associated with them.

The notions of vertical and horizontal deviations can be defined for all functions. Our notation distinguishes them from these generic operators to highlight the semantics of the operators (computing backlog and delay bounds). However, we will still need (mainly for proofs) notations independent of the interpretation.

Figure 1.6. Illustration of delay. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

DEFINITION 1.5 (Horizontal and vertical deviations).– Let f, g be two functions defined on ℝ+. The vertical and horizontal deviations are defined by

vDev(f, g, t) = f(t) − g(t),   [1.7]
vDev(f, g) = sup_{t ≥ 0} vDev(f, g, t),   [1.8]
hDev(f, g, t) = inf{d ≥ 0 | f(t) ≤ g(t + d)},   [1.9]
hDev(f, g) = sup_{t ≥ 0} hDev(f, g, t).   [1.10]

Figure 1.7. Horizontal and vertical deviations. For a color version of this figure, see www.iste.co.uk/bouillard/calculus.zip

We then have the relations:

b(A, D, t) = vDev(A, D, t),   d(A, D, t) = hDev(A, D, t),
b(A, D) = vDev(A, D),   d(A, D) = hDev(A, D).
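On a discrete grid these deviations can be sketched as follows (the helper names `v_dev`/`h_dev` are ours; the example reuses the rate-2 server of section 1.2 with arrivals A(t) = min(5t, 20), for which equation [1.2] gives D(t) = min(2t, 20)):

```python
def v_dev(f, g, t):
    """Vertical deviation at t: f(t) - g(t) (equation [1.7])."""
    return f(t) - g(t)

def h_dev(f, g, t, ds):
    """Horizontal deviation at t: smallest d in the grid ds with f(t) <= g(t + d)."""
    return min(d for d in ds if f(t) <= g(t + d))

A = lambda t: min(5.0 * t, 20.0)         # arrivals
D = lambda t: min(2.0 * t, 20.0)         # constant-rate output, from equation [1.2]

ts = [k / 10 for k in range(101)]        # t in [0, 10]
ds = [k / 10 for k in range(201)]        # candidate delays d in [0, 20]

b = max(v_dev(A, D, t) for t in ts)      # worst-case backlog b(A, D) = vDev(A, D)
d = max(h_dev(A, D, t, ds) for t in ts)  # worst-case delay   d(A, D) = hDev(A, D)

assert b == 12.0                         # reached at t = 4: A(4) = 20, D(4) = 8
assert abs(d - 6.0) < 1e-9               # A(4) = 20 is only reached by D at t = 10
```

The maximal deviations occur at the end of the burst: the largest vertical gap is at t = 4, and the bit arriving at that instant waits 6 time units.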

Let us comment on some aspects of the definition of the delay (the definition of the backlog is straightforward).

Virtual delay or FIFO delay: the definition of virtual delay depends only on the quantity of data that arrives at or departs from the system, but not on the content or the identity of the packets or data. Therefore, the delay is computed as if data were processed FIFO (first arrived, first served). This is not always the case, especially when several flows (of different types of data) transit through the same server. To deal with this, in Chapter 7 we will introduce servers with several inputs and outputs and differentiate the service policy according to the inputs (and the FIFO assumption per input then becomes natural).

To illustrate this, consider the cumulative arrival/departure functions in Figure 1.8. The horizontal deviation is constant (∀t : d(A, D, t) = 4). The value of the virtual delay considered in Definition 1.4 is also 4. However, data can be served in several ways: the most natural one assumes a FIFO behavior – the n-th packet enters the system during the interval [2(n − 1), 2n − 1] and exits during the interval [2n + 2, 2n + 3]. Another server could serve the n-th packet, n ≥ 2, during [2n, 2n + 1], so that bits entering between times 2 and 3 exit between times 4 and 5. Then, the first packet is never served and has an infinite delay.

Figure 1.8. Permanent backlog illustration

Bit or packet delay: the reader might be interested in the delay incurred either by every bit of data or by a packet. Clearly, the definition given here measures the delay of every bit of data. However, when a packet is sent, all of the data contained in it can be modeled as being sent instantaneously. Then, the delay incurred by the packet (difference between the departure time of the last bit of the packet and the arrival time of the first bit of the data) is also the delay incurred by the last bit of data, and as a result, it can be computed by our model.

1.6. Summary

System | Model | Math properties
Data flow | Cumulative arrival/departure functions: A(t) is the amount of data arrived during [0, t) | A ∈ 𝒞: A(0) = 0, non-decreasing, left-continuous, piecewise continuous
Server | Relation between the arrival and departure functions A and D | Left-total relation; A ≥ D
Backlog at t of a flow in the server | Vertical deviation at t: b(A, D, t); maximal vertical deviation: b(A, D); over the possible flow relations: b(A, S) | –
Delay at t of a flow in the server | Horizontal deviation at t: d(A, D, t); maximal horizontal deviation: d(A, D); over the possible flow relations: d(A, S) | –