
Chapter 13

Analysis of Dynamic Circuits by Laplace Transforms

CHAPTER OBJECTIVES
  • Expansion of right-sided time-domain waveforms in terms of e^(σt) cos(ωt + φ).
  • Definition of Laplace transform and inverse integral.
  • Various properties of Laplace transform of a real function of time.
  • Convergence of inverse integral and region of convergence in s-plane.
  • Basic Laplace transform pairs.
  • Solution of differential equations using Laplace transforms.
  • Method of partial fractions for inversion of Laplace transforms.
  • The s-domain equivalent circuit and its application in circuit analysis.
  • System function H(s) and Laplace transform of impulse response.
  • Network functions and pole-zero plots.
  • Graphical interpretation of frequency response function.
INTRODUCTION

We solved first-order and second-order circuits using the differential equation model in earlier chapters. The differential equation model can also be used to solve multi-mesh, multi-node circuits containing R, L, C, M, linear dependent sources and independent sources.

A linear time-invariant circuit containing R, L, C, M, linear dependent sources and a single input source is described by a linear ordinary differential equation with constant coefficients. The differential equation for such a circuit can be expressed in a standard format as below.

d^n y/dt^n + a_(n–1) d^(n–1)y/dt^(n–1) + … + a_1 dy/dt + a_0 y = b_m d^m x/dt^m + … + b_1 dx/dt + b_0 x

The variable y is any circuit voltage or current variable chosen as the describing variable for the circuit and x is the input source function. Standard mesh analysis or nodal analysis technique along with variable elimination will help us to arrive at this equation. However, the variable elimination involved can be considerably tricky in the case of large circuits containing many energy storage elements. This is a serious shortcoming of time-domain analysis by differential equation model.

The order of a circuit is equal to the order of the differential equation that describes it. Order of the circuit will be equal to the total number of independent inductors and capacitors – (number of all-inductor nodes + number of all-capacitor loops in the circuit).

The order of a circuit depends also on the kind and location of input. The same circuit will have different order if voltage source input is replaced by current source input.

The coefficients a_(n–1)…a_0 and b_m…b_0 are decided by the circuit parameters. They are real-valued. a_(n–1)…a_0 are positive real numbers if the circuit is passive, i.e., if it contains only R, L, C and M. They can be zero or negative real numbers if the circuit contains dependent sources. b_m…b_0 can be positive, negative or zero in all circuits.

The format of the left side of the differential equation that describes a circuit is, in general, independent of the particular circuit variable chosen as the describing variable. That is, a_(n–1)…a_0 will remain the same even if some other circuit variable is used as the variable y. However, b_m…b_0 will depend on the variable chosen.

This nth-order differential equation requires n initial values for solving it if the input function is known only for a range of values of t rather than over the entire t-axis. The required initial values are

y(0+), dy/dt(0+), d^2y/dt^2(0+), …, d^(n–1)y/dt^(n–1)(0+).

The differential equation can be solved using x(t) for t ≥ 0+ and these initial values.

The initial values available in a circuit are the initial current values for all inductors and initial voltage values for all capacitors. It requires considerable effort to translate these values into the required initial values in the case of a circuit containing many energy storage elements. This is another serious shortcoming of the differential equation approach.

Laplace transformation converts a linear differential equation with constant coefficients into an algebraic equation. Thus, the task of solving a collection of simultaneous differential equations is reduced to the much simpler task of solving a set of simultaneous algebraic equations involving Laplace transforms of the input signals and the Laplace transform of the desired output signal. This set of equations may be solved for the Laplace transform of the output, and the time-function may be obtained by inverting the transformation. Moreover, we will see later that the initial conditions specified for inductor currents and capacitor voltages can be used directly in the Laplace transform method of solving a circuit. This makes it sound as if the technique of Laplace transforms is just a mathematical artifice for solving linear differential equations. However, just as the logarithm is not merely a mathematical artifice for converting multiplication into addition, the Laplace transform is not merely a mathematical artifice to make the solution of differential equations easier.

There is a very compelling reason why we take Laplace transform of a function. The reason is that (i) complex exponential signals are eigen functions of linear time-invariant circuits, (ii) linear time-invariant circuits obey superposition principle and (iii) Laplace transform expresses a given arbitrary input function as a sum of complex exponential signals.

Therefore, we commence our study of Laplace transform method of solving a circuit by examining the circuit response to a complex exponential input.

13.1 CIRCUIT RESPONSE TO COMPLEX EXPONENTIAL INPUT

Let the nth-order differential equation describing an nth-order linear time-invariant circuit be

where y is some circuit variable identified as the output variable and x is some independent voltage/current source function. Let x(t) = 1·e^(st) be a complex exponential function of unit amplitude and complex frequency s = σ + jω. Let y = A e^(st) be the trial solution, where A is a complex number to be determined. Substituting the trial solution in Eqn. 13.1-1, we get

Thus, when the input to a linear time-invariant circuit is a complex exponential function e^(st), the output is the same complex exponential function multiplied by a complex number. Therefore, the complex frequency of the output remains the same as that of the input. The output will have a different phase compared to that of the input since A is a complex number in general and has an angle. The value of this complex scaling factor depends on the coefficients of the circuit differential equation (i.e., on the circuit parameters) and the complex frequency s of the input. In consonance with the symbol H(jω) used for a similar complex number that relates the output to an input of e^(jωt), we use the symbol H(s) to represent this number A from this point onwards. Therefore,

 

When x(t) = e^(st) in a linear time-invariant circuit, y(t) = H(s) e^(st), where

H(s) = (b_m s^m + b_(m–1) s^(m–1) + … + b_1 s + b_0) / (s^n + a_(n–1) s^(n–1) + … + a_1 s + a_0).

However, which component of response is this? Since x(t) = 1·e^(st), the complex exponential function was taken to be applied to the circuit from t = –∞ onwards. Therefore, there is only one component in the response and that is the forced response. Therefore, the response given above is the forced response as well as the total response. But if x(t) = 1·e^(st) u(t), then the above expression yields the forced response component only. The natural response terms in the zero-state response and the natural response terms in the zero-input response have to be found out from initial conditions. However, those terms too will be complex exponential functions since the natural response terms of a linear time-invariant circuit are complex exponential functions.

The complex function H(s) of a complex variable s can also be written in polar form as |H(s)| ∠θ and in exponential form as |H(s)| e^(jθ), where θ is its angle. Therefore, the output y(t) can be expressed as y(t) = |H(s)| e^(st) e^(jθ) = |H(s)| e^(σt) e^(j(ωt + θ)). H(s) may be viewed as a generalised frequency response function. Its magnitude gives the ratio between the amplitudes of the output complex exponential function and the input complex exponential function. Its angle gives the phase angle by which the output complex exponential function leads the input complex exponential function.

The signal e^(st) with a complex value for s goes through a linear time-invariant circuit and comes out as a scaled replica of itself. The scaling factor is H(s). An input function that goes through a system and comes out as a scaled copy of itself is called an eigen function of the system. The scaling factor is called an eigen value of the system.

 

Complex exponential signals of e^(st) format with a complex-valued s are eigen functions of linear time-invariant circuits.
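This eigenfunction property can be checked numerically. The short sketch below is illustrative and not from the text; it assumes a first-order RC low-pass circuit described by dy/dt + y/RC = x/RC, so that H(s) = (1/RC)/(s + 1/RC), and integrates the differential equation with a complex exponential input.

import numpy as np

# Assumed first-order circuit: dy/dt + y/(R*C) = x/(R*C), so H(s) = (1/RC)/(s + 1/RC)
R, C = 1.0, 1.0
def H(s):
    return (1.0 / (R * C)) / (s + 1.0 / (R * C))

s = -0.5 + 2.0j                       # complex frequency of the input e^(st)
t = np.linspace(0.0, 10.0, 200001)
dt = t[1] - t[0]

x = np.exp(s * t)                     # input e^(st)
y = np.empty_like(x)
y[0] = H(s) * x[0]                    # start on the forced response
for k in range(len(t) - 1):           # forward-Euler integration of the ODE
    y[k + 1] = y[k] + dt * (x[k] - y[k]) / (R * C)

print(y[-1] / x[-1])                  # ratio of output to input ...
print(H(s))                           # ... stays very close to H(s)

The output is the same complex exponential as the input, scaled by the complex number H(s), which is exactly the eigenfunction property stated above.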

13.2 EXPANSION OF A SIGNAL IN TERMS OF COMPLEX EXPONENTIAL FUNCTIONS

The set of e^(st) format signals for all possible values of s can be represented as a collection of points in a two-dimensional space. The horizontal axis of this plane represents the real part of s and the vertical axis represents the imaginary part of s. Then a point s = σ + jω in this plane will stand for a signal e^(st) = e^((σ + jω)t) = e^(σt) e^(jωt). Such a point, which acts as a stand-in for a complex exponential signal, is called a signal point. The complex number representing that point, i.e., s, is called the complex frequency of the signal e^(st). The real part of s has nepers/s as its unit and the imaginary part has radians/s as its unit. The collection of all such signal points – i.e., the plane itself – is called the signal plane or signal space. The same space is also called the s-domain in circuit studies. The shape of the signal for various signal point locations in the s-plane is shown in Fig. 13.2-1.

Fig. 13.2-1 Signal point in s-domain versus signal shape

Let v(t) = f(t) u(t) (i.e., the right side of some function that is possibly two-sided) and |v(t)| < Me^(αt) for some M and α. Then V(s) = ∫_(0–)^∞ v(t) e^(–st) dt is its Laplace transform, where s = σ + jω is the general complex frequency with σ > α. The Laplace transform exists and the inverse integral converges to v(t) only for those values of s that have Re(s) > α. The region formed by all those values of s in the s-plane for which the Laplace transform of a time-function is defined and is convergent is called the region of convergence (ROC) of the Laplace transform. Obviously, the ROC of the Laplace transform of a right-sided function is the region to the right of the Re(s) = α line. This is a vertical straight line parallel to the jω-axis crossing the σ-axis at α.

The time-function can be obtained from its Laplace transform by carrying out the inversion integral given below.

v(t) = (1/2πj) ∫_(σ–j∞)^(σ+j∞) V(s) e^(st) ds, with the line Re(s) = σ lying within the ROC.

The Laplace transform defined this way returns the right side of the underlying function f(t) on inversion. The left side returned will be zero. In this sense, this Laplace transform may be termed a unilateral Laplace transform. We deal only with the unilateral Laplace transform in this chapter.

Note that the evaluation of the inversion integral has to be performed on a line parallel to the jω-axis in the s-plane, with the line crossing the σ-axis within the ROC of the Laplace transform.

 

Let v(t) be a right-sided function that is bounded by Me^(αt) with some finite value of M and α. Then the Laplace transform pair is defined as

V(s) = ∫_(0–)^∞ v(t) e^(–st) dt  (13.2-1) and v(t) = (1/2πj) ∫_(σ–j∞)^(σ+j∞) V(s) e^(st) ds  (13.2-2),

where s = σ + jω is the complex frequency variable standing for the complex exponential function e^(st) with σ > α. The ROC of V(s) is the entire plane to the right of the Re(s) = α line.
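As a small numerical illustration (an assumed example, not part of the text), the defining integral can be evaluated for v(t) = e^(–2t) u(t); the result matches 1/(s + 2) for values of s inside the ROC Re(s) > –2.

import numpy as np

def laplace_numeric(v, s, T=40.0, n=400001):
    # truncated trapezoidal evaluation of the defining integral from 0 to T
    t = np.linspace(0.0, T, n)
    y = v(t) * np.exp(-s * t)
    return np.sum(y[:-1] + y[1:]) * 0.5 * (t[1] - t[0])

v = lambda t: np.exp(-2.0 * t)                  # v(t) = e^(-2t) u(t)
for s in (1.0 + 0.0j, -1.0 + 3.0j):             # both inside the ROC Re(s) > -2
    print(s, laplace_numeric(v, s), 1.0 / (s + 2.0))
# For Re(s) <= -2 the integrand does not decay and the integral fails to converge.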

13.2.1 Interpretation of Laplace Transform

The Laplace transform V(s) is a ‘complex amplitude density function’. Equation 13.2-2 makes it clear that the Laplace transform expresses the given time-function as a sum of infinitely many complex exponential functions of infinitesimal complex amplitudes. Thus, the Laplace transform is an expansion of v(t) in terms of complex exponential functions. Any vertical line within the ROC may be used for evaluating the inversion integral. We will consider an example to clarify this matter.

Example: 13.2-1

Find the Laplace transform of v(t) = u(t).

Solution

V(s) = ∫_(0–)^∞ u(t) e^(–st) dt = ∫_0^∞ e^(–st) dt = [–e^(–st)/s]_0^∞ = 1/s for Re(s) > 0.

Therefore, V(s) = 1/s with ROC Re(s) > 0.

Thus, the inversion integral can be evaluated on any vertical straight line in the right half of the s-plane. But does that not mean that a steady function like u(t) is being synthesised from oscillations that grow with time? It precisely means that. The synthesis Eqn. 13.2-2 reveals that infinitely many growing complex exponential functions of infinitesimal amplitudes, which start at –∞ and go up to +∞ in time, participate in making the transient time-function u(t). The contribution from a band of complex frequencies around a complex frequency value s is approximately V(s) × Δs × e^(st), where Δs is the width of the complex frequency band. A similar contribution comes from the band located around s*. These two contributions together will form a growing sinusoidal function as shown below.

V(s) Δs e^(st) + V(s*) Δs e^(s*t) = 2|V(s)| Δs e^(σt) cos(ωt + θ), where θ is the angle of V(s).

Thus, similarly located bands in the two half-sections of the vertical line on which the inversion integral is being evaluated result in a real-valued contribution as shown above. Now the inversion integral for 1/s can be written as

u(t) = (1/π) ∫_0^∞ [e^(σt)/√(σ² + ω²)] cos(ωt – tan⁻¹(ω/σ)) dω  (13.2-3)

Thus, infinitely many exponentially growing sinusoids of frequencies ranging from zero to infinity, each with infinitesimal amplitude, interfere with each other constructively and destructively from t = –∞ to t = + ∞ to synthesise the unit step waveform. Moreover, the exponentially growing sinusoids that participate in this waveform construction process are not unique. The value of σ can be any number > 0. Therefore, each vertical line located in the right-half of s-plane yields a distinct set of infinitely many exponentially growing sinusoids which can construct the unit step waveform.

That infinitely many exponentially growing sinusoids interfere with each other to produce a clean zero for all t < 0 and a clean 1 for all t > 0 is indeed counter-intuitive and quite surprising when heard first. The inversion integral in Eqn. 13.2-3 was evaluated using a short computer program for various values of σ and over finite-length sections on the vertical line. In effect, the program calculated the partial integral of the form

(1/π) ∫_0^(ω0) [e^(σt)/√(σ² + ω²)] cos(ωt – tan⁻¹(ω/σ)) dω

for various values of σ and ω0. Figure 13.2-2 shows the resulting waveforms for σ = 0.1 and ω0 = 10, 20 and 50.
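The sketch below is the kind of short program referred to here, assuming the partial-integral form written above; it evaluates the truncated inversion integral for V(s) = 1/s at a few time instants for σ = 0.1 and ω0 = 50.

import numpy as np

def partial_inverse_step(t, sigma, w0, n=20001):
    # (1/pi) * integral from 0 to w0 of e^(sigma*t)/sqrt(sigma^2 + w^2)
    # * cos(w*t - arctan(w/sigma)) dw, evaluated by a trapezoidal sum
    w = np.linspace(0.0, w0, n)
    f = (np.exp(sigma * t) / np.hypot(sigma, w)) * np.cos(w * t - np.arctan2(w, sigma))
    return np.sum(f[:-1] + f[1:]) * 0.5 * (w[1] - w[0]) / np.pi

for t in (-1.0, -0.5, 0.5, 1.0, 2.0, 4.0):
    print(t, round(partial_inverse_step(t, sigma=0.1, w0=50.0), 3))
# values close to 0 for t < 0 and close to 1 for t > 0, with Gibbs ripples near t = 0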

Fig. 13.2-2 Partial inversion integral for unit step function for σ = 0.1 and (a) ω0 = 10 (b) ω0 = 20 (c) ω0 = 50

Even a short range of 10 rad/s shows the tendency of the integral to approach the step waveform. With ω0 = 50 rad/s, the integral has more or less yielded the step waveform – at least in the range –1 s to 4 s. We also observe the familiar Gibbs oscillations at the discontinuity. Figure 13.2-3 shows the results of partial evaluation of the inversion integral for σ = 1 and ω0 = 10, 20 and 50.

This set of simulation results shows that we have to include more and more components in the partial integral to converge to the unit step waveshape in a given time-interval as we let the components grow at a faster rate, i.e., for higher values of σ. And, keeping σ at a fixed value, we would need to include more and more frequency components when we increase the time-range over which we want convergence. However, we have infinitely many components at our disposal and it will be possible to include enough of them to recover the u(t) shape up to any finite t, however large it may be.

Therefore, Laplace transform expands a transient right-sided time-function in terms of infinitely many complex exponential functions of infinitesimal amplitudes. The ROC of such a Laplace transform will include portions of right-half of s-plane and hence the time-domain waveform gets constructed by growing complex exponential functions though that appears counter-intuitive.

Fig. 13.2-3 Partial inversion integral for step function for σ = 1 and (a) ω0 = 10 (b) ω0 = 20 (c) ω0 = 50

13.3 LAPLACE TRANSFORMS OF SOME COMMON RIGHT-SIDED FUNCTIONS

The integral of a sum of two functions is the sum of the integrals of each function. Thus, Laplace transformation is a linear operation. If v1(t) and v2(t) are two right-sided functions and a1 and a2 are two real numbers, then a1v1(t) + a2v2(t) ⇔ a1V1(s) + a2V2(s) is a Laplace transform pair. This is called the property of linearity of Laplace transforms. Now we work out the Laplace transforms of many commonly used right-sided functions using the defining integral and the property of linearity.

Let v(t) = e^(s_o t) u(t) be a right-sided complex exponential function with a complex frequency of s_o. Then,

V(s) = ∫_(0–)^∞ e^(s_o t) e^(–st) dt = 1/(s – s_o) for Re(s) > Re(s_o).

Therefore, e^(s_o t) u(t) ⇔ 1/(s – s_o) is a Laplace transform pair with ROC Re(s) > Re(s_o).

The special case of v(t) = u(t) is covered by this transform pair with s_o = 0.

Therefore, u(t) ⇔ 1/s is a Laplace transform pair with ROC Re(s) > 0.

The special case of v(t) = cos(ω_o t) u(t) is covered by expressing v(t) as (e^(jω_o t) + e^(–jω_o t))/2 by employing Euler’s formula and then applying the property of linearity of Laplace transforms.

Therefore, cos(ω_o t) u(t) ⇔ s/(s² + ω_o²) is a Laplace transform pair with ROC Re(s) > 0.

Similarly, sin(ω_o t) u(t) ⇔ ω_o/(s² + ω_o²) is a Laplace transform pair with ROC Re(s) > 0.

Consider v(t) = e^(αt) cos(βt) u(t). This can be expressed as [e^((α + jβ)t) + e^((α – jβ)t)]/2 by Euler’s formula. Then,

V(s) = (1/2)[1/(s – α – jβ) + 1/(s – α + jβ)] = (s – α)/((s – α)² + β²).

Therefore, e^(αt) cos(βt) u(t) ⇔ (s – α)/((s – α)² + β²) is a Laplace transform pair with ROC Re(s) > α.

Similarly, e^(αt) sin(βt) u(t) ⇔ β/((s – α)² + β²) is a Laplace transform pair with ROC Re(s) > α.

Now consider v(t) = [e^((s_o + Δs)t) – e^(s_o t)]/Δs · u(t), where Δs is a small increment in complex frequency.

The Laplace transform of this function can be found from the defining integral as

V(s) = (1/Δs)[1/(s – s_o – Δs) – 1/(s – s_o)] = 1/[(s – s_o – Δs)(s – s_o)].

Now we send v(t) to a limit as Δs → 0. The time-function tends to t e^(s_o t) u(t) and its transform tends to 1/(s – s_o)².

Therefore, the Laplace transform of t e^(s_o t) u(t) is 1/(s – s_o)².

Therefore, t e^(s_o t) u(t) ⇔ 1/(s – s_o)² is a Laplace transform pair with ROC Re(s) > Re(s_o).

The special case of v(t) = t u(t) is covered by this transform pair with s_o = 0.

Therefore, t u(t) ⇔ 1/s² is a Laplace transform pair with ROC Re(s) > 0.

And finally, we consider v(t) = δ(t). Its Laplace transform is ∫_(0–)^∞ δ(t) e^(–st) dt = 1. Thus, δ(t) ⇔ 1 is a Laplace transform pair with ROC of the entire s-plane. It requires all complex exponential functions with equal intensity to synthesise an impulse function in time-domain.

These commonly used Laplace transform pairs are listed in Table 13.3-1. Some of them have been derived in this section. Others will be taken up later.

 

Table 13.3-1 Basic Laplace Transform Pairs
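The pairs derived in this section can also be cross-checked symbolically. The short sketch below is an illustrative check (not part of the text) using sympy's laplace_transform.

import sympy as sp

t = sp.symbols('t', positive=True)
s, s0, w0 = sp.symbols('s s0 omega0', positive=True)

pairs = [
    (sp.Heaviside(t),        1 / s),
    (sp.exp(s0 * t),         1 / (s - s0)),
    (sp.cos(w0 * t),         s / (s**2 + w0**2)),
    (sp.sin(w0 * t),         w0 / (s**2 + w0**2)),
    (t * sp.exp(s0 * t),     1 / (s - s0)**2),
]
for f, F in pairs:
    computed = sp.laplace_transform(f, t, s, noconds=True)
    print(f, sp.simplify(computed - F) == 0)   # True when the pair checks out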

13.4 THE s-DOMAIN SYSTEM FUNCTION H(s)

We saw in Section 13.1 that when an input e^(st) is applied to a linear time-invariant circuit described by an nth-order differential equation of the standard format introduced earlier, the response is given by H(s) e^(st), where

H(s) = (b_m s^m + … + b_1 s + b_0)/(s^n + a_(n–1) s^(n–1) + … + a_1 s + a_0).

H(s) in this context is the ratio of complex amplitude of forced response component in output to the complex amplitude of input complex exponential function with a complex frequency of s. There is only forced response in this context and forced response itself is the total response.

In Section 13.2, we observed that a right-sided function x(t) can be expressed as a sum of infinitely many complex exponential functions with frequencies lying between σ – j∞ and σ + j∞, with the line Re(s) = σ falling within the ROC of the Laplace transform of x(t). We combine these two facts along with the superposition principle to arrive at the zero-state response of a linear time-invariant circuit to a right-sided input function.

Consider a particular value of complex frequency s and a small band of complex frequencies Δs centred on it. This band contributes complex exponential functions of frequencies between (s – 0.5Δs) and (s + 0.5Δs). For sufficiently small Δs, we may take all these complex exponential functions to be evolving approximately at the centre frequency of the band, i.e., at s itself. In that case, all the infinitesimal contributions coming from this band may be consolidated into a signal ≈ X(s) Δs e^(st).

This single complex frequency component with a complex amplitude of X(s) Δs will produce a total response component of H(s) X(s) Δs e^(st) in the output. We get the zero-state response of the circuit by adding all such contributions over the line Re(s) = σ falling within the ROC of X(s) and sending the sum to a limit by making Δs → 0. The result will be the following integral.

y(t) = (1/2πj) ∫_(σ–j∞)^(σ+j∞) H(s) X(s) e^(st) ds  (13.4-1)

Compare Eqn. 13.4-1 with the synthesis equation of Laplace transform given by Eqn. 13.2-2. It is evident that Eqn. 13.4-1 is the synthesis equation of the Laplace transform H(s)X(s). But then, a synthesis equation which returns y(t) must be synthesising it from the Laplace transform Y(s) of the time-function y(t). Therefore, Y(s) = H(s)X(s). This important result requires restatement.

 

The Laplace transform of zero-state response = Laplace transform of input source function × Ratio of complex amplitude of forced response to the complex amplitude of input complex exponential function at a complex frequency of s.

Now comes a definition. The ratio of Laplace transform of zero-state response to Laplace transform of input source function is defined as the s-domain System Function. And these two are seen to be the same.

∴ The s-domain System Function, H(s) = Y(s)/X(s) = (b_m s^m + … + b_1 s + b_0)/(s^n + a_(n–1) s^(n–1) + … + a_1 s + a_0)  (13.4-2)

Note carefully that System Function is independent of initial conditions in the circuit since it is the zero-state response to a right-sided input that is employed in its definition. This function is also called a Transfer Function when both x and y are similar quantities, i.e., when x and y are voltages or x and y are currents and is denoted by T(s). It is called an Input Impedance Function and is denoted by Zi(s) if y is the voltage across a terminal pair and x is the current entering the positive terminal. It is called an Input Admittance Function and is denoted by Yi(s) if y is the current into a terminal pair and x is the voltage across the terminal pair. These two, i.e., Zi(s) and Yi(s), together are at times referred to as immittance functions.

If the quantities x and y are voltage–current or current–voltage pair and they refer to different terminal pairs in the circuit, we call the s-domain System Function a Transfer Impedance Function or Transfer Admittance Function, as the case may be. They are represented by Zm(s) and Ym(s), respectively.

We have an expression for H(s) as a ratio of rational polynomials in s in Eqn. 13.4-2. Rational polynomials are polynomials containing only integer powers of the independent variable. However, there is another more interesting interpretation possible for H(s).

Let us try to find the impulse response of the circuit by this transform technique. We remember that ‘impulse response’ means ‘zero-state response to unit impulse input’ by definition. Hence, we can use the System Function to arrive at the Laplace transform of impulse response as H(s)X(s). But x(t) = δ(t) and therefore X(s) = 1. Hence, for a linear time-invariant circuit, the following statement holds.

 

Laplace transform of impulse response = s-domain System Function, and impulse response = inverse Laplace transform of the s-domain System Function. This result was anticipated in naming the System Function as H(s).

Once the System Function and Laplace transform of input source function are known, one can obtain the Laplace transform of zero-state response by inverting the product of input transform and System Function. We will take up the task of inverting Laplace transforms in later sections.
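For instance, the sketch below (with an assumed System Function, not one from the text) obtains the impulse response and the zero-state step response by inverting H(s) and H(s)·(1/s) with sympy.

import sympy as sp

s, t = sp.symbols('s t')
H = (s + 2) / (s**2 + 4*s + 3)                       # assumed example: poles at -1 and -3

h = sp.inverse_laplace_transform(H, s, t)            # impulse response h(t)
y_step = sp.inverse_laplace_transform(H / s, s, t)   # zero-state step response
print(sp.simplify(h))       # 0.5 e^-t + 0.5 e^-3t   (times the unit step)
print(sp.simplify(y_step))  # 2/3 - 0.5 e^-t - (1/6) e^-3t (times the unit step)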

13.5 POLES AND ZEROS OF SYSTEM FUNCTION AND EXCITATION FUNCTION

H(s) is the System Function, X(s) is the excitation function and Y(s) is the output function referred to in this section.

We observed that H(s) is a ratio of rational polynomials in the complex frequency variable s. Further, we observe from Table 13.3-1 that the excitation functions corresponding to commonly employed input source functions are also in the form of a ratio of rational polynomials in s. Thus, the output function also turns out to be a ratio of rational polynomials in s. Therefore, we can write

Y(s) = H(s) X(s) = [Q(s)/P(s)] × [Q_e(s)/P_e(s)],

where Q(s) is an mth-order polynomial in s and P(s) is an nth-order polynomial in s. They are the numerator polynomial and denominator polynomial of the System Function, respectively. Similarly, Q_e(s) and P_e(s) are the numerator and denominator polynomials in s of the excitation function.

Let the n roots of P(s) be represented as p1, p2,…, pn and the m roots of Q(s) be represented as z1, z2,..., zm. These roots can be complex in general. p1, p2,…, pn are the n values of complex frequency s at which the System Function goes to infinity. They are defined as poles of System Function. z1, z2,…, zm are the m values of complex frequency s at which the System Function goes to zero value. They are defined as the zeros of System Function.

Similarly the values of s at which X(s) goes to infinity are called the excitation poles and the values of s at which X(s) goes to zero are called the excitation zeros. They are the same as roots of Pe(s) and Qe(s), respectively.

Obviously, the System Function poles and excitation function poles together will form the poles of output function. Similarly, the System Function zeros and excitation function zeros together will form the output function zeros. These statements assume that no pole-zero cancellation takes place.

A diagram that shows the complex signal plane, i.e., the s-plane, with all poles of a Laplace transform marked by ‘×’ symbol and all zeros marked by ‘o’ symbol is called the pole-zero plot of that Laplace transform. Some poles and zeros may have multiplicity greater than 1. In that case, the multiplicity is marked near the corresponding pole or zero in the format ‘r = k’ where r indicates the multiplicity and k is the actual value of multiplicity. The default value of r = 1 is not marked.

Example: 13.5-1

Obtain the pole-zero plot of the transfer function Vo(s)/Vs(s), the excitation function and the output function in the circuit shown in Fig. 13.5-1 with vs(t) = 10 e^(–1.5t) cos(2t) u(t) V.

Fig. 13.5-1 Circuit for Example: 13.5-1

Solution

The differential equation describing the second mesh current in this circuit is derived below. The two mesh equations are

Adding the two mesh equations results in

Differentiating the second mesh equation twice with respect to time and using the above equation in the result gives us

vo(t) is numerically equal to i2(t) and hence the differential equation governing vo(t) is

The characteristic equation is s^3 + s^2 + 2s + 1 = 0 and its roots are s1 = –0.2151 + j1.307, s2 = –0.2151 – j1.307 and s3 = –0.5698. The roots of a polynomial of degree higher than 2 will normally require the help of root-finding software or numerical methods. There is a pair of complex conjugate roots.
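For reference, a one-line numerical check of these roots (illustrative):

import numpy as np
print(np.roots([1, 1, 2, 1]))     # coefficients of s^3 + s^2 + 2s + 1
# approximately -0.2151 + 1.307j, -0.2151 - 1.307j and -0.5698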

The System Function

The Laplace transform of e^(–1.5t) cos(2t) u(t) is (s + 1.5)/((s + 1.5)² + 4), with ROC Re(s) > –1.5.

Therefore,

We observe that the denominator polynomial is the same as the left side of the characteristic equation of the governing differential equation. This will always be so. Hence poles of System Function (they are also called ‘system poles’) will be the same as the natural frequencies of the circuit for any linear time-invariant circuit.

Therefore the system poles are p1 = –0.2151 + j1.307, p2 = –0.2151– j1.307 and p3 = –0.5698.

The numerator polynomial of System Function in this case is trivial and there are no ‘system zeros’.

The excitation poles are at p_e1 = –1.5 + j2 and p_e2 = –1.5 – j2 and the excitation zero is at z_e1 = –1.5.

The pole-zero plots are shown in Fig. 13.5-2.

Fig. 13.5-2 Pole-zero plots in Example: 13.5-1 (a) for System Function (b) for excitation function (c) for output function

13.6 METHOD OF PARTIAL FRACTIONS FOR INVERTING LAPLACE TRANSFORMS

Any Laplace transform can be inverted by evaluating the synthesis integral in Eqn. 13.2-2 on a suitably selected vertical line extending from –∞ to ∞ in the s-plane within the ROC of the transform being inverted. However, simpler methods based on Residue Theorem in Complex Analysis exist for special Laplace transforms. We do not take up the detailed analysis based on Residue Theorem here. However, the reader has to bear in mind the fact that the ‘method of partial fractions’ for inverting certain special types of Laplace transforms is based on Residue Theorem in Complex Analysis.

Linear time-invariant circuits are described by linear constant-coefficient ordinary differential equations. All the coefficients are real. Such a circuit will have only real-valued or complex-conjugate natural frequencies. Thus, the impulse response of such a circuit will contain only complex exponential functions. Each complex exponential function will have a Laplace transform of the form 1/(s – s_o), where s_o is the complex frequency of that particular term. Laplace transformation is a linear operation. Hence, the Laplace transform of the sum of the impulse response terms will be the sum of the Laplace transforms of the individual terms. Therefore, the Laplace transform of the impulse response of a linear time-invariant circuit will be the sum of a finite number of terms of the type 1/(s – s_o). Such a sum will finally become a ratio of rational polynomials in s. The order of the denominator polynomial will be the same as the number of first-order terms of this type that entered the sum.

Many of the normally employed excitation functions in linear time-invariant circuits are also of complex exponential nature. Input functions that can be expressed as linear combinations of complex exponential functions will have Laplace transforms that are ratios of rational polynomials in s as explained in the last paragraph.

Product of Laplace transforms that are ratios of rational polynomials in s will result in a new Laplace transform which will also be a ratio of rational polynomials in s. Hence, the Laplace transform of output of a linear time-invariant circuit excited by an input source function, that can be expressed as a linear combination of complex exponential functions, will be a ratio of rational polynomials in s.

A Laplace transform that is in the form of a ratio of rational polynomials in s can be inverted by the method of partial fractions.

Let Y(s) = Q(s)/P(s) be such a Laplace transform. Let the degree of the denominator polynomial be n and that of the numerator be m. The degree of the numerator polynomial will usually be less than n. If the Laplace transform of the output of a linear time-invariant circuit shows m ≥ n, it usually implies that the circuit model employed to model the physical processes has been idealised too much. We assume that m < n in this section. If m is equal to or more than n, then Y(s) can be written as the sum of a polynomial of degree (m – n) in s and a remainder term that is a ratio of polynomials with numerator degree less than n, and we employ the method of partial fractions on the remainder term only.

Let p1, p2,…, pn be the n roots of denominator polynomial. They may be real or complex. If there is a complex root, the conjugate of that root will also be a root of the polynomial. We identify two cases. In the first case all the n roots (i.e., poles of Y(s)) are distinct.

Case-1 All the n roots of P(s) are distinct

Then we can express Y(s) as a sum of first-order factors as below.

Y(s) = A1/(s – p1) + A2/(s – p2) + … + An/(s – pn)  (13.6-1)

Each term in this expansion is a partial fraction. The value of Ai appearing in the numerator of the ith partial fraction is the ‘residue at the pole pi’. The problem of partial fractions involves the determination of these residues. Multiply both sides of Eqn. 13.6-1 by (s – pi), where pi is the pole at which the residue Ai is to be evaluated. Remember that Y(s) will contain (s – pi) as a factor in the denominator. Hence, the multiplication by (s – pi) results in cancellation of this factor in Y(s).

Now we evaluate both sides of Eqn. 13.6-2 at s = pi to get Ai = (s – pi)Y(s)|s=pi . This calculation is repeated for i = 1 to n to complete all the partial fractions.
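The residue calculation can be cross-checked numerically with scipy.signal.residue; the transform used below is an assumed example, not one from the text.

from scipy.signal import residue

b = [3, 5]                 # numerator of the assumed Y(s) = (3s + 5)/(s^2 + 3s + 2)
a = [1, 3, 2]              # denominator; poles at -1 and -2
r, p, k = residue(b, a)
print(r, p, k)             # residue 2 at the pole -1 and residue 1 at the pole -2
                           # (ordering may vary); k is empty since m < n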

Each partial fraction of the type Ai/(s – pi) can be recognised as the Laplace transform of Ai e^(pi t) u(t) by consulting the relevant entry in Table 13.3-1. But, though we know that e^(pi t) u(t) has a Laplace transform of 1/(s – pi), how do we know that it is the only time-function that will have 1/(s – pi) as its Laplace transform? It is vital to be sure about that if we want to assert that the time-function is e^(pi t) u(t) whenever we see a Laplace transform 1/(s – pi). The ‘Theorem of Uniqueness of Laplace transforms’ states that a Laplace transform pair is unique. That is, if we have, by some method or other, found out that F(s) is the Laplace transform of f(t), then this theorem assures us that only f(t) will have this F(s) as its Laplace transform and no other function will have F(s) as its Laplace transform. Therefore, whenever we see a Laplace transform 1/(s – pi), we can write e^(pi t) u(t) as its inverse.

Therefore, y(t) = (A1 e^(p1 t) + A2 e^(p2 t) + … + An e^(pn t)) u(t).

Case-2 One root of multiplicity r and n-r distinct roots for P(s)

In this case the partial fraction expansion is as shown below.

Y(s) = A1/(s – p)^r + A2/(s – p)^(r–1) + … + Ar/(s – p) + A_(r+1)/(s – p_(r+1)) + … + An/(s – pn)  (13.6-3)

The first root p is assumed to repeat r times. It may be real or complex. The remaining (n – r) roots are designated as p_(r+1), p_(r+2), …, pn. That Eqn. 13.6-3 is the partial fraction expansion in this case can be shown by an application of the Residue Theorem. We take this as a fact and proceed.

The procedure for evaluating the (n – r) residues at the (n – r) non-repeating poles of Y(s) is the same as in Case-1. Therefore,

 

Ai = (s – pi) Y (s)| s = pi   for i = r +1 to n

 

We multiply both sides of Eqn. 13.6-3 by (s – p)^r for evaluating the r residues at the repeating pole. The result is

(s – p)^r Y(s) = A1 + A2(s – p) + … + Ar(s – p)^(r–1) + (s – p)^r [A_(r+1)/(s – p_(r+1)) + … + An/(s – pn)]  (13.6-4)

(s – p)^r is a factor of the denominator of Y(s). Therefore, multiplication of Y(s) by (s – p)^r will cancel this factor in the denominator. Now, evaluating both sides with s = p, we get

 

A1 = (s – p)^r Y(s)| s = p

 

Now, we differentiate Eqn. 13.6-4 on both sides with respect to s and substitute s = p to get

A2 = d/ds [(s – p)^r Y(s)] | s = p

Similarly, successive differentiation with respect to s and substitution of s = p leads to

A_(k+1) = (1/k!) d^k/ds^k [(s – p)^r Y(s)] | s = p  for k = 1, 2, …, (r – 1).

The reader may verify that the partial fraction terms corresponding to the non-repeating roots will contribute only zero values in all stages of this successive differentiation.

Once all residues are calculated, the partial fraction expansion is inverted to get the following time-function:

y(t) = [A1 t^(r–1)/(r–1)! + A2 t^(r–2)/(r–2)! + … + Ar] e^(pt) u(t) + [A_(r+1) e^(p_(r+1) t) + … + An e^(pn t)] u(t)

We have used the Laplace transform pair t^k e^(pt) u(t) ⇔ k!/(s – p)^(k+1) in arriving at this result. This Laplace transform pair will be proved in a later section.

Example: 13.6-1

Determine (i) the impulse response, (ii) the step response and (iii) the zero-state response when vs(t) = 2e^(–2t) u(t), for vo(t) in the circuit in Fig. 13.6-1.

Fig. 13.6-1 Circuit for Example: 13.6-1

Solution

The mesh equation of the circuit is written using KVL, where i is the current flowing in the mesh. Since i flows in the capacitor and vo(t) is the voltage across the capacitor, i can be expressed in terms of dvo(t)/dt. Therefore, the differential equation governing the output voltage is

The System Function

The roots of denominator polynomial are –2.618 and –0.382. The factors of the denominator polynomial are (s + 2.618) and (s + 0.382).

  1. The impulse response of a linear time-invariant circuit is the same as the inverse transform of its System Function.
  2. vs(t) = u(t) ⇒ Vs(s) = 1/s. Therefore, the Laplace transform of the step response is H(s) × (1/s). Expressing this in partial fractions and inverting yields the step response.
  3. vs(t) = 2e^(–2t) u(t) ⇒ Vs(s) = 2/(s + 2). The Laplace transform of the zero-state response is given by the product of the System Function and the Laplace transform of the input function.

    Expressing this in partial fractions and inverting yields the zero-state response; a numerical cross-check of all three parts is sketched below.
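The cross-check below assumes H(s) = 1/(s² + 3s + 1), consistent with the quoted denominator roots –2.618 and –0.382; the System Function expression itself is not reproduced in the text above.

import numpy as np
from scipy.signal import residue

# (i) impulse response terms come from H(s) = 1/(s^2 + 3s + 1) itself
r, p, k = residue([1], [1, 3, 1])
print(np.round(r, 4), np.round(p, 4))
# residues 0.4472 and -0.4472 at the poles -0.382 and -2.618 (ordering may vary)

# (ii) step response: Vo(s) = H(s)*(1/s) = 1/(s^3 + 3s^2 + s)
r, p, k = residue([1], [1, 3, 1, 0])
print(np.round(r, 4), np.round(p, 4))
# residues 1, -1.1708 and 0.1708 at the poles 0, -0.382 and -2.618 (ordering may vary)

# (iii) vs(t) = 2e^(-2t)u(t): Vo(s) = 2/((s + 2)(s^2 + 3s + 1)) = 2/(s^3 + 5s^2 + 7s + 2)
r, p, k = residue([2], [1, 5, 7, 2])
print(np.round(r, 4), np.round(p, 4))
# residues -2, 0.5528 and 1.4472 at the poles -2, -0.382 and -2.618 (ordering may vary)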

Example: 13.6-2

The resistor value in Fig. 13.6-1 under Example 13.6-1 is changed to 2 Ω. (i) Find the step response of vo(t). (ii) Determine the zero-state response of vo(t) if vs(t) = e^(–t) u(t) V.

Solution

The differential equation governing the output voltage with 2Ω resistor instead of 3Ω is

The System Function

The roots of denominator polynomial are –1 and –1. The factors of the denominator polynomial are (s + 1) and (s + 1). Therefore, the root at –1 has a multiplicity of 2.

(i) With vs(t) = u(t), the Laplace transform of the output is Vo(s) = H(s)/s = 1/[s(s + 1)²]. Expressing this in partial fractions as Vo(s) = A1/s + A2/(s + 1)² + A3/(s + 1), we can find the residues by applying the expressions developed earlier in this section, or we may proceed as below.

Multiplying both sides by s(s + 1)² gives 1 = A1(s + 1)² + A2 s + A3 s(s + 1). Now, comparing the coefficients of the various powers of s in the numerator, we get

 

A1 = 1; 2A1+ A2 + A3 = 0 and A1+ A3 = 0

Solving these equations, we get, A1 = 1, A2 = –1 and A3 = –1

∴ The step response vo(t) = (1 – t e^(–t) – e^(–t)) u(t) V

(ii) With vs(t) = e^(–t) u(t), the Laplace transform of the input is 1/(s + 1) and the Laplace transform of the output is 1/(s + 1)³. There is no need for partial fractions in this case; using the pair t^k e^(pt) u(t) ⇔ k!/(s – p)^(k+1), the zero-state response is vo(t) = 0.5 t² e^(–t) u(t) V.
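Repeated poles are handled in the same framework; the sketch below cross-checks the step-response transform of this example, Vo(s) = 1/[s(s + 1)²], with scipy.signal.residue (which reports residues for a repeated pole term by term).

from scipy.signal import residue

r, p, k = residue([1], [1, 2, 1, 0])      # denominator s(s + 1)^2 = s^3 + 2s^2 + s
print(r, p, k)
# partial fractions: 1/s - 1/(s + 1) - 1/(s + 1)^2,
# consistent with vo(t) = (1 - e^-t - t e^-t) u(t)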

Example: 13.6-3

Find the impulse response of the circuit in Fig. 13.5-1 under Example 13.5-1 in this chapter.

Solution

The System Function was obtained in Example 13.5-1; its denominator is the characteristic polynomial of the circuit.

The characteristic equation is s^3 + s^2 + 2s + 1 = 0 and its roots are s1 = –0.215 + j1.307, s2 = –0.215 – j1.307 and s3 = –0.57. There is a pair of complex conjugate roots.

Impulse response is obtained by inverting the System Function.

13.7 SOME THEOREMS ON LAPLACE TRANSFORMS

The property of linearity of Laplace transforms was already noted and made use of in earlier sections. We look at other interesting properties of Laplace transform in this section.

13.7.1 Time-Shifting Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), then vd(t) = v(t – td) = f(t – td) u(t – td) has a Laplace transform Vd(s) = V(s) e^(–s td).

The shifting operation implied in this theorem is illustrated in Fig. 13.7-1. Note that there is a difference between f(t – td ) u(t) and f(t – td ) u(t – td ).

Fig. 13.7-1 Illustrating the time-shift operation envisaged in the shifting theorem on Laplace transforms

This theorem follows from the defining equation for Laplace transforms.

Vd(s) = ∫_(0–)^∞ f(t – td) u(t – td) e^(–st) dt = ∫_(td)^∞ f(t – td) e^(–st) dt

Use the variable substitution τ = t – td to get Vd(s) = ∫_(0–)^∞ f(τ) e^(–s(τ + td)) dτ = e^(–s td) V(s).

Example: 13.7-1

Find the zero-state response of a series RC circuit with a time constant of 2 s excited by a rectangular pulse voltage of 10 V height and 2 s duration starting from t = 0. The voltage across the capacitor is the output variable.

Solution

The differential equation governing the voltage across the capacitor in a series RC circuit excited by a voltage source is dv/dt + (1/τ)v = (1/τ)vs, where τ = RC is the time constant, v is the voltage across the capacitor and vs is the source voltage.

In this case, τ = 2 s and the equation is dv/dt + 0.5v = 0.5vs. Therefore, the System Function is H(s) = 0.5/(s + 0.5).

The rectangular pulse voltage can be expressed as the sum of 10u(t) and –10u(t – 2), i.e., vs = 10[u(t) – u(t – 2)]. Therefore, its Laplace transform is Vs(s) = 10(1 – e^(–2s))/s.

The output transform is Vo(s) = H(s)Vs(s) = (1 – e^(–2s)) × 5/[s(s + 0.5)]. We express 5/[s(s + 0.5)] in partial fractions as A1/s + A2/(s + 0.5) and determine A1 = 10 and A2 = –10.

The inverse transform of –10e^(–2s)/s is –10u(t – 2) by the Time-shifting Theorem. Similarly, the inverse transform of –10e^(–2s)/(s + 0.5) is –10e^(–0.5(t–2)) u(t – 2). Therefore, the output voltage is given by

vo(t) = 10(1 – e^(–0.5t)) u(t) – 10(1 – e^(–0.5(t–2))) u(t – 2) V.

Figure 13.7-2 shows the two components of response in dotted curves.
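A symbolic check of this example, using the H(s) and the pulse transform obtained above, is sketched below.

import sympy as sp

s, t = sp.symbols('s t')
H  = sp.Rational(1, 2) / (s + sp.Rational(1, 2))         # H(s) = 0.5/(s + 0.5)
Vs = 10 * (1 - sp.exp(-2 * s)) / s                       # 10 V, 2 s rectangular pulse
vo = sp.inverse_laplace_transform(H * Vs, s, t)
print(sp.simplify(vo))
# 10*(1 - e^(-0.5t))*u(t) - 10*(1 - e^(-0.5(t-2)))*u(t - 2)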

13.7.2 Frequency-Shifting Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), then vd(t) = v(t) e^(s_o t) has a Laplace transform Vd(s) = V(s – s_o).


Fig. 13.7-2 Output response and its components in the circuit in Example 13.7-1

This theorem follows from the defining equation for Laplace transforms.

13.7.3 Time-Differentiation Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), then dv(t)/dt has a Laplace transform

Vd(s) = sV(s) – v(0–). Note that

Vd(s) = ∫_(0–)^∞ (dv/dt) e^(–st) dt = [v(t) e^(–st)]_(0–)^∞ + s ∫_(0–)^∞ v(t) e^(–st) dt.

The function v(t) e^(–st) will be a decaying function for any value of s in the ROC of V(s); otherwise, the Laplace transform would not converge for that value of s. Therefore, it will go to zero as t → ∞.

 

Vd(s) = sV(s) − v(0–)

Now, by using mathematical induction, we may show that

the Laplace transform of d²v/dt² is s²V(s) – s v(0–) – v′(0–),

and that, in general, the Laplace transform of d^n v/dt^n is s^n V(s) – s^(n–1) v(0–) – s^(n–2) v′(0–) – … – v^((n–1))(0–).

13.7.4 Time-Integration Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), then ∫_(0–)^t v(τ) dτ has a Laplace transform V(s)/s.

The function v(t) is stated to possess a Laplace transform. This implies that there is an exponential function Me^(αt) with some positive value of M and some real value of α such that |v(t)| < Me^(αt); otherwise, v(t) would not have a Laplace transform. Therefore, the running integral ∫_(0–)^t v(τ) dτ satisfies a similar exponential bound and is therefore bounded. Therefore, the Laplace transform of ∫_(0–)^t v(τ) dτ will exist. That is, it is possible to select a value for s such that e^(–st) ∫_(0–)^t v(τ) dτ is a decaying function. For such an s, i.e., for a value of s in the ROC of the Laplace transform of ∫_(0–)^t v(τ) dτ, the value of e^(–st) ∫_(0–)^t v(τ) dτ will go to zero as t → ∞. And the value of ∫_(0–)^t v(τ) dτ at t = 0– is zero in any case.

 

Example: 13.7-2

Find the Laplace transform of t^n u(t).

Solution

The function t u(t) is the integral of u(t). Therefore, t u(t) ⇔ 1/s². Now, the function t² u(t) is 2 times the integral of t u(t). Therefore, t² u(t) ⇔ 2/s³. Proceeding similarly up to the power n, we get t^n u(t) ⇔ n!/s^(n+1).
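A quick symbolic confirmation of this pair for a few values of n (illustrative):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
for n in range(5):
    print(n, sp.laplace_transform(t**n, t, s, noconds=True))
# n!/s^(n+1): 1/s, 1/s**2, 2/s**3, 6/s**4, 24/s**5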

13.7.5 s-Domain-Differentiation Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), then –t v(t) has a Laplace transform dV(s)/ds.

We show this by determining the Laplace transform of –t v(t) from the defining integral.

The Laplace transform of –t v(t) is ∫_(0–)^∞ [–t v(t)] e^(–st) dt. Noting that –t e^(–st) = d(e^(–st))/ds, and since the limiting operation involved in this derivative is on s while the integration is on t, we may interchange the order of the two operations to obtain d/ds ∫_(0–)^∞ v(t) e^(–st) dt = dV(s)/ds.

13.7.6 s-Domain-Integration Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s) and lim_(t→0) v(t)/t is finite, then v(t)/t has a Laplace transform ∫_s^∞ V(s′) ds′.

If the integration is carried out in the right half of the s-plane (the ROC of a right-sided function will have at least a part of the right-half s-plane in it), then the term e^(–s′t) evaluated at the upper limit s′ → ∞ in the last step above will vanish. Then,

13.7.7 Convolution Theorem

 

If x(t) and y(t) are two right-sided time-functions with Laplace transforms X(s) and Y(s) respectively and Z(s) = X(s)Y(s), then z(t) = ∫_0^t x(τ) y(t – τ) dτ.

We use the inverse integral to show this.

The integral ∫_0^t x(τ) y(t – τ) dτ is called the Convolution Integral between x(t) and y(t) and is denoted by x(t)*y(t). Linear System Theory predicts that convolving the impulse response of a linear time-invariant circuit with its input source function gives the zero-state response. Hence, the convolution theorem stated here corroborates the fact that the Laplace transform of the zero-state response is given by the product of the Laplace transform of the input source function and the Laplace transform of the impulse response. We had termed the Laplace transform of the impulse response the s-domain System Function.
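A numerical illustration of the convolution theorem with assumed signals: x(t) = u(t) and y(t) = e^(–t) u(t), for which X(s)Y(s) = 1/[s(s + 1)] and hence z(t) = 1 – e^(–t).

import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
x = np.ones_like(t)                     # u(t)
y = np.exp(-t)                          # e^(-t) u(t)
z = np.convolve(x, y)[:len(t)] * dt     # convolution integral as a scaled discrete sum
print(np.max(np.abs(z - (1.0 - np.exp(-t)))))   # small discretisation error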

13.7.8 Initial Value Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s) and lim_(t→0+) v(t) exists, then v(0+) = lim_(s→∞) sV(s).

We know that the Laplace transform of dv/dt is sV(s) – v(0–). Therefore,

sV(s) – v(0–) = ∫_(0–)^∞ (dv/dt) e^(–st) dt.

We assume that s → ∞ with its real part always positive. This is consistent with the fact that the ROC of the Laplace transform of a right-sided function will contain the right half of the s-plane or at least portions of the right-half s-plane. Evaluation of the term e^(–st) with t → 0 and s → ∞ simultaneously is to be avoided. Hence, we write the integral as below.

Therefore, sV(s) – v(0–) = [v(0+) – v(0–)] + ∫_(0+)^∞ (dv/dt) e^(–st) dt. Now we apply the limit s → ∞ with its real part always positive. Then the integral vanishes. Therefore, v(0+) = lim_(s→∞) sV(s).

13.7.9 Final Value Theorem

If v(t) = f(t) u(t) has a Laplace transform V(s), lim_(t→∞) v(t) exists and all the poles of sV(s) have negative real parts, then v(∞) = lim_(s→0) sV(s).

We know that the Laplace transform of dv/dt is sV(s) – v(0–). Therefore,

lim_(s→0) [sV(s) – v(0–)] = lim_(s→0) ∫_(0–)^∞ (dv/dt) e^(–st) dt = ∫_(0–)^∞ (dv/dt) dt = v(∞) – v(0–).

One has to be very careful in applying this theorem. This theorem works only if all the poles of sV(s) are in the open left-half plane in the s-domain. That is, all the poles of sV(s) must have negative real parts. Only then will the function v(t) reach a unique and steady final value with time. Otherwise, the value returned by the application of this theorem will not be the final value of v(t). For that matter, v(t) may not have a final value at all. Let v(t) be sin ωt. Then V(s) = ω/(s² + ω²) and sV(s) = sω/(s² + ω²). Application of the final value theorem says that v(∞) = 0. But there is no unique final value for a sinusoidal waveform. This conflict comes about because of wrong application of the theorem. The sV(s) function in this case has poles on the jω-axis and hence the final value theorem is not applicable.
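The two theorems are illustrated below on an assumed transform V(s) = (2s + 1)/[s(s + 1)], i.e., v(t) = (1 + e^(–t)) u(t), whose sV(s) has its only pole at –1.

import sympy as sp

s = sp.symbols('s')
V = (2*s + 1) / (s * (s + 1))
print(sp.limit(s * V, s, sp.oo))    # initial value v(0+) = 2
print(sp.limit(s * V, s, 0))        # final value v(inf) = 1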

13.8 SOLUTION OF DIFFERENTIAL EQUATIONS BY USING LAPLACE TRANSFORMS

One of the important applications of Laplace transform is in solving linear constant-coefficient ordinary differential equations with initial conditions. The procedure is illustrated below through an example.

 

Example: 13.8-1

Find y(t) for t ≥ 0+ for x(t) = δ(t) in the differential equation d³y/dt³ + 2.5 d²y/dt² + 2.5 dy/dt + 1.5y = x(t), with y(0–) = 1, y′(0–) = –1 and y″(0–) = 0.

Solution

Since the differential equation is an equation, both sides of it can be multiplied by e^(–st). Since the differential equation is satisfied at all instants of time in a given interval, both sides of it can be integrated with respect to time from 0– to ∞. In short, the Laplace transform operation can be carried out on both sides. Laplace transformation is a linear operation and hence the Laplace transform of a sum of terms is equal to the sum of the Laplace transforms of the individual terms. Therefore,

Now we apply the ‘Differentiation in Time Theorem on Laplace transforms’ to get

[s³Y(s) – s²y(0–) – s y′(0–) – y″(0–)] + 2.5[s²Y(s) – s y(0–) – y′(0–)] + 2.5[sY(s) – y(0–)] + 1.5Y(s) = X(s).

The transform terms that depend only on the input function result in zero-state response. The transform terms that depend only on the initial conditions on output and its derivatives result in zero-input response.

Since x(t) = δ(t), X(s) = 1 in this example. Substituting the values of the initial conditions and the Laplace transform of x(t), we get

Y(s) = 1/(s³ + 2.5s² + 2.5s + 1.5) + (s² + 1.5s)/(s³ + 2.5s² + 2.5s + 1.5).

We need to factorise the denominator polynomial in order to arrive at the partial fraction expansion. It is a third-order polynomial with real coefficients. Complex roots, if any, will have to occur in conjugate pairs for such a polynomial. Therefore, a polynomial of odd degree with real coefficients will necessarily possess a real-valued root. We try to locate that real root by the method of bisection.

Try two different values of s such that the polynomial evaluates to a positive and a negative number.

s³ + 2.5s² + 2.5s + 1.5 evaluates to 1.5 for s = 0 and to –1.5 for s = –2. Therefore, there must be a root between 0 and –2. We try the mid-value of –1 and see that the polynomial evaluates to 0.5. Therefore, the root must be between –1 and –2. Hence we try the mid-value –1.5. The polynomial evaluates to 0. Hence the root is s = –1.5. In practice, many iterations may be needed to arrive at a root by this technique.

Now we know that (s + 1.5) is a factor of s³ + 2.5s² + 2.5s + 1.5. Therefore, we get the remaining second-order factor by long division as s² + s + 1. The roots of this factor are –0.5 ± j0.866.
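A minimal sketch of the bisection procedure described above, applied to P(s) = s³ + 2.5s² + 2.5s + 1.5 on the bracket [–2, 0]:

def P(x):
    return x**3 + 2.5*x**2 + 2.5*x + 1.5

lo, hi = -2.0, 0.0                  # P(-2) = -1.5 < 0 and P(0) = 1.5 > 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if P(mid) == 0.0:
        break
    if P(lo) * P(mid) < 0.0:        # the root lies in [lo, mid]
        hi = mid
    else:                           # the root lies in [mid, hi]
        lo = mid
print(mid)                          # converges to the real root -1.5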

Now we expand each response term in partial fractions. Normally we expand a transform in terms of first-order partial fractions. However, a second-order factor with complex conjugate roots may be expanded more conveniently in the form illustrated below.

1/[(s + 1.5)(s² + s + 1)] = A/(s + 1.5) + (Bs + C)/(s² + s + 1)

Comparing the coefficients of various powers of s in the numerator, we get A + B = 0, A + 1.5B + C = 0 and A + 1.5C = 1. Solving for the unknowns, we get A = 0.5715, B = –0.5715 and C = 0.2857.

Therefore, the zero-state response = inverse of [0.5715/(s + 1.5) + (–0.5715s + 0.2857)/(s² + s + 1)]

∴ Zero-state response = [0.5715e^(–1.5t) – 0.5715e^(–0.5t) cos 0.866t + 0.66e^(–0.5t) sin 0.866t]; t ≥ 0+

The zero-input response is given by the inverse of (s² + 1.5s)/(s³ + 2.5s² + 2.5s + 1.5) = s/(s² + s + 1).

∴ Zero-input response = [e^(–0.5t) cos 0.866t – 0.5774e^(–0.5t) sin 0.866t]; t ≥ 0+

Total response y(t) is the sum of the zero-input response and the zero-state response. Therefore,

y(t) = [0.5715e^(–1.5t) + 0.4285e^(–0.5t) cos 0.866t + 0.0826e^(–0.5t) sin 0.866t]; t ≥ 0+

If we can solve a differential equation with non-zero initial conditions completely in one stroke using Laplace transforms, then, we can indeed solve linear time-invariant circuits with non-zero initial conditions for their total response using Laplace transforms. We have to derive the nth order linear constant-coefficient differential equation describing the circuit in terms of a single chosen variable first. In the second step, we have to determine the initial values for that chosen circuit variable and its (n–1) derivatives from the known initial values of inductor currents and capacitor voltages in the circuit. Then we are ready to employ Laplace transform technique to solve for zero-input response and zero-state response in one step as illustrated in this example.

However, the derivation of differential equation and determination of initial values of chosen variable and its derivatives are the toughest tasks in a circuit analysis problem. Can’t Laplace transform technique help us to simplify these two stages of circuit analysis?

13.9 THE s-DOMAIN EQUIVALENT CIRCUIT

Yes, it can. Laplace transform technique tells us that we do not even have to derive the circuit differential equation and initial values required to solve it. Let us see how.

The circuit equations arising out of applying KVL in loops and KCL at nodes are equations that remain true at all values of t. Therefore, such equations can be differentiated and integrated with respect to time without changing their truth content. Moreover, being equations, they can be multiplied by the same constant or function on both sides.

We choose to multiply all KCL and KVL equations in a linear time-invariant circuit by a function e^(–st), where s is a complex frequency value drawn from the s-plane, with a real part of suitable value such that each term in the equation is converted into an absolutely integrable function of time (so that the Laplace transform of that term will converge). Then we choose to integrate the equations from 0– to ∞ in time-domain. We apply the principle that the integral of a sum of terms is the sum of the integrals of the individual terms.

The result will be a conclusion – (i) the algebraic sum of Laplace transforms of element voltages in any loop in a circuit is zero (ii) the algebraic sum of Laplace transforms of element currents at any node in a circuit is zero.

 

The Laplace transforms of voltage variables and current variables in a linear time-invariant circuit obey KVL and KCL, respectively.

Now, suppose we know the relation between the Laplace transform of element voltage and Laplace transform of element current for all circuit elements. Then, we can write the node equations and mesh equations in terms of Laplace transforms of variables straightaway – i.e., we can write the circuit equations in s-domain straightaway instead of writing them in time-domain and transforming them into s-domain at the end of solution process. Hence, we derive the element relationships in s-domain first.

13.9.1 s-Domain Equivalents of Circuit Elements

A resistor R is described by the time-domain element equation vR(t) = R iR(t). Multiplying both sides by e^(–st) and integrating from 0– to ∞, we get VR(s) = R IR(s). Note that we use upper-case letters for Laplace transforms. This relation makes it clear that a resistor is represented by a multiplying factor of R that connects the Laplace transform of the current through it to the Laplace transform of the voltage across it. The ratio of the Laplace transform of the voltage across an element to the Laplace transform of the current through it is defined as its ‘s-domain impedance’. Thus, the s-domain impedance of a resistor is R itself.

An inductor L is described by the time-domain element equation vL(t) = L diL(t)/dt, where vL(t) and iL(t) are its voltage and current variables as per passive sign convention. Applying Laplace transformation on both sides of this element equation, we get VL(s) = sL IL(s) – L iL(0–), where iL(0–) is the initial current in the inductor at t = 0–. The constant term L iL(0–) in the transform represents an impulse voltage L iL(0–) δ(t) in time-domain and hence it is consistent with the fact that a non-zero initial condition in an inductor can be replaced with an impulse voltage source in series with the inductor. This s-domain equation for an inductor suggests the s-domain equivalent circuit shown in Fig. 13.9-1 (a) for an inductor, with sL as its s-domain impedance function. The second equivalent circuit shown in Fig. 13.9-1 (b) follows from IL(s) = VL(s)/sL + iL(0–)/s and indicates the fact that the s-domain admittance of an inductor is 1/sL and that a non-zero initial condition in an inductor can be replaced with a step current source in parallel with it.

Fig. 13.9-1 s-Domain equivalent circuits for an inductor

Note that we use the same graphic symbol for inductor in s-domain as the one we used in time-domain. This will be the case with all other circuit elements too.

A capacitor C is described by the time-domain element equation iC(t) = C dvC(t)/dt, where vC(t) and iC(t) are its voltage and current variables as per passive sign convention. Applying Laplace transformation on both sides of this element equation, we get IC(s) = sC VC(s) – C vC(0–), where vC(0–) is the initial voltage across the capacitor at t = 0–. The constant term C vC(0–) in the transform represents an impulse current C vC(0–) δ(t) in time-domain and hence it is consistent with the fact that a non-zero initial condition in a capacitor can be replaced with an impulse current source in parallel with the capacitor. This s-domain equation for a capacitor suggests the s-domain equivalent circuit shown in Fig. 13.9-2 (a) for a capacitor, with sC as its s-domain admittance function. The second equivalent circuit shown in Fig. 13.9-2 (b) follows from VC(s) = IC(s)/sC + vC(0–)/s and indicates the fact that the s-domain impedance of a capacitor is 1/sC and that a non-zero initial condition in a capacitor can be replaced with a step voltage source in series with it.

Fig. 13.9-2 s-Domain equivalent circuits for a capacitor

s-domain impedance is assigned the unit of Ω (ohms) and s-domain admittance is assigned the unit of S (Siemens).

Now we can construct the s-domain equivalent circuit for a circuit in time-domain by replacing all sources by their Laplace transforms and replacing all other circuit elements by their s-domain equivalents. The resulting equivalent circuit will have Laplace transforms of voltages and Laplace transforms of currents as the circuit variables instead of the time-domain variables. Each energy storage element will result in an extra independent source representing its initial condition in s-domain equivalent circuit.

Applying KVL and KCL in this circuit will result in algebraic equations involving Laplace transforms of voltages and currents, respectively. Thus the problem of solving a coupled set of integro-differential equations involving functions of time in the time-domain circuit is translated to solving a coupled set of algebraic equations involving Laplace transforms of variables in the s-domain equivalent circuit. The time-functions may be determined by inverting the Laplace transforms after they are obtained.

Transforming a time-domain circuit into an s-domain circuit makes it similar to a memoryless circuit with DC excitation since both are described by algebraic equations. Thus, all concepts and techniques developed in the context of analysis of memoryless circuits (and used later in the analysis of phasor equivalent circuits under sinusoidal steady-state) will be directly applicable in the analysis of s-domain equivalent circuits too.

In particular, (i) the concepts of series and parallel equivalent impedances apply without modification (ii) the concepts of input resistance (i.e., driving-point resistance) and input conductance (i.e., driving-point conductance) apply without modification except that it is ‘input impedance function Zi (s)’ and ‘input admittance function Yi (s)’ in the case of s-domain circuits.
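As an illustration of series–parallel impedance calculations in the s-domain (element values assumed, not from the text), the sketch below finds the input impedance of a series R–L branch in parallel with a capacitor.

import sympy as sp

s = sp.symbols('s')
R, L, C = 2, sp.Rational(1, 2), sp.Rational(1, 4)   # assumed: 2 ohm, 0.5 H, 0.25 F

Z_RL = R + s * L                                    # series impedances add
Z_C  = 1 / (s * C)                                  # capacitor impedance 1/(sC)
Z_in = sp.simplify(Z_RL * Z_C / (Z_RL + Z_C))       # parallel combination
print(Z_in)                                         # input impedance function Zi(s)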

Moreover, the techniques of nodal analysis and mesh analysis can be applied in s-domain circuits. All the circuit theorems, except maximum power transfer theorem, can be applied in the context of s-domain equivalent circuits.

However, Laplace transform of instantaneous power is not equal to product of Laplace transforms of voltage and current. In fact, the s-domain convolution of V(s) and I(s) gives the Laplace transform of p(t) = v(t) i(t). Therefore, dealing with power and energy variables in the s-domain is better avoided. They are better dealt with in time-domain itself.

We observe that the s-domain equivalent circuit makes use of the stated initial values of inductor currents and capacitor voltages right at the start. The s-domain equivalent circuit takes care of these initial values in the form of additional source transforms. Therefore, the circuit solution arrived at by the analysis of s-domain equivalent circuit will contain both zero-input response components and zero-state response components in one step. Thus, Laplace transform technique yields the total response in a single-step solution process.

13.9.2 Is s-domain Equivalent Circuit Completely Equivalent to Original Circuit?

No, it is not. It is equivalent to the original circuit only for t ≥ 0+.

Given a non-zero initial value for the current at t = 0 in an inductor, we have two ways of taking that into account when we try to solve the circuit for t ≥ 0+. We may adopt the view that some unknown voltage was applied across the inductor in the past, which resulted in this initial current. We also accept the possibility that other circuit elements were switched out at t = 0. We can solve for the current for t ≥ 0+ using this initial condition and the known input function. However, we will specify the range of applicability of the circuit solution specifically as t ≥ 0+.

The second view that we may adopt is that the inductor current was zero at t = 0; but an impulse voltage of LI0 V-s was applied in series with the inductor with suitable polarity. This view will explain the initial current in the inductor becoming I0 though it was taken to be zero at t = 0. Hence, the circuit solution will be the same as the one we obtained by adopting the first point of view.

However, in the second point of view, we are fixing the voltage applied to the inductor in the past at zero value (since the impulse voltage is zero-valued for t < 0). Thus, we are assuming that the circuit was the same from t = –∞ onwards, that no sources were active in it till t = 0 and that some impulse sources acted at t = 0 to change the initial energy storage in some elements abruptly. Thus, the circuit and the sources active in the circuit are completely known from t = –∞ onwards in this point of view. Therefore, the circuit solution is valid from t = –∞ onwards. In fact, the solution will be zero for (–∞, 0]. This is usually denoted by multiplying all time-functions that appear in the circuit solution by u(t). The solution will be correct for all t as far as the fictitious circuit that we assumed in this point of view is concerned. However, that is not the actual circuit that we wanted to solve. The actual circuit that we wanted to solve is the one we described under the first point of view. Therefore, time-functions multiplied by u(t) cannot be the solution in the actual circuit. We cannot solve the actual circuit for t < 0 since the input is really unknown in this time-range. Therefore, only the right side of the solution arrived at from the fictitious circuit, which takes into account initial conditions by bringing in impulse sources, should be accepted as the solution for the actual circuit. Hence, the circuit solution should be specified as time-functions with the range of applicability specified as t ≥ 0+. The solution for t < 0 is left unspecified. It is understood that the circuit cannot be solved for t < 0 with the given data.

s-Domain equivalent of a circuit uses the second point of view described above. Hence, it is equivalent to the circuit only for t ≥ 0+. The circuit solution from the s-domain equivalent circuit is obtained by inverting Laplace transforms. That will yield time-functions multiplied by u(t). We have to replace the multiplying u(t) by the qualifier 'for t ≥ 0+' before we accept the solution from the s-domain equivalent circuit as the solution for the actual time-domain circuit.

However, in practice, this step is often skipped and the solution is left in the 'time-function × u(t)' format itself. This does not lead to errors in practice since we are usually interested in circuit variables for t ≥ 0+ only. However, in the strict sense, it is a bad practice.

13.10 TOTAL RESPONSE OF CIRCUITS USING S-DOMAIN EQUIVALENT CIRCUIT

The application of s-domain equivalent circuit in obtaining the total response of a circuit is illustrated through a set of examples in this section.

Example: 13.10-1

The inductor L1 has an initial current of 1 A and the inductor L2 has an initial current of 1 A in the directions marked in the circuit in Fig. 13.10-1. (i) Find the voltage transfer function Vo(s)/Vs(s) and the input impedance function Vs(s)/I(s). (ii) Determine the total response of vo(t) if vS(t) = 2u(t) V.

Fig. 13.10-1 Circuit for Example: 13.10-1

Solution

(i) Transfer functions and immittance functions are defined in the s-domain equivalent circuit as ratios of Laplace transforms of the relevant quantities under zero-state response conditions, i.e., with zero initial conditions. The s-domain equivalent circuit with zero initial conditions is shown in Fig. 13.10-2.

Fig. 13.10-2 The transformed equivalent circuit for circuit with zero initial conditions

Series–parallel equivalents and voltage–current division principle may be employed to arrive at the required ratios. Input impedance function Zi(s) = Vs(s) / I(s).

The transformed current in the second 1Ω resistor may be found out in terms of I(s). Then Vo(s) may be obtained from that current by multiplying by 1Ω. Let this current transform be called Io(s). Then,

But

Therefore,

The poles of transfer function are at s = –2.618 and s = –0.382. The zero is at s = 0.
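The quoted pole and zero locations can be cross-checked numerically. The following Python sketch assumes the denominator polynomial of the transfer function is s² + 3s + 1 (a monic quadratic whose roots have the quoted sum –3 and product 1, consistent with the pole values stated above):

    import numpy as np

    # Assumed denominator polynomial of Vo(s)/Vs(s): s^2 + 3s + 1
    den = [1, 3, 1]
    print(np.roots(den))   # approximately [-2.618, -0.382]

The two roots are (–3 ± √5)/2 ≈ –2.618 and –0.382, agreeing with the poles stated above; the zero at s = 0 corresponds to a factor of s in the numerator.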

(ii) The total response of vo(t) with vS(t) = 2u(t) V may be obtained by mesh analysis or by applying the superposition principle. Both methods are illustrated below. The s-domain equivalent circuit with the initial conditions accounted for and the mesh current transforms identified is shown in Fig. 13.10-3.

The mesh equations are

 

–Vs (s) + I1(s)[s+1] + I2(s)[–s] –1 = 0
1 + I1(s)[–s] + I2(s)[2s + 1] –1 = 0

Fig. 13.10-3 Transformed equivalent circuit of circuit in Fig. 13.10-1 with initial condition sources included

These are expressed in matrix form as below.

Solving for I2(s), we get

Since

The roots of denominator polynomial (i.e., poles of transfer function) are at s = –2.618 and s = –0.382. The input transform Vs(s) = 2/s.

Therefore, zero-state response

= (1.1708e–2.618t – 0.1708e–0.382t) u(t) V

The second component of output is expanded in partial fractions as below.

Therefore, zero-input response

= 0.8945(e–0.382te–2.618t) u(t) V

Total response in the actual time-domain circuit is the sum of zero-state response and zero-input response, accepted only for t ≥ 0+, and is given by

 

vo(t) = (0.2763e–0.382t + 0.7237e–2.618t) V for t ≥ 0+

 

It is not always necessary to split the response into these two components. It was done here only to demonstrate how the Laplace transform technique brings out both together in one step. Inverting the total response transform would have resulted in the total response straightaway. Indeed

 

The same solution can be arrived at by using Superposition Theorem. This theorem can be applied only for the zero-state response components due to various inputs. But then, all the initial conditions get translated into sources in the transformed equivalent circuit and hence the circuit analysis problem in the s-domain is always a zero-state response problem. Therefore, superposition principle can be freely applied in transformed equivalent circuits. The solution term due to initial condition sources will be understood as the zero-input response once we get back to time-domain.

There are three sources in this transformed equivalent circuit. The component circuits required to find out the individual response components are shown in Fig. 13.10-4.

Fig. 13.10-4 Component circuits for applying superposition theorem in Example: 13.10-1

The transfer function of the first circuit was already found in part (i). Therefore, the output transform in the first circuit is obtained by multiplying it with Vs(s) = 2/s.

In the second circuit, (s + 1)//1 Ω shares the source transform –1 with s Ω in series. The voltage transform across the (s + 1) Ω branch therefore follows by voltage division, and this voltage transform is further divided between s Ω and 1 Ω to yield the output.

In the third circuit, (s + 1)//s Ω shares the source transform 1 with 1 Ω in series, and voltage division produces the output voltage transform.

The total output voltage transform is given by the sum of the three output voltage transforms. This is the same output transform that was obtained in the mesh analysis.

The time-domain function will be vo(t) = (0.2763e–0.382t + 0.7237e–2.618t)u(t) V.

Example: 13.10-2

The circuit in Fig. 13.10-5 was in DC steady-state at t = 0. The switch in the circuit closes at t = 0, introducing a new 1 Ω resistor into the circuit. Determine the voltage across the inductor as a function of time.

Solution

The circuit was in DC steady-state prior to switching at t = 0. A capacitor can be modelled by an open-circuit and an inductor by a short-circuit for DC steady-state analysis. Therefore, the circuit for DC steady-state prior to t = 0 is as shown in Fig. 13.10-6.

Fig. 13.10-5 Circuit for Example: 13.10-2

Fig. 13.10-6 Circuit under DC steady-state for t < 0

Therefore, the voltage across the capacitor at t = 0– was 20/3 V and the current through the inductor at t = 0– was 10/3 A. The circuit solution after t = 0 can be obtained by solving a new circuit with 10u(t) as input, vC(0–) = 20/3 V and iL(0–) = 10/3 A.

The new circuit can be analysed by mesh analysis or nodal analysis techniques. We opt for nodal analysis since the desired output is a node voltage variable straightaway. It will be convenient to use a current source in parallel with capacitor and a current source in parallel with inductor to account for initial conditions since we have opted for nodal analysis. The transformed equivalent circuit required is shown in Fig. 13.10-7.

Fig. 13.10-7 Transformed equivalent circuit in Example: 13.10-2

The first source in series with 1 Ω may be replaced by a current source in parallel with 1 Ω. The node equations in matrix form will be

The determinant of Nodal Admittance Matrix is

Solving for V3(s)

Therefore,

The voltage across the inductor is the same as v3(t). Note that the final value of inductor voltage is zero. This is expected since under DC steady-state condition the inductor behaves like a short-circuit.

Example: 13.10-3

Verify the initial value theorem and final value theorem on Laplace transforms for i(t) and vo (t) in the initially relaxed circuit shown in Fig. 13.10-8 when driven by vs(t) = u(t).

Fig. 13.10-8 Circuit for Example: 13.10-3

Solution

The s-domain equivalent circuit required for analysis is shown in Fig. 13.10-9.

Fig. 13.10-9 The s-domain equivalent circuit of the circuit in Fig. 13.10-8

Two mesh current transforms are identified in the s-domain equivalent circuit. The mesh equations in matrix form are written by inspection using the rule that a diagonal entry is the sum of all impedances in the corresponding mesh and an off-diagonal entry is the negative of the sum of impedances shared by the two meshes in question.

Solving for I1(s) and I2(s) by Cramer's rule, we get

Now,

The input is a unit step u(t) and its transform is 1/s. Therefore,

The poles of sI(s) are in the left-half of s-plane and hence the final value theorem on Laplace transforms is applicable to I(s).

The poles of sVo(s) are in the left-half of s-plane and hence the final value theorem on Laplace transforms is applicable to Vo(s).

The initial current at t = 0 through the inductor was zero and initial voltage across the capacitor at that instant was zero. There was no impulse content in voltage at input. Therefore, the inductor current at t = 0 + remains at zero. There was no impulse current in the circuit. Therefore, the voltage across the capacitor remains at zero at t = 0 + . The time-domain equivalent circuit at t = 0 + is shown in Fig. 13.10-10 (a). The initial value of current i(t) at 0 + is clearly zero and the initial value of vo(t) is also zero.

Fig. 13.10-10 Equivalent circuits at (a) t = 0+ and (b) t → ∞

The inductor is replaced by a short-circuit and the capacitor by an open-circuit under DC steady-state conditions. The resulting circuit is shown in Fig. 13.10-10 (b). Hence, the final value of i(t) is 1/10 = 0.1 A and the final value of vo(t) is equal to the input voltage, i.e., 1 V.

Hence the initial value theorem and the final value theorem on Laplace transforms are verified for vo(t) and i(t).
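The mechanics of the two theorems can also be carried out symbolically. The following Python sketch uses a purely hypothetical transform F(s) = (s + 2)/[s(s² + 2s + 5)] – it is not the I(s) or Vo(s) of this example – only to illustrate that the initial value is the limit of sF(s) as s → ∞ and the final value is the limit of sF(s) as s → 0, the latter being valid because the poles of sF(s) lie in the left-half s-plane:

    import sympy as sp

    s = sp.symbols('s')
    # Hypothetical transform used only to illustrate the two theorems.
    F = (s + 2) / (s * (s**2 + 2*s + 5))

    print(sp.limit(s * F, s, sp.oo))   # initial value f(0+): 0
    print(sp.limit(s * F, s, 0))       # final value as t -> infinity: 2/5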

Example: 13.10-4

(a) Find the transfer function in the Opamp circuit shown in Fig. 13.10-11, assuming an ideal Opamp. (b) Show its pole-zero plot for k = 2.9 and k = 3.1 and find its zero-state response to vS(t) = 0.01δ(t) in both cases with R = 10 kΩ and C = 1 μF. (c) What is the maximum value of k that can be used in the circuit without making it unstable?

Fig. 13.10-11 The Opamp circuit in Example: 13.10-4

Solution

(a) The Opamp circuit from its non-inverting input to its output is a simple non-inverting amplifier of gain k and can be replaced with a dependent source as shown in the s-domain equivalent circuit in Fig. 13.10-12.

Fig. 13.10-12 Transformed equivalent circuit of the circuit in Fig. 13.10-11

The node voltage transforms V1(s) and V2(s) are identified in the transformed equivalent circuit. Writing the node equations at these two nodes, we get

Therefore,

Substituting for V1(s) in terms of V2(s) and simplifying, we get

This is the transfer function of the circuit.

(b) The denominator polynomial with R = 10 kΩ, C = 1 μF and k = 2.9 is s² + 10s + 10000. Therefore, the poles are at s = –5 ± j99.88. The zero of the transfer function is at s = 0, i.e., at the origin in the s-plane.

With k = 3.1, the denominator polynomial is s² – 10s + 10000 and the poles are at s = +5 ± j99.88. The zero of the transfer function is again at s = 0. The pole-zero plots are shown in Fig. 13.10-13.

Fig. 13.10-13 Pole-zero plots for the Opamp circuit in Fig. 13.10-11 with k = 2.9 and k = 3.1

The zero-state response to vS (t) = 0.01δ(t) is nothing but the impulse response of the circuit scaled by 0.01.

With k = 2.9

The inverse transform of the transfer function H(s) gives the impulse response of the circuit. We complete the squares in the denominator and identify the inverse transforms as shown below.

This is a stable impulse response since it decays to zero as t → ∞.

The zero-state response to 0.01δ(t) = 0.01 × h(t) = 2.9e–5t cos(99.88t + 2.86°)u(t).

With k = 3.1

The inverse transform of the transfer function H(s) gives the impulse response of the circuit. We complete the squares in the denominator and identify the inverse transforms as shown below.

This is an unstable impulse response since it is unbounded. The circuit is an unstable one as evidenced by its poles located in the right-half of s-plane.

The zero-state response to 0.01δ(t) = 0.01 × h(t) = 3.1e5t cos(99.88t – 2.86°)u(t).

(c) The transfer function has poles on the jω-axis when k = 3. The poles will lie in the right-half of the s-plane for all values of k > 3. Therefore, k < 3 is the constraint on the value of k for stability of the circuit.

The circuit is marginally stable at k = 3. It can function as a sinusoidal oscillator with k = 3. But additional circuitry will be needed to stabilize its amplitude of oscillation.
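The movement of the poles with k can be verified numerically. Consistent with the polynomials quoted in part (b), the denominator appears to have the form s² + [(3 – k)/RC]s + 1/(RC)²; treating that form as an assumption, the following Python sketch evaluates the poles for k just below, at and just above 3:

    import numpy as np

    R, C = 10e3, 1e-6                       # 10 kOhm and 1 uF, so RC = 0.01 s
    for k in (2.9, 3.0, 3.1):
        # Assumed denominator form, consistent with s^2 + 10s + 10000 at k = 2.9
        # and s^2 - 10s + 10000 at k = 3.1.
        den = [1.0, (3.0 - k) / (R * C), 1.0 / (R * C) ** 2]
        print(k, np.roots(den))
    # k = 2.9 -> -5 +/- j99.87 (stable); k = 3.0 -> +/- j100 (marginally stable);
    # k = 3.1 -> +5 +/- j99.87 (unstable)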

Note that the circuit is a pure RC circuit with one dependent source in it. A passive RC circuit, i.e., a circuit containing only resistors and capacitors and no dependent sources, will have all its poles on the negative real axis. The dependent source is responsible for making the poles complex conjugate in such a circuit. Complex conjugate poles are often necessary in filter circuits to tailor the filter frequency response function suitably to meet filtering specifications.

This circuit is used as a band-pass filter in practice. The value of k will be decided by the bandwidth required in the band-pass filter and will be < 3 at any rate.

Example: 13.10-5

(a) Obtain the transfer function of the filter circuit shown in Fig. 13.10-14 and identify the type of filter. (b) Determine the zero-state response for vs(t) = 0.1 δ(t). R = 10kΩ, C = 1μF and k = 1

Fig. 13.10-14 The Opamp-rc filter circuit in Example: 13.10-5

Solution

The transformed equivalent circuit is shown in Fig. 13.10-15. Initial condition sources are not required since the circuit is needed for determining transfer function and for evaluating zero-state response. The Opamp is assumed to be ideal. There is only one node that has a free node voltage variable. This node and the assigned node voltage transform are also indicated in the equivalent circuit.

Fig. 13.10-15 The transformed equivalent circuit of circuit in Fig. 13.10-14

Virtual short across the Opamp input terminals and zero input current drawn by the Opamp inverting pin makes the current in the feedback capacitor equal to V(s)/R. This results in the output voltage transform becoming equal to –V(s)/sCR.

Now, we write the node equation at the node-1 marked in Fig. 13.10-15.

The denominator has its roots in the left-half of s-plane for all positive values of k since a second-order polynomial with positive coefficients will have both roots in the left-half of s-plane. Therefore, the impulse response is stable and will be absolutely integrable. Therefore, the impulse response will have a Fourier transform. If the Fourier transform of a time-function exists, then the Laplace transform of the same function evaluated on the jω-axis is its Fourier transform. The Fourier transform of the impulse response is the frequency response function. Therefore, the sinusoidal steady-state frequency response function of a stable circuit is given by its Laplace transform evaluated with s = jω.

This frequency response function has a magnitude of unity at ω = 0 and zero at ω = ∞. Therefore, it is a low-pass filter.

Substituting the numerical values, we get

The impulse response of the circuit is obtained by inverting the transfer function. The roots of the denominator are at s = –261.8 and s = –38.2.

A and B can be evaluated as –44.72 and 44.72, respectively.

Therefore, h(t) = 44.72(e–38.2t – e–261.8t)u(t)

Therefore, zero-state response for vS (t) = 0.1δ(t) is 4.47(e–38.2t – e–261.8t)u(t)V
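The inversion in this example can be reproduced numerically. The following Python sketch assumes the transfer function works out to H(s) = 10⁴/(s² + 300s + 10⁴), which is consistent with the unity DC gain and the quoted denominator roots at –38.2 and –261.8; the residue routine then returns the partial-fraction constants directly:

    from scipy.signal import residue

    num = [1e4]                  # assumed H(s) = 10^4 / (s^2 + 300s + 10^4)
    den = [1.0, 300.0, 1e4]

    r, p, k = residue(num, den)  # residues, poles, direct polynomial term
    print(p)                     # poles near -261.8 and -38.2 (order may vary)
    print(r)                     # residues near -44.72 and +44.72
    # h(t) = sum of r_i*exp(p_i*t) for t >= 0,
    # i.e., h(t) = 44.72*(exp(-38.2t) - exp(-261.8t))u(t)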

Example: 13.10-6

A single Opamp, a resistor and a capacitor can form a good differentiator circuit. This circuit is shown in Fig. 13.10-16. (i) Derive the transfer function of the circuit and show that it is a differentiator.

(ii) An ideal Opamp is too good to be true. A practical Opamp suffers from many non-idealities. The particular non-ideality that compromises the circuit performance will vary depending on circuit function. For example, we found that integrator circuit is severely compromised by offsets in a practical Opamp. We will see in this example that it is the limited gain and bandwidth of Opamp that compromises the performance of differentiator circuit.

An amplifier contains many capacitors – intentional as well as parasitic – in it and hence represents a high-order dynamic circuit. However, some Opamps like IC 741 can be modelled approximately as a single-time-constant amplifier. That is, its gain function is of the form A/(1 + sτ) with A ≈ 250,000 and τ ≈ 4 ms. Obtain the transfer function of the differentiator circuit with R = 10 kΩ and C = 1 μF using IC 741 and find its impulse response.

Fig. 13.10-16 An Opamp differentiator circuit

(iii) Suggest a method to modify the oscillatory impulse response to critically damped impulse response.

Solution

(i) This is essentially an inverting amplifier structure. With an ideal Opamp, the transfer function is Vo(s)/Vs(s) = –sRC. It is a differentiator since multiplication by s in the s-domain is equivalent to differentiation in the time-domain according to the time-differentiation theorem on Laplace transforms. It is an inverting differentiator.

Fig. 13.10-17 Transformed equivalent circuit for differentiator circuit in Fig. 13.10-16

(ii) The Opamp is to be modelled as a dependent source that senses the voltage transform Vd(s) between the non-inverting pin and the inverting pin and produces a voltage transform [A/(1 + sτ)]Vd(s) at its output with respect to ground, where Vd(s) is the transform of the voltage of the non-inverting pin with respect to the inverting pin. The s-domain equivalent circuit incorporating this model for the Opamp is shown in Fig. 13.10-17.

Let the node voltage transform at the inverting pin be V1(s). Writing the node equation at inverting pin, we get

Simplifying this equation results in

Therefore,

The DC gain A of any practical Opamp is in thousands and hence A + 1 ≈ A. Therefore,

Note that the order of the circuit is two. The Opamp contributes one extra order to the circuit. Substituting R = 10kΩ, C = 1μF, A = 250000 and τ = 4ms, we get

The step response is the inverse transform of the transfer function multiplied by 1/s.

Of course, if a unit step is really applied to this circuit, the output of Opamp will saturate. But what is to be noted is that the response is highly under-damped. The oscillation is at 12.6 kHz and oscillation period is about 0.08 ms. But the time constant of damping exponential is 40 ms. It takes about 5 time constants for an exponential transient to settle down. That implies that the 12.6 kHz transient oscillations will last for about 200 ms before they die down. That is a bad transient performance.

(iii) The solution is to add a little damping by means of a resistor in series with the input capacitor. This will make the differentiator imperfect however.

13.11 NETWORK FUNCTIONS AND POLE-ZERO PLOTS

Network function is a ratio of Laplace transforms. It is the ratio of Laplace transform of zero-state response to the Laplace transform of the right-sided input function causing this response. We called it s-domain System Function till now. We use the name ‘Network Function’ synonymously from this section onwards.

But the ratio of the Laplace transform of zero-state response to the Laplace transform of which input function? There should be only one. That is, a network function can be defined and evaluated only in an s-domain equivalent circuit containing one input source transform. Thus, the circuit should have only one independent source active when a network function is evaluated. Hence, there cannot be initial condition sources either. That is why the definition specifies that it is the ratio of the Laplace transform of zero-state response to the Laplace transform of the input function.

The response may be measured across or through any circuit element or combination of such elements in general. Hence, a variety of network functions are defined in a circuit. In particular, when the response variable and excitation variable pertain to the same terminal pair, the network function can only be of the driving-point impedance or driving-point admittance type. The two together are referred to as immittance functions.

13.11.1 Driving-Point Functions and Transfer Functions

Input impedance function or driving-point impedance function is Zi(s) = V(s)/I(s), where V(s) and I(s) are the transforms of the voltage and current at a terminal pair as per passive sign convention. Input admittance function or driving-point admittance function is Yi(s) = I(s)/V(s), with V(s) and I(s) defined in the same way. They are reciprocals of each other at the same terminal pair. These functions are a special class of network functions.

Transfer impedance function is Zm(s) = Vij(s)/Ipq(s), where Vij(s) is the transform of the voltage developed at the ith terminal with respect to the jth terminal due to a current source Ipq(s) delivering current to the pth terminal from the qth terminal. The terminal pairs p–q and i–j are not the same. Transfer admittance function is Ym(s) = Ipq(s)/Vij(s), where Ipq(s) is the transform of the current developed from the qth terminal to the pth terminal due to a voltage source Vij(s) applied between the ith terminal and the jth terminal. The terminal pairs p–q and i–j are not the same though they may share a common terminal. These functions are a special class of network functions.

Voltage transfer function is Av(s) = Vpq(s)/Vij(s), where Vpq(s) is the Laplace transform of the zero-state voltage response developed across terminal pair p–q due to a voltage source transform Vij(s) applied across terminal pair i–j. The terminal pairs p–q and i–j are not the same though they may share a common terminal.

Current transfer function is Ai(s) = Ipq(s)/Iij(s), where Ipq(s) is the Laplace transform of the zero-state current response developed through terminal pair p–q due to a current source transform Iij(s) applied through terminal pair i–j. The terminal pairs p–q and i–j are not the same though they may share a common terminal.

Thus there are two types of driving-point network functions and four types of transfer network functions.

We will use the symbol H(s) when we refer to network functions in general and use Zi(s), Yi(s), Zm(s), Ym(s), Av(s) and Ai(s) when we refer to specific network functions.

13.11.2 The Three Interpretations for a Network Function H(s)

The first interpretation is the definition of a network function itself. That is, a network function is the ratio of Laplace transform of zero-state response to the Laplace transform of input source function causing the response.

Two circuit variables are clearly identified in the definition of a network function – the variable used to measure the zero-state response and the variable that is decided by the input source function. These two variables can be identified in the s-domain equivalent circuit, and circuit analysis in the s-domain using nodal analysis or mesh analysis can be performed to arrive at the desired network function. The result will be H(s) in the form of a ratio of rational polynomials in s. Thus, from this point of view, we expect to get an H(s) in the following format, where we have chosen to make the coefficient of the highest power of s in the denominator unity:

H(s) = (b′m′ s^m′ + … + b′1 s + b′0) / (s^n′ + a′n′–1 s^n′–1 + … + a′1 s + a′0)    (13.11-1)

The second interpretation for H(s) comes from the meaning of Laplace transform. Laplace transform of a right-sided function is an expansion of that function in terms of functions of est type with the value of s ranging from σ – j∞ to σ + j∞. The value of σ is such that the expansion converges to the time-function at all t. The components in the expansion, i.e., signals of type est, exist from –∞ to ∞ in the time-domain. Thus, Laplace transform converts a right-sided input into the sum of everlasting complex exponential inputs. Therefore, the problem of zero-state response with a right-sided input is translated into that of forced response with everlasting complex exponential inputs. And, the ratio of the Laplace transform of zero-state response to the Laplace transform of the input source function must then be the same as the ratio of forced response to input when the input is est (not est × u(t)).

Forced response to an everlasting complex exponential input of 1·est was seen to be a scaled version of est itself, the scaling factor being a complex number that depends on the complex frequency value s (refer Section 13.1).

The time-domain circuit can be analysed using nodal analysis or mesh analysis to arrive at the nth-order differential equation relating the response variable (y) to excitation variable (x). The result will be

Then the scaling factor connecting an input of 1est to the output is

Therefore,

In this expression for H(s), the coefficients come from the coefficients of differential equation governing the circuit. In the expression for H(s) in Eqn. 13.11-1, the coefficients were the result of circuit analysis in s-domain. We conclude that n' = n, m' = m, all a' values are equal to corresponding a values and all b' values are equal to corresponding b values.

The third interpretation comes from the definition of network function itself. If the input source function is assumed to be δ(t), then H(s) becomes a Laplace transform – it becomes the Laplace transform of impulse response. Thus, H(s) is a ratio of Laplace transforms and a Laplace transform at the same time. It is a Laplace transform when we invert it in order to find the impulse response.

 

The three faces of H(s)

H(s), the network function, is a Laplace transform if we invert it to find the impulse response. H(s), the network function, is a complex gain if we evaluate it at a particular value of s. In that case, it gives the complex amplitude of the forced response with an input of est with the value of s same as the value at which H(s) was evaluated. H(s), the network function, functions as a ratio of Laplace transforms when we multiply it by the Laplace transform of input source function and invert the product to determine the zero-state response in time-domain.
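The 'complex gain' face of H(s) is easy to demonstrate numerically. The following Python sketch uses a hypothetical first-order network function H(s) = 2/(s + 2), not tied to any particular circuit in this chapter; evaluating it at the complex frequency of an everlasting input est gives the complex amplitude of the forced response directly:

    import cmath

    def H(s):
        # Hypothetical network function used only for illustration.
        return 2 / (s + 2)

    s0 = complex(-1, 3)                  # input is exp(s0*t) with s0 = -1 + j3
    gain = H(s0)                         # complex amplitude of the forced response
    print(abs(gain), cmath.phase(gain))  # magnitude and angle of the complex gain
    # For the real input exp(-t)*cos(3t) = Re[exp(s0*t)], the forced response is
    # |H(s0)| * exp(-t) * cos(3t + angle(H(s0))).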

13.11.3 Poles and Zeros of H(s) and Natural Frequencies of the Circuit

A network function goes to infinite magnitude at certain values of s. These values are obviously the values of s at which the denominator polynomial evaluates to zero, i.e., at the roots of denominator polynomial. These values of s are called poles of the network function. Thus, poles are roots of denominator polynomial of a network function. Similarly, a network function attains zero magnitude at certain values of s. They are roots of numerator polynomial. They are called zeros of the network function.

A diagram showing the pole points by ‘×’ marking and zero points by ‘o’ marking in complex signal plane (i.e., s-plane) is called the pole-zero plot of the network function.

We note from the discussion in the previous subsection that the denominator polynomial of a network function apparently has the same order and same coefficients as that of the characteristic polynomial of differential equation describing the linear time-invariant circuit. The roots of the characteristic polynomial have been defined as the natural frequencies of the circuit. Does this mean that (i) the degree of denominator polynomial in a network function is the same as the degree of characteristic polynomial (ii) the poles and natural frequencies are the same?

The order of a differential equation is the order of highest derivative of dependent variable. The order of a circuit and order of the describing differential equation are the same. It will also be equal to the total number of independent inductors and capacitors – (number of all-capacitor-voltage source loops + number of all-inductor-current source nodes).

The order of a network function is the degree of denominator polynomial, i.e., the highest power of s appearing in the denominator polynomial.

Thus we are raising the question – is the order of a network function in a linear time-invariant circuit same as the order of the circuit?

The characteristic polynomial of a differential equation is quite independent of right-hand side of differential equation. But, a network function is very much dependent on the right-hand side of the differential equation. Therefore, there exists a possibility of cancellation of some of the denominator factors by numerator factors in the case of a network function. Therefore, the order of a network function can be lower than the order of the circuit. It cannot, however, be higher. This will also imply that the order of two network functions defined within the same network need not be the same.

For instance, let the differential equation describing a linear time-invariant circuit be

The characteristic equation is s² + 3s + 2 = 0 and the order of the circuit is 2. The natural frequencies are s = –1 and s = –2. The zero-input response can contain e–t and e–2t terms. But it may contain only one of them for certain combinations of initial conditions. Consider y(0) = 1 and y′(0) = –1. Then y(t) = e–t and it will not contain e–2t. Therefore, not all natural response terms need be present in all circuit variables under all initial conditions.

Now consider the network function. It is

The order of network function is 1. It has one pole at s = –2. Therefore, zero-state response to any input will not contain e–t term. This is the effect that a pole-zero cancellation in a network function has on circuit response. But, note that the same circuit may have other network functions that may not involve such pole-zero cancellation. It is only this particular circuit variable denoted by y that refuses to have anything to do with the natural response term e–t.
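The cancellation described above can be checked symbolically. The following Python sketch assumes the network function of this example has the form H(s) = (s + 1)/(s² + 3s + 2); the numerator factor (s + 1) is an inference from the stated result that only the pole at s = –2 survives, and the gain factor is taken as unity:

    import sympy as sp

    s = sp.symbols('s')
    # Assumed form: the numerator factor (s + 1) cancels one factor of the
    # characteristic polynomial s^2 + 3s + 2 = (s + 1)(s + 2).
    H = (s + 1) / (s**2 + 3*s + 2)

    print(sp.cancel(H))                  # 1/(s + 2): only the pole at s = -2 remains
    print(sp.roots(s**2 + 3*s + 2, s))   # natural frequencies: {-2: 1, -1: 1}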

Therefore, we conclude the following:

  • The order of a network function and the order of the circuit can be different due to possible pole-zero cancellations in a particular network function.
  • Poles of any network function defined in a linear time-invariant circuit will be natural frequencies of the circuit.
  • However, all natural frequencies need not be present as poles in all network functions defined in that circuit.
  • However, all natural frequencies will appear as poles in some network function or other.
  • Thus, the poles of a network function are a sub-set of the natural frequencies of the circuit, and the set of natural frequencies will be the union of the sets of poles of all possible network functions in the circuit.
  • A complex frequency that is not a natural frequency of the circuit cannot appear as a pole in any network function in that circuit.
  • Both the denominator polynomial and the numerator polynomial of a network function in a linear time-invariant circuit have real coefficients. Therefore, poles and zeros of a network function either will be real-valued or will occur in complex conjugate pairs.

13.11.4 Specifying a Network Function

A network function H(s) is specified in three ways. In the first method, it is specified as a ratio of rational polynomials in s.

In the second method, it is specified as the ratio of product of first-order factors in numerator and denominator with a gain factor multiplying the entire ratio.

There are m factors in the numerator and n factors in the denominator. z1,z2,…,zm are the zeros of the network function and p1,p2,…,pn are the poles of the network function. Note that though the degree of denominator polynomial is shown as n, which is the order of the circuit, pole-zero cancellation may take place leaving the denominator polynomial of network function with a degree less than n.

In the third method of specifying a network function, the pole-zero plot along with the gain factor K is given. The gain factor K may be directly given or indirectly in the form of value of H(s) evaluated at a particular value of s.

Example: 13.11-1

The circuit shown in Fig. 13.11-1 is the small signal equivalent circuit of a transistor amplifier for analysis of its behaviour for sinusoidal input at high frequency. Obtain the transfer function between the output voltage and input source voltage.

Fig. 13.11-1 Small-signal equivalent circuit of a transistor amplifier in Example: 13.11-1

Solution

We find the Norton’s equivalent of the circuit to the left of 100pF capacitor first.

Fig. 13.11-2 Sub-circuits for determining Norton’s equivalent

The sub-circuits needed for determining this equivalent are shown in Fig. 13.11-2. The short-circuit current in the first circuit is 9.876×10–3 vs(t) A. The Norton's equivalent resistance is [(50 Ω//2 kΩ) + 50 Ω]//1 kΩ = 89.9 Ω. Thus the required Norton's equivalent is 9.876×10–3 vs(t) A in parallel with 89.9 Ω. The original circuit with this Norton's equivalent in place is shown in circuit (a) of Fig. 13.11-3 with R1 = 89.9 Ω, R2 = 2 kΩ, C1 = 100 pF, C2 = 5 pF and gm = 0.08 S. The corresponding s-domain equivalent circuit is shown in circuit (b) of Fig. 13.11-3.

Fig. 13.11-3 (a) Reduced version of circuit in Fig. 13.11-1 and (b) Its s-domain equivalent

The node equations written for the two node voltage transforms V(s) and Vo(s) are as follows:

Solving for Vo(s) and simplifying the expression, we get

Substituting the numerical values for various parameters, we get

The poles are at s = –10⁹ nepers/s and s = –11.07×10⁶ nepers/s. The zero is at s = 1.6×10¹⁰ nepers/s.

Note that compared to the pole at –11.07×10⁶, the other pole and the zero are located two orders of magnitude away from it. The natural response term contributed by the pole at s = –11.07×10⁶ nepers/s will have a time constant of 90.3 ns whereas the natural response term contributed by the pole at s = –10⁹ nepers/s will have a time constant of 1 ns. Thus the natural response term contributed by the pole at s = –10⁹ nepers/s will disappear in about 5% of the time constant of the other term. Therefore, the time constant of 90.3 ns is the dominant time constant in this amplifier and the corresponding pole at –11.07×10⁶ is the dominant pole. The amplifier transfer function can be approximated by neglecting the zero and the non-dominant pole to
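The pole and zero locations quoted above can be checked numerically. The node equations assumed below are one reading of the s-domain equivalent circuit of Fig. 13.11-3 (b) – the Norton source in parallel with R1 and C1, C2 bridging the two nodes, and the gmV(s) source with R2 at the output node – under which the denominator is C1C2s² + [G1C2 + G2(C1 + C2) + gmC2]s + G1G2 and the zero sits at s = gm/C2:

    import numpy as np

    R1, R2 = 89.9, 2e3           # ohms
    C1, C2 = 100e-12, 5e-12      # farads
    gm = 0.08                    # siemens
    G1, G2 = 1 / R1, 1 / R2

    # Denominator coefficients, from the assumed node equations.
    den = [C1 * C2, G1 * C2 + G2 * (C1 + C2) + gm * C2, G1 * G2]
    print(np.roots(den))         # roughly -1.0e9 and -1.1e7 Np/s
    print(gm / C2)               # zero at +1.6e10 Np/s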

Example: 13.11-2

Find (i) the input impedance function and (ii) the voltage transfer function in the circuit shown in Fig. 13.11-4.

Fig. 13.11-4 Circuit for Example: 13.11-2

Solution

This is a case of cancellation of all poles by zeros leaving a real value for a network function. The input impedance of the circuit is purely resistive at all values of s. This implies that the current drawn by the circuit behaves as in a memoryless circuit. But, this does not mean that the order of the circuit is zero.

The voltage transfer function has a pole at s = –1 and zero at s = 1.

This innocuous circuit challenges our notions on the order of a circuit. It contains two energy storage elements. Hence it must be a second-order circuit. But no network function defined in this circuit will be second-order function if the excitation is a voltage source. Even the zero-input response obtained by shorting the voltage source with initial conditions on inductor and capacitor will contain only e–t. The reader is encouraged to analyse the general situation that develops when many sub-circuits with same set of poles in their input admittance functions are connected in parallel and driven by a common voltage source. Similarly, he is encouraged to ponder over the order of a circuit resulting from series connection of many sub-circuits with the same set of poles in their input impedance functions driven by a common current source. The reader may also note that the current in the circuit in Fig. 13.11-4 will have e– t and t e– t terms in zero-input response due to initial energy storage in inductor and capacitor with input open-circuited (i.e., zero-input response for current source excitation), thereby confirming it is a second-order circuit.

13.12 IMPULSE RESPONSE OF NETWORK FUNCTIONS FROM POLE-ZERO PLOTS

Let H(s) be a network function defined in a linear time-invariant circuit. Then the impulse response of this network function is given by its inverse transform. The transform H(s) × 1 (1 being the Laplace transform of δ(t)) can be expressed in partial fractions as below.

We have assumed that all poles are non-repeating ones. If there are repeating poles we may assume that the poles are slightly apart by ∆p and evaluate the limit of h(t) as ∆p → 0 after we complete the inversion. We will need a familiar limit for this. This strategy will help us to view all poles as non-repeating ones at the partial fractions stage.
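The familiar limit needed here is presumably the limit of [e(p+Δp)t – ept]/Δp as Δp → 0, which equals t·ept and converts the pair of partial-fraction terms contributed by two poles Δp apart into the t·ept term expected for a double pole. The following Python sketch checks this limit symbolically:

    import sympy as sp

    t, p, dp = sp.symbols('t p Delta_p')

    # Two simple-pole contributions a distance dp apart collapse to t*exp(p*t)
    # as dp -> 0.
    expr = (sp.exp((p + dp) * t) - sp.exp(p * t)) / dp
    print(sp.limit(expr, dp, 0))   # t*exp(p*t)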

Thus each pole contributes a complex exponential function to impulse response. The complex frequency of the complex exponential function contributed by a pole to impulse response is the same as the value of the pole frequency itself.

A point s in the complex signal space (i.e., the s-plane) stands for the complex exponential signal est for all t. But, when a point s is marked out as a pole of a network function by a ‘×’ mark, that signal point contributes est u(t) to the impulse response and not est. Thus, a point in signal space stands for a two-sided complex exponential signal in general and stands for a right-sided complex exponential signal when that point is specified as a pole of a network function.

The evaluation of the residue Ai at the pole pi involves the evaluation of the product of terms like (pi – z1) … (pi – zm) and (pi – p1) … (pi – pi–1)(pi – pi+1) … (pi – pn). Each of these factors will be a complex number. For instance, consider (pi – z1). This is a complex number that can be represented by a directed line drawn from the point s = z1 in the s-plane to the point s = pi in the s-plane with the arrow of the line at s = pi. The length of this line gives the magnitude of the complex number (pi – z1) and the angle of the complex number (pi – z1) is given by the angle the line makes with the positive real axis in the counter-clockwise direction. The magnitude of a product of complex numbers is the product of the magnitudes of the individual numbers. The angle of a product of complex numbers is the sum of the angles of the individual complex numbers. Therefore, evaluation of the residue Ai at the pole pi reduces to determining certain lengths and angles in the pole-zero plot of the network function.

The reasoning employed in the paragraph above also reveals the roles of poles and zeros of a network function in deciding the impulse response terms. The poles decide the number of terms in impulse response and their complex frequencies. The zeros along with the poles and gain factor K decide the amplitude of each impulse response term.

A network function is a stable one if its impulse response decays to zero with time. This is equivalent to stating that its impulse response must be absolutely integrable, i.e., the integral of its absolute value must be finite. Therefore, a network function is stable if all the impulse response terms are damped ones. That is, all the poles must have negative real values or complex values with negative real parts. Therefore, a network function is stable if and only if all its poles are in the left-half of s-plane excluding the jω-axis.

Note that a stable network function in a linear time-invariant circuit does not necessarily imply that the circuit itself is stable. A linear time-invariant circuit is stable only if all the network functions that can be defined in it are stable ones. That is, a stable circuit will have only stable network functions in it. But, an unstable circuit can have both stable and unstable network functions in it.

The graphical interpretation adduced to impulse response coefficients in this section is illustrated in the examples that follow.

Example: 13.12-1

Obtain the pole-zero plots for (i) for positive and negative values of α and sketch the impulse response for α = ± 1.

Solution

These are standard first-order network functions. They are stable ones for positive values of α and unstable ones for negative values of α. They are important, yet simple, functions.

  1. H(s) = α/(s + α). Therefore, h(t) = αe–αt u(t). The pole-zero plot and impulse response are shown in Fig. 13.12-1 for α = ±1.

    Fig. 13.12-1 Pole-zero plot and impulse response for

  2. The pole-zero plots and impulse responses for are shown in Fig. 13.12-2.

    Fig. 13.12-2 Pole-zero plot and impulse response for

  3. . The pole-zero plots and impulse responses are shown in Fig. 13.12-3.

    Fig. 13.12-3 Pole-zero plot and impulse response for

 

Example: 13.12-2

 

A second-order low-pass network function in standard form is given as H(s) = ωn²/(s² + 2ξωns + ωn²), where ξ is the damping factor and ωn is the undamped natural frequency as defined in Section 11.6 in Chapter 11. Obtain expressions for the impulse response of the network function for positive and negative values of ξ in the range –1 < ξ < 1.

Solution

The poles are at s = –ξωn ± jωn√(1 – ξ²). They are complex conjugate poles for –1 < ξ < 1. They are located in the right-half s-plane for –1 < ξ < 0 and in the left-half s-plane for 0 < ξ < 1. They are located on the jω-axis at s = ±jωn when ξ = 0. The poles have a magnitude of ωn for all values of ξ in the range (–1, 1). The pole line makes an angle of cos⁻¹(ξ) with the negative real axis in the case of positive ξ and with the positive real axis in the case of negative ξ. Thus the damping factor magnitude is given by the cosine of the pole angle. See Fig. 13.12-4.

Fig. 13.12-4 Pole-zero plots for a standard second-order low-pass network function with positive damping

 

The residue at the pole marked as B is given by ωn² divided by the complex number represented by the line connecting A and B in Fig. 13.12-4 with the arrow towards B. This line is 2ωn√(1 – ξ²) in length and it makes –90° with the positive real axis. Similarly, the residue at the pole marked as A is given by ωn² divided by the complex number represented by the line connecting B and A in Fig. 13.12-4 with the arrow towards A. This line is 2ωn√(1 – ξ²) in length and it makes +90° with the positive real axis.

Therefore, h(t) = [ωn/√(1 – ξ²)] e–ξωnt sin(ωn√(1 – ξ²)t) u(t).

This response is shown in Fig. 13.12-5 for ωn = 1 and ξ = 0.7, 0.3, 0.1 and 0.05.
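The impulse response derived above can be generated numerically for the plotted cases. The following Python sketch computes the impulse response of ωn²/(s² + 2ξωns + ωn²) for ωn = 1 using scipy and compares it against the closed-form expression obtained from the residues:

    import numpy as np
    from scipy import signal

    wn = 1.0
    for zeta in (0.7, 0.3, 0.1, 0.05):
        sys = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])
        t, h = signal.impulse(sys, T=np.linspace(0, 60, 2000))
        # Closed form from the residue calculation above.
        wd = wn * np.sqrt(1 - zeta**2)
        h_closed = (wn / np.sqrt(1 - zeta**2)) * np.exp(-zeta*wn*t) * np.sin(wd*t)
        print(zeta, np.max(np.abs(h - h_closed)))   # difference should be small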

We observe that as the poles get closer and closer to the jω-axis, the impulse response oscillations become more and more under-damped and last for many cycles.

The impulse response for ξ = –0.05 in Fig. 13.12-6 shows the unbounded nature of impulse response of a network function with poles on right-half s-plane.

Fig. 13.12-5 Impulse response of standard second-order network function for various damping factors

Fig. 13.12-6 Impulse response of standard second-order network function for ξ = –0.05

13.13 SINUSOIDAL STEADY-STATE FREQUENCY RESPONSE FROM POLE-ZERO PLOTS

Let H(s) be a network function defined in a linear time-invariant circuit and let all the poles of this network function be in the left-half of the s-plane excluding the jω-axis. The zeros can be located anywhere in the s-plane. We know that H(s), the network function, is a complex gain if we evaluate it at a particular value of s. In that case, it gives the complex amplitude of the forced response with an input of est with the value of s same as the value at which H(s) was evaluated. The sinusoidal steady-state frequency response function of a stable circuit is given by the complex gain offered by the circuit to the ejωt signal. Therefore, H(s) evaluated with s = jω gives the frequency response function provided the network function is stable. Thus,

H(jω) is a complex function of a real variable ω. The plots of |H(jω)| versus ω and ∠H(jω) versus ω yield the frequency response plots of the network function. The first is called the magnitude plot and the second is called the phase plot.

13.13.1 Three Interpretations for H(jω)

We saw that we can interpret the network function H(s) in three ways in Section 13.11. Three interpretations of H(jω) follow from this.

  1. H(jω) is the ratio of the complex amplitudes of the output complex exponential and the input complex exponential when the input complex exponential is of the form Aejωt. Equivalently, H(jω) is the complex amplitude of the output when the input is ejωt (not ejωtu(t)). The signal ejωt is different from the signal ejωtu(t). Hence, H(jω) is a two-sided function from this point of view.

    If the input is ejωtu(t), then H(jω)ejωt gives the forced response component (same as the steady-state response component).

  2. H(jω) is the ratio of the Laplace transform of zero-state response to the Laplace transform of input when the input is of the form Aejωtu(t). The signal Ae–jωtu(t) is not the same as Aejωtu(t). Hence, H(jω) is a two-sided function from this point of view too.
  3. H(jω) is the expansion of the impulse response h(t) of the circuit in terms of complex exponential signals drawn from the jω-axis in the s-plane. It expands the time-domain signal into the sum of complex exponential functions of ejωt type (i.e., essentially in terms of sinusoids) with ω ranging from –∞ to ∞. Hence, H(jω) is a two-sided function from this point of view too.

There are two ways to solve the problem of finding the steady-state output when the input variable is cosωot u(t). The first method is to express cosωot u(t) as Re(ejωot)u(t) and express the output as Re[H(jωo)ejωot]u(t). This method was called the Phasor Method in Chapter 8. This method results in the steady-state response component and is based on the first interpretation of H(jω).

Re[H(jωo)ejωot] = Re[|H(jωo)| ej∠H(jωo) ejωot] = |H(jωo)| cos[ωot + ∠H(jωo)]. Now, there is no harm if H(jω) is thought of as a single-sided function of ω provided we interpret the magnitude of H(jω) as the amplitude of the output sinusoidal waveform with an input amplitude of 1 and the phase of H(jω) as the phase angle by which the output sinusoidal waveform leads the input sinusoidal waveform.

The second way is to express cosωot u(t) as 0.5ejωotu(t) + 0.5e–jωotu(t) by applying Euler's formula and express the steady-state output as 0.5[H(jωo)ejωot + H(–jωo)e–jωot]. We have seen in Chapter 7 that H(–jω) = [H(jω)]*. Therefore the steady-state output will be Re[H(jωo)]cos ωot – Im[H(jωo)]sin ωot = |H(jωo)| cos[ωot + ∠H(jωo)]; the same as in the first method. This method also is based on the first interpretation of H(jω) but uses a two-sided version of H(jω).

Frequency response function is not new to us. We had dealt with the frequency response of first-order circuits and second order circuits in detail earlier in the book. But the observation that H(jω) can be evaluated by evaluating H(s) on jω-axis leads to a graphical interpretation for sinusoidal steady-state frequency response function based on the pole-zero plot of H(s). This interpretation affords an insight into the variation of magnitude and phase of H(jω) without evaluating it at all values of ω. It helps us to visualise the salient features of frequency response function without extensive calculations.

13.13.2 Frequency Response from Pole-Zero Plot

Each factor of the form (jω – zi) in the numerator of Eqn. 13.13-1 is a complex number that can be thought of as a line directed from s = zi to s = jω in the s-plane. The magnitude of (jω – zi) is equal to the length of the line and the angle of (jω – zi) is the angle that the line makes with the positive real axis in the counter-clockwise direction. A similar interpretation is valid for factors of the type (jω – pi) in the denominator of Eqn. 13.13-1 too. Let dzi be the length of the line joining the zero at s = zi to the excitation signal point s = jω on the imaginary axis in the s-plane. Let the angle that the line makes with the positive real axis in the counter-clockwise direction be θzi. Similar quantities for a pole at s = pi are dpi and θpi. Then, the frequency response function H(jω) can be expressed in terms of these lengths and angles as

Hence, we can make a rough sketch of the frequency response function by visualising how the various zero-distances and pole-distances vary as ω is taken from –∞ to +∞ along the jω-axis in the s-plane.
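The graphical recipe translates directly into a short computation: for each ω, multiply the zero distances, divide by the pole distances and combine the corresponding angles. The following Python sketch does this for an arbitrary pole-zero-gain description and, as an illustration, evaluates a first-order low-pass function of the type treated in the next example with α = 2:

    import numpy as np

    def freq_response(K, zeros, poles, w):
        # Evaluate H(jw) = K * prod(jw - z_i) / prod(jw - p_i) on an array of w.
        s = 1j * np.asarray(w, dtype=float)
        H = K * np.ones_like(s)
        for z in zeros:
            H = H * (s - z)        # zero 'distance' and angle as a complex factor
        for p in poles:
            H = H / (s - p)        # pole 'distance' and angle as a complex factor
        return H

    w = np.linspace(0, 10, 5)
    H = freq_response(2.0, [], [-2.0], w)          # H(s) = 2/(s + 2)
    print(np.abs(H), np.angle(H, deg=True))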

Example: 13.13-1

Obtain the frequency response plots for (i) H(s) = α / (s + α) (ii) H(s) = s / (s + α) using geometrical interpretation of frequency response and obtain expressions for bandwidth in both cases.

Solution (i)

Figure 13.13-1 shows the pole-zero plot and frequency response function. The pole distance and the pole angle are marked in Fig. 13.13-1. Obviously, the gain magnitude goes to 1/√2 times the initial gain when ω = α and the phase at that point is –45°. Therefore the bandwidth is α rad/s and the function is a low-pass function.

Fig. 13.13-1 Pole-zero plot and frequency response of H(s) = α/(s + α)

Solution (ii)

Figure 13.13-2 shows the pole-zero plot and frequency response function. The pole distance and the pole angle are marked in Fig. 13.13-2. The zero distance is the same as the excitation frequency value ω and the zero angle is 90°. Obviously, the gain magnitude goes to 1/√2 times the final gain when ω = α and the phase at that point is 45°. Therefore the bandwidth is α rad/s and the function is a high-pass function.

Fig. 13.13-2 Gain and phase plots of H(s) = s/(s + α)

Example: 13.13-2

The biquadratic network function H(s) = (as² + bs + c)/(s² + 2ξωns + ωn²) is a second-order low-pass function if a = 0, b = 0 and c = ωn². It is a second-order high-pass function if a = 1 and b = c = 0. It is a band-pass function if a = c = 0 and b = 2ξωn, and it is a band-reject function if a = 1, b = 0 and c = ωn². The frequency response functions for low-pass, high-pass and band-pass second-order functions were studied in detail in Section 11.11 in Chapter 11 in the context of the frequency response of the series RLC circuit.

Consider the band-pass function and band-reject function and sketch their frequency response plots for ξ << 1.

Solution

The poles of the function are at s = –ξωn ± jωn√(1 – ξ²). Let us consider the band-pass function first.

The zero of this function is at s = 0.

The distance of the pole at s = –ξωn + jωn√(1 – ξ²) to the excitation frequency point s = jω is denoted by d1 and the distance of the pole at s = –ξωn – jωn√(1 – ξ²) to s = jω is d2. The distance of the zero at s = 0 to s = jω is ω itself. The pole angles θ1 and θ2 are also shown in the pole-zero diagram in Fig. 13.13-3.

 

The magnitude function then is 2ξωnω/(d1d2) and the phase function is (π/2) – (θ1 + θ2). The distances d1 and d2 are equal to ωn at ω = 0 and the sum of the angles θ1 and θ2 at that frequency is 360°. Therefore, the gain at zero frequency is 0 (since the distance from the zero is zero) and the angle is 90°. As ω → ∞, all the three distances go to ∞ and hence the magnitude goes to zero. θ1 and θ2 go to 90° as ω → ∞ and hence the phase angle of the frequency response function goes to –90° as ω → ∞.

As ω increases from 0, the distance d1 decreases and the distances d2 and d3 increase. Consider a pair of ω values equal to ωn√(1 – ξ²) ± ξωn, i.e., two ω values separated by ξωn on either side of the imaginary part of the pole. The distance d1 undergoes a variation from √2·ξωn to ξωn and again to √2·ξωn as ω varies from ωn√(1 – ξ²) – ξωn to ωn√(1 – ξ²) + ξωn, passing through the point ω = ωn√(1 – ξ²). The distances d2 and d3 also vary. However, if ξ << 1, the variation in these two quantities will be negligible over this frequency range and the approximations d2 ≈ 2ωn and d3 ≈ ωn will be satisfactory.

Therefore, the magnitude of the frequency response function will vary from 1/√2 to 1 and again to 1/√2 as ω varies from ωn√(1 – ξ²) – ξωn to ωn√(1 – ξ²) + ξωn, passing through the point ω = ωn√(1 – ξ²). The imaginary part of the poles can be taken as approximately ωn itself for ξ << 1. Therefore, the maximum gain is 1 at ω = ωn and the gain goes through 1/√2 at ω ≈ ωn ± ξωn. The phase angle at ω = ωn is zero. See Fig. 13.13-3.

Fig. 13.13-3 Pole-zero plot and frequency response plot for a second order band-pass function

The center frequency of the narrow band-pass function is seen to be ≈ ωn and the bandwidth is ≈ 2ξωn. Thus the ratio of center frequency to bandwidth of a narrow band-pass second-order network function is 1/2ξ or Q of the denominator polynomial.
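The approximate centre frequency of ωn and bandwidth of 2ξωn claimed above are easy to confirm numerically. The following Python sketch evaluates the band-pass function 2ξωn·jω/[(jω)² + 2ξωn·jω + ωn²] on a dense frequency grid for ξ = 0.05 and ωn = 1, and measures the band over which the gain exceeds 1/√2:

    import numpy as np

    zeta, wn = 0.05, 1.0
    w = np.linspace(0.5, 1.5, 200001)
    s = 1j * w
    H = (2 * zeta * wn * s) / (s**2 + 2 * zeta * wn * s + wn**2)
    mag = np.abs(H)

    band = w[mag >= 1 / np.sqrt(2)]     # half-power band
    print(w[np.argmax(mag)])            # centre frequency, close to wn = 1
    print(band[-1] - band[0])           # bandwidth, close to 2*zeta*wn = 0.1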

Let us consider the band-reject function now.

The poles are at s = –ξωn ± jωn√(1 – ξ²) and the zeros are at s = ±jωn. The distance of the pole at s = –ξωn + jωn√(1 – ξ²) to the excitation frequency point s = jω is denoted by d1 and the distance of the pole at s = –ξωn – jωn√(1 – ξ²) to s = jω is d2. The distance of the zero at s = jωn to s = jω is d3 and the distance of the zero at s = –jωn to s = jω is d4. The pole angles θ1 and θ2 are also shown in the pole-zero diagram in Fig. 13.13-4. The zero-angles are –90° and 90° for all ω < ωn and 90° and 90° for all ω > ωn. The gain is given by d3d4/d1d2 and starts at 1 at ω = 0 since all four distances are equal to ωn at that frequency. The gain goes to 1 as ω → ∞ since d1d2 ≈ d3d4 under that condition. The gain is zero at ω = ωn since d3 is zero under that condition. Therefore, it is a band-reject function.

The pole-zero plots and frequency response plots are shown in Fig. 13.13-4. For ξ << 1, it may be shown that the gain crosses 1/√2 at ω ≈ ωn ± ξωn and that the phase angles at those frequencies are –45° and +45°. The centre frequency of the narrow band-reject function is seen to be ≈ ωn and the bandwidth is ≈ 2ξωn. Thus the ratio of centre frequency to bandwidth of a narrow band-reject second-order network function is 1/2ξ or Q of the denominator polynomial.

Fig. 13.13-4 Pole-zero plot and frequency response plot for a second-order band-reject function

The frequency response of higher order network functions can similarly be sketched. A higher order H(jω) can be expressed as the product of first-order factors and biquadratic factors. The magnitude response for first-order factors and biquadratic factors may be sketched separately first and then multiplied together to get the magnitude response curve of H(jω). Phase curves will have to be added.

Poles on negative real axis contribute a monotonically decreasing magnitude response. Poles close to jω-axis render resonant peaks in magnitude response and zeros on jω-axis produce zero gain response at the excitation frequencies equal to the zero locations. Thus graphical interpretation of frequency response function is a valuable aid to a circuit designer who wants to locate poles and zeros of a network function to tailor the frequency response function to meet design specifications.

13.14 SUMMARY
  • Let v(t) be a right-sided function that is bounded by Meαt with some finite value of M and α. Then the Laplace transform pair is defined as V(s) = ∫ v(t)e–st dt (the integral taken from t = 0– to ∞) and v(t) = (1/2πj) ∫ V(s)est ds (the integral taken along the line from s = σ – j∞ to σ + j∞),

    where s = σ + jω is the complex frequency variable standing for the complex exponential function est with σ value > α. The ROC of V(s) is the entire plane to the right of Re(s) = α line.

     

    Table 13.14-1 Some Important Properties of Laplace Transforms

  • Laplace transform expands a transient right-sided time-function in terms of infinitely many complex exponential functions of infinitesimal amplitudes. The ROC of such a Laplace transform will include right-half of s-plane and hence the time-domain waveform gets constructed by growing complex exponential functions.
  • For a linear time-invariant circuit, the network function H(s), defined as a ratio of rational polynomials in s, has three interpretations.
    1. It may be viewed as a generalised frequency response function. Its magnitude gives the ratio between the amplitude of output complex exponential function and input complex exponential function when input is of the form Aest. Its angle gives the phase angle by which the output complex exponential function leads the input complex exponential function.
    2. It is also the ratio of Laplace transform of zero-state response to Laplace transform of input source function called ‘the s-domain System Function’.
    3. Further, it is the Laplace transform of Impulse Response
  • Laplace transforms can be inverted by the method of partial fractions.
  • Laplace transformation of both sides of a linear constant-coefficient ordinary differential equation converts it into an algebraic equation on transforms. Thus Laplace transform affords a convenient way to solve such differential equations with initial conditions.
  • All circuit elements have s-domain equivalents. The s-domain equivalent of the complete circuit can be constructed by replacing each element with its s-domain equivalent. KVL and KCL are directly applicable to the transformed quantities.
  • s-domain equivalent circuits may be analysed by nodal analysis or mesh analysis procedures. All circuit theorems developed in the context of memoryless circuits are applicable to s-domain equivalent circuits.
  • The s-domain System Function is also called the network function. Immittance functions, transfer functions and transfer immittance functions are three classes of network functions usually employed in circuit analysis. Those complex frequency values at which a network function goes to ∞ are called its poles and those complex frequency values at which the network function goes to zero are called its zeros.
  • Pole-zero plot along with a gain factor K will specify a network function uniquely. The impulse response of the network function may be obtained from its pole-zero plot. The poles decide the number of terms in impulse response and their complex frequencies. The zeros along with the poles and gain factor K decide the amplitude of each impulse response term.
  • A network function is a stable one if its impulse response decays to zero with time. This is equivalent to stating that its impulse response must be absolutely integrable, i.e., the integral of its absolute value must be finite. Therefore, a network function is stable if all the impulse response terms are damped ones. That is, all the poles must have negative real values or complex values with negative real parts. Therefore, a network function is stable if and only if all its poles are in the left-half of s-plane excluding the jω-axis.
  • Sinusoidal steady-state frequency response function H(jω) can be obtained by evaluating H(s) on the jω-axis. This evaluation can be carried out in a graphical manner too.
  • The DC steady-state gain of a network function is H(0).
13.15 PROBLEMS
  1. Evaluate the voltage across the capacitor in a series RC circuit with R = 1 kΩ and C = 1000 μF at t = 0 + and t = 20 sec if the applied voltage to the circuit is (a) 2 e0.01t cost V (b) 2 e0.01tcost u(t) V. Use differential equation approach.
  2. A current source with iS(t) = f(t) u(t) A is applied to a parallel RL circuit with R = 1 Ω and L = 1 H. The voltage across the combination is found to be = 2etsint u(t) V. Find f(t) and the initial current in the inductor. Do not use Laplace transform technique.
  3.  The output variable y in a linear time-invariant circuit is related to the input variable x by the following differential equation Its zero-input response is found to contain e term among other terms. If x(t) = 3e0.01t sint, find the instantaneous value of y at t = 10 s. Do not use Laplace transform technique.
  4. A voltage source vS(t) = 2e^(–0.2t) cos(t – 45°) V is applied to a series RLC circuit with R = 1 Ω, L = 1 H and C = 1 F from t = 0+. The circuit is initially relaxed. Determine the total response of the current in the circuit by solving the circuit differential equation without using Laplace transforms.
  5. A bounding exponential Me^(αt) is to be determined for each of the functions listed below. Find the minimum value of α and the corresponding value of M for each. (i) t u(t) (ii) u(t) – u(t–2) (iii) e^(3t) u(t) (iv) e^(–3t) u(t) (v) e^(3t) u(t–3) (vi) e^(–3t) u(t–2) (vii) e^(2t) cos 2t u(t) (viii) e^(–2t) cos 2t u(t).
  6. The signals listed in Problem 5 are applied as current sources to a parallel RL circuit with R = 2 Ω and L = 1 H. Find the instantaneous voltage across the combination at t = 2 sec in each case by using the Laplace transform technique.
  7. The Laplace transform of the impulse response of a linear time-invariant circuit is given by (i) Find the differential equation describing the circuit, assuming that the output variable is y and the input variable is x. (ii) Find the total response of the circuit if x(t) = 3 for t > 0+ with y(0) = 1 unit and y'(0) = 1 unit/s by the Laplace transform technique.
  8. The Laplace transform of the current drawn by an initially relaxed dynamic circuit from a unit impulse voltage source is I(s). (i) Find the differential equation relating the current drawn by the circuit to the voltage applied. (ii) Find the total response of the current if the circuit is initially relaxed and vS(t) = 2e^(–0.5t) cos 2t V for t ≥ 0+ is applied to it, by using Laplace transforms.
  9. Find the total response of the current in the circuit in Problem 8 if the circuit is initially relaxed at t = 0 and vS(t) = 2e^(–0.5t) cos 2t u(t – 0.5) V is applied to it.
  10. Let f(t) be a periodic waveform with a period of T s. Let v(t) = f(t) u(t) and vp(t) = f(t)[u(t) – u(t – T)]. That is, v(t) is the right-sided part of the periodic waveform and vp(t) is one period of f(t). Develop an expression for the Laplace transform of v(t) in terms of the Laplace transform of vp(t) using the time-shifting theorem. What is the ROC of the Laplace transform of v(t)?
  11. Find the Laplace transforms of (i) a symmetric square wave of unit amplitude and unit period (ii) a rectangular pulse waveform of unit amplitude with first pulse located between 0 sec and 0.5 sec and pulses repeating every 2 sec using the result derived in Problem 10.
  12. Solve the system of differential equations
  13. If and y(0) = 0, y'(0) = 1 and y"(0) = –1, find y(t) by using Laplace transform technique.
  14. Let x(t) = 3t u(t) and y(t) = 2u(t). Find x(t) * y(t) by inverting X(s)Y(s) and verify by time-domain convolution.
  15. The impulse response of a linear time-invariant circuit is 2e^(–0.05t) u(t). Find the zero-state response when the input is 3e^(–0.1t) by the convolution theorem on Laplace transforms.
  16. Let x(t) = t[u(t) – u(t–2)] and y(t) = u(t) – u(t–2). Find x(t) * y(t) by inverting X(s)Y(s) and verify by time-domain convolution (a numerical cross-check sketch for Problems 14 and 16 appears after the problem list).
  17. Find the Laplace transforms of (i) cos πt [u(t) – u(t–2)] (ii) 2 sinh 0.2t [u(t) – u(t–1)] by using the shifting theorem.
  18. Find the Laplace transform of
  19. Let (i) Find and its inverse transform. (ii) Find and its inverse transform.
  20. Using Laplace transforms find the value of R in the circuit in Fig. 13.15-1 such that the damping factor of the circuit for voltage input is 0.2. Find the step response for vo(t) with this R and verify the initial value and final value theorems on step response.

    Fig. 13.15-1

  21. (i) Obtain the input impedance function and input admittance function for the circuits shown in Fig. 13.15-2 and prepare pole-zero plots for these immittance functions. (ii) Determine the zero-state input current as a function of time when the input is u(t) V and verify the initial value and final value theorems in each case. (iii) Determine the zero-state input voltage as a function of time when the input is u(t) A. All circuit elements have unit values.

    Fig. 13.15-2

  22. (a) Find the voltage transfer function Vo(s)/Vs(s) and the driving-point impedance function in the circuit in Fig. 13.15-3. (b) Prepare the pole-zero plot for both network functions. (c) Determine the step response for vo(t) and iS(t). (d) Verify the initial value theorem and final value theorem on Laplace transforms in the case of vo(t) and iS(t).

    Fig. 13.15-3

  23. The initial current in the inductor is 0.5 A and the initial voltage across the capacitor is 1 V in the circuit in Fig. 13.15-4. A single rectangular pulse of current is applied to the circuit as shown in the figure. Solve for vo(t) by s-domain equivalent circuit method.

    Fig. 13.15-4

  24. (i) Show that the voltage transfer function in the circuit in Fig. 13.15-5 is a real number if R1C1 = R2C2. (ii) Obtain the input impedance function with R1C1 = R2C2.

    Fig. 13.15-5

  25. The impulse response of ix in the circuit in Fig. 13.15-6 contains a real exponential term that has a time constant of 1.755 s. (i) Show the pole-zero plots for Ix(s) and Vo(s) when vS(t) = u(t) V and the circuit is initially relaxed. (ii) Find ix(t) and vo(t) for t ≥ 0+ if vS(t) = u(t) and both capacitors have 1 V across them with the bottom plate positive at t = 0– and the inductor has zero current at t = 0–.

    Fig. 13.15-6

  26. The impulse response of the input current in the circuit in Fig. 13.15-7 contains a (1/6) δ(t) component. (i) Find the value of R. (ii) Find the driving-point impedance function and show its pole-zero plot. (iii) Find the time-function describing the current delivered by the source for t ≥ 0+ if vS(t) = 2cos(2t + 30°) u(t), i1(0) = 1 A and i2(0) = –1 A.

    Fig. 13.15-7

  27. Find the zero-state response for vx(t) by nodal analysis in the s-domain in the circuit in Fig. 13.15-8 if vS1(t) = 2u(t) V and vS2(t) = 2e^t u(t) V.

    Fig. 13.15-8

  28. Find the zero-input response for vx(t) by mesh analysis in the s-domain in the circuit in Fig. 13.15-8 if the first capacitor has 1 V across it at t = 0– with the left plate positive and the second capacitor has 1 V across it with the bottom plate positive at t = 0–.
  29. Pole-zero plots of some transfer functions are shown in Fig. 13.15-9. Find the transfer functions and their impulse responses. The DC gain for all the transfer functions is unity.

    Fig. 13.15-9

  30. Obtain the voltage transfer function in the circuit in Fig. 13.15-10 in terms of k and determine the range of values for k such that the transfer function is stable. Use mesh analysis in s-domain.

    Fig. 13.15-10

  31. The value of the RC product in the circuit in Fig. 13.15-11 is 1 μs. (i) Derive the voltage transfer function for the circuit and determine the maximum value of A for which the circuit will be stable. (ii) If the value of A actually used is 1/10th of this value, calculate the poles and zeros of the voltage transfer function and show the pole-zero plot. (iii) Sketch the frequency response plots for the above condition by geometrical interpretation of frequency response.

    Fig. 13.15-11

  32. The impulse response of a voltage transfer function in a linear time-invariant circuit is found to contain two waveshapes, e^(–0.5t) and e^(–t) sin 2t. The steady-state step response of the same circuit is 0.7 V. Find the voltage that must be applied to the circuit if the desired steady-state output is 10 sin(4t + 45°) V, by geometrical calculations in the s-plane. Assume that the transfer function has no zeros.
  33. Mark the pole-zero plot of the transfer function H(s) and obtain its frequency response plot by geometrical calculations in the pole-zero plot.
  34. Obtain the voltage transfer function in the Opamp circuit in Fig. 13.15-12 and show that the circuit can work as a band-pass filter. Select the values for R1C1 and R2C2 such that the filter has a centre frequency of 1000 rad/s and bandwidth of 100 rad/s.

    Fig. 13.15-12

  35. Sketch the frequency response plots for the transfer functions with pole-zero plots as in Fig. 13.15-13 (a) through (g) approximately by using geometrical interpretation in the s-plane. 'r' indicates the multiplicity number. The maximum gain is unity in all cases. (An illustrative sketch of the geometrical evaluation procedure appears after the problem list.)

    Fig. 13.15-13
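For Problems 14 and 16, the result obtained by inverting X(s)Y(s) can be cross-checked numerically. The Python sketch below approximates the convolution integral on a discrete time grid for the signals of Problem 16; the step size dt and the length of the time grid are assumptions of the sketch, chosen only to cover the support of x(t) * y(t).

    # Numerical cross-check of time-domain convolution for the signals of Problem 16:
    # x(t) = t[u(t) - u(t-2)] and y(t) = u(t) - u(t-2).
    import numpy as np

    dt = 0.001                         # assumed step size of the discrete approximation
    t = np.arange(0.0, 6.0, dt)        # time grid covering the support of x*y (0 to 4 s)

    x = np.where(t < 2.0, t, 0.0)      # ramp on [0, 2), zero afterwards
    y = np.where(t < 2.0, 1.0, 0.0)    # unit pulse on [0, 2)

    # Discrete approximation of the convolution integral: (x*y)(t) ~ sum of x(tau) y(t - tau) dt
    conv = np.convolve(x, y)[:len(t)] * dt

    # Sample the numerical result at a few instants for comparison with the analytical answer.
    for tq in (0.5, 1.0, 2.0, 3.0, 4.0):
        k = int(round(tq / dt))
        print("(x*y)(%.1f) is approximately %.4f" % (tq, conv[k]))

The same script can be reused for Problem 14 by redefining x and y on the grid, although the analytical inversion of X(s)Y(s) remains the method those problems ask for.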
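Problems 32, 33 and 35 rely on the geometrical interpretation of frequency response: |H(jω)| equals the gain factor K multiplied by the product of the lengths of vectors drawn from the zeros to the point jω, divided by the product of the lengths of vectors drawn from the poles to jω, and the angle of H(jω) is the sum of the zero-vector angles minus the sum of the pole-vector angles. The Python sketch below carries out this construction for an assumed pole-zero set (one zero at the origin and a conjugate pole pair at –1 ± j2 with K = 1); it only illustrates the procedure and is not tied to any of the figures.

    # Geometrical evaluation of H(jw) from a pole-zero plot (assumed pole-zero set, K = 1):
    # H(s) = K * (s - z1)...(s - zm) / ((s - p1)...(s - pn))
    import numpy as np

    K = 1.0
    zeros = np.array([0.0 + 0.0j])                # one zero at the origin (assumed)
    poles = np.array([-1.0 + 2.0j, -1.0 - 2.0j])  # conjugate pole pair at -1 +/- j2 (assumed)

    def H_geometric(w):
        """Magnitude and phase of H(jw) from vector lengths and angles in the pole-zero plot."""
        s = 1j * w
        zero_vectors = s - zeros   # vectors drawn from each zero to the point jw
        pole_vectors = s - poles   # vectors drawn from each pole to the point jw
        magnitude = K * np.prod(np.abs(zero_vectors)) / np.prod(np.abs(pole_vectors))
        phase_deg = np.degrees(np.sum(np.angle(zero_vectors)) - np.sum(np.angle(pole_vectors)))
        return magnitude, phase_deg

    for w in (0.5, 1.0, 2.0, 5.0):
        mag, ph = H_geometric(w)
        print("w = %.1f rad/s: |H(jw)| = %.3f, angle = %.1f deg" % (w, mag, ph))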