
Proving a Continuous Function is Causal

Introduction

Patricia Mellodge , in A Practical Approach to Dynamical Systems for Engineers, 2016

1.2.5 Causal versus Noncausal

A causal system is one whose output depends only on present and past inputs. A noncausal system's output depends on future inputs as well. In a sense, a noncausal system is the mirror image of one that has memory: instead of recalling past inputs, it anticipates future ones.

How can a real-world system be noncausal? It cannot because real systems cannot react to the future. But noncausal systems have important real-world applications. Consider a song stored in a sound file. Because the entire song is stored, we could process the sound by filtering in a way that has the current notes depend on notes later in the song. This is an example of postprocessing in which noncausal systems may be implemented. Another example of a noncausal system application is image processing. The pixels to the left of the current location can be considered as the "past" and pixels to the right as the "future."
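The postprocessing idea can be sketched in code. The following is a hypothetical example, not from the text: a centered moving average over a stored signal, in which each output sample also uses the next ("future") input sample, so the filter is noncausal but perfectly usable offline.

```python
# Hypothetical sketch of a noncausal filter applied to a stored signal:
# each output sample uses one "future" input sample.
def centered_average(x):
    """y[n] = average of x[n-1], x[n], x[n+1], with shorter windows at the edges."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - 1): n + 2]   # includes x[n+1] when available
        y.append(sum(window) / len(window))
    return y

samples = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
print(centered_average(samples))
```

Because the whole signal is already stored, "looking ahead" one sample is just an array access; in a real-time setting the same filter would require a one-sample delay.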

URL: https://www.sciencedirect.com/science/article/pii/B9780081002025000012

Digital Signals and Systems

Li Tan , Jean Jiang , in Digital Signal Processing (Second Edition), 2013

3.2.3 Causality

A causal system is one in which the output y(n) at time n depends only on the current input x(n) at time n and its past input sample values such as x(n − 1), x(n − 2), …. Otherwise, if a system output depends on future input values such as x(n + 1), x(n + 2), …, the system is noncausal. A noncausal system cannot be realized in real time.

EXAMPLE 3.4

Determine whether the systems

a.

y(n) = 0.5x(n) + 2.5x(n − 2), for n ≥ 0

b.

y(n) = 0.25x(n − 1) + 0.5x(n + 1) − 0.4y(n − 1), for n ≥ 0

are causal.

Solution

a. Since for n ≥ 0 the output y(n) depends on the current input x(n) and its past value x(n − 2), the system is causal.

b. Since for n ≥ 0 the output y(n) depends on the current input x(n) and its future value x(n + 1), the system is noncausal.
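A code sketch of the two difference equations (hypothetical helper names, zero initial conditions assumed) makes the distinction concrete: system (a) can be evaluated sample by sample in real time, while system (b) needs x(n + 1) and is therefore only computable offline or with an added delay.

```python
# System (a): y(n) = 0.5 x(n) + 2.5 x(n-2) -- causal, real-time computable.
def system_a(x):
    """Past samples before n = 0 are assumed zero."""
    y = []
    for n in range(len(x)):
        x_n2 = x[n - 2] if n >= 2 else 0.0
        y.append(0.5 * x[n] + 2.5 * x_n2)
    return y

# System (b): y(n) = 0.25 x(n-1) + 0.5 x(n+1) - 0.4 y(n-1) -- noncausal,
# because forming y(n) already requires the future sample x(n+1).
def system_b_offline(x):
    """Only computable offline, once the whole input record is stored."""
    y = []
    for n in range(len(x)):
        x_n1 = x[n - 1] if n >= 1 else 0.0
        x_f1 = x[n + 1] if n + 1 < len(x) else 0.0   # future sample!
        y_prev = y[n - 1] if n >= 1 else 0.0
        y.append(0.25 * x_n1 + 0.5 * x_f1 - 0.4 * y_prev)
    return y

print(system_a([1.0, 0.0, 0.0, 0.0]))         # response to a unit impulse
print(system_b_offline([1.0, 0.0, 0.0, 0.0]))
```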

URL: https://www.sciencedirect.com/science/article/pii/B9780124158931000032


Signal Processing, General

Rao Yarlagadda , John E. Hershey , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

II.E.3 Causality

A causal system is a system for which the output at any time t_0 depends on the inputs for t ≤ t_0 only. That is, the response does not depend on future inputs; it relies only on past and present inputs.

The above concepts allow for the characterization of a linear system. One function that is important to us is the impulse function, δ(t), which is defined in terms of the process

(31) ∫_{−∞}^{∞} x(t)δ(t) dt = x(0),

where x(t) is any test function that is continuous at t  =   0. Equation (31) is a special case of the so-called sifting property

(32) ∫_{−∞}^{∞} x(t)δ(t − t_0) dt = x(t_0),

where again x(t) is assumed to be continuous at t  = t 0. An example of an impulse function is shown in Fig. 9, where we have

FIGURE 9. An example of an impulse function, as ε → 0.

(33) δ(t) = lim_{ε→0} δ_ε(t).

Intuitively we see that any function having unit area and zero width in the limit as some parameter approaches zero is a suitable representation of δ(t). A dynamite explosion is a good approximation, for example, for an impulse input, which is used in seismic exploration. In the following we will consider exclusively linear time invariant systems. The impulse response, h(t), of a linear time invariant system is defined to be the response of the system to an impulse at t  =   0. By the linear time invariant properties, we can see that the response of a linear system to the input

(34) x(t) = Σ_{n=1}^{N} a_n δ(t − t_n)

is

(35) y(t) = Σ_{n=1}^{N} a_n h(t − t_n).

Using the sifting property in Eq. (32), we can write

(36) x(t) = ∫_{−∞}^{∞} x(t′)δ(t − t′) dt′.

Using the rectangular integration, we can approximate Eq. (36) by

(37) x(t) ≈ Σ_{n=−N_1}^{N_2} x(nΔT)δ(t − nΔT)ΔT,

where ΔT is some small time increment. The response y(t) in Eq. (35) is then given by

(38) y(t) ≈ Σ_{n=−N_1}^{N_2} x(nΔT)h(t − nΔT)ΔT.

As ΔT → 0, nΔT approaches the continuous variable t′, the sum in Eq. (38) becomes an integral, and we have

(39) y(t) = ∫_{−∞}^{∞} x(t′)h(t − t′) dt′ = ∫_{−∞}^{∞} h(t′)x(t − t′) dt′,

which is usually referred to as a convolution integral. The second integral in Eq. (39) is obtained from the first by a change of variable. The convolution is an important relation which, in words, says that the response to an arbitrary input is related to the impulse response via Eq. (39). Equation (39) is symbolically written in the form

(40) y ( t ) = x ( t ) * h ( t ) = h ( t ) * x ( t ) ,

where (*) represents convolution.
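The rectangular approximation of Eqs. (37)–(38) can be sketched numerically. In the illustrative example below (the pulse input, exponential impulse response, and step size are assumptions, not from the text), the sum converges to the exact convolution integral as ΔT shrinks.

```python
import math

# Sketch of Eq. (38): approximate y(t) = ∫ x(t') h(t - t') dt' by a
# rectangular-rule sum with step dT.
def convolve_rect(x, h, dT):
    """y(n dT) = sum_k x(k dT) h((n - k) dT) dT."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(n + 1):
            if n - k < len(h):
                acc += x[k] * h[n - k] * dT
        y.append(acc)
    return y

dT = 0.01
t = [k * dT for k in range(1000)]
x = [1.0 if tt < 1.0 else 0.0 for tt in t]   # unit pulse on [0, 1)
h = [math.exp(-tt) for tt in t]              # causal impulse response e^{-t}u(t)
y = convolve_rect(x, h, dT)

# Analytically, y(t) = 1 - e^{-t} for 0 <= t <= 1.
print(y[100], 1 - math.exp(-1.0))            # both ≈ 0.63
```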

The convolution in Eq. (39) can be computed analytically only if x(t) and h(t) are known analytically and can be integrated. Another approach is to use transform theory. That is, if the Fourier transforms of x(t) and h(t) are expressed as

(41) F [ x ( t ) ] = X ( f )

and

(42) F [ h ( t ) ] = H ( f ) ,

then we can show that

(43) Y ( f ) = F [ y ( t ) ] = H ( f ) X ( f )

and

(44) y(t) = ∫_{−∞}^{∞} H(f)X(f)e^{j2πft} df.

The transform of h(t), H(f), is usually referred to as a transfer function relating the input and the output. Transfer functions play a major role in communication theory, control systems, and signal processing. We will consider the discrete convolution later. Next, let us consider the concepts of correlation for the aperiodic case.

The process of correlation is useful for comparing two signals and it measures the similarity between one signal and a time-delayed version of a second signal. The cross-correlation of x(t) and g(t) is defined by

(45) R_{gx}(τ) = ∫_{−∞}^{∞} g(t)x(t + τ) dt,

which is a function of τ, the amount of time displacement. When g(t) = x(t), R_{gg}(τ) is referred to as the autocorrelation. Correlation functions are useful in many areas. For example, if x(t) = g(t − t_0), then R_{gx}(τ) will be a maximum when τ = t_0, indicating a method for measuring the delay. It is of interest that the cross-correlation in Eq. (45) can be expressed in terms of the convolution in Eq. (40). It can be shown that

(46) R_{ḡx}(τ) = g(τ) * x(τ),

where ḡ(t) = g(−t). That is, the cross-correlation of g(−t) with x(t) results in the convolution of g(t) with x(t).

Transforms can be used to compute correlations also. For example, if F[x(t)]   = X(f), F[g(t)]   = G(f), and F[R gx(τ)]   = S gx(f), then

(47) S_{gx}(f) = G*(f)X(f),

where G*(f) is the complex conjugate of G(f). Computing the inverse transform of Eq. (47) gives the cross-correlation function. Next, let us consider the discrete computation of convolutions and correlations.
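Equation (47) can be checked numerically. The sketch below (illustrative signals; the FFT makes the correlation circular) compares the inverse transform of G*(f)X(f) with a directly computed cross-correlation, and locates a known delay from the correlation peak.

```python
import numpy as np

# Cross-correlation via the conjugate spectrum, Eq. (47): S_gx(f) = G*(f) X(f).
rng = np.random.default_rng(0)
N = 64
g = rng.standard_normal(N)
x = np.roll(g, 5)                      # x is g delayed by 5 samples (circularly)

# Frequency-domain route: inverse transform of G*(f) X(f)
R_freq = np.fft.ifft(np.conj(np.fft.fft(g)) * np.fft.fft(x)).real

# Direct circular cross-correlation R_gx(m) = sum_n g(n) x(n + m)
R_direct = np.array([np.sum(g * np.roll(x, -m)) for m in range(N)])

print(np.allclose(R_freq, R_direct))   # the two routes agree
print(int(np.argmax(R_freq)))          # peak sits at the delay
```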

URL: https://www.sciencedirect.com/science/article/pii/B0122274105006888

Nichols-Krohn-Manger-Hall Chart

Yazdan Bavafa-Toosi , in Introduction to Linear Control Systems, 2019

8.7.3 Bandwidth

For a causal system of type one or higher (and thus tangent to the 0 dB M-contour at low frequencies in the upper part of the plane), the bandwidth is the smallest frequency at which its magnitude crosses the −3 dB line. Thus for such systems we should look for the intersection of the NKMH plot of the system with the −3 dB M-contour. Straightforward determination of the bandwidth for such systems seems to be the only tractable use of the NKMH chart.

Example 8.6

Find the bandwidth of the system P ( s ) = 2 / [ s ( s + 1 ) 3 ] .

Note that the system satisfies the condition. The closed-loop BW is thus easily obtained as the frequency of the intersection of the NKMH plot of the system with the −3 dB M-contour, as depicted in Fig. 8.12. Recall that frequency is the hidden parameter of the plot and is found by clicking on the point of interest. The answer is BW = ω = 1.09.

Figure 8.12. Example 8.6.

Remark 8.4

The abovementioned use of the NKMH chart is only for the sake of illustrating how it was once usable for this purpose in the pre-MATLAB® era, when everything had to be done by hand. Clearly, now we simply use the command "bandwidth" in MATLAB®. Even in that era, the method failed for systems which did not satisfy the aforementioned condition. This is shown in the following example.
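For comparison, the bandwidth in Example 8.6 can be cross-checked numerically. The sketch below is not the NKMH graphical method; it assumes unity feedback and locates the −3 dB closed-loop frequency by bisection on |T(jω)|.

```python
# Sketch: locate the -3 dB closed-loop bandwidth of P(s) = 2/[s(s+1)^3]
# numerically, assuming a unity-feedback loop.
def T(w):
    """Closed-loop frequency response T(jw) = P/(1 + P)."""
    s = 1j * w
    P = 2 / (s * (s + 1) ** 3)
    return P / (1 + P)

# -3 dB below the low-frequency gain (|T| -> 1 for a type-1 system)
target = abs(T(1e-6)) / 2 ** 0.5

lo, hi = 0.5, 2.0                 # bracket around the crossing, by inspection
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if abs(T(mid)) > target else (lo, mid)

print(round((lo + hi) / 2, 2))    # close to the chart reading of 1.09
```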

Example 8.7

Find the frequency of the intersection of the NKMH plot of the system of Example 8.2, i.e., P(s) = 2/(s + 1)³, with the −3 dB M-contour.

The NKMH chart is provided in Fig. 8.13. By clicking on the designated point we find that the frequency is 1.34, which is different from the true bandwidth of the system, BW ≈ 1.54. Note that the plot has two intersections with the −3 dB M-contour and we chose the larger frequency. (Why?)

Figure 8.13. NKMH chart of Example 8.7.

Remark 8.5

The discrepancy is slight in this system. From a theoretical standpoint it can be quite noticeable in a system, as much as a factor of 10. This is evinced in the ensuing example.

Example 8.8

For the system P = 10/(s − 9) the closed-loop BW is 1, whereas the −3 dB M-contour intersection frequency is 14.5. For such systems the intersection should be computed with the M-contour whose value is 3 dB less than that of the M-contour to which the plot is tangent at the low-frequency end. The NKMH chart of this system is given in Fig. 8.14. The closed-loop system is 10/(s + 1), and thus the central M-contour to which the plot is tangent at the low-frequency end has M = 20 log 10 = 20 dB, whereas the second central M-contour has M = 20 − 3 = 17 dB. The intersection frequency with this M-contour correctly gives the BW of the system as ω = 1 rad/s.

Figure 8.14. Example 8.8. Correct computation of the bandwidth.

Once again, we stress that this is only for the purpose of illustration. For such systems bandwidth computation via the NKMH chart is clearly not a tractable method, since the available "sheets" did not have all the M-contour values (even MATLAB® does not plot all these curves) and the plot had to be drawn quite carefully.

Next we discuss the high sensitivity region in the NKMH chart context.

URL: https://www.sciencedirect.com/science/article/pii/B9780128127483000082

Knowledge Representation for Causal Calculi on Internet of Things

Phillip G. Bradford , ... Marcus Tanque , in Artificial Intelligence to Solve Pervasive Internet of Things Issues, 2021

7.19 Conclusion

Distributed IoT networks, and the causal systems built on them, are a reality. Together, IoT devices/sensors and SCM have a great deal of potential. In general, causality is fundamental in reasoning systems, and pushing causal reasoning down to the IoT device-system level may open many new opportunities. The process entails KR for causality that is suited for small, constrained devices and systems. It is not clear how well the nature of IoT devices/sensors will suit any of the three systems mentioned here: Pearl's do-calculus, Shafer's probability trees, and Halpern-Pearl causality. These systems are based on logic and probability, and they are expensive to deploy and to perform a range of tasks. Determining causality intuitively seems essential and potentially costly. In essence, declarative systems offer the foundations for causal reasoning [17]. How a cause-effect relationship is computed may not affect the function of most causal reasoning systems; hence, there may be cases where the reasoning itself is essential to analyze [17].

URL: https://www.sciencedirect.com/science/article/pii/B9780128185766000071

Multirate and Wavelet Signal Processing

In Wavelet Analysis and Its Applications, 1998

Definition 4.2.3.1

The McMillan degree, μ, of a p × r causal system H(z) is the minimum number of delay units (z −1 elements) required to implement it, that is

μ = deg(H(z)).

If the system is noncausal, then the degree is undefined. If H(z) = z −1 R, where R is an M × N matrix with rank ρ, then

R = T S

where T is M × ρ and S is ρ × N. Therefore,

H(z) = z⁻¹R = T[z⁻¹I_ρ]S.

Hence, we can implement the system with ρ delays. So, the system has a McMillan degree ⩽ ρ. As an example, consider

H(z) = z⁻¹ [1 2 3; 1 2 3; 1 2 3].

We can rewrite H(z) as

H(z) = [1 1 1]ᵀ z⁻¹ [1 2 3].

Thus, the system can be implemented with a single delay, illustrating that the McMillan degree of the system is unity.
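The rank-factorization argument can be illustrated numerically. The sketch below (illustrative, using the 3 × 3 example from the text) verifies that R has rank one and that a T S factorization with ρ = 1 reproduces R, so a single delay suffices.

```python
import numpy as np

# R = T S with T (M x rho) and S (rho x N), so H(z) = z^{-1} R can be
# implemented with rho delay elements; here rho = 1.
R = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]], dtype=float)
rho = np.linalg.matrix_rank(R)

T = np.array([[1.0], [1.0], [1.0]])    # M x rho column factor
S = np.array([[1.0, 2.0, 3.0]])        # rho x N row factor

print(rho)                             # rank of R
print(np.allclose(T @ S, R))           # factorization reproduces R
```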

The Smith-McMillan decomposition provides insight into the determination of the McMillan degree of an M × M lossless system. The following result is central to the design of lattice structures.

URL: https://www.sciencedirect.com/science/article/pii/S1874608X98800496

The Laplace Transform

Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019

3.3 The One-Sided Laplace Transform

The one-sided Laplace transform is of significance because most applications consider causal systems and causal signals, in which case the two-sided transform is not needed, and because any signal or impulse response of an LTI system can be decomposed into causal and anticausal components, requiring only the computation of one-sided Laplace transforms.

For any function f(t), −∞ < t < ∞, its one-sided Laplace transform F(s) is defined as

(3.6) F(s) = L[f(t)u(t)] = ∫_{0⁻}^{∞} f(t)e^{−st} dt, ROC

or the two-sided Laplace transform of a causal or made-causal signal.

Remarks

1.

The function f(t) above can be either a signal or the impulse response of an LTI system.

2.

If f(t) is causal, multiplying it by u(t) is redundant but harmless; if f(t) is not causal, the multiplication by u(t) makes f(t)u(t) causal. When f(t) is causal, the two-sided and one-sided Laplace transforms of f(t) coincide.

3.

For a causal function f ( t ) u ( t ) (notice u ( t ) indicates the function is causal so it is an important part of the function) the corresponding Laplace transform is F ( s ) with a certain region of convergence. This unique relation is indicated by the pair

f(t)u(t) ↔ F(s), ROC

where the symbol ↔ indicates a unique relation between a function in t and a function in s; it is not an equality, far from it!
4.

The lower limit of the integral in the one-sided Laplace transform is set to 0⁻ = 0 − ε, where ε → 0, i.e., a value just to the left of 0. The reason is to make sure that an impulse function δ(t), defined only at t = 0, is included when we compute its Laplace transform. For any other function this limit can be taken as 0 with no effect on the transform.

5.

An important use of the one-sided Laplace transform is in solving ordinary differential equations with initial conditions. The two-sided Laplace transform, by starting at t = −∞ (the lower bound of the integral), ignores possible nonzero initial conditions at t = 0, and thus it is not useful for solving ordinary differential equations unless the initial conditions are zero.

The one-sided Laplace transform can be used to find the two-sided Laplace transform of any signal or impulse response.

The Laplace transform of

a finite-support function f ( t ) , i.e., f ( t ) = 0 for t < t 1 and t > t 2 , t 1 < t 2 , is

(3.7) F(s) = L[f(t)[u(t − t_1) − u(t − t_2)]], ROC: whole s-plane;

a causal function g ( t ) , i.e., g ( t ) = 0 for t < 0 , is

(3.8) G(s) = L[g(t)u(t)], R_c = {(σ, Ω): σ > max{σ_i}, −∞ < Ω < ∞}

where { σ i } are the real parts of the poles of G ( s ) ;

an anticausal function h ( t ) , i.e., h ( t ) = 0 for t > 0 , is

(3.9) H(s) = L[h(−t)u(t)](−s), R_ac = {(σ, Ω): σ < min{σ_i}, −∞ < Ω < ∞}

where { σ i } are the real parts of the poles of H ( s ) ;

a noncausal function p(t), i.e., p(t) = p_ac(t) + p_c(t) = p(t)u(−t) + p(t)u(t), is

(3.10) P(s) = L[p(t)] = L[p_ac(−t)u(t)](−s) + L[p_c(t)u(t)], ROC: R_c ∩ R_ac.

The Laplace transform of a bounded function f(t) of finite support t_1 ≤ t ≤ t_2 always exists and has the whole s-plane as its ROC. Indeed, the integral defining the Laplace transform is bounded for any value of σ. If A = max(|f(t)|), then

|F(s)| ≤ ∫_{t_1}^{t_2} |f(t)||e^{−st}| dt ≤ A ∫_{t_1}^{t_2} e^{−σt} dt = { A(e^{−σt_1} − e^{−σt_2})/σ, σ ≠ 0; A(t_2 − t_1), σ = 0, }

is less than infinity so that the integral converges for all σ.

For an anticausal function h(t), for which h(t) = 0 for t > 0, its Laplace transform is obtained after the variable substitution τ = −t as

H(s) = L[h(t)u(−t)] = ∫_{−∞}^{0} h(t)u(−t)e^{−st} dt = −∫_{∞}^{0} h(−τ)u(τ)e^{sτ} dτ = ∫_{0}^{∞} h(−τ)u(τ)e^{sτ} dτ = L[h(−t)u(t)](−s).

That is, it is the Laplace transform of the causal function h(−t)u(t) (the reflection of the anticausal function h(t)), with s replaced by −s.

As a result, for a noncausal function p(t) = p_ac(t) + p_c(t), with p_ac(t) = p(t)u(−t) the anticausal component and p_c(t) = p(t)u(t) the causal component, the Laplace transform of p(t) is

P(s) = L[p_ac(−t)u(t)](−s) + L[p_c(t)u(t)].

The ROC of P ( s ) is the intersection of the ROCs of its anticausal and causal components.

Example 3.3

Find and use the Laplace transform of e^{j(Ω_0t+θ)}u(t) to obtain the Laplace transform of x(t) = cos(Ω_0t + θ)u(t). Consider the special cases θ = 0 and θ = −π/2. Determine the ROCs. Use MATLAB to plot the signals and the corresponding poles/zeros when Ω_0 = 2, θ = 0 and π/4.

Solution: The Laplace transform of the complex causal signal e j ( Ω 0 t + θ ) u ( t ) is found to be

L[e^{j(Ω_0t+θ)}u(t)] = ∫_{0}^{∞} e^{j(Ω_0t+θ)}e^{−st} dt = e^{jθ} ∫_{0}^{∞} e^{−(s−jΩ_0)t} dt = (−e^{jθ}/(s − jΩ_0)) e^{−σt}e^{−j(Ω−Ω_0)t} |_{t=0}^{∞} = e^{jθ}/(s − jΩ_0), ROC: σ > 0.

According to Euler's identity

cos(Ω_0t + θ) = (e^{j(Ω_0t+θ)} + e^{−j(Ω_0t+θ)})/2,

by the linearity of the integral and using the above result we get

X(s) = L[cos(Ω_0t + θ)u(t)] = 0.5 L[e^{j(Ω_0t+θ)}u(t)] + 0.5 L[e^{−j(Ω_0t+θ)}u(t)] = 0.5 (e^{jθ}(s + jΩ_0) + e^{−jθ}(s − jΩ_0))/(s² + Ω_0²) = (s cos(θ) − Ω_0 sin(θ))/(s² + Ω_0²)

and a region of convergence {(σ, Ω): σ > 0, −∞ < Ω < ∞}, or the open right-hand s-plane. The poles of X(s) are s_{1,2} = ±jΩ_0, and its zero is s = (Ω_0 sin(θ))/cos(θ) = Ω_0 tan(θ).

Now if we let θ = 0, −π/2 in the above equation we have the following Laplace transforms:

L[cos(Ω_0t)u(t)] = s/(s² + Ω_0²), L[sin(Ω_0t)u(t)] = Ω_0/(s² + Ω_0²)

as cos(Ω_0t − π/2) = sin(Ω_0t). The ROC of the above Laplace transforms is still {(σ, Ω): σ > 0, −∞ < Ω < ∞}, or the open right-hand s-plane (i.e., not including the jΩ-axis). See Fig. 3.6 for the pole–zero plots and the corresponding signals for θ = 0, θ = π/4, and Ω_0 = 2. Notice that, for all the cases, the regions of convergence do not include the poles of the Laplace transforms, located on the jΩ-axis. □

Figure 3.6

Figure 3.6. Location of the poles and zeros of L [ cos ( 2 t + θ ) u ( t ) ] for θ = 0 (top figure) and for θ =π/4 (bottom figure). Note that the zero in the top figure is moved to the right to 2 in the bottom figure because the zero of the Laplace transform of x 2(t) is s = Ω 0 tan ( θ ) = 2 tan ( π / 4 ) = 2 .
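As a numerical cross-check of Example 3.3 (not part of the text's derivation), the one-sided Laplace integral of cos(2t) can be evaluated at a real s and compared with s/(s² + Ω_0²); the truncation length and step below are assumptions.

```python
import math

# Approximate F(s) = ∫_0^∞ cos(Ω0 t) e^{-st} dt by a truncated rectangular sum,
# then compare with the closed form s/(s^2 + Ω0^2).
def laplace_cos(omega0, s, T=60.0, steps=200_000):
    dt = T / steps
    return sum(math.cos(omega0 * k * dt) * math.exp(-s * k * dt) * dt
               for k in range(steps))

s, omega0 = 1.0, 2.0
print(laplace_cos(omega0, s))         # numerical integral
print(s / (s ** 2 + omega0 ** 2))     # closed form: 0.2
```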

Example 3.4

Use MATLAB symbolic computation to find the Laplace transform of a real exponential, x(t) = e^{−t}u(t), and of x(t) modulated by a cosine, i.e., y(t) = e^{−t}cos(10t)u(t). Plot the signals and the poles and zeros of their Laplace transforms.

Solution: The script shown below is used. The MATLAB function laplace is used for the computation of the Laplace transform, and the function fplot allows us to plot the signals. For plotting the poles and zeros we use our function splane. When you run the script you obtain the Laplace transforms

X(s) = 1/(s + 1), Y(s) = (s + 1)/(s² + 2s + 101) = (s + 1)/((s + 1)² + 100);

X(s) has a pole at s = −1 but no zeros, while Y(s) has a zero at s = −1 and poles at s_{1,2} = −1 ± j10. The results are shown in Fig. 3.7. Notice that

Figure 3.7

Figure 3.7. Poles and zeros of the Laplace transform of the causal signal x(t) = e^{−t}u(t) (top) and of the causal decaying signal y(t) = e^{−t}cos(10t)u(t) (bottom).

Y(s) = L[e^{−t}cos(10t)u(t)] = ∫_{0}^{∞} cos(10t)e^{−(s+1)t} dt = L[cos(10t)u(t)]|_{s→s+1} = s/(s² + 10²)|_{s→s+1} = (s + 1)/((s + 1)² + 100)

or a "frequency shift" of the original variable s. □

The function splane is used to plot the poles and zeros of the Laplace transforms.

Example 3.5

In statistical signal processing, the autocorrelation function c(τ) of a random signal describes the correlation that exists between the random signal x(t) and shifted versions of it, x(t + τ), for shifts −∞ < τ < ∞. Typically, c(τ) is two-sided, i.e., nonzero for both positive and negative values of τ, and symmetric. Its two-sided Laplace transform is related to the power spectrum of the signal x(t). Let c(t) = e^{−a|t|}, where a > 0 (we replaced the variable τ by t for convenience); find its Laplace transform, indicating its region of convergence. Determine whether it would be possible to compute |C(Ω)|², which is called the power spectral density of the random signal x(t).

Solution: The autocorrelation can be expressed as c(t) = c(t)u(t) + c(t)u(−t) = c_c(t) + c_ac(t), where c_c(t) is the causal component and c_ac(t) the anticausal component of c(t). The Laplace transform of c(t) is then given by

C(s) = L[c_c(t)] + L[c_ac(−t)u(t)](−s).

The Laplace transform of c_c(t) = e^{−at}u(t) is

C_c(s) = ∫_{0}^{∞} e^{−at}e^{−st} dt = −e^{−(s+a)t}/(s + a) |_{t=0}^{∞} = 1/(s + a)

with a region of convergence {(σ, Ω): σ > −a, −∞ < Ω < ∞}. The Laplace transform of the anticausal part is

L[c_ac(−t)u(t)](−s) = 1/(−s + a)

and since it is anticausal, with a pole at s = a, its region of convergence is {(σ, Ω): σ < a, −∞ < Ω < ∞}. We thus have

C(s) = 1/(s + a) + 1/(−s + a) = 2a/(a² − s²)

with a region of convergence the intersection of σ > −a with σ < a, or

{(σ, Ω): −a < σ < a, −∞ < Ω < ∞}.

This region contains the jΩ-axis which will permit us to compute the distribution of the power over frequencies, or the power spectral density of the random signal, | C ( Ω ) | 2 (shown in Fig. 3.8 for a = 2 ). □

Figure 3.8

Figure 3.8. Two-sided autocorrelation function c(t) = e^{−2|t|} and poles of C(s) (left figure). The ROC of C(s) is the region between the poles, which includes the jΩ-axis. The power spectral density |C(Ω)|² corresponding to c(t) is shown in the right figure; it is the magnitude squared of the Fourier transform of c(t).
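A numerical cross-check of Example 3.5: on the jΩ-axis, which lies inside the ROC, C(s) = 2a/(a² − s²) reduces to C(Ω) = 2a/(a² + Ω²). The sketch below compares this with a direct Fourier-type integral of c(t) = e^{−a|t|} (the truncation and step size are assumptions).

```python
import math

# Real part of ∫_{-T}^{T} e^{-a|t|} e^{-jΩt} dt; the imaginary part vanishes
# by symmetry, so the cosine term carries the whole integral.
def fourier_autocorr(a, Omega, T=30.0, steps=300_000):
    dt = 2 * T / steps
    total = 0.0
    for k in range(steps):
        t = -T + k * dt
        total += math.exp(-a * abs(t)) * math.cos(Omega * t) * dt
    return total

a, Omega = 2.0, 1.0
print(fourier_autocorr(a, Omega))       # numerical integral
print(2 * a / (a ** 2 + Omega ** 2))    # closed form: 0.8
```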

Example 3.6

Consider a noncausal LTI system with impulse response

h(t) = e^{−t}u(t) + e^{2t}u(−t) = h_c(t) + h_ac(t).

Find the system function H ( s ) , its ROC, and indicate whether we could compute H ( j Ω ) from it.

Solution: The Laplace transform of the causal component, h c ( t ) , is

H_c(s) = 1/(s + 1)

provided that σ > −1. For the anticausal component,

L[h_ac(t)] = L[h_ac(−t)u(t)](−s) = 1/(−s + 2),

which converges when σ − 2 < 0, or σ < 2; that is, its region of convergence is {(σ, Ω): σ < 2, −∞ < Ω < ∞}. Thus the system function is

H(s) = 1/(s + 1) + 1/(−s + 2) = 3/((s + 1)(2 − s))

with a region of convergence the intersection of {(σ, Ω): σ > −1, −∞ < Ω < ∞} and {(σ, Ω): σ < 2, −∞ < Ω < ∞}, or

{(σ, Ω): −1 < σ < 2, −∞ < Ω < ∞}

which is a sector of the s-plane that includes the jΩ-axis. Thus H ( j Ω ) can be obtained from its Laplace transform. □
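A quick numerical sanity check of Example 3.6 at the single point s = 0, which lies inside the ROC −1 < σ < 2: there the two-sided transform is simply the area under h(t), so the integral of each component should reproduce H(0) = 3/[(0 + 1)(2 − 0)].

```python
import math

# Areas of the two components of h(t) = e^{-t}u(t) + e^{2t}u(-t),
# truncated far enough out that the tails are negligible.
causal_area = 1.0 - math.exp(-50.0)               # ∫_0^50 e^{-t} dt ≈ 1
anticausal_area = (1.0 - math.exp(-100.0)) / 2.0  # ∫_{-50}^0 e^{2t} dt ≈ 1/2

H0_numeric = causal_area + anticausal_area
H0_formula = 3.0 / ((0 + 1) * (2 - 0))            # 3/[(s+1)(2-s)] at s = 0
print(H0_numeric, H0_formula)                     # both 1.5
```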

Example 3.7

Find the Laplace transform of the ramp function r(t) = tu(t) and use it to find the Laplace transform of the triangular pulse Λ(t) = r(t + 1) − 2r(t) + r(t − 1).

Solution: Notice that although the ramp is an ever-increasing function of t, we can still obtain its Laplace transform

R(s) = ∫_{0}^{∞} t e^{−st} dt = (e^{−st}/s²)(−st − 1) |_{t=0}^{∞} = 1/s²

where we let σ > 0 for the integral to exist. Thus R ( s ) = 1 / s 2 with region of convergence

{(σ, Ω): σ > 0, −∞ < Ω < ∞}.

The above integration can be avoided by noticing that the derivative with respect to s of the Laplace transform of u(t) is

dU(s)/ds = ∫_{0}^{∞} (d e^{−st}/ds) dt = ∫_{0}^{∞} (−t)e^{−st} dt = −∫_{0}^{∞} t e^{−st} dt = −R(s)

where we assumed the derivative and the integral can be interchanged. We then have

R(s) = −dU(s)/ds = 1/s².

The Laplace transform of Λ ( t ) can then be shown to be (try it!)

Λ(s) = (1/s²)[e^{s} − 2 + e^{−s}].

The zeros of Λ(s) are the values of s that make e^{s} − 2 + e^{−s} = 0 or, multiplying by e^{−s},

1 − 2e^{−s} + e^{−2s} = (1 − e^{−s})² = 0,

or double zeros at

s_k = j2πk, k = 0, ±1, ±2, ….

In particular, when k = 0 there are two zeros at 0 which cancel the two poles at 0 resulting from the denominator s². Thus Λ(s) has an infinite number of zeros but no poles, given this pole–zero cancellation (see Fig. 3.9). Therefore Λ(s), being the transform of a signal of finite support, has the whole s-plane as its region of convergence and can be evaluated at s = jΩ. □

Figure 3.9

Figure 3.9. The Laplace transform of triangular signal Λ(t) has as ROC the whole s-plane as it has no poles, but an infinite number of double zeros at ±j2πk, for k = ±1,±2,⋯.
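A numerical cross-check of Example 3.7 (illustrative step size): Λ(t) = 1 − |t| on [−1, 1], so its transform evaluated at a real s should match (e^{s} − 2 + e^{−s})/s².

```python
import math

# Midpoint-rule approximation of ∫_{-1}^{1} (1 - |t|) e^{-st} dt.
def laplace_triangle(s, steps=200_000):
    dt = 2.0 / steps
    total = 0.0
    for k in range(steps):
        t = -1.0 + (k + 0.5) * dt            # midpoints avoid the kink at t = 0
        total += (1.0 - abs(t)) * math.exp(-s * t) * dt
    return total

s = 1.0
print(laplace_triangle(s))                    # numerical integral
print((math.exp(s) - 2 + math.exp(-s)) / s**2)  # closed form
```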

URL: https://www.sciencedirect.com/science/article/pii/B9780128142042000132

Signals, Systems, and Spectral Analysis

Ali Grami , in Introduction to Digital Communications, 2016

3.3.9 Causal and Noncausal Systems

A system is said to be causal if it does not respond before the input is applied. In other words, in a causal system , the output at any time depends only on the values of the input signal up to and including that time and does not depend on the future values of the input. In contrast, the output signal of a noncausal system depends on one or more future values of the input signal. All physically realizable systems are causal. Note that all memoryless systems are causal, but not vice versa. If delay can be incorporated in a system, then a noncausal system may become physically realizable.

URL: https://www.sciencedirect.com/science/article/pii/B978012407682200003X

OPTICAL COHERENCE TOMOGRAPHY THEORY

Mark E. Brezinski MD, PhD , in Optical Coherence Tomography, 2006

Hilbert transforms and Analytical Signals

The Hilbert transform and the related Kramers-Kronig relations link the real and imaginary parts of the transfer function of a linear shift-invariant causal system. A causal system is one whose impulse response vanishes for t < 0; in other words, there cannot be a response prior to the input.

A causal impulse response is asymmetric, so its Fourier transform must be complex. In addition, if the impulse response is real, its Fourier transform must be conjugate (Hermitian) symmetric.

If one part of the transfer function (say, the real portion) is known at all frequencies, then the other (imaginary) part can be determined completely.

The Hilbert transform s_i(t) of a real-valued function s(t) is obtained by convolving s(t) with h(t) = 1/(πt). Specifically:

(2) s_i(t) = H[s(t)] = s(t) * h(t) = (1/π) P ∫_{−∞}^{+∞} s(x)/(t − x) dx

where h(t) = 1/πt and:

(3) H(ω) = F[h(t)] = −i sign(ω)

which equals +i for ω < 0 and −i for ω > 0. The Hilbert transform therefore has the effect of shifting the negative frequency components of s(t) by +90 degrees and the positive frequency components by −90 degrees. It thus represents a filter that ideally leaves the amplitude of the spectral components unchanged but alters the phases by π/2. The integral in Eq. (2) is taken as a Cauchy principal value (P) around x = t, where a singularity exists. It is dealt with as described above.

Again, because of causality, the real and imaginary parts of a function can be related. If F{h(t)} is the Fourier transform of the transfer function, with R(f) and X(f) its real and imaginary portions, respectively, then:

(4) F { h ( t ) } = R ( f ) + i X ( f )

The real and imaginary components can then be related through what are often referred to as the dispersion relations:

(5) X(f) = −(1/π) P ∫_{−∞}^{+∞} R(y)/(f − y) dy

R(f) = (1/π) P ∫_{−∞}^{+∞} X(y)/(f − y) dy

When the integrals are rewritten such that the intervals are from 0 to ∞, the equations are known as the Kramers-Kronig relations.

Very often it is useful to express a signal in terms of its positive frequencies, such as in demodulation. A signal that contains no negative frequencies is referred to as an analytic signal. For any complicated signal expressible as the sum of many sinusoids, a filter can be constructed which shifts each component by a quarter cycle, which is a Hilbert transform filter, and ideally keeps the magnitude constant. Let S_a(t) be the analytic signal of S(t) and S_i(t) be the Hilbert transform of S(t). Then:

(6) S a ( t ) = S ( t ) + i S i ( t )

Again, positive frequencies are shifted by −π/2 and negative frequencies by +π/2. How this results in the removal of negative frequencies is as follows. The original function S(t) is broken into its positive and negative frequency components, S_+(t) = e^{iωt} and S_−(t) = e^{−iωt}. Now add a −90 degree phase shift to the positive frequencies and +90 degrees to the negative frequencies:

(7) S_{i+}(t) = (e^{−iπ/2})(e^{iωt}) = −ie^{iωt}

S_{i−}(t) = (e^{iπ/2})(e^{−iωt}) = ie^{−iωt}

Now the positive and negative frequency components of S_a(t) are:

(8) S_{a+}(t) = e^{iωt} − i²e^{iωt} = 2e^{iωt}

S_{a−}(t) = e^{−iωt} + i²e^{−iωt} = 0

It is clear that the negative frequencies have been removed to produce the analytical signal.
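The construction of Eqs. (6)–(8) can be sketched discretely (illustrative test signal; this is the standard FFT route to the analytic signal): suppress the negative-frequency bins, double the positive ones, and the real part of the result is the original signal.

```python
import numpy as np

# Discrete analytic-signal construction: S_a = S + i S_i, built by zeroing
# negative-frequency FFT bins and doubling positive ones.
N = 256
t = np.arange(N)
s = np.cos(2 * np.pi * 8 * t / N)        # real signal, 8 cycles over the record

S = np.fft.fft(s)
filt = np.zeros(N)
filt[0] = 1.0                            # keep DC
filt[1:N // 2] = 2.0                     # double positive frequencies
filt[N // 2] = 1.0                       # keep the Nyquist bin
s_analytic = np.fft.ifft(S * filt)       # S_a(t) = S(t) + i S_i(t)

spectrum = np.fft.fft(s_analytic)
neg_energy = np.abs(spectrum[N // 2 + 1:]).max()
print(np.allclose(s_analytic.real, s))   # real part is the original signal
print(neg_energy < 1e-9)                 # negative frequencies removed
```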

URL: https://www.sciencedirect.com/science/article/pii/B978012133570050007X

Source: https://www.sciencedirect.com/topics/engineering/causal-system