Proving a Continuous Function is Causal
Introduction
Patricia Mellodge, in A Practical Approach to Dynamical Systems for Engineers, 2016
1.2.5 Causal versus Noncausal
A causal system is one whose output depends only on present and past inputs. A noncausal system's output depends on future inputs as well. In a sense, a noncausal system is the counterpart of one that has memory: memory looks back at past inputs, while noncausality looks ahead to future ones.
How can a real-world system be noncausal? It cannot, because real systems cannot react to the future. Noncausal systems nevertheless have important real-world applications. Consider a song stored in a sound file. Because the entire song is stored, we could filter the sound so that the current notes depend on notes later in the song. This is an example of postprocessing, in which noncausal systems can be implemented. Another application of noncausal systems is image processing, where the pixels to the left of the current location can be considered the "past" and pixels to the right the "future."
URL:
https://www.sciencedirect.com/science/article/pii/B9780081002025000012
Digital Signals and Systems
Lizhe Tan, Jean Jiang, in Digital Signal Processing (Third Edition), 2019
3.2.3 Causality
A causal system is one in which the output y(n) at time n depends only on the current input x(n) at time n and on past input samples such as x(n − 1), x(n − 2), …. Otherwise, if the output depends on future input values such as x(n + 1), x(n + 2), …, the system is noncausal. A noncausal system cannot be realized in real time.
Example 3.4
Given the following linear systems
- (a) y(n) = 0.5x(n) + 2.5x(n − 2), for n ≥ 0,
- (b) y(n) = 0.25x(n − 1) + 0.5x(n + 1) − 0.4y(n − 1), for n ≥ 0,
determine whether each is causal.
Solution:
- (a) Since for n ≥ 0, the output y(n) depends on the current input x(n) and its past value x(n − 2), the system is causal.
- (b) Since for n ≥ 0, the output y(n) depends on the current input x(n) and its future value x(n + 1), the system is noncausal.
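To make the distinction concrete, here is a minimal NumPy sketch (not from the text), assuming zero initial conditions. System (a) uses only the current and past input samples, so it can run in real time; system (b) needs x(n + 1), so it can only be evaluated offline once the next sample is available.

```python
import numpy as np

def system_a(x):
    """Causal: y(n) = 0.5 x(n) + 2.5 x(n-2); uses only current and past inputs."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = 0.5 * x[n] + 2.5 * (x[n - 2] if n >= 2 else 0.0)
    return y

def system_b(x):
    """Noncausal: y(n) = 0.25 x(n-1) + 0.5 x(n+1) - 0.4 y(n-1);
    the x(n+1) term requires knowing the input one sample into the future."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        x_prev = x[n - 1] if n >= 1 else 0.0
        x_next = x[n + 1] if n + 1 < len(x) else 0.0  # only available offline
        y_prev = y[n - 1] if n >= 1 else 0.0
        y[n] = 0.25 * x_prev + 0.5 * x_next - 0.4 * y_prev
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # short test input (assumed)
print(system_a(x))   # computable sample by sample as x arrives
print(system_b(x))   # requires the whole record (postprocessing)
```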
URL:
https://www.sciencedirect.com/science/article/pii/B9780128150719000038
Signal Processing, General
Rao Yarlagadda, John E. Hershey, in Encyclopedia of Physical Science and Technology (Third Edition), 2003
II.E.3 Causality
A causal system is a system for which the output at any time t₀ depends only on the inputs for t ≤ t₀. That is, the response does not depend on future inputs; it relies only on past and present inputs.
The above concepts allow for the characterization of a linear system. One function that is important to us is the impulse function, δ(t), which is defined in terms of the process
(31) ∫_{−∞}^{∞} x(t) δ(t) dt = x(0)
where x(t) is any test function that is continuous at t = 0. Equation (31) is a special case of the so-called sifting property
(32) ∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)
where again x(t) is assumed to be continuous at t = t₀. An example of an impulse function is shown in Fig. 9, where we have
(33)
Intuitively we see that any function having unit area and zero width in the limit, as some parameter approaches zero, is a suitable representation of δ(t). A dynamite explosion, for example, is a good approximation of an impulse input and is used in seismic exploration. In the following we will consider exclusively linear time-invariant systems. The impulse response, h(t), of a linear time-invariant system is defined to be the response of the system to an impulse at t = 0. By the linear time-invariant properties, we can see that the response of a linear system to the input
(34) x(t) = δ(t − t′)
is
(35) y(t) = h(t − t′)
Using the sifting property in Eq. (32), we can write
(36) x(t) = ∫_{−∞}^{∞} x(t′) δ(t − t′) dt′
Using rectangular integration, we can approximate Eq. (36) by
(37) x(t) ≈ Σ_n x(nΔT) δ(t − nΔT) ΔT
where ΔT is some small time increment. The response y(t), using Eq. (35), is given by
(38) y(t) ≈ Σ_n x(nΔT) h(t − nΔT) ΔT
As ΔT → 0, nΔT approaches the continuous variable t′, the sum in Eq. (38) becomes an integral, and we have
(39) y(t) = ∫_{−∞}^{∞} x(t′) h(t − t′) dt′ = ∫_{−∞}^{∞} h(t′) x(t − t′) dt′
which is usually referred to as a convolution integral. The second integral in Eq. (39) is obtained from the first by a change of variable. The convolution is an important relation, which, in words, says that the response to an arbitrary input is related to the impulse response via Eq. (39). Equation (39) is symbolically written in the form
(40) y(t) = x(t) * h(t)
where (*) represents convolution.
The convolution in Eq. (39) can be computed analytically only if x(t) and h(t) are known analytically and can be integrated. Another approach is to use transform theory. That is, if the Fourier transforms of x(t) and h(t) are expressed as
(41) X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
and
(42) H(f) = ∫_{−∞}^{∞} h(t) e^{−j2πft} dt
then we can show that
(43) Y(f) = H(f) X(f)
and
(44) y(t) = ∫_{−∞}^{∞} H(f) X(f) e^{j2πft} df
The transform of h(t), H(f), is usually referred to as a transfer function relating the input and the output. Transfer functions play a major role in communication theory, control systems, and signal processing. We will consider the discrete convolution later. Next, let us consider the concepts of correlation for the aperiodic case.
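As a rough numerical check of Eqs. (39) and (43)–(44), the sketch below (with an assumed pair of exponential signals, not taken from the text) approximates the convolution integral by a Riemann sum and compares it with the frequency-domain product Y(f) = H(f)X(f) computed via the FFT.

```python
import numpy as np

dT = 0.01                        # small time increment Delta T
t = np.arange(0.0, 5.0, dT)
x = np.exp(-t)                   # assumed input x(t) = e^{-t} u(t)
h = np.exp(-2.0 * t)             # assumed impulse response h(t) = e^{-2t} u(t)

# Time domain: Riemann-sum approximation of the convolution integral, Eq. (39)
y_time = np.convolve(x, h)[:len(t)] * dT

# Frequency domain: Y(f) = H(f) X(f), Eqs. (43)-(44); zero-pad so the
# circular convolution implied by the FFT matches the linear one
N = 2 * len(t)
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real[:len(t)] * dT

print(np.max(np.abs(y_time - y_freq)))                       # ~0: both routes agree
print(np.max(np.abs(y_time - (np.exp(-t) - np.exp(-2*t)))))  # small discretization error
# (closed form of the convolution: e^{-t} - e^{-2t} for t >= 0)
```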
The process of correlation is useful for comparing two signals: it measures the similarity between one signal and a time-delayed version of another. The cross-correlation of x(t) and g(t) is defined by
(45) R_gx(τ) = ∫_{−∞}^{∞} g(t) x(t + τ) dt
which is a function of τ, the amount of time displacement. When g(t) = x(t), R_gg(τ) is referred to as the autocorrelation. Correlation functions are useful in many areas. For example, if x(t) = g(t − t₀), then R_gx(τ) will be a maximum when τ = t₀, indicating a method for measuring the delay. It is of interest that the cross-correlation in Eq. (45) can be expressed in terms of the convolution in Eq. (40). It can be shown that
(46) R_gx(τ) = ĝ(τ) * x(τ)
where ĝ(t) = g(−t). That is, the cross-correlation of g(−t) with x(t) will result in the convolution of g(t) with x(t).
Transforms can be used to compute correlations also. For example, if F[x(t)] = X(f), F[g(t)] = G(f), and F[R_gx(τ)] = S_gx(f), then
(47) S_gx(f) = G*(f) X(f)
where G*(f) is the complex conjugate of G(f). Computing the inverse transform of Eq. (47) will give the cross-correlation function. Next, let us consider the discrete computation of convolutions and correlations.
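As an illustration of Eqs. (45) and (47), the following sketch (assumed white-noise reference and an assumed 100 Hz sampling rate, not from the text) computes the cross-correlation through the frequency domain and recovers the delay t₀ from the location of its peak, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                           # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
g = rng.standard_normal(len(t))      # reference signal g(t)
t0 = 0.5                             # true delay in seconds
x = np.roll(g, int(t0 * fs))         # x(t) = g(t - t0), circular shift as an approximation

# R_gx(tau) via Eq. (47): S_gx(f) = G*(f) X(f), then an inverse FFT
G = np.fft.fft(g)
X = np.fft.fft(x)
R_gx = np.fft.ifft(np.conj(G) * X).real

tau_hat = np.argmax(R_gx) / fs       # peak of the cross-correlation
print(tau_hat)                       # ~0.5 s, i.e., tau = t0 as expected
```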
URL:
https://www.sciencedirect.com/science/article/pii/B0122274105006888
Nichols-Krohn-Manger-Hall Chart
Yazdan Bavafa-Toosi, in Introduction to Linear Control Systems, 2019
8.7.3 Bandwidth
For a causal system of type one or higher (and thus tangent to the M-contour at low frequencies in the upper part of the plane), the bandwidth is the smallest frequency at which its closed-loop magnitude crosses the −3 db level. Thus for such systems we should look for the intersection of the NKMH plot of the system with the −3 db M-contour. Straightforward determination of the bandwidth for such systems seems to be the only tractable usage of the NKMH chart.
Example 8.6
Find the bandwidth of the system .
Note that the system satisfies the condition. The closed-loop BW is thus easily obtained as the frequency of the intersection of the NKMH plot of the system with the M-contour, as depicted in Fig. 8.12. Recall that frequency is the hidden parameter of the plot and is found by clicking on the point of interest. The answer is .
Remark 8.4
The abovementioned usage of the NKMH chart is only for the sake of illustrating how it was once usable for this purpose in the pre-MATLAB® era, when everything had to be done by hand. Clearly, now we simply use the command "bandwidth" in MATLAB®. Even in that era, the method failed for systems which did not satisfy the aforementioned condition. This is shown in the following example.
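For comparison, here is a minimal SciPy sketch of the modern computation, using an assumed type-1 open loop L(s) = 10/(s(s + 1)) rather than any of the book's systems: form the unity-feedback closed loop and search its frequency response for the first point 3 dB below the DC gain, which is what the chart-based procedure estimates graphically.

```python
import numpy as np
from scipy import signal

# Assumed open-loop L(s) = 10 / (s(s + 1)); unity feedback gives T = L / (1 + L)
L_num = [10.0]
L_den = [1.0, 1.0, 0.0]
T = signal.TransferFunction(L_num, np.polyadd(L_den, L_num))  # 10 / (s^2 + s + 10)

w = np.logspace(-2, 2, 2000)
w, H = signal.freqresp(T, w)
mag_db = 20.0 * np.log10(np.abs(H))

dc_db = mag_db[0]                         # low-frequency (DC) gain in dB
idx = np.argmax(mag_db < dc_db - 3.0)     # first frequency 3 dB below the DC gain
print("closed-loop bandwidth ~", w[idx], "rad/s")   # ~4.8 rad/s for this example
```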
Example 8.7
Find the frequency of the intersection of the NKMH plot of the system of Example 8.2, i.e., , with the −3 db M-contour.
The NKMH chart is provided in Fig. 8.13. By clicking on the designated point we find that the frequency is 1.34, which is different from the true bandwidth of the system. Note that the plot has two intersections with the −3 db M-contour and we chose the larger frequency. (Why?)
Remark 8.5
The discrepancy is slight in this system. In principle, however, it can be quite noticeable, on the order of 10 times. This is evinced in the ensuing example.
Example 8.8
For the system the closed-loop BW is 1 whereas the M-contour intersection frequency is 14.5. For such systems the intersection should be computed with the M-contour whose value is 3 db less than the M-contour at which the system is tangent at the low-frequency end. The NKMH chart of this system is given in Fig. 8.14. The closed-loop system is and thus the central M-contour to which the plot is tangent at the low frequency end has whereas the second central M-contour has . The intersection frequency with this M-contour correctly gives the BW of the system as .
Once again, we stress that this is only for the purpose of illustration. For such systems bandwidth computation via the NKMH chart is clearly not a tractable method, since the available "sheets" did not have all the M-contour values (even MATLAB® does not plot all these curves) and the plot had to be drawn quite carefully.
Next we discuss the high sensitivity region in the NKMH chart context.
URL:
https://www.sciencedirect.com/science/article/pii/B9780128127483000082
Knowledge Representation for Causal Calculi on Internet of Things
Phillip G. Bradford, ... Marcus Tanque, in Artificial Intelligence to Solve Pervasive Internet of Things Issues, 2021
7.19 Conclusion
Distributed IoT networks or causal systems are a reality. Together, IoT devices/sensors and SCM have a great deal of potential. In general, causality is fundamental to reasoning systems. In the end, pushing causal reasoning down to the IoT device-system level may open many new opportunities. The process entails KR for causality, that is, knowledge representation suited for small, constrained devices and systems. It is not clear whether the nature of IoT devices/sensors will be suitable for any of the three systems mentioned here: Pearl's Do-Calculus, Shafer's probability trees, and Halpern-Pearl causality. These systems are based on logic and probability, and they are expensive to deploy and to apply across a range of tasks. Intuitively, determining causality seems to be essential and potentially costly. In essence, declarative systems offer the foundations for causal reasoning [17]. How a cause-effect relationship is computed may not impact the functions of most causal reasoning systems. Hence, there may be cases when the reasoning is essential to analyze [17].
URL:
https://www.sciencedirect.com/science/article/pii/B9780128185766000071
Multirate and Wavelet Signal Processing
In Wavelet Analysis and Its Applications, 1998
Definition 4.2.3.1
The McMillan degree, μ, of a p × r causal system H(z) is the minimum number of delay units (z⁻¹ elements) required to implement it.
If the system is noncausal, then the degree is undefined. If H(z) = z⁻¹R, where R is an M × N matrix with rank ρ, then R can be factored as
R = TS
where T is M × ρ and S is ρ × N. Therefore,
H(z) = z⁻¹TS = T(z⁻¹I_ρ)S
Hence, we can implement the system with ρ delays. So, the system has a McMillan degree ⩽ ρ. As an example, consider
We can rewrite H(z) as
Thus, the system can be implemented with a single delay, illustrating that the McMillan degree of the system is unity.
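A small numeric sketch of the same idea (the excerpt's own H(z) is not reproduced above, so a hypothetical 2 × 2 matrix R of rank 1 is assumed): a rank factorization R = TS obtained from the SVD shows that this H(z) = z⁻¹R needs only rank(R) = 1 delay, consistent with the bound μ ⩽ ρ.

```python
import numpy as np

# Hypothetical example: H(z) = z^{-1} R with R of rank 1
R = np.array([[1.0, 2.0],
              [2.0, 4.0]])

rho = np.linalg.matrix_rank(R)       # upper bound on the McMillan degree
U, s, Vt = np.linalg.svd(R)
T = U[:, :rho] * s[:rho]             # M x rho
S = Vt[:rho, :]                      # rho x N
print(rho)                           # 1
print(np.allclose(R, T @ S))         # True: R = T S, so H(z) = T (z^{-1} I_rho) S
```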
The Smith-McMillan decomposition provides insight into the determination of the McMillan degree of an M × M lossless system. The following result is central to the design of lattice structures.
URL:
https://www.sciencedirect.com/science/article/pii/S1874608X98800496
The Laplace Transform
Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019
3.3 The One-Sided Laplace Transform
The one-sided Laplace transform is of significance given that most applications consider causal systems and causal signals, in which case the two-sided transform is not needed, and that any signal or impulse response of an LTI system can be decomposed into causal and anticausal components, requiring only the computation of one-sided Laplace transforms.
For any function f(t), its one-sided Laplace transform is defined as
(3.6) F(s) = L[f(t)u(t)] = ∫_{0⁻}^{∞} f(t) e^{−st} dt
or the two-sided Laplace transform of a causal or made-causal signal.
Remarks
- 1. The functions above can be either signals or impulse responses of an LTI system.
- 2. If f(t) is causal, multiplying it by u(t) is redundant, but harmless; if f(t) is not causal, the multiplication by u(t) makes f(t)u(t) causal. When f(t) is causal, the two-sided and the one-sided Laplace transforms of f(t) coincide.
- 3. For a causal function f(t)u(t) (notice that u(t) indicates the function is causal, so it is an important part of the function) the corresponding Laplace transform is F(s) with a certain region of convergence. This unique relation is indicated by the pair f(t)u(t) ↔ F(s).
- 4. The lower limit of the integral in the one-sided Laplace transform is set to 0⁻, a value just to the left of 0. The reason for this is to make sure that an impulse function, δ(t), only defined at t = 0, is included when we are computing its Laplace transform. For any other function this limit can be taken as 0 with no effect on the transform.
- 5. An important use of the one-sided Laplace transform is in solving ordinary differential equations with initial conditions (see the sketch after these remarks). The two-sided Laplace transform, by starting at −∞ (the lower bound of the integral), ignores possible nonzero initial conditions at t = 0, and thus it is not useful for solving ordinary differential equations unless the initial conditions are zero.
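As a concrete illustration of Remark 5, here is a hedged SymPy sketch for an assumed first-order example (not one of the book's): y′(t) + 2y(t) = u(t) with y(0⁻) = 1. The one-sided transform turns the differential equation into algebra in s, the initial condition enters through L[y′(t)] = sY(s) − y(0⁻), and the inverse transform recovers the solution.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Assumed example (not from the text): y'(t) + 2 y(t) = u(t), y(0-) = 1
# Transform term by term: L{y'} = s Y(s) - y(0-), L{u(t)} = 1/s
y0 = 1
Ys = sp.solve(sp.Eq(s*Y - y0 + 2*Y, 1/s), Y)[0]
print(sp.simplify(Ys))                 # (s + 1)/(s*(s + 2))
yt = sp.inverse_laplace_transform(Ys, s, t)
print(sp.simplify(yt))                 # 1/2 + exp(-2*t)/2 for t >= 0, so y(0) = 1
```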
The one-sided Laplace transform can be used to find the two-sided Laplace transform of any signal or impulse response.
The Laplace transform of
- a finite-support function x(t), i.e., x(t) = 0 for t < t₁ and t > t₂, t₁ < t₂, is
(3.7) X(s) = ∫_{t₁}^{t₂} x(t) e^{−st} dt, with the whole s-plane as its region of convergence;
- a causal function x(t), i.e., x(t) = 0 for t < 0, is
(3.8) X(s) = L[x(t)u(t)], with region of convergence σ > max{σₖ},
where σₖ are the real parts of the poles of X(s);
- an anticausal function x(t), i.e., x(t) = 0 for t > 0, is
(3.9) X(s) = L[x(−t)u(t)]|_{s→−s}, with region of convergence σ < min{σₖ},
where σₖ are the real parts of the poles of X(s);
- a noncausal function x(t), i.e., x(t) = x_ac(t) + x_c(t) with x_ac(t) anticausal and x_c(t) causal, is
(3.10) X(s) = L[x_ac(−t)u(t)]|_{s→−s} + L[x_c(t)u(t)], with region of convergence the intersection of the ROCs of the two components.
The Laplace transform of a bounded function x(t) of finite support t₁ ≤ t ≤ t₂ always exists and has the whole s-plane as ROC. Indeed, the integral defining the Laplace transform is bounded for any value of σ. If |x(t)| ≤ M < ∞, then
|X(s)| ≤ ∫_{t₁}^{t₂} |x(t)| e^{−σt} dt ≤ M ∫_{t₁}^{t₂} e^{−σt} dt
is less than infinity, so that the integral converges for all σ.
For an anticausal function x(t), so that x(t) = 0 for t > 0, its Laplace transform is obtained after the variable substitution τ = −t as
X(s) = ∫_{−∞}^{0} x(t) e^{−st} dt = ∫_{0}^{∞} x(−τ) e^{sτ} dτ = L[x(−t)u(t)]|_{s→−s}
That is, it is the Laplace transform of the causal function x(−t) (the reflection of the anticausal function x(t)) with s replaced by −s.
As a result, for a noncausal function x(t) = x_ac(t) + x_c(t), with x_ac(t) the anticausal component and x_c(t) the causal component, the Laplace transform of x(t) is
X(s) = L[x_ac(−t)u(t)]|_{s→−s} + L[x_c(t)u(t)]
The ROC of X(s) is the intersection of the ROCs of its anticausal and causal components.
Example 3.3
Find and use the Laplace transform of to obtain the Laplace transform of . Consider the special cases that and . Determine the ROCs. Use MATLAB to plot the signals and the corresponding poles/zeros when , and .
Solution: The Laplace transform of the complex causal signal is found to be
According to Euler's identity
by the linearity of the integral and using the above result we get
and a region of convergence or the open right-hand s-plane. The poles of are , and its zero is .
Now if we let in the above equation we have the following Laplace transforms:
as . The ROC of the above Laplace transforms is still , or the open right-hand s-plane (i.e., not including the jΩ-axis). See Fig. 3.6 for the pole–zero plots and the corresponding signals for , and . Notice that, for all the cases, the regions of convergence do not include the poles of the Laplace transforms, located on the jΩ-axis. □
Example 3.4
Use MATLAB symbolic computation to find the Laplace transform of a real exponential, , and of modulated by a cosine or . Plot the signals and the poles and zeros of their Laplace transforms.
Solution: The script shown below is used. The MATLAB function laplace is used for the computation of the Laplace transform, and the function fplot allows us to plot the signals. For the plotting of the poles and zeros we use our function splane. When you run the script you obtain the Laplace transforms,
has a pole at , but no zeros, while has a zero at and poles at . The results are shown in Fig. 3.7. Notice that
or a "frequency shift" of the original variable s. □
The function splane is used to plot the poles and zeros of the Laplace transforms.
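The book's script and its particular parameter values are not reproduced in this excerpt, so here is a rough SymPy analogue with assumed values a = 1 and Ω₀ = 2. It computes the same two kinds of transforms symbolically, and the printed results expose the pole and zero locations discussed above.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, W0 = 1, 2                       # assumed values for the exponent and modulation frequency

x1 = sp.exp(-a*t)                  # real exponential (t >= 0)
x2 = sp.exp(-a*t) * sp.cos(W0*t)   # exponential modulated by a cosine

X1 = sp.laplace_transform(x1, t, s, noconds=True)
X2 = sp.laplace_transform(x2, t, s, noconds=True)
print(sp.simplify(X1))             # 1/(s + 1): pole at s = -1, no zeros
print(sp.simplify(X2))             # (s + 1)/(s**2 + 2*s + 5): zero at s = -1, poles at s = -1 +/- 2j
```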
Example 3.5
In statistical signal processing, the autocorrelation function of a random signal describes the correlation that exists between the random signal and shifted versions of it, for shifts τ. Typically, it is two-sided, i.e., nonzero for both positive and negative values of τ, and symmetric. Its two-sided Laplace transform is related to the power spectrum of the signal. Let , where (we replaced the variable τ with t for convenience), find its Laplace transform, indicating its region of convergence. Determine whether it would be possible to compute , which is called the power spectral density of the random signal.
Solution: The autocorrelation can be expressed as , where is the causal component and the anticausal component of . The Laplace transform of is then given by
The Laplace transform for is
with a region of convergence . The Laplace transform of the anticausal part is
and since it is anticausal and has a pole at its region of convergence is . We thus have
with a region of convergence the intersection of with or
This region contains the jΩ-axis which will permit us to compute the distribution of the power over frequencies, or the power spectral density of the random signal, (shown in Fig. 3.8 for ). □
Example 3.6
Consider a noncausal LTI system with impulse response
Find the system function , its ROC, and indicate whether we could compute from it.
Solution: The Laplace transform of the causal component, , is
provided that . For the anticausal component
which converges when or , or its region of convergence is . Thus the system function is
with a region of convergence the intersection of and , or
which is a sector of the s-plane that includes the jΩ-axis. Thus can be obtained from its Laplace transform. □
Example 3.7
Find the Laplace transform of the ramp function and use it to find the Laplace transform of a triangular pulse .
Solution: Notice that although the ramp is an ever-increasing function of t, we can still obtain its Laplace transform
where we let for the integral to exist. Thus with region of convergence
The above integration can be avoided by noticing that if we find the derivative with respect to s of the Laplace transform of , or
where we assumed the derivative and the integral can be interchanged. We then have
The Laplace transform of can then be shown to be (try it!)
The zeros of are the values of s that make or multiplying by ,
or double zeros at
In particular, when there are two zeros at 0 which cancel the two poles at 0 resulting from the denominator . Thus has an infinite number of zeros but no poles given this pole–zero cancellation (see Fig. 3.9). Therefore, , as a signal of finite support, has the whole s-plane as its region of convergence, and can be calculated at . □
URL:
https://www.sciencedirect.com/science/article/pii/B9780128142042000132
Signals, Systems, and Spectral Analysis
Ali Grami, in Introduction to Digital Communications, 2016
3.3.9 Causal and Noncausal Systems
A system is said to be causal if it does not respond before the input is applied. In other words, in a causal system, the output at any time depends only on the values of the input signal up to and including that time and does not depend on future values of the input. In contrast, the output signal of a noncausal system depends on one or more future values of the input signal. All physically realizable systems are causal. Note that all memoryless systems are causal, but not vice versa. If sufficient delay can be incorporated into a system, then a noncausal system may become physically realizable.
URL:
https://www.sciencedirect.com/science/article/pii/B978012407682200003X
OPTICAL COHERENCE TOMOGRAPHY THEORY
Mark E. Brezinski MD, PhD, in Optical Coherence Tomography, 2006
Hilbert transforms and Analytical Signals
The Hilbert transform and the related Kramers-Kronig relationship link the real and imaginary parts of the transfer function of a linear shift-invariant causal system. A causal system is one whose impulse response vanishes for t ≤ 0. In other words, there cannot be a response prior to the input.
A causal impulse response is asymmetric in time, so the Fourier transform of the impulse response must be complex. In addition, if the impulse response is real, the Fourier transform must be conjugate symmetric.
If one part of the transfer function (such as the real part) is known at all frequencies, then the other part (the imaginary part) can be determined completely.
The Hilbert transform of a real-valued function s(t) is obtained by convolving the signal s(t) with 1/(πt) to obtain Si(t); the impulse response of this operation is therefore h(t) = 1/(πt). Specifically:
(2) Si(t) = s(t) * h(t) = (1/π) P ∫_{−∞}^{∞} s(x)/(t − x) dx
where h(t) = 1/πt and:
(3) H(ω) = F{h(t)} = F{1/(πt)} = −i sgn(ω)
which equals +i for ω < 0 and −i for ω > 0. The Hilbert transform therefore has the effect of shifting the negative frequency components of s(t) by +90 degrees and the positive frequency components by −90 degrees. It thus represents a filter where ideally the amplitudes of the spectral components are left unchanged but the phases are altered by π/2. The integral in equation (2) is taken as a Cauchy principal value (P) around x = t, where a singularity exists; it is dealt with as described above.
Again, because of causality, the real and imaginary parts of a function can be related. If F{h(t)} is the Fourier transform of a transfer function, and R(f) and X(f) are the Hilbert transforms of the real and imaginary portions, respectively, then:
(4)
Then the real and imaginary components can be related through what is often referred to as the dispersion relationship:
(5)
When the integrals are rewritten such that the intervals are from 0 to ∞, the equations are known as the Kramers-Kronig relations.
Very often, it is useful to express a signal in terms of its positive frequencies only, such as in demodulation. A signal that contains no negative frequencies is referred to as an analytical signal. For any complicated signal expressible as the sum of many sinusoids, a filter can be constructed that shifts each component by a quarter cycle and ideally keeps the magnitudes constant; this is a Hilbert transform filter. Let Sa(t) be the analytical function of S(t) and Si(t) be the Hilbert transform of S(t). Then:
(6) Sa(t) = S(t) + i Si(t)
Again, positive frequencies are shifted by −π/2 and negative frequencies by +π/2. How this results in removal of the negative frequencies is as follows. The original function S(t) is broken into its positive and negative frequency components, S+(t) = e^{iωt} and S−(t) = e^{−iωt}. Now adding a −90 degree phase shift to the positive frequencies and +90 degrees to the negative frequencies:
(7) Si+(t) = e^{i(ωt − π/2)} = −i e^{iωt},  Si−(t) = e^{i(−ωt + π/2)} = i e^{−iωt}
Now the positive and negative frequency components of Sa(t) are:
(8) Sa+(t) = S+(t) + i Si+(t) = 2 e^{iωt},  Sa−(t) = S−(t) + i Si−(t) = e^{−iωt} − e^{−iωt} = 0
It is clear that the negative frequencies have been removed to produce the analytical signal.
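A quick numerical confirmation of this construction, sketched with assumed parameters (a 50 Hz cosine sampled at 1 kHz, not from the text): scipy.signal.hilbert returns the analytic signal S(t) + iSi(t) directly, and its spectrum shows essentially no energy at negative frequencies.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                          # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
s_t = np.cos(2 * np.pi * 50.0 * t)   # assumed real test signal: 50 Hz cosine

s_a = hilbert(s_t)                   # analytic signal: s(t) + i * (Hilbert transform of s)

f = np.fft.fftfreq(len(t), 1.0 / fs)
S = np.fft.fft(s_t)
S_a = np.fft.fft(s_a)
print(np.max(np.abs(S[f < 0])))      # large: the real cosine has a negative-frequency line
print(np.max(np.abs(S_a[f < 0])))    # ~0: the analytic signal has none
```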
URL:
https://www.sciencedirect.com/science/article/pii/B978012133570050007X
Source: https://www.sciencedirect.com/topics/engineering/causal-system