Independent Processes in Signal Processing
This Jupyter notebook is part of a collection of notebooks on various topics of Digital Signal Processing. Please direct questions and suggestions to Sascha.Spors@uni-rostock.de.
Independence, together with uncorrelatedness and orthogonality, is a desired property of random signals in many applications of statistical signal processing. The concept of independence is introduced in the following, together with a discussion of its links to uncorrelatedness and orthogonality.
Definition
Two stochastic events are said to be independent if the probability of occurrence of one event is not affected by the occurrence of the other event or, more specifically, if their joint probability equals the product of their individual probabilities. In terms of the bivariate probability density function (PDF) $p_{xy}(\theta_x, \theta_y, k_x, k_y)$ of two continuous-amplitude real-valued random processes $x[k]$ and $y[k]$ this reads

$$p_{xy}(\theta_x, \theta_y, k_x, k_y) = p_x(\theta_x, k_x) \cdot p_y(\theta_y, k_y)$$
where $p_x(\theta_x, k_x)$ and $p_y(\theta_y, k_y)$ denote the univariate (marginal) PDFs of the random processes for the time-instants $k_x$ and $k_y$, respectively. The bivariate PDF of two independent random processes is given by the multiplication of their univariate PDFs. It follows that the second-order ensemble average for the linear mapping $f(\theta_x, \theta_y) = \theta_x \cdot \theta_y$ is given as

$$E\{ x[k_x] \cdot y[k_y] \} = E\{ x[k_x] \} \cdot E\{ y[k_y] \}$$
The linear second-order ensemble average of two independent random signals is equal to the multiplication of their linear first-order ensemble averages. For jointly wide-sense stationary (WSS) processes, the bivariate PDF depends only on the difference $\kappa = k_x - k_y$ of the time instants. Hence, two jointly WSS random signals are independent if

$$p_{xy}(\theta_x, \theta_y, \kappa) = p_x(\theta_x) \cdot p_y(\theta_y)$$
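As a quick plausibility check, the factorization of the linear second-order ensemble average can be verified numerically. The following sketch is not part of the original notebook; the chosen distributions and sample size are arbitrary assumptions, and the ensemble averages are approximated by sample means.

```python
import numpy as np

# plausibility check (illustrative assumptions): for two independently drawn
# signals the second-order average E{x*y} should factor into E{x}*E{y}
rng = np.random.default_rng(1)
x = rng.normal(loc=1, scale=2, size=100000)   # realizations of x[k]
y = rng.uniform(low=-1, high=3, size=100000)  # realizations of y[k], drawn independently of x

print(np.mean(x * y))           # estimate of E{ x[k] * y[k] }
print(np.mean(x) * np.mean(y))  # estimate of E{ x[k] } * E{ y[k] }
```

Both printed values should be close to $1 \cdot 1 = 1$, up to the statistical uncertainty of the sample means.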
The bivariate PDF of two jointly WSS processes is rewritten using the definition of conditional probabilities in order to specialize the definition of independence to one WSS random signal $x[k]$

$$p_{xx}(\theta_1, \theta_2, \kappa) = p_x(\theta_1 \mid \theta_2, \kappa) \cdot p_x(\theta_2)$$
where $p_x(\theta_1 \mid \theta_2, \kappa)$ denotes the conditional probability that $x[k]$ takes the amplitude value $\theta_1$ under the condition that $x[k - \kappa]$ takes the amplitude value $\theta_2$. Under the assumption that $p_x(\theta_1 \mid \theta_2, \kappa) = p_x(\theta_1)$ holds for $\kappa \neq 0$, and substituting $\theta_1$ and $\theta_2$ by $\theta_x$ and $\theta_y$, independence for one random signal is defined as

$$p_{xx}(\theta_x, \theta_y, \kappa) = p_x(\theta_x) \cdot p_x(\theta_y) \qquad \text{for } \kappa \neq 0$$
The constraint $\kappa \neq 0$ is required since for $\kappa = 0$ the conditional probability describes a sure event, $p_x(\theta_1 \mid \theta_2, 0) = \delta(\theta_1 - \theta_2)$, so that the factorization cannot hold. The bivariate PDF of an independent random signal is hence equal to the product of the univariate PDFs of the signal $x[k]$ and the time-shifted signal $x[k - \kappa]$ for $\kappa \neq 0$. A random signal for which this condition does not hold shows statistical dependencies between its samples. These dependencies can be exploited, for instance, for coding or prediction.
Example - Comparison of bivariate PDF and product of marginal PDFs
The following example estimates the bivariate PDF $p_{xx}(\theta_x, \theta_y, \kappa)$ of a WSS random signal $x[k]$ by computing its two-dimensional histogram. The univariate PDFs $p_x(\theta_x)$ and $p_x(\theta_y)$ are additionally estimated. Both the estimated bivariate PDF and the product of the two univariate PDFs are plotted for different values of $\kappa$.
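The original code cell is not reproduced here; the following sketch illustrates one possible way to perform the estimation described above. The signal model (white Gaussian noise), the number of samples, the bin count and the plotting details are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# illustrative sketch; signal model, sample size and plot settings are assumptions
N = 10000   # number of samples
kappa = 1   # time shift between x[k] and x[k - kappa] (vary to inspect different lags)

rng = np.random.default_rng(42)
x = rng.normal(size=N)             # WSS random signal (white Gaussian noise)
x1, x2 = x[kappa:], x[:N - kappa]  # samples of x[k] and x[k - kappa]

# estimate the bivariate PDF by a normalized two-dimensional histogram
pdf_xx, tx, ty = np.histogram2d(x1, x2, bins=50, density=True)
# estimate the univariate PDFs on the same bins and form their product
pdf_x, _ = np.histogram(x1, bins=tx, density=True)
pdf_y, _ = np.histogram(x2, bins=ty, density=True)
pdf_prod = np.outer(pdf_x, pdf_y)

fig, ax = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax[0].pcolormesh(tx, ty, pdf_xx.T)
ax[0].set_title('estimated bivariate PDF')
ax[1].pcolormesh(tx, ty, pdf_prod.T)
ax[1].set_title('product of estimated univariate PDFs')
for axis in ax:
    axis.set_xlabel(r'$\theta_x$')
ax[0].set_ylabel(r'$\theta_y$')
plt.show()
```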
Exercise
- With the given results, how can you evaluate the independence of the random signal?
- Can the random signal be assumed to be independent?
Solution: According to the definition of independence, the bivariate PDF and the product of the univariate PDFs have to be equal for all $\kappa$. This is obviously not the case for $\kappa = 0$. Hence, the random signal is not independent in a strict sense. However, for $\kappa \neq 0$ the condition for independence is sufficiently well fulfilled, considering the statistical uncertainty due to the finite number of samples.
Independence versus Uncorrelatedness
Two continuous-amplitude real-valued jointly WSS random processes $x[k]$ and $y[k]$ are termed uncorrelated if their cross-correlation function (CCF) is equal to the product of their linear means, $\varphi_{xy}[\kappa] = \mu_x \cdot \mu_y$. If two random signals are independent, then they are also uncorrelated. This can be proven by introducing the above finding for the linear second-order ensemble average of independent random signals into the definition of the CCF

$$\varphi_{xy}[\kappa] = E\{ x[k + \kappa] \cdot y[k] \} = E\{ x[k + \kappa] \} \cdot E\{ y[k] \} = \mu_x \cdot \mu_y$$
where the last equality is a consequence of the assumed wide-sense stationarity. The reverse conclusion, that two uncorrelated signals are also independent, cannot be drawn from this result and does not hold in general.
The auto-correlation function (ACF) of an uncorrelated signal is given as $\varphi_{xx}[\kappa] = \sigma_x^2 \cdot \delta[\kappa] + \mu_x^2$. Introducing the definition of independence into the definition of the ACF yields

$$\varphi_{xx}[\kappa] = E\{ x[k + \kappa] \cdot x[k] \} =
\begin{cases}
E\{ x^2[k] \} = \sigma_x^2 + \mu_x^2 & \text{for } \kappa = 0 \\
E\{ x[k + \kappa] \} \cdot E\{ x[k] \} = \mu_x^2 & \text{for } \kappa \neq 0
\end{cases}
= \sigma_x^2 \cdot \delta[\kappa] + \mu_x^2$$
where the result for $\kappa \neq 0$ follows from the bivariate PDF of an independent signal, as derived above. It can be concluded from this result that an independent random signal is also uncorrelated. The reverse, that an uncorrelated signal is independent, does not hold in general.
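As a rough numerical illustration of this result (not part of the original notebook; the distribution parameters are arbitrary assumptions), the time-averaged ACF of a signal with independent samples can be estimated for a few lags:

```python
import numpy as np

# illustrative parameters (assumption): independent samples with mu_x = 2, sigma_x^2 = 1
rng = np.random.default_rng(0)
N = 100000
x = rng.normal(loc=2, scale=1, size=N)

# estimate of the ACF for a few non-negative lags, exploiting wide-sense ergodicity
acf = [np.mean(x[:N - kappa] * x[kappa:]) for kappa in range(4)]
print(acf)  # approx. [sigma_x^2 + mu_x^2, mu_x^2, mu_x^2, mu_x^2] = [5, 4, 4, 4]
```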
Independence versus Orthogonality
In geometry, two vectors are said to be orthogonal if their dot product equals zero. This definition is frequently applied to finite-length random signals by interpreting them as vectors. The relation between independence, uncorrelatedness and orthogonality is derived in the following.
Let's assume two continuous-amplitude real-valued jointly wide-sense ergodic random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N$ and $M$, respectively. The CCF between both can be reformulated as follows

$$\varphi_{xy}[\kappa] = \sum_{k = 0}^{M - 1} x_N[k + \kappa] \cdot y_M[k] = \langle \mathbf{x}_\kappa, \mathbf{y} \rangle \qquad \text{for } -(M - 1) \leq \kappa \leq N - 1$$
where $\langle \cdot, \cdot \rangle$ denotes the dot product. The vector $\mathbf{x}_\kappa$ is defined as

$$\mathbf{x}_\kappa = \left[ \mathbf{0}_{N - 1 - \kappa}^{\mathrm{T}}, \; x_N[0], x_N[1], \dots, x_N[N - 1], \; \mathbf{0}_{M - 1 + \kappa}^{\mathrm{T}} \right]^{\mathrm{T}}$$
where $\mathbf{0}_L$ denotes the zero vector of length $L$. The vector $\mathbf{y}$ is defined as

$$\mathbf{y} = \left[ \mathbf{0}_{N - 1}^{\mathrm{T}}, \; y_M[0], y_M[1], \dots, y_M[M - 1], \; \mathbf{0}_{N - 1}^{\mathrm{T}} \right]^{\mathrm{T}}$$
It follows from the above definition of orthogonality that two finite-length random signals are orthogonal if their CCF is zero. Since the CCF of independent signals equals the product of their linear means, $\varphi_{xy}[\kappa] = \mu_x \cdot \mu_y$, this implies that at least one of the two signals has to be mean-free. It can be concluded further that two independent random signals are also orthogonal and uncorrelated if at least one of them is mean-free. The reverse, that orthogonal signals are independent, does not hold in general.
The concept of orthogonality can also be extended to one random signal by setting $y[k] = x[k]$. Since a random signal cannot be orthogonal to itself for $\kappa = 0$, the definition of orthogonality has to be extended for this case. According to the ACF of a mean-free uncorrelated random signal, self-orthogonality may be defined as

$$\varphi_{xx}[\kappa] = \sigma_x^2 \cdot \delta[\kappa]$$
An independent random signal is also orthogonal if it is zero-mean. The reverse, that an orthogonal signal is independent, does not hold in general.
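The following quick check is not part of the original notebook and uses arbitrary zero-mean white Gaussian signals; it illustrates that two independently drawn zero-mean signals are approximately orthogonal, while each signal is not orthogonal to itself at lag zero.

```python
import numpy as np

# illustrative check with assumed signals
rng = np.random.default_rng(7)
x = rng.normal(size=10000)  # zero-mean signal
y = rng.normal(size=10000)  # zero-mean signal, drawn independently of x

print(np.dot(x, y) / len(x))  # approx. 0  -> x and y are approximately orthogonal
print(np.dot(x, x) / len(x))  # approx. sigma_x^2 = 1 -> the kappa = 0 term of the ACF
```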
Example - Computation of cross-correlation by dot product
This example illustrates the computation of the CCF by the dot product. First, a function is defined which computes the CCF by means of the dot product
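The original function is not reproduced here. A possible implementation along the lines derived above could look as follows; the function name and the lag convention (matching NumPy's correlation in 'full' mode) are assumptions.

```python
import numpy as np

def ccf_by_dot_product(x, y):
    """Compute the CCF of two finite-length signals by one dot product per lag.

    Hypothetical re-implementation; the original notebook's function is not reproduced here.
    """
    N, M = len(x), len(y)
    # zero-pad x on both sides so that every shift of y overlaps completely
    xz = np.concatenate((np.zeros(M - 1), x, np.zeros(M - 1)))
    # lags kappa = -(M-1), ..., N-1 are stored at indices 0, ..., N+M-2
    return np.array([np.dot(xz[k:k + M], y) for k in range(N + M - 1)])
```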
Now the CCF is computed using two different methods: by the dot product and by the built-in correlation function. The CCF computed by the dot product is plotted, as well as the magnitude of the difference between both methods. The resulting difference lies in the range expected from numerical inaccuracies.
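A sketch of the comparison described above; signal lengths, random seed and plot layout are assumptions, and `ccf_by_dot_product` refers to the function sketched before.

```python
import numpy as np
import matplotlib.pyplot as plt

# illustrative comparison; signal lengths and seed are assumptions
rng = np.random.default_rng(3)
x = rng.normal(size=128)  # random signal of length N
y = rng.normal(size=64)   # random signal of length M

kappa = np.arange(-(len(y) - 1), len(x))  # lag axis for the 'full' CCF
ccf1 = ccf_by_dot_product(x, y)           # CCF computed by dot products
ccf2 = np.correlate(x, y, mode='full')    # CCF computed by the built-in correlation

fig, ax = plt.subplots(2, 1, figsize=(8, 6))
ax[0].stem(kappa, ccf1)
ax[0].set_title('CCF computed by the dot product')
ax[1].stem(kappa, np.abs(ccf1 - ccf2))
ax[1].set_title('magnitude of the difference between both methods')
ax[1].set_xlabel(r'$\kappa$')
plt.tight_layout()
plt.show()
```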
Copyright
This notebook is provided as an Open Educational Resource. Feel free to use the notebook for your own purposes. The text is licensed under Creative Commons Attribution 4.0, the code of the IPython examples under the MIT license. Please attribute the work as follows: Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples.