Signal modality characterisation allows us to understand more about the nature of the signals we are investigating. While some signal characteristics are well defined, many signals do not fall neatly into these categories.

By using adaptive filters in a prediction configuration, we can compare the outputs of filters designed for different classes of input. This allows us to determine which algorithm provides the best prediction of the signal.

This solution is simple, but the algorithms do not co-operate. To create an online signal modality characterisation solution with synergy between the filters, the filters feed into a mixing algorithm which is itself adaptive.

Using a convex combination of the filters guarantees the existence and uniqueness of a solution. The convex mixing parameter is adapted using a gradient descent update:

\begin{align*}
\lambda(k+1) &= \lambda(k) - \mu_\lambda \nabla_\lambda E(k)\big|_{\lambda=\lambda(k)}\\
&= \lambda(k) - \frac{\mu_\lambda}{2}\frac{\partial e^2(k)}{\partial\lambda(k)}\\
&= \lambda(k) + \mu_\lambda e(k)\big(y_1(k) - y_2(k)\big)
\end{align*}
• This solution can provide improved performance over the individual constituent filters;
• The mixing algorithm can give an indication of which filter is currently responding to the input signal most effectively;
• By appropriate selection of the constituent algorithms, the mixing parameter can adapt according to fundamental properties of the input signal.

By tracking the mixing parameter rather than the performance of the filters, we can see which of the subfilters is being favoured by the hybrid filter. We can also combine several hybrid filters to gain a more complete picture of the nature of the signal under investigation.
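A minimal sketch of this scheme in numpy, assuming two LMS subfilters that differ only in step size (a common illustrative choice; the step sizes, filter order, and signal are hypothetical). Each subfilter adapts on its own error, while the convex mixing parameter follows the gradient-descent update given in the equation above:

```python
import numpy as np

def hybrid_lms_predict(x, mu1=0.01, mu2=0.1, mu_lambda=0.5, order=4):
    """One-step-ahead prediction with a convex combination of two LMS
    subfilters. Returns the history of the mixing parameter lambda(k)."""
    n = len(x)
    w1 = np.zeros(order)            # slow subfilter weights
    w2 = np.zeros(order)            # fast subfilter weights
    lam = 0.5                       # convex mixing parameter, 0 <= lam <= 1
    lam_hist = np.zeros(n)
    for k in range(order, n):
        u = x[k - order:k][::-1]    # regressor of past samples
        y1, y2 = w1 @ u, w2 @ u     # subfilter predictions
        y = lam * y1 + (1 - lam) * y2
        e = x[k] - y                # overall prediction error
        # each subfilter adapts independently on its own error
        w1 += mu1 * (x[k] - y1) * u
        w2 += mu2 * (x[k] - y2) * u
        # gradient-descent update of the mixing parameter
        lam += mu_lambda * e * (y1 - y2)
        lam = np.clip(lam, 0.0, 1.0)   # keep the combination convex
        lam_hist[k] = lam
    return lam_hist
```

Tracking `lam_hist` rather than the prediction errors directly shows which subfilter the hybrid currently favours, as described above.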

### Example Applications

#### Speech

• Unvoiced sounds
  • Most consonants are unvoiced
  • Result in a noise-like waveform
  • Linear autoregressive models are adequate for unvoiced sounds
• Voiced sounds
  • A voiced sound is one where the vocal folds are vibrating
  • Periodic in nature with a periodic excitation source
  • All vowels and certain consonants are voiced speech
  • Voiced sounds require a nonlinear model
• Nasals
  • Nasals are sounds like /n/ or /m/
  • For nasals and certain vowels a chaotic model is appropriate

Comparing the results of hybrid filters for nonlinearity and sparsity shows a high degree of correlation between the two.

/s/ sounds are more linear in nature, and voiced sounds are indicated by the regions where $$\lambda$$ exhibits a spiky behaviour [1].

#### EEG - Epilepsy

Epileptic signals fundamentally change the nature of brain activity. Using hybrid filters for both nonlinearity and sparsity we can build a feature map of the epileptic signal [2], [3].

#### EEG - Consciousness States

Identification of brain consciousness states is an important area of EEG analysis. There are obvious legal implications to declaring a patient brain dead, and normally this assessment requires several invasive medical procedures.

A trial test has been performed on EEG data from 34 patients: 17 in a coma and 17 considered to be brain dead. The recordings were taken in the intensive care unit of a hospital, leading to high levels of noise generated by other monitoring machines.

Nonlinearity showed clear differences, but sparsity was less obvious; combining the two to produce a feature map gave a clear difference [4].

#### Wind

Wind vectors $$v(k)$$ can be represented in the complex domain as

$v(k) = |v(k)|e^{j\theta(k)} = v_E(k) + jv_N(k)$
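A minimal numpy sketch of this representation, using synthetic east/north speed components purely for illustration:

```python
import numpy as np

# Form the complex wind signal v(k) = v_E(k) + j v_N(k) from its
# east and north components (synthetic example values, in m/s).
v_E = np.array([1.0, 0.5, -0.2])   # eastward component
v_N = np.array([0.3, 1.1,  0.8])   # northward component

v = v_E + 1j * v_N                 # complex-domain wind vector
speed = np.abs(v)                  # |v(k)|
direction = np.angle(v)            # theta(k), in radians
```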

To compare the nature of the complex signals, split-complex and fully-complex subfilters were used to predict the wind data.

Looking at wind data from an urban area over the course of 24 hours, the wind can be considered more fully-complex in nature. The first and last samples, covering 2pm until 6pm and 8am until 2pm respectively, are more unstable in nature; during these periods the wind fluctuates more than during the calm period late at night and early in the morning [2], [5].
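The two subfilter types can be sketched as follows, assuming a fully-complex LMS (CLMS, with the update convention $y = \mathbf{w}^T\mathbf{u}$) and a split-complex LMS that predicts the real and imaginary channels with two independent real-valued filters; the step size and order are illustrative choices:

```python
import numpy as np

def clms_error(x, mu=0.01, order=4):
    """Mean squared one-step prediction error of a fully-complex LMS filter."""
    w = np.zeros(order, dtype=complex)
    err = []
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]        # complex regressor
        e = x[k] - w @ u                # complex prediction error
        w += mu * e * np.conj(u)        # CLMS weight update
        err.append(abs(e) ** 2)
    return np.mean(err)

def split_lms_error(x, mu=0.01, order=4):
    """Split-complex LMS: real and imaginary parts are predicted by
    two independent real-valued LMS filters."""
    wr, wi = np.zeros(order), np.zeros(order)
    err = []
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]
        er = x[k].real - wr @ u.real    # real-channel error
        ei = x[k].imag - wi @ u.imag    # imaginary-channel error
        wr += mu * er * u.real
        wi += mu * ei * u.imag
        err.append(er ** 2 + ei ** 2)
    return np.mean(err)
```

Comparing the two errors on a complex signal (or, as above, combining the two subfilters in a hybrid) indicates whether the signal is better modelled as fully-complex or split-complex.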

#### Radar

Radar signals can be represented in the complex domain and have been shown to be fully complex when the target is in the beam.

Alternating between a high (turbulent) sea state and a low (calm) sea state, we can see that the high sea state was predominantly fully-complex and the low sea state predominantly split-complex [3].

### Code

Hybrid filter code can be found here: hybrid-filters
This code contains several example filters and benchmark signals, and can be customised to include any adaptive filter of your choosing.

### References

1. Exploiting Nonlinearity in Adaptive Signal Processing
In Advances in Nonlinear Speech Processing, M. Chetouani et al. (eds), Springer Berlin Heidelberg, 2007, pp. 57–77.
2. Characterisation of Signal Modality: Exploiting Signal Nonlinearity in Machine Learning and Signal Processing
Journal of Signal Processing Systems, 2010, vol. 61, no. 1, pp. 105–115.
3. Collaborative Adaptive Filters for Online Knowledge Extraction and Information Fusion
In Signal Processing Techniques for Knowledge Extraction and Information Fusion, D. Mandic et al. (eds), Springer, 2008, pp. 3–21.
4. Modelling of Brain Consciousness Based on Collaborative Adaptive Filters
Neurocomputing, 2012, vol. 76, no. 1, pp. 36–43.
5. An Adaptive Approach for the Identification of Improper Complex Signals
Signal Processing, 2012, vol. 92, no. 2, pp. 335–344.