Theory-Inspired Deep Network for Instantaneous-Frequency Extraction and Subsignals Recovery From Discrete Blind-Source Data

Ningning Han, H. N. Mhaskar*, Charles Kam-Tai Chui

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

3 Citations (Scopus)

Abstract

In the mathematical and engineering literature on signal processing and time-series analysis, there are two opposite points of view concerning the extraction of time-varying frequencies (commonly called instantaneous frequencies, IFs). One is to consider the given signal as a composite signal consisting of a finite number of oscillating subsignals, with the goal of decomposing the signal into the sum of the (unknown) subsignals and then extracting the IF from each subsignal; the other is first to extract, from the given signal, the IFs of the (unknown) subsignals, from which the subsignals that constitute the given signal are recovered. Let us call the first the 'signal decomposition approach' and the second the 'signal resolution approach.'

For the signal decomposition approach, rigorous mathematical theories on function decomposition have been well developed in the mathematical literature. The most relevant one, called 'atomic decomposition,' was initiated by R. Coifman, with various extensions by others, notably by D. Donoho; its goal is to extract the signal building blocks, but without regard to which building blocks constitute any of the subsignals, so the subsignals, along with their IFs, cannot be recovered. On the other hand, the most popular method within the decomposition approach is 'empirical mode decomposition' (EMD), proposed by N. Huang et al., with many variations by others. In contrast to atomic decomposition, all variations of EMD are ad hoc algorithms without any rigorous mathematical theory. Unfortunately, all existing versions of EMD fail to resolve the inverse problem of recovering the subsignals that constitute the given composite signal, and consequently the extraction of the IFs is unsatisfactory. For example, EMD fails to extract even two IFs that are not far apart from each other.

In contrast to the signal decomposition approach, the 'signal resolution approach' has a very long history, dating back to the Prony method introduced by G. de Prony in 1795 for solving the inverse problem of time-invariant linear systems. For nonstationary signals, the synchrosqueezed wavelet transform (SST), proposed by I. Daubechies over a decade ago, with various extensions and variations by others, was introduced to resolve the inverse problem by first extracting the IFs from some reference frequency and then recovering the subsignals. Unfortunately, the SST approximate IFs cannot be separated when the target IFs are close to one another at certain time instants, and even when they can be separated, the approximation is usually not sufficiently accurate. For these reasons, some signal components cannot be recovered, and those that can be recovered are usually inexact. More recently, we introduced and developed a more direct method, called signal separation operation (SSO), published in 2016, to accurately compute the IFs and to accurately recover all signal components, even if some of the target IFs are close to each other.

The main contributions of this article are twofold. First, the SSO method is extended from uniformly sampled data to arbitrarily sampled data. This method is localized, as illustrated by a number of numerical examples, including components with different subsignal arrival and departure times. It also yields a short-term prediction of the signal components along with their IFs. Second, we present a novel theory-inspired implementation of our method as a deep neural network (DNN).
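To make the blind-source setting above concrete, the following is a minimal sketch (not the authors' SSO formulation) of a composite signal built from two oscillating subsignals with time-varying IFs, observed only at arbitrarily spaced sample points. The amplitudes, phases, and sampling scheme are illustrative assumptions only.

```python
import numpy as np

# Hypothetical blind-source setup: a composite signal that is a sum of two
# oscillating subsignals, observed only at arbitrarily spaced sample times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 512))    # arbitrary (non-uniform) sample times

phase1 = 40.0 * t + 5.0 * t**2             # phase of subsignal 1 (chirp)
phase2 = 70.0 * t                          # phase of subsignal 2 (pure tone)
f = 1.0 * np.cos(2 * np.pi * phase1) + 0.8 * np.cos(2 * np.pi * phase2)

# The instantaneous frequencies are the derivatives of the phases:
if1 = 40.0 + 10.0 * t                      # IF of subsignal 1 (varies in time)
if2 = np.full_like(t, 70.0)                # IF of subsignal 2 (constant)
```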
We have proved that a major advantage of DNNs over shallow networks is that a DNN can take advantage of any inherent compositional structure in the target function, while shallow networks are necessarily blind to such structure. Therefore, DNNs can avoid the so-called curse of dimensionality using what we have called the blessing of compositionality. However, the compositional structure of the target function is not uniquely defined, and the constituent functions are typically not known, so such networks still need to be trained end-to-end. In contrast, the DNN introduced in this article implements a mathematical procedure, so no training is required at all, and the compositional structure is evident from the procedure. We present the extension of the SSO method in Sections II and III and explain the construction of the deep network in Section IV.
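As a rough illustration of the "no training required" idea, here is a hedged sketch of a generic windowed frequency-localization step evaluated as a fixed feed-forward computation with no learned weights; the Gaussian window, frequency grid, and peak-picking rule are assumptions for illustration and do not reproduce the SSO operator or the network construction in Section IV.

```python
import numpy as np

def local_spectrum(t, f, t0, freqs, sigma=0.02):
    """At time t0, window the samples with a Gaussian and correlate them with
    complex exponentials on a frequency grid; the largest peaks over `freqs`
    give crude candidate IFs.  Generic time-frequency heuristic for
    illustration only, not the SSO operator."""
    w = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))   # localization window
    E = np.exp(-2j * np.pi * np.outer(freqs, t - t0))   # fixed "layer": modulation
    return np.abs(E @ (w * f)) / np.sum(w)              # windowed correlation

# Usage with the sketch above (hypothetical grid and peak rule):
# freqs = np.linspace(0.0, 100.0, 400)
# spec = local_spectrum(t, f, t0=0.5, freqs=freqs)
# candidate_ifs = freqs[np.argsort(spec)[-2:]]          # two strongest peaks
```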

Original language: English
Pages (from-to): 3437-3447
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 33
Issue number: 8
Early online date: 10 Feb 2021
DOIs
Publication status: Published - Aug 2022

Scopus Subject Areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

User-Defined Keywords

  • Deep networks
  • nonstationary signals
  • separation of components
  • superresolution
