Abstract
Neural networks (NNs) are pivotal in data processing tasks such as classification, generation, and restoration. A crucial consideration in these applications is the signal-to-noise ratio (SNR), which serves as a measure of data quality. In this paper, we hypothesize that, given the statistical similarity between the training and test datasets, NNs can be optimized more effectively for some tasks when all samples in the dataset are clustered by quantized SNR level. Hence, we introduce two novel techniques for estimating the SNR of a sparse signal: 1) a linear algebraic method using a single-shot data sample and 2) an NN-based method using few-shot data samples. Both techniques rest on the mathematical fact that, when a signal is Hankelized into matrix form, its dominant singular values carry the signal-subspace information. Both algorithms achieve over 93% clustering accuracy, approaching 100% at high SNRs and for longer signals. Furthermore, we present a signal-denoising example as practical validation of how these clustering results benefit NN optimization for a task. The proposed approaches deliver superior denoising performance while requiring an extremely small training dataset compared with conventional methods, which can be interpreted as an improvement in the learnability of the NN.
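The abstract's core idea is that Hankelizing a signal separates signal and noise energy across singular values: a few dominant singular values span the signal subspace, while the rest reflect noise. The sketch below illustrates this principle only; it is not the paper's algorithm, and the function names, the subspace-rank parameter `k`, and the noise-leakage correction are all illustrative assumptions.

```python
import numpy as np

def hankelize(x, L):
    """Build an L x (N - L + 1) Hankel matrix from a length-N signal x."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def estimate_snr_db(x, k, L=None):
    """Illustrative SNR estimate (dB) from singular values of a Hankelized signal.

    Assumes the k dominant singular values span the signal subspace and the
    remainder is noise. Not the paper's exact method; a hypothetical sketch.
    """
    L = L or len(x) // 2
    s = np.linalg.svd(hankelize(x, L), compute_uv=False)
    sig = np.sum(s[:k] ** 2)            # energy in the presumed signal subspace
    noise = np.sum(s[k:] ** 2)          # energy in the remaining singular values
    # Noise also leaks into the top-k singular values; correct for the
    # average noise energy per singular value (illustrative heuristic).
    noise_per_sv = noise / max(len(s) - k, 1)
    sig_corr = max(sig - k * noise_per_sv, 1e-12)
    noise_total = noise + k * noise_per_sv
    return 10 * np.log10(sig_corr / noise_total)
```

For a single noisy sinusoid the Hankel matrix of the clean component has rank 2, so `k=2` is the natural choice; a noisier copy of the same signal yields a visibly lower estimate.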
| Original language | English |
|---|---|
| Journal | IEEE Internet of Things Journal |
| DOIs | |
| State | Accepted/In press - 2025 |
Keywords
- Hankelization
- learnability maximization
- neural networks
- SNR estimation
- sparse signals