TY - GEN
T1 - TinySleepNet: An Efficient Deep Learning Model for Sleep Stage Scoring based on Raw Single-Channel EEG
T2 - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2020
AU - Supratak, Akara
AU - Guo, Yi-Ke
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
AB - Deep learning has become popular for automatic sleep stage scoring due to its capability to extract useful features from raw signals. Most existing models, however, are over-engineered, consisting of many layers or introducing additional steps in the processing pipeline, such as converting signals to spectrogram-based images. They must be trained on large datasets to prevent overfitting (yet most sleep datasets contain a limited amount of class-imbalanced data) and are difficult to apply (as many hyperparameters in the pipeline must be configured). In this paper, we propose an efficient deep learning model, named TinySleepNet, and a novel technique to effectively train the model end-to-end for automatic sleep stage scoring based on raw single-channel EEG. Our model has fewer trainable parameters than existing ones, requiring less training data and fewer computational resources. Our training technique incorporates data augmentation that makes our model more robust to shifts along the time axis and prevents it from memorizing the sequence of sleep stages. We evaluated our model on seven public sleep datasets with different characteristics in terms of scoring criteria, recording channels, and environments. The results show that, with the same model architecture and training parameters, our method achieves similar (or better) performance than state-of-the-art methods on all datasets. This demonstrates that our method generalizes well across different datasets.
UR - http://www.scopus.com/inward/record.url?scp=85091021536&partnerID=8YFLogxK
U2 - 10.1109/EMBC44109.2020.9176741
DO - 10.1109/EMBC44109.2020.9176741
M3 - Conference proceeding
AN - SCOPUS:85091021536
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 641
EP - 644
BT - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
PB - IEEE
Y2 - 20 July 2020 through 24 July 2020
ER -