Authors: Jingyuan Chen, Yuan Yao, Mie Anderson, Natalie Hauglund, Celia Kjaerby, Verena Untiet, Maiken Nedergaard, Jiebo Luo
Abstract: Automatic sleep staging based on electroencephalography (EEG) and
electromyography (EMG) signals is an important aspect of sleep-related
research. Current sleep staging methods suffer from two major drawbacks. First,
existing methods allow only limited information interaction between modalities.
Second, they do not provide unified models that can handle different input
sources. To address these issues, we propose a novel sleep-stage scoring model,
sDREAMER, which emphasizes cross-modality interaction and per-channel
performance. Specifically, we develop a mixture-of-modality-expert (MoME) model
with three pathways, for EEG, EMG, and mixed signals, whose weights are
partially shared. We further propose a self-distillation training scheme to
strengthen information interaction across modalities. Our model is trained on
multi-channel inputs and can classify either single-channel or multi-channel
inputs. Experiments demonstrate that our model outperforms existing
transformer-based sleep-scoring methods for multi-channel inference. For
single-channel inference, it also outperforms transformer-based models trained
on single-channel signals.
Source: http://arxiv.org/abs/2501.16329v1
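The following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of the two ideas the abstract names: a MoME block with a shared attention layer and modality-specific expert feed-forward layers (partial weight sharing across the EEG, EMG, and mixed pathways), plus a self-distillation loss that distills the mixed-pathway predictions into the single-modality pathways. All module names, dimensions, temperatures, and loss weights are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoMEBlock(nn.Module):
    """One transformer block with partially shared weights:
    the self-attention is shared by all pathways, while each
    pathway (eeg / emg / mix) has its own feed-forward expert."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # shared
        self.experts = nn.ModuleDict({
            m: nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim * 4),
                             nn.GELU(), nn.Linear(dim * 4, dim))
            for m in ("eeg", "emg", "mix")
        })
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, modality):
        h = self.norm(x)
        h, _ = self.attn(h, h, h)       # shared attention weights
        x = x + h                        # residual connection
        return x + self.experts[modality](x)  # modality-specific expert

def self_distillation_loss(eeg_logits, emg_logits, mix_logits, labels,
                           tau=2.0, alpha=0.5):
    """Cross-entropy on all three pathways, plus KL terms that distill the
    (detached) mixed-pathway predictions into the EEG and EMG pathways."""
    ce = (F.cross_entropy(eeg_logits, labels)
          + F.cross_entropy(emg_logits, labels)
          + F.cross_entropy(mix_logits, labels))
    teacher = F.softmax(mix_logits.detach() / tau, dim=-1)
    kd = (F.kl_div(F.log_softmax(eeg_logits / tau, dim=-1), teacher,
                   reduction="batchmean")
          + F.kl_div(F.log_softmax(emg_logits / tau, dim=-1), teacher,
                     reduction="batchmean")) * tau ** 2
    return ce + alpha * kd

Because only the feed-forward experts are modality-specific, the shared attention can be driven by any single pathway at inference time, which is one plausible way a model trained on multi-channel inputs could still score single-channel recordings.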