TY - JOUR
T1 - Type-based mixture of experts and semi-supervised multi-task pre-training for symbolic music
AU - Li, Shuyu
AU - Sung, Yunsick
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/11/1
Y1 - 2025/11/1
N2  - In the rapidly evolving field of AI-driven music applications, there is growing interest in the understanding and generation of symbolic music (e.g., MIDI). Unlike audio waveforms, symbolic music consists of discrete representations of musical elements, making it a rich yet challenging domain for AI models to process. While pre-training techniques from natural language processing have been adapted for music-related tasks, the resulting pre-trained models often struggle with the hierarchical and polyphonic characteristics of symbolic music. To overcome these problems, a method comprising two components is proposed: a foundational model named type-based mixture of experts (TypeMoE) and a semi-supervised multi-task pre-training (SS-MTP) strategy. TypeMoE captures fine-grained musical features more effectively by dynamically activating specialized experts for different event types, while SS-MTP covers tasks including key-signature recognition, time-signature recognition, and causal language modeling. Unlike purely self-supervised approaches, SS-MTP utilizes a small amount of labeled data alongside extensive unlabeled data, enabling structural representation learning and promoting efficient knowledge sharing across tasks. Experimental results showed that TypeMoE, when pre-trained with the SS-MTP strategy, outperformed baseline models in both music understanding and generation tasks. Specifically, it achieved 71.80 % accuracy in genre classification and 76.79 % in emotion classification. For music generation, it outperformed baselines with 54.24 % Hits@1 and 0.7521 BLEU-2 in continuation generation, and 75.79 % Hits@1 and 0.8757 BLEU-2 in conditional generation. Additionally, it obtained a CLAP-based semantic alignment score of 0.24.
AB  - In the rapidly evolving field of AI-driven music applications, there is growing interest in the understanding and generation of symbolic music (e.g., MIDI). Unlike audio waveforms, symbolic music consists of discrete representations of musical elements, making it a rich yet challenging domain for AI models to process. While pre-training techniques from natural language processing have been adapted for music-related tasks, the resulting pre-trained models often struggle with the hierarchical and polyphonic characteristics of symbolic music. To overcome these problems, a method comprising two components is proposed: a foundational model named type-based mixture of experts (TypeMoE) and a semi-supervised multi-task pre-training (SS-MTP) strategy. TypeMoE captures fine-grained musical features more effectively by dynamically activating specialized experts for different event types, while SS-MTP covers tasks including key-signature recognition, time-signature recognition, and causal language modeling. Unlike purely self-supervised approaches, SS-MTP utilizes a small amount of labeled data alongside extensive unlabeled data, enabling structural representation learning and promoting efficient knowledge sharing across tasks. Experimental results showed that TypeMoE, when pre-trained with the SS-MTP strategy, outperformed baseline models in both music understanding and generation tasks. Specifically, it achieved 71.80 % accuracy in genre classification and 76.79 % in emotion classification. For music generation, it outperformed baselines with 54.24 % Hits@1 and 0.7521 BLEU-2 in continuation generation, and 75.79 % Hits@1 and 0.8757 BLEU-2 in conditional generation. Additionally, it obtained a CLAP-based semantic alignment score of 0.24.
KW - Fine-tuning
KW - Mixture of experts
KW - Multi-task
KW - Pre-training
KW - Semi-supervised learning
KW - Symbolic music
UR - https://www.scopus.com/pages/publications/105008489168
U2 - 10.1016/j.eswa.2025.128613
DO - 10.1016/j.eswa.2025.128613
M3 - Article
AN - SCOPUS:105008489168
SN - 0957-4174
VL - 292
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 128613
ER -