Type-based mixture of experts and semi-supervised multi-task pre-training for symbolic music

Research output: Contribution to journal › Article › peer-review


Abstract

In the rapidly evolving field of AI-driven music applications, there is growing interest in the understanding and generation of symbolic music (e.g., MIDI). Unlike audio waveforms, symbolic music consists of discrete representations of musical elements, making it both a detailed and challenging domain for AI models to process. While pre-training techniques from natural language processing have been adapted for music-related tasks, the resulting pre-trained models often struggle with the hierarchical and polyphonic characteristics of symbolic music. To address these limitations, a method is proposed comprising two components: a foundational model named type-based mixture of experts (TypeMoE) and a semi-supervised multi-task pre-training (SS-MTP) strategy. TypeMoE captures fine-grained musical features more effectively by dynamically activating specialized experts for different event types, while SS-MTP covers tasks including key-signature recognition, time-signature recognition, and causal language modeling. Unlike purely self-supervised approaches, SS-MTP utilizes a small amount of labeled data alongside extensive unlabeled data, enabling structural representation learning and promoting efficient knowledge sharing across tasks. Experimental results showed that TypeMoE, when pre-trained with the SS-MTP strategy, outperformed baseline models in both music understanding and generation tasks. Specifically, it achieved 71.80% accuracy in genre classification and 76.79% in emotion classification. For music generation, it outperformed baselines with 54.24% Hits@1 and 0.7521 BLEU-2 in continuation generation, and 75.79% Hits@1 and 0.8757 BLEU-2 in conditional generation. It also obtained a CLAP-based semantic alignment score of 0.24.
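To make the routing idea concrete, the following is a minimal sketch of how type-based expert activation might look in a PyTorch-style transformer sub-layer. The event-type vocabulary, layer sizes, and the hard type-to-expert mapping are illustrative assumptions for this page, not the paper's exact implementation.

```python
# A minimal sketch of type-based expert routing (assumed design, not the
# authors' released code). Each symbolic-music token carries an event type,
# and only the expert assigned to that type processes it.
import torch
import torch.nn as nn

EVENT_TYPES = ["bar", "position", "pitch", "duration", "velocity"]  # hypothetical vocabulary


class TypeRoutedExperts(nn.Module):
    """Feed-forward sub-layer where each event type activates its own expert."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in EVENT_TYPES
            ]
        )

    def forward(self, hidden: torch.Tensor, type_ids: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model); type_ids: (batch, seq) integer event-type indices
        out = torch.zeros_like(hidden)
        for t, expert in enumerate(self.experts):
            mask = type_ids == t          # select tokens of this event type
            if mask.any():
                out[mask] = expert(hidden[mask])  # only the matching expert runs on them
        return out
```

Under the SS-MTP strategy described in the abstract, a backbone built from such layers would plausibly be trained with the causal language-modeling loss on all sequences, while key-signature and time-signature heads contribute supervised losses only on the small labeled subset.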

Original language: English
Article number: 128613
Journal: Expert Systems with Applications
Volume: 292
DOIs
State: Published - 1 Nov 2025

Keywords

  • Fine-tuning
  • Mixture of experts
  • Multi-task
  • Pre-training
  • Semi-supervised learning
  • Symbolic music
