# TinyMyo: Tiny Foundation Model for EMG Signal Processing

## Overview
TinyMyo is a lightweight, Transformer-based foundation model designed specifically for surface electromyography (sEMG) signal processing. Unlike large-scale models, the TinyMyo family (including the 3.6M parameter base model and the ultra-compact 1.9M parameter TinyissimoMyo) is purpose-built for ultra-low-power edge deployment. It enables real-time motor intent decoding, neuromuscular assessment, and human-machine interaction directly on microcontrollers like the GAP9.
## Key Highlights
- Generalist Foundation: Pre-trained on a massive, heterogeneous corpus of >480 GB of EMG data (NinaPro DB6/7, EMG2Pose) using self-supervised masked reconstruction.
- Edge-Ready: The first EMG foundation model demonstrated on an ultra-low-power MCU (GAP9), achieving sub-100ms inference for real-time applications.
- Highly Efficient: Just 3.6M parameters (1.9M for TinyissimoMyo), ensuring low latency and high energy efficiency (~45 mJ per inference).
- Versatile: Achieves state-of-the-art (SoA) performance across hand gesture classification, kinematic regression, and speech processing.
## Model Architecture
- Core: 8-layer bidirectional Transformer encoder (4-layer for TinyissimoMyo).
- Embeddings: 192-dimensional latent space with 3 attention heads.
- Tokenization: Channel-independent patching (20 samples per patch) utilizing Rotary Position Embeddings (RoPE) to preserve temporal alignment across channels without spurious cross-channel ordering.
- Deployment: Optimized via offline liveness analysis, multi-level memory tiling, and INT8 fixed-point quantization for resource-constrained hardware execution.
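To make the tokenization concrete, here is a minimal sketch of channel-independent patching in NumPy. The patch length (20 samples) comes from the model description above; the channel count, sampling rate, and function name are illustrative assumptions, not the actual BioFoundation pipeline.

```python
import numpy as np

PATCH_LEN = 20  # samples per patch, as in the TinyMyo tokenizer description

def patchify(emg: np.ndarray) -> np.ndarray:
    """Split a (channels, samples) sEMG window into channel-independent
    patches of PATCH_LEN samples, returning (channels * n_patches, PATCH_LEN).
    Each channel is patched on its own, so no token mixes channels."""
    channels, samples = emg.shape
    n_patches = samples // PATCH_LEN
    emg = emg[:, : n_patches * PATCH_LEN]              # drop any ragged tail
    patches = emg.reshape(channels, n_patches, PATCH_LEN)
    return patches.reshape(channels * n_patches, PATCH_LEN)

# Illustrative shapes only: 8 channels, 1 s at 200 Hz -> 8 * 10 tokens of 20 samples.
window = np.random.randn(8, 200)
tokens = patchify(window)
print(tokens.shape)  # (80, 20)
```

Because each patch carries only a within-channel position, RoPE can encode temporal order per channel without imposing an artificial ordering across channels.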
## Performance Benchmarks
| Task | Dataset | Metric | TinyMyo Result |
|---|---|---|---|
| Gesture Classification | NinaPro DB5 | Accuracy | 87.98% |
| Gesture Classification | EPN-612 | Accuracy | 96.57% |
| Gesture Classification | UCI EMG | Accuracy | 97.10% |
| Gesture Classification | Generic Neuromotor Interface | CLER | 0.142 |
| Kinematic Regression | NinaPro DB8 | MAE | 8.8Β° |
| Speech Synthesis | Gaddy | WER | 33.54% |
| Speech Recognition | Gaddy | WER | 33.95% |
## Deployment (GAP9 MCU)
TinyMyo bridges the gap between high-performance deep learning and stringent wearable constraints. We provide two variants to balance the accuracy-latency trade-off:
### TinyMyo (3.6M Parameters)
- Inference Time (5s window): 0.785 s
- Energy Consumption: 44.91 mJ
- Power Envelope: 57.18 mW
### TinyissimoMyo (1.9M Parameters)
- Inference Time (5s window): 0.496 s
- Inference Time (1s window): 0.089 s (Sub-100ms regime, ideal for real-time prosthetic control)
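As a sanity check on the figures above, energy per inference is just average power times inference time; the numbers below are the reported GAP9 measurements for the 3.6M model on a 5 s window.

```python
# Energy per inference = average power x inference time.
power_mw = 57.18    # mW, reported power envelope on GAP9
latency_s = 0.785   # s, TinyMyo (3.6M), 5 s input window
energy_mj = power_mw * latency_s
print(f"{energy_mj:.2f} mJ")  # ~44.89 mJ, consistent with the reported 44.91 mJ
```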
## Getting Started

TinyMyo is part of the BioFoundation ecosystem.

### Prerequisites
Install the required dependencies from the BioFoundation repository.
### Loading & Fine-tuning
You can easily fine-tune the pre-trained weights for your specific task:
```shell
python run_train.py +experiment=TinyMyo_finetune pretrained_safetensors_path={*.safetensors}
```
## License & Citation
This model is licensed under CC BY-ND 4.0. If you find TinyMyo useful in your research, please cite our paper:
```bibtex
@misc{fasulo2026tinymyotinyfoundationmodel,
  title={TinyMyo: a Tiny Foundation Model for Flexible EMG Signal Processing at the Edge},
  author={Matteo Fasulo and Giusy Spacone and Thorir Mar Ingolfsson and Yawei Li and Luca Benini and Andrea Cossettini},
  year={2026},
  eprint={2512.15729},
  archivePrefix={arXiv},
  primaryClass={eess.SP}
}
```