- **LingBot-VLA** Collection — Vision-Language-Action foundation model. 3 items.
- **LFM2-VL** Collection — LFM2-VL is our first series of vision-language models, designed for on-device deployment. 10 items.
- **InstructVLA** Collection — paper, data, and checkpoints for "InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation". 14 items. Updated Sep 17, 2025.
- **Paper:** InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation. arXiv:2507.17520, published Jul 23, 2025.