Spatial Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Model Paper • 2510.12276 • Published Oct 14, 2025 • 149
What If: Understanding Motion Through Sparse Interactions Paper • 2510.12777 • Published Oct 14, 2025 • 6
Experience is the Best Teacher: Grounding VLMs for Robotics through Self-Generated Memory Paper • 2507.16713 • Published Jul 22, 2025 • 21
Trending 3D (Image to 3D) Collection One place to keep track of all 3D demos • 39 items • Updated Mar 2 • 4
A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation Paper • 2507.05331 • Published Jul 7, 2025 • 12
Article SmolVLA: Efficient Vision-Language-Action Model trained on Lerobot Community Data Jun 3, 2025 • 345
Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin for Real-World Robot Policy Evaluation Paper • 2504.03597 • Published Apr 4, 2025 • 4
Cosmos Collection ⚠️ This collection is archived. https://huggingface.co/collections/nvidia/nvidia-cosmos-2 • 14 items • Updated 3 days ago • 301
Theia: Distilling Diverse Vision Foundation Models for Robot Learning Paper • 2407.20179 • Published Jul 29, 2024 • 47 • 3
theaiinstitute/theia-tiny-patch16-224-cddsv Feature Extraction • 16.2M • Updated Jul 30, 2024 • 5.42k • 4
theaiinstitute/theia-base-patch16-224-cdiv Feature Extraction • 0.1B • Updated Jul 30, 2024 • 1.32k • 9