---
license: other
license_name: maginoresell
license_link: https://huggingface.co/datasets/MicroAGI-Labs/MicroAGI01/blob/main/LICENSE
task_categories:
- robotics
- text-generation
tags:
- egocentric
- fov
- VLA
- VLM
size_categories:
- 100M<n<1B
---
# MicroAGI01: Egocentric Manipulation Dataset
**License:** maginoresell — see [LICENSE](https://huggingface.co/datasets/MicroAGI-Labs/MicroAGI01/blob/main/LICENSE)
MicroAGI01 is an egocentric RGB-D dataset of human household manipulation with full pose annotations. It comprises 676 recordings spanning 137 task types across 14 activity categories.
## What's Included Per Recording
- RGB + depth streams
- Camera pose (6DoF)
- Hand poses (3D landmarks)
- Task segmentation with text annotations
## Quick Facts

|            |                                |
|------------|--------------------------------|
| Recordings | 676 mcaps (283 cut, 393 uncut) |
| Task types | 137                            |
| Container  | `.mcap`                        |
| Previews   | 1 sample `.mp4` file           |
## Folder Structure

```
MicroAGI01/
├── uncut_mcaps/          # Full-length recordings, ≥80% hands validity
├── cut_mcaps/            # Shorter semantic chunks, ≥95% hands validity
├── task_mapping.csv      # Task labels per recording
├── microagi01viewerfoxglove.json
└── LICENSE
```
Start with `uncut_mcaps` — full-length recordings with all annotations included.
`cut_mcaps` contains shorter, semantically complete segments with stricter hand-tracking validity.
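To enumerate the recordings in either split, a minimal stdlib sketch (assuming the folder layout above; `list_recordings` is a hypothetical helper name):

```python
from pathlib import Path


def list_recordings(root: str, split: str = "uncut_mcaps") -> list[Path]:
    """Return all .mcap recordings in one split ('uncut_mcaps' or 'cut_mcaps'),
    sorted by filename for reproducible iteration order."""
    return sorted(Path(root).joinpath(split).glob("*.mcap"))
```

For example, `list_recordings("./MicroAGI01", "cut_mcaps")` yields the shorter semantic chunks.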
## Task Categories

- **Kitchen:** kitchen_cooking, kitchen_prep, kitchen_dishes, kitchen_organization, kitchen_dining, kitchen_general
- **Cleaning:** cleaning_general, cleaning_floor
- **Laundry:** laundry
- **Organization:** general_organization, general_household
- **Rooms:** bedroom, bathroom, living_room
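Recordings map to task labels via `task_mapping.csv`. A hedged sketch for tallying recordings per label — the column name `task_type` is an assumption here; inspect the CSV header before relying on it:

```python
import csv
from collections import Counter


def task_counts(csv_path: str, label_column: str = "task_type") -> Counter:
    """Count recordings per task label in task_mapping.csv.
    NOTE: label_column is an assumed column name -- check the file's header."""
    with open(csv_path, newline="") as f:
        return Counter(row[label_column] for row in csv.DictReader(f))
```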
## Topic Structure

### Overview

| Group  | Topics |
|--------|--------|
| Meta   | `/meta` |
| Camera | `/tf_static`<br>`/camera/color/image`, `/camera/color/info` (+ `/camera/color/health`)<br>`/camera/depth/image`, `/camera/depth/info`, `/camera/depth/unit_of_depth_in_mm` |
| SLAM   | `/tf/camera` (+ `.../health`, `.../state`) |
| Hands  | `/tf/hands`, `/hands/left`, `/hands/right` (+ `.../health`) |
| IMU    | `/imu/accel/sample`, `/imu/gyro/sample` |
| Task   | `/task` (includes `task_title`) |
### Topic Descriptions

- `/meta`: information about the mcap, the operator, ... (`operator_height_in_m`, metadata for the general task description)
- `/tf_static`: static transforms (includes transforms between the camera, imu, depth, and color frames)
- `/camera/.../image`: JPEG (quality 90) for color, PNG for depth
- `/camera/.../info`: sensor parameters (especially intrinsics)
- `/camera/depth/unit_of_depth_in_mm`: defines the depth unit conversion. Currently set to 1, meaning raw pixel values in the depth image are measured directly in millimeters (e.g., a pixel value of 1000 equals 1 meter)
- `/camera/color/health`: flags bad images that are, e.g., too dark or blurry
- `/tf/camera`: pose of the camera. A pose is only valid if a message on `.../health` with the same timestamp exists and has `valid == true`; otherwise it should be ignored. Poses are only coherent relative to other poses in the same block of valid poses.
- `/tf/camera/health`: signals regions with successful tracking
- `/tf/hands`: poses of the left and right wrists
- `/hands/...`: positions of hand keypoints (in the wrist frame)
- `/hands/.../health`: signals whether the hand positions can be trusted
- `/imu/.../sample`: raw IMU samples
- `/task`: description of the current task (includes `task_title`)
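Two of the conventions above — the pose-validity rule and the depth unit — can be sketched in a few lines of plain Python (function names are illustrative, not part of the dataset tooling):

```python
def valid_camera_poses(poses: dict, health: dict) -> dict:
    """Keep only /tf/camera poses whose /tf/camera/health message at the
    same timestamp reports valid == True, per the dataset's validity rule.

    poses:  timestamp -> pose (any representation)
    health: timestamp -> bool (the 'valid' flag)
    """
    return {t: p for t, p in poses.items() if health.get(t) is True}


def depth_to_meters(raw_value: int, unit_in_mm: float = 1.0) -> float:
    """Convert a raw depth pixel to meters using the factor published on
    /camera/depth/unit_of_depth_in_mm (currently 1, so 1000 -> 1.0 m)."""
    return raw_value * unit_in_mm / 1000.0
```

Remember that surviving poses are only coherent within a contiguous block of valid poses; do not chain poses across validity gaps.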
### TF Tree (across all tf and tf_static topics)

All frames are right-handed coordinate systems:

```
world (on the ground; z is up, gravity aligned)
  camera (center of camera; z is up, x is front)
    # Camera data
    depth (reference for the depth image; x to the right, y is down)
    accel (reference for the accel)
    gyro (reference for the gyro)
    color (reference for the color image; x to the right, y is down)
  left_wrist (x is in direction from pinky to thumb, z is in direction of arm)
  right_wrist (x is in direction from pinky to thumb, z is in direction of arm)
```
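Since hand keypoints are published in the wrist frame, expressing them in another frame is a single rigid transform. A minimal stdlib sketch (the helper name is hypothetical; in practice you would take the rotation and translation from the corresponding `/tf/hands` pose):

```python
def transform_point(rotation, translation, point):
    """Map a 3-D point from a child frame into its parent frame:
    p_parent = R @ p_child + t.

    rotation:    row-major 3x3 matrix (tuple of 3 rows)
    translation: 3-vector
    point:       3-vector in the child frame
    """
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
```

For example, a `/hands/left` keypoint (wrist frame) combined with the left-wrist pose from `/tf/hands` yields the keypoint in the pose's parent frame; chaining through the TF tree above gets you to `world`.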
## Download

Everything:

```shell
huggingface-cli download MicroAGI-Labs/MicroAGI01 --repo-type dataset --local-dir ./MicroAGI01
```

Single file:

```shell
huggingface-cli download MicroAGI-Labs/MicroAGI01 uncut_mcaps/open-source-06.mcap --repo-type dataset --local-dir ./
```
## Viewing

We use Foxglove. A layout template is included in the repo:

1. Open Foxglove
2. Layout → Import layout → select `microagi01viewerfoxglove.json`
3. Load any `.mcap` file

This sets up the 3D view, camera feed, hand validity state transitions, and task annotations panel.
## Extracting Protobuf

We use our GitHub repo, which includes a ready-made extraction script.
## Intended Uses

- Policy and skill learning (robotics / VLA)
- Action detection and segmentation
- Hand/pose estimation and grasp analysis
- World-model pre- and post-training
## Attribution

This work uses the MicroAGI01 dataset (MicroAGI, Inc. 2026).

## Contact

- Questions: info@micro-agi.com
- Custom data or derived signals: data@micro-agi.com