# Apple Dense Material Segmentation (DMS) – Stratified 80/10/10 Split
A pixel-level material segmentation dataset containing ~41K images with dense annotations across 57 material categories. Originally released by Apple as part of the Dense Material Segmentation (DMS) research project.
This version uses a custom stratified 80/10/10 split (vs Apple's original 54/23/23) to maximise training data while maintaining representative validation and test sets.
## Why a Custom Split?
Apple's original split reserves nearly half the data for evaluation (23% val + 23% test). Our re-split allocates 80% to training while using stratified sampling (based on the dominant material class per image) to keep val/test distributions aligned with the training set.
### Split Quality Comparison
| Metric | Original (Apple) | Custom (Stratified) | Improvement |
|---|---|---|---|
| Train size | 22,492 (54%) | 33,118 (80%) | +47% more training data |
| JSD train↔val | 0.0524 | 0.0158 | ✅ 70% lower divergence |
| JSD train↔test | 0.0526 | 0.0163 | ✅ 69% lower divergence |
| Classes in all splits | 53/57 | 53/57 | Equal coverage |
JSD = Jensen-Shannon Divergence between pixel-level class distributions. Lower values mean the evaluation sets better represent the training distribution, leading to more reliable metrics.
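As an illustration of the metric, here is a minimal sketch of Jensen-Shannon divergence between two discrete class distributions. This is an eps-smoothed, natural-log variant written for clarity; the exact implementation (and log base) behind the table above is not shown here.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two discrete distributions (natural-log base).

    0 for identical distributions, log(2) for fully disjoint ones.
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)  # midpoint distribution
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Near-identical pixel-level class distributions give a small JSD
train_dist = [0.5, 0.3, 0.2]
val_dist = [0.48, 0.32, 0.20]
jsd = jensen_shannon_divergence(train_dist, val_dist)
```

In practice `train_dist` and `val_dist` would be the normalized per-class pixel counts of each split.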
## Dataset Description
Each sample consists of:
| Field | Type | Description |
|---|---|---|
| `image` | `PIL.Image` | RGB input image |
| `label` | `PIL.Image` | Single-channel segmentation mask (pixel values = class indices 0–56) |
| `image_id` | `string` | Unique image identifier |
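To illustrate how a `label` mask decodes, the sketch below converts a mask to a NumPy array and counts pixels per class id. The mask here is a tiny synthetic stand-in, not a real sample from the dataset.

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a `label` mask; real masks come from the dataset
mask = Image.fromarray(np.array([[0, 26, 26],
                                 [52, 52, 52]], dtype=np.uint8))

arr = np.array(mask)  # shape (H, W); values are class indices 0-56
ids, counts = np.unique(arr, return_counts=True)
pixel_counts = dict(zip(ids.tolist(), counts.tolist()))
# Per the class table: 0 = "No label", 26 = "Metal", 52 = "Wood"
```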
## Splits
| Split | Samples | Percentage |
|---|---|---|
| Train | 33,118 | 80.0% |
| Validation | 4,138 | 10.0% |
| Test | 4,140 | 10.0% |
| Total | 41,396 | 100% |
## Material Classes (57)
| ID | Material | ID | Material | ID | Material |
|---|---|---|---|---|---|
| 0 | No label | 19 | Gemstone/quartz | 38 | Sky |
| 1 | Animal skin | 20 | Glass | 39 | Snow |
| 2 | Bone/teeth/horn | 21 | Hair | 40 | Soap |
| 3 | Brickwork | 22 | I cannot tell | 41 | Soil/mud |
| 4 | Cardboard | 23 | Ice | 42 | Sponge |
| 5 | Carpet/rug | 24 | Leather | 43 | Stone, natural |
| 6 | Ceiling tile | 25 | Liquid, non-water | 44 | Stone, polished |
| 7 | Ceramic | 26 | Metal | 45 | Styrofoam |
| 8 | Chalkboard/blackboard | 27 | Mirror | 46 | Tile |
| 9 | Clutter | 28 | Not on list | 47 | Wallpaper |
| 10 | Concrete | 29 | Paint/plaster/enamel | 48 | Water |
| 11 | Cork/corkboard | 30 | Paper | 49 | Wax |
| 12 | Engineered stone | 31 | Pearl | 50 | Whiteboard |
| 13 | Fabric/cloth | 32 | Photograph/painting | 51 | Wicker |
| 14 | Fiberglass wool | 33 | Plastic, clear | 52 | Wood |
| 15 | Fire | 34 | Plastic, non-clear | 53 | Wood, tree |
| 16 | Foliage | 35 | Rubber/latex | 54 | Bad polygon |
| 17 | Food | 36 | Sand | 55 | Multiple materials |
| 18 | Fur | 37 | Skin/lips | 56 | Asphalt |
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("AllanK24/apple-dms-materials-v2")

# Access splits
train_ds = dataset["train"]        # 33,118 samples
val_ds = dataset["validation"]     # 4,138 samples
test_ds = dataset["test"]          # 4,140 samples

# View a sample
sample = train_ds[0]
sample["image"].show()  # RGB image
sample["label"].show()  # Segmentation mask
```
### Training with SegFormer / Mask2Former
```python
import json

from huggingface_hub import hf_hub_download
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessorFast

# Load class info
class_info_path = hf_hub_download(
    repo_id="AllanK24/apple-dms-materials-v2",
    filename="class_info.json",
    repo_type="dataset",
)
with open(class_info_path) as f:
    class_info = json.load(f)

id2label = {int(k): v for k, v in class_info["id2label"].items()}
label2id = class_info["label2id"]
num_labels = class_info["num_labels"]

# Initialize model
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/segformer-b2-finetuned-ade-512-512",
    num_labels=num_labels,
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,
)

# Initialize processor
processor = SegformerImageProcessorFast.from_pretrained(
    "nvidia/segformer-b2-finetuned-ade-512-512"
)

# Apply transforms on the fly
def transforms(batch):
    images = [x.convert("RGB") for x in batch["image"]]
    labels = list(batch["label"])
    return processor(images=images, segmentation_maps=labels, return_tensors="pt")

train_ds.set_transform(transforms)
```
## Stratification Method
The split was created using a two-level stratified sampling approach:
- **Dominant class extraction** – For each image, the material class with the most pixels (excluding "No label") is identified.
- **First split** – Images are stratified into 80% train vs 20% eval using `StratifiedShuffleSplit`.
- **Second split** – The 20% eval pool is stratified 50/50 into validation and test.
- **Rare class handling** – Classes with <5 total images go directly to train; classes with <2 images in the eval pool are randomly assigned between val/test.
Seed: 42 (for reproducibility).
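The two-level procedure above can be sketched with scikit-learn. Synthetic dominant-class labels stand in for the real per-image metadata, and `dominant_class` is an illustrative helper, not the project's actual code; rare-class handling is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

def dominant_class(mask, ignore_index=0):
    """Most frequent class index in a mask, excluding 'No label' (0)."""
    vals, counts = np.unique(mask, return_counts=True)
    keep = vals != ignore_index
    return int(vals[keep][np.argmax(counts[keep])])

# Synthetic stand-in: one dominant class per image
rng = np.random.default_rng(42)
labels = rng.integers(1, 10, size=1000)

# First split: 80% train vs 20% eval pool
first = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, eval_idx = next(first.split(np.zeros(len(labels)), labels))

# Second split: eval pool stratified 50/50 into validation and test
second = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=42)
val_rel, test_rel = next(second.split(np.zeros(len(eval_idx)), labels[eval_idx]))
val_idx, test_idx = eval_idx[val_rel], eval_idx[test_rel]
```

Splitting the eval pool a second time (rather than drawing val and test independently) keeps the three sets disjoint while preserving per-class proportions in each.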
## Source & Preparation
- Original dataset: Apple DMS with images from Open Images V7
- Original split (v1): AllanK24/apple-dms-materials
- Preparation pipeline: Download → resize/align (`prepare_images.py`) → validate (`check_images.py`, 41,385/41,396 passed) → stratified re-split
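A validation pass of the kind `check_images.py` performs could look like the minimal sketch below. `validate_pair` is hypothetical: it only approximates plausible checks (image/mask size agreement and valid class ids), not the script's actual logic.

```python
import numpy as np
from PIL import Image

NUM_CLASSES = 57  # valid class ids are 0-56

def validate_pair(image: Image.Image, mask: Image.Image) -> bool:
    """Hypothetical sanity check: the image and its mask must share
    dimensions, and every mask value must be a valid class index."""
    if image.size != mask.size:
        return False
    return bool(int(np.array(mask).max()) < NUM_CLASSES)

img = Image.new("RGB", (4, 3))
good_mask = Image.fromarray(np.zeros((3, 4), dtype=np.uint8))
bad_mask = Image.fromarray(np.full((3, 4), 99, dtype=np.uint8))  # id 99 is out of range
```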
## Citation
```bibtex
@article{upchurch2022dense,
  title={Dense Material Segmentation with Context-Aware Network},
  author={Upchurch, Paul and Niu, Ransen},
  year={2022},
  url={https://machinelearning.apple.com/research/dense-material-segmentation}
}
```
## License
Released under the Apple Sample Code License (ASCL). Source images are from Open Images V7 (primarily CC BY 2.0). See the original repository for full licensing details.