New COCO Stuff Challenge Shows Innovation in Semantic Segmentation

Our friends at the Common Visual Data Foundation (CVDF) have announced the COCO 2017 Stuff Segmentation Challenge—and we’re thrilled to share that we collaborated with CVDF and researchers from the University of Edinburgh on annotations for the competition.

Our community of Fives segmented, categorized, labeled, and validated 55,000 images across 91 stuff classes for this challenge.

Stuff classes are important background materials defined by homogeneous or repetitive patterns of fine-scale properties but no distinctive spatial shape—stuff like grass, walls, or sky, all of which help identify a scene. The challenge focuses on these classes because stuff covers about 66 percent of the pixels in COCO (a large-scale object detection, segmentation, and captioning dataset). To learn more, check out the COCO website.
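To make the pixel-coverage idea concrete, here's a toy sketch of how stuff coverage can be measured in a semantic segmentation label map. The label map, class ids, and the thing/stuff split below are invented for illustration and are not real COCO data or the official COCO API:

```python
import numpy as np

# Toy semantic-segmentation label map (not real COCO data): each pixel
# holds a class id. Suppose ids 1-3 are "thing" classes (e.g. person, car)
# and ids 100-102 are "stuff" classes (e.g. grass, wall, sky).
label_map = np.array([
    [100, 100, 102, 102],
    [100,   1, 102, 102],
    [101,   1,   2, 102],
    [101, 101,   2, 102],
])

STUFF_IDS = [100, 101, 102]  # assumed ids for this sketch

# Count pixels whose class id is a stuff class, then take the fraction.
stuff_pixels = np.isin(label_map, STUFF_IDS).sum()
stuff_fraction = stuff_pixels / label_map.size
print(f"stuff covers {stuff_fraction:.0%} of pixels")  # → stuff covers 75% of pixels
```

Running the same per-pixel tally over all of COCO is what yields the roughly 66 percent figure cited above.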

While you’re there, show us your stuff:

Click to Enter the Stuff Segmentation Challenge

(But hurry: The submission deadline is October 8.)

Mighty AI