Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55

In recent years we have witnessed an explosion in the amount of 3D data that we can generate and store. On one hand, better 3D modeling tools have enabled designers to build 3D models easily, resulting in an expansion in the size of 3D CAD model repositories. On the other hand, commodity depth sensors have allowed ordinary people to conveniently capture their own 3D scans. We need good techniques for leveraging such 3D content in order to design algorithms that successfully understand our 3D world. However, due to fundamental challenges in dealing with 3D representations and processing, there are still many open research issues. Two key research problems are: (1) 3D shape reconstruction from a single image, and (2) shape part-level segmentation. Existing algorithms are usually evaluated on small datasets with a few hundred models, even though millions of 3D models are now available on the Internet. Thanks to the efforts of the ShapeNet team [1,2], we can now use a much larger and more varied repository of 3D models to develop and evaluate new algorithms in computer vision and computer graphics. In this track, we aim to evaluate the performance of 3D reconstruction from a single image and of shape part-level segmentation on a subset of the ShapeNet dataset.

**[2017-10-16]** The report document for the two tracks is available HERE. Test set annotations for the segmentation task are available HERE. The test set reconstruction results for the reconstruction track can be downloaded HERE.

**[2017-08-23]** Google group established. Use this forum to ask and discuss challenge-related questions.

**[2017-08-08]** The dataset is released and the schedule for this year's competition has been updated. Please register as a participant by sending an email with a list of all members in your team to shapenetchallenge.iccv17@gmail.com

**[2017-09-25]** For both the segmentation task and the reconstruction task, the final submission will be evaluated on both the originally released test set and a subset of it with higher shape variance. Lists of the test models in the higher-variance subsets can be found for the reconstruction task and the segmentation task respectively. Participants do not need to change their submission format; the organizers will evaluate on both the original test set and the higher-variance subset for the final results.

In the 3D reconstruction challenge, the task is to reconstruct a 3D shape given a single image as input. We use the ShapeNetCore subset of ShapeNet, which contains about 48,600 3D models across 55 common categories. We have created a 70%/10%/20% training/validation/test split of this dataset. We choose voxels as the output 3D representation: each model is represented by a 256^3 voxel grid, where each voxel holds 1 (occupied) or 0 (free space). We also provide the synthesized images used as input. A unique model ID links each model's synthesized images and voxels. You may change the voxel resolution by down-sampling or up-sampling our voxels during the training stage, but the final evaluation in the testing stage is on the full 256^3 voxels. Download links:

Training Images | Training Voxels

Validation Images | Validation Voxels

Test Images
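Since training at a reduced resolution is allowed, a common approach is to max-pool the 256^3 occupancy grid down by an integer factor and later upsample the prediction back to 256^3 for evaluation. Below is a minimal NumPy sketch of such a round trip; the helper names `downsample`/`upsample` and the choices of max pooling and nearest-neighbor upsampling are our assumptions, not part of the official tools:

```python
import numpy as np

def downsample(vox, k):
    """Max-pool a cubic binary occupancy grid by integer factor k.

    A coarse voxel is occupied if any of its k^3 sub-voxels is occupied.
    """
    n = vox.shape[0] // k
    return vox.reshape(n, k, n, k, n, k).max(axis=(1, 3, 5))

def upsample(vox, k):
    """Nearest-neighbor upsample a cubic grid by integer factor k per axis."""
    return vox.repeat(k, axis=0).repeat(k, axis=1).repeat(k, axis=2)
```

For example, a 256^3 grid pooled with `k=4` gives a 64^3 training target, and a 64^3 prediction upsampled with `k=4` can be scored against the full-resolution ground truth.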

In the part-level segmentation challenge, the task is to predict a part label for every point, given 3D shape point clouds and their category labels. We use a subset of ShapeNetCore containing about 17,000 models from 16 shape categories [2]. Each category is annotated with 2 to 6 parts, and there are 50 different parts annotated in total. 3D shapes are represented as point clouds uniformly sampled from the 3D surfaces. Part annotations are represented as per-point labels, ranging from 1 to the number of parts. We have created a 70%/10%/20% training/validation/test split of this dataset. Shapes and labels are organized by category, with one folder per category. The mapping from folder name to category name can be found here. The download links for the models and labels are below:

Training Point Clouds | Training Labels

Validation Point Clouds | Validation Labels

Test Point Clouds
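For illustration, each shape can be loaded as an N×3 point array together with N integer part labels. The sketch below assumes whitespace-separated plain-text files with one point (or one label) per line; the file extensions and the helper name `load_shape` are hypothetical, so consult the released archives for the exact layout:

```python
import numpy as np

def load_shape(pts_path, seg_path):
    """Load an N x 3 point cloud and its N per-point part labels (1..num_parts).

    Assumes plain-text files: one "x y z" line per point, and one integer
    label per line in the matching label file.
    """
    points = np.loadtxt(pts_path)              # shape (N, 3)
    labels = np.loadtxt(seg_path, dtype=int)   # shape (N,)
    assert points.shape[0] == labels.shape[0], "point/label count mismatch"
    return points, labels
```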

- **(August 8)** Dataset released
- **(by Sep. 15)** Participants register by sending email to shapenetchallenge.iccv17@gmail.com
- **(by Sep. 30 11:59PM UTC)** Participants send results. Each participant also submits a one-page method description with at most two figures. Submission is by email containing an MD5 checksum and a download link for the results, sent to shapenetchallenge.iccv17@gmail.com
- **(Oct. 7)** Organizers carry out automatic evaluation and release evaluation results for all participants.
- **(Oct. 7)** Organizers write a contest report with result details, including method descriptions from each participant.
- Results are presented at the ICCV 2017 Workshop on Learning to See from 3D Data and the report is published in the proceedings of the workshop.

Please submit results as zip archives for the test model sets. Follow the submission format described HERE.

Each participating team will write a report describing their method and its implementation. The maximum length is one page, with at most two figures (included in the page length). There is no need to include any test result details -- we will compute the evaluation statistics for all participants. We require both a PDF file and the source LaTeX files. Please use the ICCV 2017 LaTeX template. Include the names and affiliations of all members of the team when you submit your method description.

To make your submission, package your results as specified in the result format description, and create your method writeup as specified above. Then create a single zip archive with all your files and compute an MD5 checksum for it. Send the MD5 checksum and a download link for your file to shapenetchallenge.iccv17@gmail.com by Sep 30 UTC. We will confirm each submission after it is successfully received.

Please send email to shapenetchallenge.iccv17@gmail.com if you have any questions.

We provide two metrics to evaluate the reconstructed voxels.

- IoU: Intersection Over Union

The evaluation code can be found here
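For reference, the Intersection over Union between a predicted and a ground-truth binary voxel grid can be computed in a few lines of NumPy. This is an illustrative sketch only, not the official evaluation code; the function name `voxel_iou` and the convention that two all-empty grids score 1.0 are our assumptions:

```python
import numpy as np

def voxel_iou(pred, gt):
    """IoU between two binary occupancy grids of identical shape.

    1 marks an occupied voxel, 0 marks free space.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as a perfect match (our convention)
    inter = np.logical_and(pred, gt).sum()
    return inter / union
```

At full challenge resolution `pred` and `gt` would each be `(256, 256, 256)` arrays.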

The evaluation code is provided here

The following evaluation metric is used for the part segmentation task:

- Average IoU: a per-category average Intersection over Union is computed first for each category, by averaging part IoUs across all parts of all shapes carrying that category label. An overall average IoU is then computed as a weighted average of the per-category IoUs, where the weight of each category is its number of shapes.

The evaluation code for the above metric is provided here
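To make the metric concrete, here is a minimal NumPy sketch of the two averaging steps. The function names are ours, and the convention of scoring a part that is absent from both prediction and ground truth as IoU 1 is our assumption; defer to the released evaluation code for the authoritative definition:

```python
import numpy as np

def shape_iou(pred, gt, num_parts):
    """Mean IoU over parts for one shape; labels range from 1 to num_parts."""
    ious = []
    for p in range(1, num_parts + 1):
        inter = np.logical_and(pred == p, gt == p).sum()
        union = np.logical_or(pred == p, gt == p).sum()
        # Part absent from both prediction and ground truth: count as 1 (our convention).
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def overall_iou(per_category_ious, shape_counts):
    """Weighted average of per-category IoUs, weighted by shapes per category."""
    w = np.asarray(shape_counts, dtype=float)
    return float(np.dot(per_category_ious, w) / w.sum())
```

Per-category IoU would be the mean of `shape_iou` over all shapes in that category, and `overall_iou` then combines the categories with shape-count weights.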

- Li Yi - Stanford University
- Hao Su - University of California San Diego
- Lin Shao - Stanford University
- Manolis Savva - Princeton University

- Leonidas Guibas - Stanford University
- Pat Hanrahan - Stanford University
- Silvio Savarese - Stanford University
- Qixing Huang - University of Texas, Austin
- Thomas Funkhouser - Princeton University
- Evangelos Kalogerakis - University of Massachusetts Amherst

[1] Chang et al., *ShapeNet: An Information-Rich 3D Model Repository*, arXiv:1512.03012

[2] Yi et al., *A Scalable Active Framework for Region Annotation in 3D Shape Collections*, SIGGRAPH Asia 2016

[3] Fan et al., *A Point Set Generation Network for 3D Object Reconstruction from a Single Image*, CVPR 2017
