Large-scale 3D Shape Retrieval
from ShapeNet Core55

3D content is becoming increasingly prevalent and important in everyday life. With commodity depth sensors, anyone can easily scan 3D models from the real world, better modeling tools allow designers to produce 3D models more easily, and the advent of virtual reality will only increase the demand for high-quality 3D models. This growing availability of 3D models requires scalable and efficient algorithms to manage and analyze them. A key research problem is retrieval of relevant 3D models, and the community has been actively working on this task for more than a decade. However, existing algorithms are usually evaluated on datasets with only thousands of models, even though millions of 3D models are now available on the Internet. Thanks to the efforts of the ShapeNet [1] team, we can now use a much bigger dataset of 3D models to develop and evaluate new algorithms. In this track, we aim to evaluate the performance of 3D shape retrieval methods on a subset of the ShapeNet dataset.

News

  • [2017-04-10] The full track report document is available HERE. Test set annotations available in the Evaluator package.
  • [2017-02-01] The test set data and evaluation code are released.
  • [2017-02-01] Participant and organizer mailing list established at shapenet-shrec@googlegroups.com (or visit Google Group). Use this list for asking questions that are relevant to all participants.
  • [2017-01-20] The dataset is released and the schedule for this year's competition is updated. Please register as a participant by sending an email with a list of all members of your team to shrecshapenet@gmail.com.

Dataset

We use ShapeNetCore, a subset of ShapeNet containing about 51,300 3D models across 55 common categories, each subdivided into several subcategories. We created a 70%/10%/20% training/validation/test split of this dataset. Models are provided in OBJ format, and two versions of the dataset are available: a consistently aligned (regular) version, and a more challenging version in which models are perturbed by random rotations. Category and subcategory labels for training and validation models are provided as comma-separated files with a header row specifying the meaning of each column: modelId, synsetId (category label), and subSynsetId (subcategory label). Download links:
Training Models (8.4GB) | Training Models Perturbed (9.2GB) | Training Model Labels
Validation Models (1.2GB) | Validation Models Perturbed (1.3GB) | Validation Model Labels
Test Models (2.4GB) | Test Models Perturbed (2.6GB)
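The label files can be parsed with standard CSV handling. The sketch below assumes the header fields match the description above; the model and synset IDs shown are made up for illustration:

```python
import csv
import io

# Hypothetical label-file content following the described format: a header
# row naming modelId, synsetId (category), and subSynsetId (subcategory).
sample = """modelId,synsetId,subSynsetId
1a2b3c,02691156,02690373
4d5e6f,02691156,02691156
"""

def load_labels(fileobj):
    """Map each modelId to its (category, subcategory) synset pair."""
    reader = csv.DictReader(fileobj)
    return {row["modelId"]: (row["synsetId"], row["subSynsetId"])
            for row in reader}

labels = load_labels(io.StringIO(sample))
print(labels["1a2b3c"])  # ('02691156', '02690373')
```

In practice you would pass an open file handle for the downloaded label file instead of the in-memory sample.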

Procedure and Schedule

For the evaluation, we will treat each test model as a query model and all of the models in the test set (including the model itself) as the target retrieval database. Please submit result zip archives for the training, validation and test model sets (both normal and perturbed versions of each). Follow the submission format described HERE. In total you should submit six result sets (train/val/test x normal/perturbed).
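As a minimal illustration of this retrieval setup, the sketch below ranks every test model against a query model, with the query itself included in the target set. The toy feature vectors and cosine similarity are placeholders for whatever shape descriptors and similarity measure a participating method actually uses:

```python
import math

# Toy descriptor table standing in for learned shape features: each model
# is assumed to be summarized by a fixed-length vector (an assumption, not
# part of the track specification).
features = {
    "m1": [1.0, 0.0],
    "m2": [0.9, 0.1],
    "m3": [0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def ranked_list(query_id):
    """Rank every test model (the query itself included) by similarity."""
    q = features[query_id]
    return sorted(features, key=lambda m: cosine(features[m], q), reverse=True)

print(ranked_list("m1"))  # ['m1', 'm2', 'm3']
```

One such ranked list per query model, for each of the six dataset variants, is what the submission archives should contain.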

Each participating team will write a report describing their method and its implementation. Reports are limited to one page, with at most two figures counted toward the page length. There is no need to include any test result details -- we will compute the evaluation statistics for all participants. We require both a PDF file and the source LaTeX files. Please use the Eurographics 2017 LaTeX template (link). Include the names and affiliations of all team members when you submit your method description. For an example, please refer to the report from last year's competition (HERE).

To make your submission, package your results as specified in the result format description, and create your method writeup as specified above. Then create a single zip archive with all your files and compute an MD5 checksum for it. Send the MD5 checksum and a download link for your file to shrecshapenet@gmail.com by Tuesday Feb 21st 11:59PM UTC. We will confirm each submission after it is successfully received.
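The checksum can be computed with Python's standard hashlib module, for example. The archive name below is hypothetical:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in 1 MiB chunks
    so large archives do not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical archive name):
# print(md5_of_file("my_submission.zip"))
```

Any standard md5 tool (e.g. `md5sum` on Linux) produces the same hex digest.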

Please send email to shrecshapenet@gmail.com if you have any questions.

Evaluation

Category and subcategory labels are given for the training and validation splits of the dataset. The test set labels will be used for evaluation as described below (subcategory labels are used only to establish a more challenging graded relevance for the NDCG metric). Each participant will submit a set of ranked retrieval lists using each test set model as the query. The ranked list format and submission procedure are described in more detail HERE. Ranked lists should order retrieved test models by similarity to the query model. The ranked lists are evaluated against the ground truth category and subcategory annotations of the test set using a set of standard information retrieval evaluation metrics.

All metrics except NDCG are evaluated on binary in-category vs. out-of-category relevance. NDCG uses a graded relevance: 3 for a perfect category and subcategory match between query and retrieved model; 2 for a matching category where the retrieved model's subcategory is the category itself; 1 for a correct category with a sibling subcategory; and 0 for no match.
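Under one reading of this grading scheme, the relevance gain and a standard (non-exponential) NDCG can be sketched as below. The helper names are our own, not the official evaluator's, and DCG conventions vary, so treat this as illustrative:

```python
import math

def relevance(query, retrieved):
    """Graded gain; query/retrieved are (category, subcategory) pairs."""
    q_cat, q_sub = query
    r_cat, r_sub = retrieved
    if q_cat != r_cat:
        return 0  # wrong category
    if q_sub == r_sub:
        return 3  # perfect category and subcategory match
    if r_sub == r_cat:
        return 2  # category match; subcategory is the category itself
    return 1      # correct category, sibling subcategory

def dcg(gains):
    """Discounted cumulative gain with a log2 rank discount."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains):
    """DCG normalized by the ideal (descending-sorted) ordering."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```

A ranked list whose gains are already in descending order yields an NDCG of 1.0; any misordering lowers the score.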
Macro-averaged versions of these metrics treat each category equally, giving an unweighted average across categories. Micro-averaged versions average over all query models, so each category's contribution is weighted by its size. The organizers provide evaluation code for computing all these metrics HERE.
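The distinction between the two averaging modes can be illustrated with toy numbers (the category names and per-query scores below are invented):

```python
# Per-query precision scores grouped by the query model's category.
per_query_precision = {
    "airplane": [0.9, 0.8, 0.7],  # three query models
    "chair":    [0.4],            # one query model
}

def macro_average(scores_by_cat):
    """Average of per-category means: every category counts equally."""
    means = [sum(v) / len(v) for v in scores_by_cat.values()]
    return sum(means) / len(means)

def micro_average(scores_by_cat):
    """Average over all queries: larger categories weigh more."""
    flat = [s for v in scores_by_cat.values() for s in v]
    return sum(flat) / len(flat)

print(round(macro_average(per_query_precision), 3))  # 0.6
print(round(micro_average(per_query_precision), 3))  # 0.7
```

Here the small "chair" category drags the macro average down to 0.6, while the micro average of 0.7 is dominated by the larger "airplane" category.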

Results

The full track report document is available HERE.

Below is a summary table of evaluation results and Precision-Recall plots for all participating teams and methods, as well as the top two methods from the SHREC'16 iteration of the competition. The results were computed as specified in the evaluation section using the ranked list format files submitted by each participating team. They can be regenerated using the Evaluator code.

Evaluation metrics summary tables

Precision-recall plots

Team

Organizers

  • Manolis Savva - Stanford University
  • Fisher Yu - Princeton University
  • Hao Su - Stanford University

Advisory Board

  • Leonidas Guibas - Stanford University
  • Pat Hanrahan - Stanford University
  • Silvio Savarese - Stanford University
  • Qixing Huang - University of Texas, Austin
  • Thomas Funkhouser - Princeton University

References

[1] Chang et al., "ShapeNet: An Information-Rich 3D Model Repository", arXiv:1512.03012.
[2] Wu et al., "3D ShapeNets: A Deep Representation for Volumetric Shapes", CVPR 2015.
[3] Shilane et al., "The Princeton Shape Benchmark", Shape Modeling International, June 2004.