Joint Embeddings of Shapes and Images via CNN Image Purification


Both 3D models and 2D images contain a wealth of information about everyday objects in our environment. However, it is difficult to semantically link these two media forms, even when they feature identical or very similar objects. Real-world images naturally vary in characteristics such as viewpoint, lighting, background elements, and occlusions. This variability makes it challenging to match images with each other, or with 3D shapes. We propose a joint embedding space populated by both 3D shapes and 2D images, where the distance between embedded entities reflects the similarity between the underlying objects represented by the image or 3D model, unaffected by all the aforementioned nuisance factors. This joint embedding space facilitates comparison between entities of either form, and allows for cross-modality retrieval. We construct the embedding space using an all-pairs 3D shape similarity measure, as 3D shapes are purer and more complete than their appearances in images, leading to more robust distance metrics. We then employ a Convolutional Neural Network (CNN) to "purify" images by muting the distracting factors. The CNN is trained to map an image to a point within the embedding space that is close to the point of a 3D model of an object similar to the one depicted in the image. This purifying capability of the CNN is accomplished with the help of a large amount of training data consisting of images synthesized from 3D shapes. Our deep embedding brings 3D shapes and 2D images into a joint embedding space, where cross-view image retrieval, image-based shape retrieval, and shape-based image retrieval are all naturally supported. We evaluate our method on these retrieval tasks and show that it consistently outperforms state-of-the-art methods. Additionally, we demonstrate the usability of the joint embedding in a number of computer graphics applications.
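The construction above has two parts: embedding the 3D shapes from their all-pairs similarity matrix, and training a CNN to regress each image to its shape's embedding point. The abstract does not name the embedding algorithm, so the sketch below uses classical multidimensional scaling (MDS) as an assumed stand-in for the first step, and shows the regression objective for the second step as a plain squared-distance loss in NumPy; both function names are illustrative, not from the paper.

```python
import numpy as np

def classical_mds(D, dim):
    """Embed points in `dim` dimensions from a pairwise distance matrix D,
    so that Euclidean distances approximate the entries of D.
    (Assumed stand-in for the paper's shape-embedding step.)"""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top-`dim` components
    scale = np.sqrt(np.maximum(w[idx], 0.0)) # clamp tiny negative eigenvalues
    return V[:, idx] * scale                 # n x dim embedding coordinates

def purification_loss(cnn_output, shape_point):
    """Squared Euclidean distance between the CNN's predicted embedding
    of an image and the embedding point of the corresponding 3D shape."""
    return np.sum((cnn_output - shape_point) ** 2)
```

Training then minimizes `purification_loss` over many (synthesized image, shape point) pairs, which is what drives the network to ignore viewpoint, lighting, and background.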

Embedding Space Visualization


We are grateful to Matthias Nießner and Matthew Fisher for insightful discussions, Amit Bermano for proofreading the paper, and the reviewers for invaluable comments and suggestions.

We would like to acknowledge the support of NSFC grant 61202221, NSF grants DMS 1228304, CCF 1161480, and IIS 1528025, ONR MURI grant N00014-13-1-0341, a Google Focused Research Award, the Max Planck Center for Visual Computing and Communications, K40 GPU donations from NVIDIA Corporation, and a gift from the Apple Corporation.