The results are reported as follows. For each query model, a ranked list of retrieved models is saved in a file whose name is the query model's filename with the extension removed (e.g., the results for 034670.obj go in a file named 034670). Each line of the file contains the model id of a retrieved model, optionally followed by its distance from the query model, separated from the model id by a single space. Distances are not used for evaluation but may be included in the files for convenience. The models are sorted from most similar (i.e., most relevant, smallest distance) first to least similar (i.e., least relevant, largest distance) last. For example, the file 034670 might contain (illustrative ids and distances):

021436 0.000
053772 0.125
049618 0.731
A maximum of 1000 retrieved model entries is allowed per query. A participant's method may return fewer results for a query, depending on relevance, distance, or similarity cutoffs; this choice is up to the participant. Note that the number of models retrieved may vary from query to query, and that this choice is critical to the performance of the algorithm.
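As a sketch of the file format described above (written in Python, with a hypothetical helper name and illustrative model ids), a ranked list could be written out like this, capped at the 1000-entry limit:

```python
import os
import tempfile

def write_result_file(out_dir, query_model_id, ranked_results, max_results=1000):
    """Write one ranked-list file for a query model.

    ranked_results is a list of (model_id, distance) pairs already sorted
    from most similar (smallest distance) to least similar. The output file
    is named after the query model id with no extension; each line holds a
    retrieved model id and, optionally, its distance, space-separated.
    """
    path = os.path.join(out_dir, query_model_id)
    with open(path, "w") as f:
        for model_id, distance in ranked_results[:max_results]:
            f.write(f"{model_id} {distance:.6f}\n")
    return path

# Usage with illustrative ids: results for hypothetical query model 034670.
out_dir = tempfile.mkdtemp()
path = write_result_file(out_dir, "034670",
                         [("021436", 0.0), ("053772", 0.125), ("049618", 0.731)])
```

The slice `[:max_results]` enforces the per-query cap; dropping the distance column (writing only the model id) would also be a valid file per the rules above.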
Each participant should create a zip archive containing the ranked-list files for all test-set query models in a single directory (or two directories if the participant is also submitting perturbed-dataset results: one for the normal dataset and one for the perturbed alignment dataset). Both the directory and the archive should be named AuthorLastName_MethodName; directories may optionally end in _normal or _perturbed to indicate normal or perturbed dataset results.
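The packaging step can be sketched as follows (a minimal Python example; the function and the participant/method names are hypothetical, standing in for the required AuthorLastName_MethodName convention):

```python
import os
import tempfile
import zipfile

def package_results(results_dir, archive_basename):
    """Zip a directory of ranked-list result files for submission.

    archive_basename is the "AuthorLastName_MethodName" string (optionally
    ending in _normal or _perturbed); the directory stored inside the
    archive is given the same name.
    """
    zip_path = os.path.join(os.path.dirname(results_dir),
                            archive_basename + ".zip")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(results_dir)):
            zf.write(os.path.join(results_dir, name),
                     arcname=archive_basename + "/" + name)
    return zip_path

# Usage with hypothetical names: two result files in Doe_SpectralMatch/.
root = tempfile.mkdtemp()
res = os.path.join(root, "Doe_SpectralMatch")
os.makedirs(res)
for qid in ("021436", "034670"):
    with open(os.path.join(res, qid), "w") as f:
        f.write("053772 0.125\n")
zip_path = package_results(res, "Doe_SpectralMatch")
```

Keeping the directory name inside the archive identical to the archive's own base name means the files unpack into the expected AuthorLastName_MethodName folder.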
The participant should then email the MD5 checksum and a download link (Dropbox, SkyDrive, or another downloadable location) for the zip file to the organizers at firstname.lastname@example.org. Once the organizers have successfully downloaded the results and verified the MD5 checksum, a confirmation email will be sent to the participant.
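The MD5 checksum to include in the email can be computed with standard tools (e.g., md5sum on Linux). A minimal Python sketch using the standard hashlib module, reading the archive in chunks so large files need not fit in memory:

```python
import hashlib
import os
import tempfile

def md5_of_file(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of the file at path, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage on a small throwaway file (stands in for the submission zip).
fd, p = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello")
digest = md5_of_file(p)
```

The organizers recompute the same digest after downloading; matching values confirm the archive was not corrupted in transfer.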