Data Guidelines
The recommended training data for submissions to either track of the challenge is released on the Open Images Challenge website and is considered internal to the challenge. The validation subset is the same for both tracks. Using it for validation is recommended but not mandatory; this data may be used for any purpose, including actual validation or even directly training your model. Using the validation and test subsets of Open Images V4 is not recommended, as their annotations are less dense than those of the Challenge training and validation sets.
Any data downloadable from the Open Images Challenge website is considered internal to the challenge. The use of external data is allowed; however, the winning teams will be required to submit a full description of the data used for training (see below). The use of pretrained models is also allowed; however, the winning teams will be required to describe them in terms of architecture and training data (see below).
Note: Using the challenge test set for any form of training is strictly forbidden. Annotating the challenge test set in any way is also strictly forbidden.
Participation Requirements
Competitions are open to residents of the United States and worldwide, except that residents of Crimea, Cuba, Iran, Syria, North Korea, or Sudan, and anyone subject to U.S. export controls or sanctions, may not enter the Competition. Each participant must submit their results to the evaluation server by September 1st, 2018. Participants are encouraged to submit a short abstract (10 lines of text) describing their method and the data used for training. To claim the prize, the winners of each track will additionally be required to provide a detailed description of their method and of all the data used for training (minimum of 2 pages, double-column). In addition, the winners are encouraged to provide inference results of their models on a subset of the training set (400K images, to be defined by the organizers). These predictions will be open-sourced to encourage applications and analysis of object detection algorithms (e.g., distillation).
Prize
The total prize fund of the challenge is $50,000, split between the tracks as follows:
- $30,000 for the Object Detection track
- $20,000 for the Visual Relationship Detection track
The money will be split among the top 3 ranked participants in each track. More details will be published on the Kaggle page soon.