Open Images Dataset V7
You are viewing the description of the latest version of Open Images (V7, released October 2022).
To view the description of a previous version, select it here:

Download dense annotations over 1.9M images

Out of the 9M images, a subset of 1.9M images have been annotated with: bounding boxes, object segmentations, visual relationships, localized narratives, point-level labels, and image-level labels. (The remaining images have only image-level labels).

These images and their dense annotations are available via three data channels:

Manual download of the images and raw annotations.
Access to all annotations via TensorFlow Datasets.
Access to a subset of annotations (v5 data types: images, image labels, boxes, masks) via the FiftyOne third-party open source library.

Trouble accessing the data? Let us know.

Download using Tensorflow Datasets

2022-09: To be released. Please +1 and subscribe to this GitHub issue if you want TFDS support.

TensorFlow Datasets provides a unified API to access hundreds of datasets.

Once installed, Open Images data can be accessed directly via:

import tensorflow_datasets as tfds

dataset = tfds.load('open_images/v7', split='train')
for datum in dataset:
  image, bboxes = datum["image"], datum["bboxes"]

Previous versions open_images/v6, /v5, and /v4 are also available.

Download and Visualize using FiftyOne

We have collaborated with the team at Voxel51 to make downloading and visualizing (a subset of) Open Images a breeze using their open-source tool FiftyOne.

FiftyOne Example

As with any other dataset in the FiftyOne Dataset Zoo, downloading it is as easy as calling:
dataset = fiftyone.zoo.load_zoo_dataset("open-images-v6", split="validation")
The function accepts optional parameters that let you quickly download just the subsets of the dataset that are relevant to you. For example:

dataset = fiftyone.zoo.load_zoo_dataset(
              "open-images-v6",
              split="validation",
              label_types=["detections", "segmentations"],
              classes=["Cat", "Dog"],
          )

FiftyOne also provides native support for Open Images-style evaluation to compute mAP, plot PR curves, interact with confusion matrices, and explore individual label-level results.

results = dataset.evaluate_detections("predictions", gt_field="detections", method="open-images")

Download Manually


If you're interested in downloading the full set of training, test, or validation images (1.7M, 125k, and 42k, respectively; annotated with bounding boxes, etc.), you can download them packaged in various compressed files from CVDF's site:
If you only need a certain subset of these images and you'd rather avoid downloading the full 1.9M images, we provide a Python script that downloads images from CVDF.
  1. Download the file (open and press Ctrl + S), or directly run:
  2. Create a text file containing all the image IDs that you're interested in downloading. These can come from filtering the annotations by certain classes or by a certain annotation type (e.g., MIAP). Each line should follow the format $SPLIT/$IMAGE_ID, where $SPLIT is one of "train", "test", "validation", or "challenge2018", and $IMAGE_ID is the ID that uniquely identifies the image. A sample file could be:
  3. Run the following script, making sure you have the dependencies installed:
    python downloader.py $IMAGE_LIST_FILE --download_folder=$DOWNLOAD_FOLDER --num_processes=5
    For help, run:
    python downloader.py -h
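Step 2 above can also be scripted. The sketch below builds an image-list file by filtering an annotation CSV for certain classes; it is a minimal sketch assuming `ImageID` and `LabelName` columns in the annotation file, and the function name is illustrative.

```python
import csv

def build_image_list(annotations_csv, class_mids, split, out_path):
    """Write $SPLIT/$IMAGE_ID lines for every image whose annotations
    contain any of the given class MIDs (column names assumed)."""
    image_ids = set()
    with open(annotations_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["LabelName"] in class_mids:
                image_ids.add(row["ImageID"])
    with open(out_path, "w") as f:
        for image_id in sorted(image_ids):
            f.write(f"{split}/{image_id}\n")
```

The resulting file can then be passed directly to the downloader script as $IMAGE_LIST_FILE.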

Annotations and metadata

Image IDs
Image labels
Localized narratives voice recordings

Download image labels over 9M images

These image-label annotation files provide annotations for all images over 20,638 classes. In the train set, the human-verified labels span 7,337,077 images, while the machine-generated labels span 8,949,445 images. The image IDs below list all images that have human-verified labels. The annotation files span the full validation (41,620 images) and test (125,436 images) sets.

Human-verified labels
Machine-generated labels
Image IDs

Trouble downloading the data? Let us know.

Download all Open Images images

The full set of 9,178,275 images.

Image IDs
Trouble downloading the pixels? Let us know.

Open Images Extended

Open Images extended complements the existing dataset with additional images and annotations.

Data formats

The Open Images annotations come in a variety of text, CSV, audio, and image files. These file formats are documented below.

Bounding boxes

Each row defines one bounding box.


The attributes have the following definitions:

For each of them, value 1 indicates present, 0 not present, and -1 unknown.
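The 1/0/-1 convention can be decoded with a small helper; the function name below is illustrative.

```python
# Map the per-box attribute flags to readable states, following the
# 1 = present, 0 = not present, -1 = unknown convention described above.
ATTRIBUTE_STATES = {1: "present", 0: "not present", -1: "unknown"}

def decode_attribute(value):
    """Interpret a box attribute flag (accepts the raw CSV string or an int)."""
    return ATTRIBUTE_STATES[int(value)]
```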

Instance segmentation masks

The mask information is stored in two files:

The mask images are binary PNG images, where non-zero pixels belong to a single object instance and zero pixels are background. The file names look as follows (5 random examples): e88da03f2d80f1a1_m019jd_e16d01b9.png 540c5536e95a3282_m014j1m_b00fa52e.png 1c84bdd61fa3b883_m06m11_62ef2388.png 663389d2c9d562d8_m04_sv_7e23f2a5.png 072b8fd82919ab3e_m06mf6_dd70f221.png

The .zip archive names follow this format: each <subset>_<suffix>.zip contains all masks for all images whose ImageID starts with the character <suffix>. The value of <suffix> ranges over 0-9 and a-f.
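Both conventions can be applied programmatically. The sketch below recovers the ImageID, label MID, and box ID from a mask file name (format inferred from the examples above; function names are illustrative) and picks the archive that holds it.

```python
def parse_mask_filename(name):
    """Split a mask file name into (ImageID, label MID, BoxID).

    The name format, inferred from the listed examples, is
    <ImageID>_<label>_<BoxID>.png, where <label> is the MID with the
    leading '/m/' collapsed to 'm' (and may itself contain underscores).
    """
    stem = name[: -len(".png")]
    image_id, rest = stem.split("_", 1)      # ImageID never contains '_'
    label, box_id = rest.rsplit("_", 1)      # BoxID never contains '_'
    return image_id, "/m/" + label[1:], box_id

def mask_zip_for(subset, image_id):
    """Name of the <subset>_<suffix>.zip archive holding this image's
    masks: <suffix> is the first character of the ImageID."""
    return f"{subset}_{image_id[0]}.zip"
```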

Each row in masks_data.csv describes one instance, using similar conventions as the boxes CSV data file.

25adb319ebc72921_m02mqfb_8423aba8.png,25adb319ebc72921,/m/02mqfb,8423aba8,0.000000,0.998438,0.089062,0.770312,0.62821,0.15808 0.26206 1;0.90333 0.41076 0;0.17578 0.66566 1;0.00761 0.23197 1;0.07918 0.26058 0;0.31792 0.47737 1;0.12858 0.59262 0;0.73229 0.34016 1;0.01865 0.20001 1;0.52214 0.31037 0;0.83596 0.28105 1;0.23418 0.60177 0
0a419be97dec2fa3_m02mqfb_8ad2c442.png,0a419be97dec2fa3,/m/02mqfb,8ad2c442,0.057813,0.943750,0.056250,0.960938,0.87836,0.89971 0.08481 1;0.20175 0.90471 0;0.11511 0.89990 0;0.94728 0.28410 0;0.19611 0.85369 0;0.07672 0.87857 1;0.82215 0.62642 0;0.13916 0.92650 1;0.51738 0.48419 1
8eef6e54789ce66d_m02mqfb_83dae39c.png,8eef6e54789ce66d,/m/02mqfb,83dae39c,0.037500,0.978750,0.129688,0.925000,0.70206,0.40219 0.16838 1;0.56758 0.65286 1;0.08311 0.90762 1;0.20840 0.56515 1;0.43336 0.23679 0;0.24689 0.43426 0;0.49292 0.65762 1;0.31383 0.51431 0;0.07137 0.86214 0;0.68160 0.38210 1;0.69462 0.59568 0
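The final field of each row holds the annotator clicks as semicolon-separated "x y correctness" triplets. A minimal parser for that field, with the format inferred from the sample rows above and an illustrative helper name:

```python
def parse_clicks(field):
    """Parse a clicks field into (x, y, correctness) tuples, where x and
    y are normalized coordinates and correctness is 0 or 1."""
    clicks = []
    for triplet in field.split(";"):
        x, y, v = triplet.split()
        clicks.append((float(x), float(y), int(v)))
    return clicks
```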

Visual relationships

Each row in the file corresponds to a single annotation.


Localized narratives

The Localized Narrative annotations are in JSON Lines format, that is, each line of the file is an independent valid JSON-encoded object. The largest files are split into smaller sub-files (shards) for ease of download. Since each line of the file is independent, the whole file can be reconstructed by simply concatenating the contents of the shards.

Each line represents one Localized Narrative annotation on one image by one annotator and has the following fields:

Below is a sample of one Localized Narrative in this format:

{
    dataset_id: 'open_image',
    image_id: 'abe9ff8763cdcc5d',
    annotator_id: 93,
    caption: 'In this image there are group of cows standing and eating th...',
    timed_caption: [{'utterance': 'In this', 'start_time': 0.0, 'end_time': 0.4}, ...],
    traces: [[{'x': 0.2086, 'y': -0.0533, 't': 0.022}, ...], ...],
    voice_recording: "open_images_validation/open_images_validation_abe9ff8763cdcc5d_110.ogg",
}
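Since each line is a standalone JSON object, reading the annotations back (across any number of shards) is straightforward. A minimal sketch, with an illustrative function name:

```python
import json

def load_narratives(shard_paths):
    """Read Localized Narrative annotations from one or more JSON Lines
    shards; lines are independent, so shards are simply read back to back."""
    narratives = []
    for path in shard_paths:
        with open(path) as f:
            for line in f:
                if line.strip():
                    narratives.append(json.loads(line))
    return narratives
```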

For more information, additional download files, and annotations over other datasets, consult the localized narratives website.

Point Labels

The point label data is contained in two types of comma-separated value (CSV) files: one per-split file containing the point labels, and one dataset-wide file describing the annotated classes.

Per-split point labels data looks as follows:



The class description data looks as follows:

/m/016q19,Petal,2681,5656,petals,petal,with petals,with petals and,flower petal,they have pink petals,color petals,are petals,some petals of flowers,see petals,petals which are in,petal and
/m/01qr50,Mud,2871,14850,mud,on the mud,the mud,is mud,a mud,mud and,the mud and in,see mud,see the mud,we can see mud,there is mud,in the mud
/m/0cyhj_,Orange (fruit),25002,55836,orange,oranges,an orange,the orange,an orange on this,can see orange,and orange,an oranges,halved orange,wearing orange,orange slice,oranges and some
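Judging from the sample rows, each class description row carries the MID, the display name, two counts (their exact meaning is not specified here), and then the free-form phrases associated with the class. A hedged parsing sketch with an illustrative helper name:

```python
import csv
import io

def parse_point_classes(csv_text):
    """Parse point-label class description rows into a dict keyed by MID.

    Column layout inferred from the samples: MID, display name, two
    counts (meaning unspecified), then free-form phrases.
    """
    classes = {}
    for row in csv.reader(io.StringIO(csv_text)):
        classes[row[0]] = {
            "name": row[1],
            "counts": (int(row[2]), int(row[3])),
            "phrases": row[4:],
        }
    return classes
```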


For more details about these annotations, see [5].

Image Labels

Human-verified and machine-generated image-level labels:


Source: indicates how the annotation was created:

Confidence: Labels that are human-verified to be present in an image have confidence = 1 (positive labels). Labels that are human-verified to be absent from an image have confidence = 0 (negative labels). Machine-generated labels have fractional confidences, generally >= 0.5. The higher the confidence, the smaller the chance for the label to be a false positive.
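These confidence semantics can be turned into a tiny classifier for label rows; the helper name below is illustrative.

```python
def label_kind(confidence):
    """Classify an image-level label by its confidence value, per the
    semantics above: 1 = human-verified positive, 0 = human-verified
    negative, any fractional value = machine-generated."""
    c = float(confidence)
    if c == 1.0:
        return "positive"
    if c == 0.0:
        return "negative"
    return "machine"
```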

Class names

The class names in MID format can be converted to their short descriptions by looking into class-descriptions.csv:

/m/0pcq81q,Soccer player
/m/0pdnd2t,Bengal clockvine

Note the presence of characters like commas and quotes. The file follows standard CSV escaping rules. e.g.:

/m/02wvth,"Fiat 500 ""topolino"""
/m/03gtp5,Lamb's quarters
/m/03hgsf0,"Lemon, lime and bitters"
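Python's csv module handles this standard escaping out of the box, so the doubled quotes in a quoted field decode back to a single quote:

```python
import csv
import io

# A raw line from class-descriptions.csv, with standard CSV escaping.
line = '/m/02wvth,"Fiat 500 ""topolino"""'

# csv.reader undoes the quoting: the doubled "" becomes a single ".
mid, description = next(csv.reader(io.StringIO(line)))
```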

Image information

The image information file contains image URLs, their Open Images IDs, rotation information, titles, authors, and license information:

"","David","28 Nov 2010 Our new house."

Each image has a unique 64-bit ID assigned. In the CSV files they appear as zero-padded hex integers, such as 000060e3121c7305.
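Converting between the integer ID and its zero-padded hex form is a one-liner in Python; the helper names below are illustrative.

```python
def image_id_to_hex(image_id_int):
    """Render a 64-bit image ID as the zero-padded, 16-digit lowercase
    hex string used in the CSV files."""
    return f"{image_id_int:016x}"

def hex_to_image_id(hex_id):
    """Parse a CSV-style hex ID back to its integer form."""
    return int(hex_id, 16)
```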

The data is as it appears on the destination websites.

Hierarchy for 600 boxable classes

View the set of boxable classes as a hierarchy here or download it as a JSON file:

Hierarchy Visualizer


  1. "Extreme clicking for efficient object annotation", Papadopoulos et al., ICCV 2017.

  2. "We don't need no bounding-boxes: Training object class detectors using only human verification", Papadopoulos et al., CVPR 2016.

  3. "Large-scale interactive object segmentation with human annotators", Benenson et al., CVPR 2019.

  4. "Natural Vocabulary Emerges from Free-Form Annotations", Pont-Tuset et al., arXiv 2019.

  5. "From colouring-in to pointillism: revisiting semantic segmentation supervision", Benenson et al., arXiv 2022.