
Converting COCO Annotation to DagsHub Format

MS COCO (Microsoft Common Objects in Context) is a large-scale image dataset containing 328,000 images of everyday objects and humans. The dataset contains annotations you can use to train machine learning models to recognize, label, and describe objects.

COCO Annotation Structure

COCO defines several annotation formats, but they all share the same basic structure:

```json
{
  "info": {
    "year": 2022,
    "version": 1.3,
    "description": "Vehicles dataset",
    "contributor": "John Smith",
    "url": "http://vehicles-dataset.org",
    "date_created": "2022/02/01"
  },
  "licenses": [
    {
      "id": 1,
      "name": "Free license",
      "url": "http://vehicles-dataset.org"
    }
  ],
  "images": [
    {
      "id": 1342334,
      "width": 640,
      "height": 640,
      "file_name": "ford-t.jpg",
      "license": 1,
      "date_captured": "2022-02-01 15:13"
    }
  ]
}
```
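As a quick sanity check, this basic structure can be built and round-tripped with Python's standard `json` module. The dict literal below mirrors a few of the fields shown above; it is a minimal sketch, not the full schema:

```python
import json

# Minimal COCO-style header mirroring the fields shown above
coco = {
    "info": {"year": 2022, "version": 1.3, "description": "Vehicles dataset"},
    "licenses": [{"id": 1, "name": "Free license"}],
    "images": [{"id": 1342334, "width": 640, "height": 640, "file_name": "ford-t.jpg"}],
}

# Round-trip through JSON to confirm the structure serializes cleanly
parsed = json.loads(json.dumps(coco))
print(sorted(parsed.keys()))  # ['images', 'info', 'licenses']
```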

For the use case presented in this repo, the Object Detection annotations were chosen. Object Detection adds two extra sections, `annotations` and `categories`:

```json
{
  "annotations": [
    {
      "segmentation": {
        "counts": [34, 55, 10, 71],
        "size": [240, 480]
      },
      "area": 600.4,
      "iscrowd": 1,
      "image_id": 122214,
      "bbox": [473.05, 395.45, 38.65, 28.92],
      "category_id": 15,
      "id": 934
    }
  ],
  "categories": [
    {
      "id": 1,
      "name": "car",
      "supercategory": "road vehicles",
      "isthing": 1,
      "color": [1, 0, 0]
    }
  ]
}
```
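Since every annotation references a category by `id`, a common first step when working with such a file is to join the two sections on `category_id`. The sketch below does this with plain Python; the function name is illustrative, not part of the repo:

```python
import json

def summarize_annotations(path):
    """Map each annotation's category_id to its category name and bbox."""
    with open(path) as f:
        coco = json.load(f)
    # Build a lookup from category id to human-readable name
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    return [
        {
            "image_id": a["image_id"],
            "category": id_to_name[a["category_id"]],
            "bbox": a["bbox"],  # [x, y, width, height] in pixels
        }
        for a in coco["annotations"]
    ]
```

Called on a COCO annotation file such as `instances_val2017.json`, this returns one flat record per object, which is often a convenient shape for conversion.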
  

COCO Dataset

For this tutorial, the "2017 Val Images" and the "2017 Train/Val annotations" (specifically the "instances_val2017" file) were chosen. NOTE: this choice was made solely because of the dataset's small size. The tutorial works with datasets of all sizes.

Implementation

This folder has the following structure:

```
COCO
|   Convert_COCO_Annotations_to_DagsHub_Format.ipynb.ipynb
|   README.md
|
\---data
    +---annotations
    |       instances_val2017.json
    |
    \---images
```

The images are all stored under the images folder. The annotations file is used in the Colab notebook and need not be present in the repo, but it is left here for future reference.

To convert COCO annotations to DagsHub format:

  • Create a new DagsHub Repo
  • Push the data folder from the current repo into your new repository
  • Follow the instructions in the Colab notebook
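The core of the conversion step is regrouping COCO's flat `annotations` list by image, since per-image records are easier to pair with the files in the images folder. A minimal sketch of that regrouping (the function name and output shape are illustrative, not the notebook's exact code):

```python
import json
from collections import defaultdict

def annotations_by_image(coco_path):
    """Group COCO bounding boxes by image file name."""
    with open(coco_path) as f:
        coco = json.load(f)
    # Map image id -> file name so boxes can be keyed by file
    id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}
    grouped = defaultdict(list)
    for ann in coco["annotations"]:
        grouped[id_to_file[ann["image_id"]]].append(ann["bbox"])
    return dict(grouped)
```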

VOILA!

DEMO

We tested the transfer annotations too. Check out this Transfer-Annotation-COCO repository!
