
Literature Review

Getting Started with Monocular Depth Estimation?


Ideas

  • Interesting to note that none of the datasets below include many people. I think this might become one of the challenges of this project. Link to issue for this idea

Datasets

  • NYU Depth Datasets V1 + V2 – V1: labeled data ~4GB, raw data ~90GB. V2: labeled data ~2.8GB, raw data ~428GB.

    • Indoor as well as outdoor scenery
  • KITTI Dataset – Annotated depth maps data ~14GB; the entire dataset appears to be ~175GB

    • Mainly scenery captured while driving
  • DIML RGB+D Dataset – Some inside and outside scenes. No faces or people.
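As a practical note on the KITTI entry above: its annotated depth maps ship as 16-bit PNGs where, per the devkit convention, depth in meters is the pixel value divided by 256 and a value of 0 marks pixels with no ground truth. A minimal decoding sketch (the array below is a synthetic stand-in for a loaded PNG):

```python
import numpy as np

# Stand-in for a KITTI depth PNG loaded as uint16.
# Assumed convention (KITTI depth devkit): depth_m = value / 256, 0 = no ground truth.
raw = np.array([[0,   2560,  5120],
                [256,    0, 25600]], dtype=np.uint16)

valid = raw > 0                          # mask out pixels without ground truth
depth_m = raw.astype(np.float32) / 256.0 # convert to meters

print(depth_m[valid])  # depths in meters for valid pixels only
```

Keeping the validity mask around matters, since evaluation metrics are only computed over ground-truth pixels.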

Papers

Implementations

  • Monodepth2 - not open source

  • BTS-PyTorch – Open-source PyTorch implementation of BTS, which has the best results on KITTI and NYU-depth-V2

  • Vid2Depth – Open-source implementation of "Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints"

  • Simple Colab Demo of BTS – A straightforward demo app that uses the above BTS implementation and lets users upload a photo and get back a predicted depth map
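The "best results on KITTI and NYU-depth-V2" claim for BTS refers to the standard monocular-depth metrics reported on those benchmarks. A minimal sketch of three of the common ones (the function name and toy arrays are illustrative, not from any of the repos above):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Common monocular-depth eval metrics (as reported on KITTI / NYU):
    absolute relative error, RMSE, and delta < 1.25 threshold accuracy."""
    gt = np.asarray(gt, dtype=float)
    pred = np.asarray(pred, dtype=float)
    abs_rel = np.mean(np.abs(gt - pred) / gt)          # mean |err| relative to ground truth
    rmse = np.sqrt(np.mean((gt - pred) ** 2))          # root mean squared error, in meters
    ratio = np.maximum(gt / pred, pred / gt)           # symmetric per-pixel ratio
    delta1 = np.mean(ratio < 1.25)                     # fraction of pixels within 25%
    return abs_rel, rmse, delta1

gt = np.array([1.0, 2.0, 4.0, 8.0])    # toy ground-truth depths (meters)
pred = np.array([1.1, 1.8, 4.4, 8.0])  # toy predictions

abs_rel, rmse, delta1 = depth_metrics(gt, pred)
print(abs_rel, rmse, delta1)
```

In real evaluations these are computed only over valid ground-truth pixels, and papers typically add delta < 1.25^2 and delta < 1.25^3 variants alongside delta1.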
