NYU Depth Datasets V1 + V2 – V1: Labeled data ~4GB, raw data ~90GB. V2: Labeled data ~2.8GB, raw data ~428GB.
KITTI Dataset – Annotated depth maps data ~14GB; the entire dataset appears to be ~175GB.
DIML RGB+D Dataset – Indoor and outdoor scenes. No faces or people.
Papers with Code – Search results for the Monocular Depth Estimation task.
UI for a non-open-source depth-map-from-2D-image tool – Based on the older Monodepth model.
Connected Papers Graph for BTS Paper – Interactive link
Monodepth2 - not open source
BTS-PyTorch – Open-source PyTorch implementation of BTS, which has the best results on KITTI and NYU Depth V2.
Vid2Depth – Open-source implementation of "Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints".
Simple Colab Demo of BTS – A straightforward app that uses the above BTS implementation and lets users upload a photo and get a predicted depth map.
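A demo like the one above has to post-process the model's raw output before showing it, since depth predictions are float arrays (roughly 0–10 m for NYU-style indoor scenes, farther for KITTI). A minimal sketch of that display step, assuming the prediction arrives as a float NumPy array in meters (the `depth` array below is synthetic, standing in for real model output):

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Normalize a float depth map (meters) to 0-255 for display.
    The nearest point maps to 0, the farthest to 255."""
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min < 1e-8:  # flat depth map: avoid divide-by-zero
        return np.zeros_like(depth, dtype=np.uint8)
    scaled = (depth - d_min) / (d_max - d_min)
    return (scaled * 255).astype(np.uint8)

# Synthetic stand-in for a predicted depth map at NYU resolution (480x640)
depth = np.linspace(0.5, 10.0, 480 * 640).reshape(480, 640)
img = depth_to_uint8(depth)
print(img.dtype, img.min(), img.max())  # uint8 0 255
```

The resulting `uint8` array can be saved or shown directly (e.g. with PIL or matplotlib, optionally through a colormap) in a notebook cell.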