
DeadTrees

PyTorch Lightning · Config: Hydra · FastAPI · Streamlit


Description

Map dead trees from ortho photos. A U-Net (semantic segmentation model) is trained on an ortho photo collection of Luxembourg (year: 2019). This repository contains the preprocessing pipeline, training scripts, models, and a Docker-based demo app (backend: FastAPI, frontend: Streamlit).

Fig 1: Streamlit UI for interactive prediction of dead trees in ortho photos.
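To try the demo app locally, here is a minimal sketch assuming the repository ships a Docker Compose file for the two services (the compose file, service names, and ports are assumptions, not confirmed by this README):

```bash
# Build and start the demo stack (FastAPI backend + Streamlit frontend).
# Assumes a docker-compose.yml at the repo root; adjust to the actual setup.
docker compose up --build

# Default ports of the underlying servers, if unchanged:
#   FastAPI (uvicorn): http://localhost:8000/docs
#   Streamlit:         http://localhost:8501
```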

How to run

```bash
# clone project
git clone https://github.com/cwerner/deadtrees
cd deadtrees

# [OPTIONAL] create a virtual environment (using venv, pyenv, etc.) and activate it.
# An easy way to get a base system configured is to use micromamba (a faster
# alternative to Anaconda) and the fastchan channel to install the notoriously
# finicky PyTorch base dependencies and CUDA setup.
wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba

# init shell
./bin/micromamba shell init -s bash -p ~/micromamba
source ~/.bashrc

micromamba create -p deadtrees python=3.9 -c conda-forge
micromamba activate deadtrees
micromamba install pytorch torchvision albumentations -c fastchan -c conda-forge

# install basic requirements:
pip install -e .

# [OPTIONAL] install extra requirements for training:
pip install -e ".[train]"

# [OPTIONAL] install extra requirements to preprocess the raw data
# (instead of reading preprocessed data from S3):
pip install -e ".[preprocess]"

# [ALTERNATIVE] install all subpackages:
pip install -e ".[all]"
```

Download the dataset from S3 (the output of the `createdataset` DVC stage):

```bash
dvc pull createdataset
```
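If you want to confirm the pull worked, `dvc status` reports whether the workspace is in sync with the tracked outputs; the data path below is taken from the `.env` example in the next step and is an assumption about the repo layout:

```bash
# Check that DVC-tracked outputs are in sync
dvc status

# Peek at the downloaded training data (path assumed from the .env example)
ls data/dataset/train | head
```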

Specify the location of the training dataset on your system by creating a file named `.env` with the following content:

```bash
export TRAIN_DATASET_PATH="/path_to_my_repos/deadtrees/data/dataset/train"
```
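The training scripts presumably pick this up via dotenv; to double-check the path from your current shell as well:

```bash
# Load the variable into the current shell and verify it points at real data
source .env
echo "$TRAIN_DATASET_PATH"
ls "$TRAIN_DATASET_PATH" | head
```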

Train the model with the default configuration (you can adjust the training config on the command line or by editing the Hydra YAML files in `conf`):

```bash
python scripts/train.py
```
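Hydra accepts `key=value` overrides on the command line and can sweep with `--multirun`; the specific keys below (`trainer.max_epochs`, `model.lr`) are illustrative guesses, so check the YAML files in `conf` for the real names:

```bash
# Override single config values (Hydra syntax; keys are assumed examples)
python scripts/train.py trainer.max_epochs=10

# Sweep over several learning rates with Hydra's multirun mode
python scripts/train.py --multirun model.lr=1e-3,1e-4
```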

