#320 Feature/sg 143 document commands for all recipes

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:feature/SG-143-document-commands-for-all-recipes
24 changed files with 396 additions and 85 deletions
  1. README.md (+213 / -1)
  2. src/super_gradients/recipes/checkpoint_params/default_checkpoint_params.yaml (+1 / -0)
  3. src/super_gradients/recipes/cifar10_resnet.yaml (+5 / -6)
  4. src/super_gradients/recipes/cityscapes_ddrnet.yaml (+9 / -4)
  5. src/super_gradients/recipes/cityscapes_regseg48.yaml (+6 / -2)
  6. src/super_gradients/recipes/cityscapes_stdc_seg50.yaml (+9 / -4)
  7. src/super_gradients/recipes/cityscapes_stdc_seg75.yaml (+9 / -4)
  8. src/super_gradients/recipes/coco2017_ssd_lite_mobilenet_v2.yaml (+8 / -14)
  9. src/super_gradients/recipes/coco2017_yolox.yaml (+17 / -6)
  10. src/super_gradients/recipes/coco_segmentation_shelfnet_lw.yaml (+10 / -1)
  11. src/super_gradients/recipes/imagenet_efficientnet.yaml (+5 / -4)
  12. src/super_gradients/recipes/imagenet_mobilenetv2.yaml (+6 / -2)
  13. src/super_gradients/recipes/imagenet_mobilenetv3_base.yaml (+1 / -0)
  14. src/super_gradients/recipes/imagenet_mobilenetv3_large.yaml (+21 / -0)
  15. src/super_gradients/recipes/imagenet_mobilenetv3_small.yaml (+21 / -0)
  16. src/super_gradients/recipes/imagenet_regnetY.yaml (+10 / -10)
  17. src/super_gradients/recipes/imagenet_repvgg.yaml (+7 / -6)
  18. src/super_gradients/recipes/imagenet_resnet50.yaml (+4 / -4)
  19. src/super_gradients/recipes/imagenet_resnet50_kd.yaml (+5 / -4)
  20. src/super_gradients/recipes/imagenet_vit_base.yaml (+6 / -6)
  21. src/super_gradients/recipes/imagenet_vit_large.yaml (+6 / -5)
  22. src/super_gradients/recipes/training_hyperparams/cifar10_resnet_train_params.yaml (+7 / -1)
  23. src/super_gradients/recipes/training_hyperparams/default_train_params.yaml (+5 / -1)
  24. src/super_gradients/training/utils/utils.py (+5 / -0)
@@ -107,6 +107,7 @@
   - [Pretrained Object Detection PyTorch Checkpoints](#pretrained-object-detection-pytorch-checkpoints)
   - [Pretrained Semantic Segmentation PyTorch Checkpoints](#pretrained-semantic-segmentation-pytorch-checkpoints)
 - [Implemented Model Architectures](#implemented-model-architectures)
+- [Training Recipes](#Training-Recipes)
 - [Contributing](#contributing)
 - [Citation](#citation)
 - [Community](#community)
@@ -449,7 +450,218 @@ Devices[https://arxiv.org/pdf/1807.11164](https://arxiv.org/pdf/1807.11164)
 - [STDC](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/training/models/segmentation_models/stdc.py) - Rethinking BiSeNet For Real-time Semantic Segmentation [https://arxiv.org/pdf/2104.13188](https://arxiv.org/pdf/2104.13188)
 
 </details>
-  
+
+## Training Recipes
+
+We defined recipes to ensure that anyone can reproduce our results in the simplest way possible.
+
+
+**Setup**
+
+To run recipes you first need to clone the super-gradients repository:
+```
+git clone https://github.com/Deci-AI/super-gradients
+```
+
+You then need to move to the root of the cloned project (where you will find "requirements.txt" and "setup.py") and install super-gradients:
+```
+pip install -e .
+```
+
+Finally, append super-gradients to the python path (replace "YOUR-LOCAL-PATH" with the path to the cloned repo):
+```
+export PYTHONPATH=$PYTHONPATH:<YOUR-LOCAL-PATH>/super-gradients/
+```
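+
+As a quick sanity check (assuming the steps above completed without errors), you can verify that Python resolves super-gradients from your clone:
+```
+python -c "import super_gradients; print(super_gradients.__file__)"
+```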
+
+
+**How to run a recipe**
+
+The recipes are defined in .yaml format, and we use the hydra library to let you easily customize their parameters.
+The basic syntax is as follows:
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=<CONFIG-NAME> dataset_params.data_dir=<PATH-TO-DATASET>
+```
+But in most cases you will want to train on multiple GPUs using this syntax:
+```
+python -m torch.distributed.launch --nproc_per_node=<N-NODES> src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=<CONFIG-NAME> dataset_params.data_dir=<PATH-TO-DATASET>
+```
+*Note: this script needs to be launched from the root folder of super-gradients.*
+*Note: if you stored your dataset in the path specified by the recipe, you can drop "dataset_params.data_dir=<PATH-TO-DATASET>".*
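+
+Hydra lets you override any other recipe parameter from the command line in the same way. The exact keys depend on the recipe, so check its .yaml file before relying on them; for illustration (assuming the cifar10 recipe, whose training params define max_epochs):
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cifar10_resnet training_hyperparams.max_epochs=100 +experiment_name=my_cifar10_run dataset_params.data_dir=<PATH-TO-DATASET>
+```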
+
+**Explore our recipes**
+
+You can find all of our recipes [here](https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/recipes).
+You will find information about the performance of a recipe as well as the command to execute it in the header of its config file.
+
+*Example: [Training of YoloX Small on COCO 2017](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/coco2017_yolox.yaml), using 8 GPUs*
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_s dataset_params.data_dir=/home/coco2017
+```
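+
+Several recipes also expose a top-level resume flag (wired as "training_hyperparams.resume: ${resume}" in their .yaml files). Assuming the recipe you use defines it, an interrupted run can be relaunched with the same command plus an override, for example:
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cifar10_resnet +experiment_name=cifar10 resume=True
+```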
+
+
+
+**List of commands**
+
+All the commands to launch the recipes described [here](https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/recipes) are listed below.
+Please make sure to add "dataset_params.data_dir=<PATH-TO-DATASET>" if you did not store the dataset in the path specified by the recipe (as shown in the example above).
+
+**- Classification**
+<details>
+<summary>Cifar10</summary>
+
+resnet:
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cifar10_resnet +experiment_name=cifar10
+```
+
+</details>
+<details>
+<summary>ImageNet</summary>
+
+efficientnet
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_efficientnet
+```
+mobilenetv2
+```
+python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv2
+```
+mobilenetv3 small
+```
+python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv3_small
+```
+mobilenetv3 large
+```
+python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv3_large
+```
+regnetY200
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY200
+```
+regnetY400
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY400
+```
+regnetY600
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY600
+```
+regnetY800
+```
+python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY800
+```
+repvgg
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_repvgg
+```
+resnet50
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_resnet50
+```
+resnet50_kd
+```
+python -m torch.distributed.launch --nproc_per_node=8  src/super_gradients/examples/train_from_kd_recipe_example/train_from_kd_recipe.py --config-name=imagenet_resnet50_kd
+```
+vit_base
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_base
+```
+vit_large
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_large
+```
+</details>
+
+**- Detection**
+
+<details>
+<summary>Coco2017</summary>
+
+ssd_lite_mobilenet_v2
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_ssd_lite_mobilenet_v2
+```
+yolox_n
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_n
+```
+yolox_t
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_t
+```
+yolox_s
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_s
+```
+yolox_m
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_m
+```
+yolox_l
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_l
+```
+yolox_x
+```
+python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_x
+```
+
+</details>
+
+
+**- Segmentation**
+
+<details>
+<summary>Cityscapes</summary>
+
+DDRNet23
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_ddrnet
+```
+DDRNet23-Slim
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_ddrnet architecture=ddrnet_23_slim
+```
+RegSeg48
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_regseg48
+```
+STDC1-Seg50
+```
+python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg50
+```
+STDC2-Seg50
+```
+python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg50 architecture=stdc2_seg
+```
+STDC1-Seg75
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg75
+```
+STDC2-Seg75
+```
+python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg75 external_checkpoint_path=<stdc2-backbone-pretrained-path> architecture=stdc2_seg
+```
+
+</details>
+
+
 ## Documentation
 
 Check SuperGradients [Docs](https://deci-ai.github.io/super-gradients/welcome.html) for full documentation, user guide, and examples.
@@ -1,3 +1,4 @@
+checkpoint_path:
 load_checkpoint: False # whether to load checkpoint
 load_backbone: False # whether to load only backbone part of checkpoint
 external_checkpoint_path: # checkpoint path that is not located in super_gradients/checkpoints
@@ -1,10 +1,10 @@
 # Cifar10 Classification Training:
 # Reaches ~94.9 Accuracy after 250 Epochs
 # Instructions:
-# running from the command line, set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/
-# Then:
-#   python train_from_recipe_example/train_from_recipe.py --config-name=cifar10_resnet
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cifar10_resnet +experiment_name=cifar10
 
 defaults:
   - training_hyperparams: cifar10_resnet_train_params
@@ -20,11 +20,10 @@ data_loader_num_workers: 8
 
 resume: False
 training_hyperparams:
-  resume: $(resume}
+  resume: ${resume}
 
 
 model_checkpoints_location: local
 ckpt_root_dir:
 
 architecture: resnet18_cifar
-
@@ -3,10 +3,15 @@
 #      "Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes"
 #      https://arxiv.org/abs/2104.13188
 #
-#  Usage DDRNet23:
-#      python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe.py --config-name=cityscapes_ddrnet checkpoint_params.external_checkpoint_path=<ddrnet23-backbone-pretrained-path>
-#  Usage DDRNet23-Slim:
-#      python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe.py --config-name=cityscapes_ddrnet checkpoint_params.external_checkpoint_path=<ddrnet23-backbone-pretrained-path> architecture=ddrnet_23_slim
+
+
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#      DDRNet23:       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_ddrnet
+#      DDRNet23-Slim:  python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_ddrnet architecture=ddrnet_23_slim
+# Note: add "checkpoint_params.external_checkpoint_path=<ddrnet23-backbone-pretrained-path>" to use pretrained backbone
 #
 #  Validation mIoU - Cityscapes, training time:
 #      DDRNet23:        input-size: [1024, 2048]     mIoU: 80.26     4 X RTX A5000, 12 H
@@ -1,8 +1,12 @@
 #  RegSeg segmentation training example with Cityscapes dataset.
 #  Reproduction of paper: Rethink Dilated Convolution for Real-time Semantic Segmentation.
 #
-#  Usage RegSeg48:
-#      python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=regseg48_cityscapes
+
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_regseg48
 #
 #
 #  Validation mIoU - Cityscapes, training time:
@@ -1,10 +1,15 @@
 #  STDC segmentation training example with Cityscapes dataset.
 #  Reproduction and refinement of paper: Rethinking BiSeNet For Real-time Semantic Segmentation.
 #
-#  Usage STDC1-Seg50:
-#      python -m torch.distributed.launch --nproc_per_node=2 train_from_recipe.py --config-name=cityscapes_stdc_seg50 checkpoint_params.external_checkpoint_path=<stdc1-backbone-pretrained-path>
-#  Usage STDC2-Seg50:
-#      python -m torch.distributed.launch --nproc_per_node=2 train_from_recipe.py --config-name=cityscapes_stdc_seg50 checkpoint_params.external_checkpoint_path=<stdc1-backbone-pretrained-path> architecture=stdc2_seg
+
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       STDC1-Seg50: python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg50
+#       STDC2-Seg50: python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg50 architecture=stdc2_seg
+# Note: add "checkpoint_params.external_checkpoint_path=<stdc1-backbone-pretrained-path>" to use pretrained backbone
+#
 #
 #
 #  Validation mIoU - Cityscapes, training time:
@@ -1,10 +1,15 @@
 #  STDC segmentation training example with Cityscapes dataset.
 #  Reproduction and refinement of paper: Rethinking BiSeNet For Real-time Semantic Segmentation.
 #
-#  Usage STDC1-Seg75:
-#      python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe.py --config-name=cityscapes_stdc_seg75 external_checkpoint_path=<stdc1-backbone-pretrained-path>
-#  Usage STDC2-Seg75:
-#      python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe.py --config-name=cityscapes_stdc_seg75 external_checkpoint_path=<stdc2-backbone-pretrained-path> architecture=stdc2_seg
+
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       STDC1-Seg75: python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg75
+#       STDC2-Seg75: python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=cityscapes_stdc_seg75 architecture=stdc2_seg
+# Note: add "external_checkpoint_path=<stdc1-backbone-pretrained-path>" to use pretrained backbone
+#
 #
 #
 #  Validation mIoU - Cityscapes, training time:
@@ -4,21 +4,21 @@
 # (trained with stride_16_plus_big)
 # Hardware: 8 NVIDIA RTX 3090
 # Training time: ±17 hours
+#
+
 
 # Instructions:
-# Set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-# export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/src:"YOUR_LOCAL_PATH"/super_gradients/
-#
-# Run with:
-# python3 -m torch.distributed.launch --nproc_per_node=8 train_from_recipe.py --config-name=coco2017_ssd_lite_mobilenet_v2.yaml
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_ssd_lite_mobilenet_v2
 
 
 # NOTE:
 # Anchors will be selected based on validation resolution and anchors_name
 # To switch between anchors, set anchors_name to something else defined in the anchors section
 # e.g.
-# python3 -m torch.distributed.launch --nproc_per_node=4 train_from_recipe_example/train_from_recipe.py \
-# --config-name=coco_ssd_lite_mobilenet_v2.yaml anchors_name=stride_16_plus
+# python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_ssd_lite_mobilenet_v2 anchors_name=stride_16_plus
 
 
 defaults:
@@ -35,12 +35,6 @@ model_checkpoints_location: local
 experiment_suffix: res${dataset_params.train_image_size}
 experiment_name: ${architecture}_coco_${experiment_suffix}
 
-sg_model:
-  _target_: super_gradients.SgModel
-  experiment_name: ${experiment_name}
-  model_checkpoints_location: ${model_checkpoints_location}
-  multi_gpu: DDP
-
 anchors_resolution: ${dataset_params.val_image_size}x${dataset_params.val_image_size}
 anchors_name: stride_16_plus_big
 dboxes: ${anchors.${anchors_resolution}.${anchors_name}}
@@ -59,4 +53,4 @@ training_hyperparams:
     alpha: 1.0
     dboxes: ${dboxes}
 
-
+multi_gpu: DDP
@@ -2,14 +2,25 @@
 # YoloX trained in 640x640
 # Checkpoints + tensorboards: https://deci-pretrained-models.s3.amazonaws.com/yolox_coco/
 # Recipe runs with batch size = 16 X 8 gpus = 128.
+
+
 # Instructions:
-# python -m torch.distributed.launch super_gradients.train_from_recipe --nproc_per_node=8 --config-name=coco2017_yolox architecture=yolox_<YOLOX_SIZE> dataset_params.data_dir=<YOUR_COCO_LOCAL_PATH> ckpt_root_dir=<CHEKPOINT_DIRECTORY>
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command you want:
+#         yolox_n: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_n
+#         yolox_t: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_t
+#         yolox_s: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_s
+#         yolox_m: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_m
+#         yolox_l: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_l
+#         yolox_x: python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco2017_yolox architecture=yolox_x
+#
 # Training times and accuracies (mAP@0.5-0.95 (COCO API, confidence 0.001, IoU threshold 0.6, test on 640x640 images):
-# yolox_n: 1d 16h 33m 9s on 8 NVIDIA GeForce RTX 3090, mAP: 26.77
-# yolox_tiny: 20h 43m 37s on 8 NVIDIA RTX A5000, mAP: 37.18
-# yolox_s: 1d 17h 40m 30s on 8 NVIDIA RTX A5000, mAP: 40.47
-# yolox_m: 1d 22h 23m 43s on 8 NVIDIA GeForce RTX 3090, mAP: 46.40
-# yolox_l: 2d 14h 11m 41s on 8 NVIDIA GeForce RTX 3090, mAP: 49.25
+#         yolox_n: 1d 16h 33m 9s  on 8 NVIDIA GeForce RTX 3090, mAP: 26.77
+#         yolox_t: 20h 43m 37s    on 8 NVIDIA RTX A5000, mAP: 37.18
+#         yolox_s: 1d 17h 40m 30s on 8 NVIDIA RTX A5000, mAP: 40.47
+#         yolox_m: 1d 22h 23m 43s on 8 NVIDIA GeForce RTX 3090, mAP: 46.40
+#         yolox_l: 2d 14h 11m 41s on 8 NVIDIA GeForce RTX 3090, mAP: 49.25
 
 defaults:
   - training_hyperparams: coco2017_yolox_train_params
@@ -3,7 +3,16 @@
 # Trained using 4 X 2080 Ti using DDP- takes ~ 2d 7h with batch size of 8 and batch accumulate of 3 (i.e effective batch
 # size is 4*8*3 = 96)
 # Logs and tensorboards: s3://deci-pretrained-models/shelfnet34_coco_segmentation_tensorboard/
-# python train_from_recipe_example/train_from_recipe.py --config-name=coco_segmentation_shelfnet_lw
+
+
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=coco_segmentation_shelfnet_lw --model_checkpoints_location=<checkpoint-location>
+
+
+# /!\ THIS RECIPE IS NOT SUPPORTED AT THE MOMENT /!\
 
 defaults:
   - training_hyperparams: coco_segmentation_shelfnet_lw_train_params
@@ -3,10 +3,11 @@
 # Epoch time on 4 X 3090Ti distributed training is ~ 16:25 minutes
 # Logs and tensorboards: s3://deci-pretrained-models/efficientnet_b0/
 # Instructions:
-# Set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/:"YOUR_LOCAL_PATH"/super_gradients/src/
-# Then:
-# #   python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe.py --config-name=imagenet_efficientnet
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_efficientnet
+
 defaults:
   - training_hyperparams: imagenet_efficientnet_train_params
   - dataset_params: imagenet_dataset_params
@@ -2,8 +2,12 @@
 # Top1-Accuracy:  73.08
 # Learning rate and batch size parameters, using 2 GPUs with DDP:
 #     initial_lr: 0.032    batch-size: 256 * 2gpus = 512
-# Usage:
-#     python -m torch.distributed.launch --nproc_per_node=2 --master_port=1234 examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv2
+#
+# Instructions:
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv2
 
 defaults:
   - training_hyperparams: imagenet_mobilenetv2_train_params
@@ -2,6 +2,7 @@
 
 defaults:
   - training_hyperparams: imagenet_mobilenetv3_train_params
+  - dataset_params: imagenet_dataset_params
   - checkpoint_params: default_checkpoint_params
 
 dataset_params:
src/super_gradients/recipes/imagenet_mobilenetv3_large.yaml (new file):

# MobileNetV3 Large Imagenet classification training:
# TODO: Add metrics
#
# Instructions:
#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
#   1. Move to the project root (where you will find the ReadMe and src folder)
#   2. Run the command:
#       python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv3_large

defaults:
  - imagenet_mobilenetv3_base
  - arch_params: mobilenet_v3_large_arch_params

arch_params:
  num_classes: 1000
  dropout: 0.2

experiment_name: mobileNetv3_large_training

architecture: mobilenet_v3_large
src/super_gradients/recipes/imagenet_mobilenetv3_small.yaml (new file):

# MobileNetV3 Small Imagenet classification training:
# TODO: Add metrics
#
# Instructions:
#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
#   1. Move to the project root (where you will find the ReadMe and src folder)
#   2. Run the command:
#       python -m torch.distributed.launch --nproc_per_node=2 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_mobilenetv3_small

defaults:
  - imagenet_mobilenetv3_base
  - arch_params: mobilenet_v3_small_arch_params

arch_params:
  num_classes: 1000
  dropout: 0.2

experiment_name: mobileNetv3_small_training

architecture: mobilenet_v3_small
@@ -6,21 +6,21 @@
 #  19 days for RegnetY600, 76.18
 #  20 days for RegnetY800, 77.07
 #  NOTE: Training should probably be lower as resources were shared among the above runs.
-
+#
 #  Logs and tensorboards at:
 # https://deci-pretrained-models.s3.amazonaws.com/RegnetY800/
 # https://deci-pretrained-models.s3.amazonaws.com/RegnetY600/
 # https://deci-pretrained-models.s3.amazonaws.com/RegnetY400/
 # https://deci-pretrained-models.s3.amazonaws.com/RegnetY200/
-
+#
 # Instructions:
-# Set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/:"YOUR_LOCAL_PATH"/super_gradients/src/
-# Then:
-#   python train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture: regnetY200 experiment_name: regnetY200_imagenet
-#   python train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture: regnetY400 experiment_name: regnetY400_imagenet
-#   python train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture: regnetY600 experiment_name: regnetY600_imagenet
-#   python train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture: regnetY800 experiment_name: regnetY800_imagenet
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#         regnetY200: python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY200
+#         regnetY400: python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY400
+#         regnetY600: python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY600
+#         regnetY800: python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_regnetY architecture=regnetY800
 
 defaults:
   - training_hyperparams: imagenet_regnetY_train_params
@@ -54,7 +54,6 @@ resume: False
 training_hyperparams:
   resume: ${resume}
 
-experiment_name: regnetY800_imagenet
 
 ckpt_root_dir:
 
@@ -62,3 +61,4 @@ multi_gpu: Off
 
 
 architecture: regnetY800
+experiment_name: ${architecture}
@@ -4,20 +4,21 @@
 #  Reach => 72.05 Top1 accuracy.
 #
 #  Log and tensorboard at s3://deci-pretrained-models/repvggg-a0-imagenet-tensorboard/
-
+#
 # Instructions:
-# Set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/
-# Then for 320x320 image size for training:
-#   python -m torch.distributed.launch --nproc_per_node=4 train_from_recipe_example/train_from_recipe.py --config-name=imagenet_repvgg
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_repvgg
 
 defaults:
   - training_hyperparams: imagenet_repvgg_train_params
   - dataset_params: imagenet_dataset_params
-  - arch_params: default_arch_params
+  - arch_params: repvgg_arch_params
   - checkpoint_params: default_checkpoint_params
 
 arch_params:
+  num_classes: 1000
   build_residual_branches: True
 
 dataset_interface:
@@ -6,10 +6,10 @@
 #  Log and tensorboard at s3://deci-pretrained-models/ResNet50_ImageNet/average_model.pth
 
 # Instructions:
-# running from the command line, set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/
-# Then:
-#   python train_from_recipe_example/train_from_recipe.py --config-name=imagenet_resnet50
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=4 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_resnet50
 
 
 defaults:
@@ -6,10 +6,11 @@
 #  Log and tensorboard at s3://deci-pretrained-models/KD_ResNet50_Beit_Base_ImageNet/average_model.pth
 
 # Instructions:
-# running from the command line, set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/:"YOUR_LOCAL_PATH"/super_gradients/src/
-# Then:
-#   python train_from_recipe_example/train_from_kd_recipe.py --config-name=imagenet_resnet50_kd
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=8  src/super_gradients/examples/train_from_kd_recipe_example/train_from_kd_recipe.py --config-name=imagenet_resnet50_kd
+
 
 defaults:
   - training_hyperparams: imagenet_resnet50_kd_train_params
@@ -6,11 +6,10 @@
 #  Log and tensorboard at s3://deci-pretrained-models/vit_base_imagenet1k/
 
 # Instructions:
-# running from the command line, set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/
-# Then:
-# for vit_base:
-#   python -m torch.distributed.launch --nproc_per_node=8 train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_base
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_base
 
 
 defaults:
@@ -49,4 +48,5 @@ training_hyperparams:
 
 experiment_name: vit_base_imagenet1k
 
-architecture: vit_base
+architecture: vit_base
+multi_gpu: DDP
@@ -6,10 +6,10 @@
 #  Log and tensorboard at s3://deci-pretrained-models/vit_large_cutmix_randaug_v2_lr=0.03/
 
 # Instructions:
-# running from the command line, set the PYTHONPATH environment variable: (Replace "YOUR_LOCAL_PATH" with the path to the downloaded repo):
-#   export PYTHONPATH="YOUR_LOCAL_PATH"/super_gradients/
-# Then:
-#   python -m torch.distributed.launch --nproc_per_node=8 train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_large
+#   0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check ReadMe)
+#   1. Move to the project root (where you will find the ReadMe and src folder)
+#   2. Run the command:
+#       python -m torch.distributed.launch --nproc_per_node=8 src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_large
 
 
 defaults:
@@ -25,4 +25,5 @@ training_hyperparams:
 
 architecture: vit_large
 
-experiment_name: vit_large_imagenet1k
+experiment_name: vit_large_imagenet1k
+multi_gpu: DDP
@@ -2,7 +2,13 @@ defaults:
   - default_train_params
 
 max_epochs: 250
-lr_updates: [100, 150, 200]
+
+lr_updates:
+  _target_: numpy.arange
+  start: 100
+  stop: 250
+  step: 50
+
 lr_decay_factor: 0.1
 lr_mode: step
 lr_warmup_epochs: 0
@@ -7,7 +7,11 @@ warmup_initial_lr: # Initial lr for linear_step. When none is given, initial_lr/
 step_lr_update_freq: # (float) update frequency in epoch units for computing lr_updates when lr_mode=`step`.
 cosine_final_lr_ratio: 0.01 # final learning rate ratio (only relevant when `lr_mode`='cosine')
 warmup_mode: linear_step # learning rate warmup scheme, currently only 'linear_step' is supported
-lr_updates: []
+
+lr_updates:
+  _target_: super_gradients.training.utils.utils.empty_list # This is a workaround to instantiate a list using _target_. If we would instantiate as "lr_updates: []",
+                                                            # we would get an error every time we would want to overwrite lr_updates with a numpy array.
+
 pre_prediction_callback: # callback modifying images and targets right before forward pass.
 
 optimizer: SGD # Optimization algorithm. One of ['Adam','SGD','RMSProp'] corresponding to the torch.optim optimizers
@@ -24,6 +24,11 @@ from super_gradients.common.abstractions.abstract_logger import get_logger
 logger = get_logger(__name__)
 
 
+def empty_list():
+    """Instantiate an empty list. This is a workaround to generate a list with a function call in hydra, instead of the "[]"."""
+    return list()
+
+
 def convert_to_tensor(array):
     """Converts numpy arrays and lists to Torch tensors before calculation losses
     :param array: torch.tensor / Numpy array / List
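
For context on the lr_updates / empty_list change above, the following is a minimal sketch (not part of this PR) of how hydra resolves such `_target_` nodes. It assumes hydra's standard `instantiate` utility and shows why the numpy.arange node in cifar10_resnet_train_params.yaml reproduces the original [100, 150, 200] schedule:

```python
# Minimal sketch, not part of this PR: how hydra resolves a `_target_` node.
from hydra.utils import instantiate
from omegaconf import OmegaConf

# Same node as in cifar10_resnet_train_params.yaml
cfg = OmegaConf.create(
    {"lr_updates": {"_target_": "numpy.arange", "start": 100, "stop": 250, "step": 50}}
)

lr_updates = instantiate(cfg.lr_updates)
print(lr_updates)  # -> [100 150 200], equivalent to the old literal list

# Per the comment in default_train_params.yaml, a literal `lr_updates: []` default
# errors out when overridden by such a numpy-producing node, hence the
# empty_list() _target_ added in utils.py.
```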