
#643 PPYolo-E

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:feature/SG-344-PP-Yolo-E-Training-Replicate-Recipe
# ViT ImageNet1K fine-tuning from ImageNet21K classification training:
# This example trains with batch_size = 32 * 8 GPUs, total 256.
# Training time on 8 x GeForce RTX A5000 is 52min / epoch.
# ViT Large: 85.64 (final averaged model)
#
# Log and tensorboard at s3://deci-pretrained-models/vit_large_cutmix_randaug_v2_lr=0.03/
# Instructions:
# 0. Make sure that the data is stored in dataset_params.dataset_dir or add "dataset_params.data_dir=<PATH-TO-DATASET>" at the end of the command below (feel free to check the README)
# 1. Move to the project root (where you will find the README and the src folder)
# 2. Run the command:
#    python src/super_gradients/examples/train_from_recipe_example/train_from_recipe.py --config-name=imagenet_vit_large

defaults:
  - imagenet_vit_base
  - _self_

dataset_params:
  train_dataloader_params:
    batch_size: 32

training_hyperparams:
  initial_lr: 0.06
  average_best_models: True

architecture: vit_large

experiment_name: vit_large_imagenet1k

multi_gpu: DDP
num_gpus: 8

ckpt_root_dir:

# THE FOLLOWING PARAMS ARE DIRECTLY USED BY HYDRA
hydra:
  run:
    # Set the output directory (i.e. where the .hydra folder that logs all the input params will be generated)
    dir: ${hydra_output_dir:${ckpt_root_dir}, ${experiment_name}}