#647 Feature/sg 573 Integrate new EMA decay schedules

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:feature/SG-573-Integrate-EMA
import dataclasses

import torch

from super_gradients.common.environment.env_variables import env_variables
from super_gradients.common.environment.argparse_utils import pop_local_rank

__all__ = ["device_config"]


def _get_assigned_rank() -> int:
    """Get the rank assigned by the DDP launcher. If not a DDP subprocess, return -1."""
    if env_variables.LOCAL_RANK != -1:
        return env_variables.LOCAL_RANK
    else:
        return pop_local_rank()


@dataclasses.dataclass
class DeviceConfig:
    device: str = "cuda" if torch.cuda.is_available() else "cpu"
    multi_gpu: str = None
    assigned_rank: int = dataclasses.field(default=_get_assigned_rank(), init=False)


# Singleton holding the device information
device_config = DeviceConfig()
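
For reference, a minimal sketch of how this singleton might be consumed from training code. Only device_config itself comes from the module above; the import path and the call sites below are assumptions for illustration, not part of this diff.

# Hypothetical usage sketch -- the import path is assumed, not shown in this diff.
import torch
from super_gradients.common.environment.device_utils import device_config  # assumed module path

# Move a model to the globally configured device ("cuda" if available, else "cpu").
model = torch.nn.Linear(128, 10).to(device_config.device)

# When launched as a DDP subprocess, assigned_rank holds the local rank assigned by the launcher (-1 otherwise).
if device_config.assigned_rank >= 0:
    torch.cuda.set_device(device_config.assigned_rank)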