sgd.py 1.0 KB

# Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the LICENSE file in
# the root directory of this source tree. An additional grant of patent rights
# can be found in the PATENTS file in the same directory.

import torch.optim

from . import FairseqOptimizer, register_optimizer


@register_optimizer('sgd')
class SGD(FairseqOptimizer):

    def __init__(self, args, params):
        super().__init__(args, params)
        self._optimizer = torch.optim.SGD(params, **self.optimizer_config)

    @property
    def optimizer_config(self):
        """
        Return a kwarg dictionary that will be used to override optimizer
        args stored in checkpoints. This allows us to load a checkpoint and
        resume training using a different set of optimizer args, e.g., with a
        different learning rate.
        """
        return {
            'lr': self.args.lr[0],
            'momentum': self.args.momentum,
            'weight_decay': self.args.weight_decay,
        }