=====================================================================
Kaggle Competition: Jigsaw Unintended Bias in Toxicity Classification
=====================================================================

https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification

.. contents:: Table of Contents
   :depth: 2

Requirements
============

* Python >= 3.6
* PyTorch >= 0.4

Features
========

* Clear folder structure suitable for many deep learning projects.
* `.json` config file support for convenient parameter tuning.
* Checkpoint saving and resuming.
* Abstract base classes for faster development:

  * `BaseTrainer` handles checkpoint saving/resuming, training process logging, and more.
  * `BaseDataLoader` handles batch generation, data shuffling, and validation data splitting.
  * `BaseModel` provides a basic model summary.

Folder Structure
================

::

  cookiecutter-pytorch/
  │
  ├── <project name>/
  │   │
  │   ├── cli.py - command line interface
  │   ├── main.py - main script to start train/test
  │   │
  │   ├── base/ - abstract base classes
  │   │   ├── base_data_loader.py - abstract base class for data loaders
  │   │   ├── base_model.py - abstract base class for models
  │   │   └── base_trainer.py - abstract base class for trainers
  │   │
  │   ├── data_loader/ - anything about data loading goes here
  │   │   └── data_loaders.py
  │   │
  │   ├── model/ - models, losses, and metrics
  │   │   ├── loss.py
  │   │   ├── metric.py
  │   │   └── model.py
  │   │
  │   ├── trainer/ - trainers
  │   │   └── trainer.py
  │   │
  │   └── utils/
  │       ├── util.py
  │       ├── logger.py - class for train logging
  │       ├── visualization.py - class for tensorboardX visualization support
  │       └── ...
  │
  ├── data/ - default directory for storing input data
  ├── experiments/ - default directory for storing configuration files
  └── saved/ - default checkpoints folder
      └── runs/ - default logdir for tensorboardX

Usage
=====

.. code-block:: bash

    $ conda create --name <name> python=3.6
    $ conda activate <name>
    $ pip install -e .
    $ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

The code in this repo is an MNIST example of the template. You can run the tests and the example
project using:

.. code-block:: bash

    $ pytest tests
    $ <project name> train -c experiments/config.json

Config file format
------------------

Config files are in `.json` format:

.. code-block:: javascript

    {
        "name": "Mnist_LeNet",            // training session name
        "n_gpu": 1,                       // number of GPUs to use for training
        "arch": {
            "type": "MnistModel",         // name of model architecture to train
            "args": {}
        },
        "data_loader": {
            "type": "MnistDataLoader",    // selecting data loader
            "args": {
                "data_dir": "data/",          // dataset path
                "batch_size": 64,             // batch size
                "shuffle": true,              // shuffle training data before splitting
                "validation_split": 0.1,      // validation data ratio
                "num_workers": 2              // number of cpu processes used for data loading
            }
        },
        "optimizer": {
            "type": "Adam",
            "args": {
                "lr": 0.001,                  // learning rate
                "weight_decay": 0,            // (optional) weight decay
                "amsgrad": true
            }
        },
        "loss": "nll_loss",               // loss function
        "metrics": [
            "my_metric", "my_metric2"     // list of metrics to evaluate
        ],
        "lr_scheduler": {
            "type": "StepLR",             // learning rate scheduler
            "args": {
                "step_size": 50,
                "gamma": 0.1
            }
        },
        "trainer": {
            "epochs": 100,                // number of training epochs
            "save_dir": "saved/",         // checkpoints are saved in save_dir/name
            "save_freq": 1,               // save checkpoints every save_freq epochs
            "verbosity": 2,               // 0: quiet, 1: per epoch, 2: full
            "monitor": "min val_loss",    // mode and metric for performance monitoring; set 'off' to disable
            "early_stop": 10,             // number of epochs to wait before early stop; set 0 to disable
            "tensorboardX": true,         // enable tensorboardX visualization support
            "log_dir": "saved/runs"       // directory to save log files for visualization
        }
    }

Add additional configurations if you need them.

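For orientation, here is a minimal sketch of how the `type`/`args` convention can be consumed;
`get_instance` is a hypothetical helper for illustration, not necessarily this template's exact
code. Note that real config files must be valid JSON, i.e. without the `//` comments shown above.

.. code-block:: python

    import json

    def get_instance(module, section):
        """Build module.<type>(**args) from one config section."""
        return getattr(module, section['type'])(**section['args'])

    with open('experiments/config.json') as f:
        config = json.load(f)

    # Hypothetical usage, assuming a module defining MnistModel:
    # import model as module_arch
    # model = get_instance(module_arch, config['arch'])
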
Using config files
------------------

Modify the configurations in `.json` config files, then run:

.. code-block:: shell

    python train.py --config experiments/config.json

Resuming from checkpoints
-------------------------

You can resume from a previously saved checkpoint by:

.. code-block:: shell

    python train.py --resume path/to/checkpoint

Using Multiple GPUs
-------------------

You can enable multi-GPU training by setting the `n_gpu` argument in the config file to a number
larger than one. If configured to use fewer GPUs than are available, the first n devices will be
used by default. To run on specific GPUs, pass their indices via the CUDA environment variable:

.. code-block:: shell

    python train.py --device 2,3 -c experiments/config.json

This is equivalent to:

.. code-block:: shell

    CUDA_VISIBLE_DEVICES=2,3 python train.py -c experiments/config.json

Customization
=============

Data Loader
-----------

Writing your own data loader
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Inherit `BaseDataLoader`
^^^^^^^^^^^^^^^^^^^^^^^^

`BaseDataLoader` is a subclass of `torch.utils.data.DataLoader`; you can use either of them.

`BaseDataLoader` handles:

* Generating the next batch
* Data shuffling
* Generating a validation data loader by calling `BaseDataLoader.split_validation()` (see the
  sketch below)

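A minimal sketch of a custom loader, under two assumptions: `BaseDataLoader.__init__` accepts
`(dataset, batch_size, shuffle, validation_split, num_workers)`, mirroring the config keys shown
earlier, and `FashionMnistDataLoader` is a hypothetical name, not part of the template.

.. code-block:: python

    from torchvision import datasets, transforms
    from base import BaseDataLoader

    class FashionMnistDataLoader(BaseDataLoader):
        def __init__(self, data_dir, batch_size, shuffle=True,
                     validation_split=0.0, num_workers=1):
            trsfm = transforms.Compose([transforms.ToTensor()])
            dataset = datasets.FashionMNIST(data_dir, train=True,
                                            download=True, transform=trsfm)
            super().__init__(dataset, batch_size, shuffle,
                             validation_split, num_workers)

    # The validation loader is split off the same instance:
    loader = FashionMnistDataLoader('data/', batch_size=64, validation_split=0.1)
    valid_loader = loader.split_validation()
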
DataLoader Usage
~~~~~~~~~~~~~~~~

`BaseDataLoader` is an iterator. To iterate through batches:

.. code-block:: python

    for batch_idx, (x_batch, y_batch) in enumerate(data_loader):
        pass

Example
~~~~~~~

Please refer to `data_loader/data_loaders.py` for an MNIST data loading example.

Trainer
-------

Writing your own trainer
~~~~~~~~~~~~~~~~~~~~~~~~

Inherit `BaseTrainer`
^^^^^^^^^^^^^^^^^^^^^

`BaseTrainer` handles:

1. Training process logging
2. Checkpoint saving
3. Checkpoint resuming
4. Reconfigurable performance monitoring for saving the current best model, and early stopping:

   1. If config `monitor` is set to `max val_accuracy`, the trainer will save a checkpoint
      `model_best.pth` whenever the epoch's validation accuracy exceeds the current maximum.
   2. If config `early_stop` is set, training will be automatically terminated when model
      performance does not improve for the given number of epochs. This feature can be turned
      off by passing 0 to the `early_stop` option, or by deleting that line of the config.

Implementing abstract methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You need to implement `_train_epoch()` for your training process. If you need validation, you can
implement `_valid_epoch()` as in `trainer/trainer.py`; a minimal sketch follows.

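The sketch below is a hypothetical `_train_epoch()`; the attribute names (`self.model`,
`self.optimizer`, `self.data_loader`, `self.loss`) match the checkpoint contents shown later in
this README, but the template's exact internals may differ.

.. code-block:: python

    from base import BaseTrainer

    class MyTrainer(BaseTrainer):
        def _train_epoch(self, epoch):
            self.model.train()
            total_loss = 0.0
            for batch_idx, (data, target) in enumerate(self.data_loader):
                self.optimizer.zero_grad()
                output = self.model(data)
                loss = self.loss(output, target)
                loss.backward()
                self.optimizer.step()
                total_loss += loss.item()
            # BaseTrainer consumes the returned log dict for monitoring and logging.
            return {'loss': total_loss / len(self.data_loader)}
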
Example
~~~~~~~

Please refer to `trainer/trainer.py` for MNIST training.

Model
-----

Writing your own model
~~~~~~~~~~~~~~~~~~~~~~

Inherit `BaseModel`
^^^^^^^^^^^^^^^^^^^

`BaseModel`:

* Inherits from `torch.nn.Module`
* Overrides `__str__` so that printing the model also prints the number of trainable parameters.

Implementing abstract methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Implement the forward pass in the `forward()` method, as in the sketch below.

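A minimal, hypothetical model: only the `BaseModel` inheritance and `forward()` are what the
template requires; the architecture here is purely illustrative.

.. code-block:: python

    import torch.nn as nn
    import torch.nn.functional as F
    from base import BaseModel

    class TinyMnistModel(BaseModel):
        def __init__(self, num_classes=10):
            super().__init__()
            self.fc1 = nn.Linear(28 * 28, 128)
            self.fc2 = nn.Linear(128, num_classes)

        def forward(self, x):
            x = x.view(x.size(0), -1)  # flatten 28x28 images
            x = F.relu(self.fc1(x))
            return F.log_softmax(self.fc2(x), dim=1)  # pairs with nll_loss
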
Example
~~~~~~~

Please refer to `model/model.py` for a LeNet example.

Loss
----

Custom loss functions can be implemented in `model/loss.py`. Use one by setting the `"loss"` entry
in the config file to the function's name, as in the sketch below.

Metrics
~~~~~~~

Metric functions are located in `model/metric.py`.

You can monitor multiple metrics by providing a list in the configuration file, e.g.:

.. code-block:: javascript

    "metrics": ["my_metric", "my_metric2"]

Additional logging
------------------

If you have additional information to be logged, merge it with `log` in `_train_epoch()` of your
trainer class before returning, as shown below:

.. code-block:: python

    additional_log = {"gradient_norm": g, "sensitivity": s}
    log = {**log, **additional_log}
    return log

Testing
-------

You can test a trained model by running `test.py`, passing the path to the trained checkpoint with
the `--resume` argument.

Validation data
---------------

To split validation data from a data loader, call `BaseDataLoader.split_validation()`; it will
return a validation data loader with the number of samples determined by the ratio specified in
your config file.

**Note**: the `split_validation()` method will modify the original data loader.

**Note**: `split_validation()` will return `None` if `"validation_split"` is set to `0`.

Checkpoints
-----------

You can specify the name of the training session in config files:

.. code-block:: javascript

    "name": "MNIST_LeNet"

The checkpoints will be saved in `save_dir/name/timestamp/checkpoint_epoch_n`, with the timestamp
in mmdd_HHMMSS format.

A copy of the config file will be saved in the same folder.

**Note**: checkpoints contain:

.. code-block:: python

    {
        'arch': arch,
        'epoch': epoch,
        'logger': self.train_logger,
        'state_dict': self.model.state_dict(),
        'optimizer': self.optimizer.state_dict(),
        'monitor_best': self.mnt_best,
        'config': self.config
    }

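Given that layout, a hedged sketch of inspecting a checkpoint manually; the path is a hypothetical
example of the `save_dir/name/timestamp/checkpoint_epoch_n` pattern, and `--resume` does all of
this (and more) for you:

.. code-block:: python

    import torch

    checkpoint = torch.load('saved/MNIST_LeNet/0101_120000/checkpoint_epoch_3.pth',
                            map_location='cpu')
    print(checkpoint['epoch'], checkpoint['monitor_best'])
    model.load_state_dict(checkpoint['state_dict'])      # assumes a constructed model
    optimizer.load_state_dict(checkpoint['optimizer'])   # assumes a constructed optimizer
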
TensorboardX Visualization
--------------------------

This template supports `tensorboardX <https://github.com/lanpa/tensorboardX>`_ visualization.

* **TensorboardX Usage**

  1. **Install**

     Follow the installation guide at `<https://github.com/lanpa/tensorboardX>`_.

  2. **Run training**

     Set the `tensorboardX` option in the config file to `true`.

  3. **Open the tensorboard server**

     Run `tensorboard --logdir saved/runs/` at the project root; the server will be available at
     `http://localhost:6006`.

By default, the values of the loss and of the metrics specified in the config file, input images,
and histograms of model parameters will be logged. If you need more visualizations, use
`add_scalar('tag', data)`, `add_image('tag', image)`, etc. in the `trainer._train_epoch` method.
The `add_something()` methods in this template are basically wrappers for those of the
`tensorboardX.SummaryWriter` module.

**Note**: You don't have to specify the current step, since the `WriterTensorboardX` class defined
in `utils/visualization.py` will track it for you.

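For instance, a hedged snippet to drop into `_train_epoch()`, assuming the trainer exposes the
wrapped writer as `self.writer` (the attribute name is an assumption):

.. code-block:: python

    # Inside _train_epoch(), after computing `loss` for a batch of `data`:
    from torchvision.utils import make_grid

    self.writer.add_scalar('batch_loss', loss.item())              # step tracked automatically
    self.writer.add_image('input', make_grid(data.cpu(), nrow=8))  # log a grid of input images
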
Acknowledgments
===============

This template is inspired by:

1. `<https://github.com/victoresque/pytorch-template>`_
2. `<https://github.com/daemonslayer/cookiecutter-pytorch>`_