experiments.txt

Experiments
---------------------------------------

1) FashionMNIST standard classification
   paper:
      arch: 3 conv layers with 64, 128, 128 channels and 3x3 filters,
            followed by a fully connected layer.
      optim: SGD with initial learning rate of 0.05, decayed by 0.2 every
             10 epochs. momentum = 0.9, weight decay = 10^-4
      preprocessing: normalize with channel mean & std
      data: validation set is 5000 images sampled at random from the
            60000 images in the training set.
      training: 30 epochs, batch size unspecified.
      results: 0.924 +- 0.001
   reproduced:
      arch: same as paper
      optim: same as paper
      preprocessing: same as paper
      data: training set is the 60000 images from the FashionMNIST training set,
            test set is the 10000 images from the FashionMNIST test set
      training: same as paper, batch size = 100
      results: 0.923
   notebook: FM_classification.ipynb
   (a minimal PyTorch sketch of this setup follows below)
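
A minimal PyTorch sketch of the experiment 1 setup: the 3-conv-layer network and the SGD schedule above. The pooling placement, activation choice, flattened feature size, and the normalization statistics are assumptions; the log only fixes the conv widths, the fully connected head, and the optimizer hyperparameters.

import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR
from torchvision import transforms

# Normalize with channel mean & std; the values here are the commonly used
# FashionMNIST statistics (an assumption, the log does not list them).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.2860,), (0.3530,)),
])

class SmallCNN(nn.Module):
    """3 conv layers (64, 128, 128 channels, 3x3 filters) + a FC layer."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 28x28 input pooled three times -> 3x3 spatial map (assumption).
        self.classifier = nn.Linear(128 * 3 * 3, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
# SGD: lr 0.05, momentum 0.9, weight decay 1e-4, lr multiplied by 0.2
# every 10 epochs; trained for 30 epochs with batch size 100 (as in the log).
optimizer = SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=1e-4)
scheduler = StepLR(optimizer, step_size=10, gamma=0.2)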
2) FashionMNIST classification with DUQ model
   paper:
      arch: same as 1)
      optim: same as 1)
      preprocessing: same as 1)
      data: same as 1)
      training: same as 1)
      DUQ parameters: length scale (sigma) = 0.1,
             gradient penalty (lambda) = 0.05, Lipschitz constant not
             specified in the paper, found in the source code to be 1.
      results: lambda = 0: .924 +- 0.02
               lambda = .05: .924 +- 0.02
   reproduced:
      arch: same as paper
      optim: same as paper
      preprocessing: same as paper
      data: same as 1)
      training: same as paper
      DUQ parameters: same as paper
      results: lambda = 0: .926
               lambda = .05: .924
   notebook: FM_DUQ_classification
   (see the DUQ head sketch below)
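
A condensed sketch of a DUQ-style RBF head and the two-sided gradient penalty, plugged onto the feature extractor above, using the hyperparameters from the log (length scale sigma = 0.1, gradient penalty weight lambda = 0.05, Lipschitz target 1). The embedding size, initialization, and the centroid bookkeeping (normally an exponential moving average of per-class embeddings) are assumptions.

import torch
import torch.nn as nn

class DUQHead(nn.Module):
    """RBF head: per-class weight matrices map features to an embedding
    space; the class score is an RBF kernel to that class's centroid."""
    def __init__(self, feat_dim=128 * 3 * 3, emb_dim=256, num_classes=10, sigma=0.1):
        super().__init__()
        self.sigma = sigma
        self.W = nn.Parameter(torch.randn(num_classes, emb_dim, feat_dim) * 0.05)
        # Class centroids; in DUQ these are updated with an exponential
        # moving average of training-example embeddings, not by SGD.
        self.register_buffer("centroids", torch.randn(num_classes, emb_dim))

    def forward(self, features):
        emb = torch.einsum("cef,bf->bce", self.W, features)   # (B, C, E)
        dist = ((emb - self.centroids) ** 2).mean(-1)          # (B, C)
        return torch.exp(-dist / (2 * self.sigma ** 2))        # kernel values

def gradient_penalty(x, kernels, lam=0.05):
    """Two-sided penalty pushing the input-gradient norm of the summed
    kernel outputs towards 1 (the Lipschitz constant found in the code)."""
    grad = torch.autograd.grad(kernels.sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

Training then uses binary cross-entropy between the kernel outputs and one-hot labels, with the penalty added to the loss; x.requires_grad_(True) has to be set before the forward pass so the autograd.grad call in the penalty has an input gradient to compute.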
3) FashionMNIST OOD detection on MNIST
   paper:
      arch: same as 2)
      optim: same as 2)
      preprocessing: same as 2)
      data: same as 2)
      training: same as 2)
      DUQ parameters: same as 2),
             gradient penalty (lambda) = 0.05
      results: AUROC .955 +- 0.007
   reproduced:
      arch: same as paper
      optim: same as paper
      preprocessing: same as paper
      data: same as 1)
      training: same as paper
      DUQ parameters: same as paper,
             gradient penalty (lambda) = 0.05
      results: AUROC .972
   notebook: FM_ood_detection
   (see the OOD evaluation sketch below)
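
A sketch of the OOD evaluation in experiment 3, reusing the SmallCNN feature extractor and the DUQHead from the sketches above: the maximum kernel value is treated as a certainty score, and AUROC is computed with FashionMNIST test images as in-distribution and MNIST test images as out-of-distribution. Data loader construction is omitted, and the helper names are made up for illustration.

import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def certainty_scores(model, head, loader, device="cpu"):
    """Max RBF kernel value per image, used as the in-distribution score."""
    scores = []
    for x, _ in loader:
        feats = model.features(x.to(device)).flatten(1)
        kernels = head(feats)                      # (batch, num_classes)
        scores.append(kernels.max(dim=1).values.cpu())
    return torch.cat(scores)

def ood_auroc(model, head, in_loader, ood_loader):
    in_scores = certainty_scores(model, head, in_loader)    # FashionMNIST test
    ood_scores = certainty_scores(model, head, ood_loader)  # MNIST test
    # In-distribution labelled 1, OOD labelled 0: an AUROC near 1 means the
    # certainty score separates the two datasets well.
    labels = torch.cat([torch.ones_like(in_scores), torch.zeros_like(ood_scores)])
    scores = torch.cat([in_scores, ood_scores])
    return roc_auc_score(labels.numpy(), scores.numpy())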