Register
Login
Resources
Docs Blog Datasets Glossary Case Studies Tutorials & Webinars
Product
Data Engine LLMs Platform Enterprise
Pricing Explore
Connect to our Discord channel

evaluate.py 1.1 KB

You have to be logged in to leave a comment. Sign In
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
  1. from src.utils.file import read_dvc_params
  2. dvc_params = read_dvc_params(__file__)
  3. pipeline_params = dvc_params['evaluate']
  4. from src.scripts.dataset import load_train_val
  5. train_set, val_set = load_train_val(pipeline_params['data'])
  6. inference_params = dvc_params['inference']
  7. from src.scripts.config import CustomConfig
  8. config = CustomConfig(inference_params['configs'])
  9. config.display()
  10. from mrcnn.model import MaskRCNN
  11. model = MaskRCNN(
  12. mode='inference',
  13. model_dir= pipeline_params['evaluate_dir'],
  14. config= config
  15. )
  16. model.load_weights(pipeline_params['model-weight'], by_name=True)
  17. from src.scripts.evaluate import evaluate_model
  18. from src.utils.benchmark import bench
  19. evaluation_result = {}
  20. for name, test_set in [('train', train_set), ('validation', val_set)]:
  21. benchmark_result = bench(
  22. f'Inference on {name} set',
  23. evaluate_model,
  24. val_set, model, config
  25. )
  26. evaluation_result[f'{name}_set'] = {
  27. 'inference_time' : benchmark_result['time'],
  28. 'mAP' : benchmark_result['result']
  29. }
  30. output_params = pipeline_params['output']
  31. from src.utils.output import write_file
  32. print(evaluation_result)
  33. write_file(output_params['metric'], evaluation_result, 'json')
Tip!

Press p or to see the previous file or, n or to see the next file

Comments

Loading...