inference.py

# serve: deploys the model as a local REST API server.
# build_docker: packages a REST API endpoint serving the model as a Docker image.
# predict: uses the model to generate a prediction for a local CSV or JSON file.
# Note that predict only supports DataFrame input.
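#
# These three commands mirror MLflow's model-deployment helpers (an assumption;
# this file does not name the serving framework). A minimal sketch of how they
# are usually invoked from the shell, with <model_uri> left as a placeholder:
#
#   mlflow models serve -m <model_uri> -p 5000
#   mlflow models build-docker -m <model_uri> -n my-model-image
#   mlflow models predict -m <model_uri> -i input.csv -t csv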
import pickle

import requests

# Load the pickled preprocessing object ("Std.pkl" in this project).
with open("Std.pkl", "rb") as f:
    std = pickle.load(f)

# Build the payload: a single record of raw feature values, in the order the
# model expects. Note that std is loaded above but never applied here.
inference_request = {
    "dataframe_records": [[23, "F", "HIGH", "HIGH", 25.355]]
}

# POST the record to the locally served model and print the raw response body.
endpoint = "http://localhost:5000/predict"
response = requests.post(endpoint, json=inference_request)
print(response.text)
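
As noted above, std is loaded but never used. If Std.pkl holds a scikit-learn StandardScaler fitted on the two numeric columns (an assumption; the pickle's contents are not shown here), a minimal sketch of scaling those features before building the request could look like this:

import pickle

import requests

with open("Std.pkl", "rb") as f:
    std = pickle.load(f)  # assumed: a StandardScaler fitted on the numeric columns

# Hypothetical raw record: two numeric features and three categorical ones.
age, sex, bp, chol, na_to_k = 23, "F", "HIGH", "HIGH", 25.355

# transform expects a 2-D array; cast back to plain floats so the payload
# stays JSON-serializable (numpy scalars are not).
scaled = std.transform([[age, na_to_k]])[0]
scaled_age, scaled_na_to_k = float(scaled[0]), float(scaled[1])

inference_request = {
    "dataframe_records": [[scaled_age, sex, bp, chol, scaled_na_to_k]]
}
response = requests.post("http://localhost:5000/predict", json=inference_request)
print(response.text)

Whether scaling belongs client-side depends on how the served model was trained; if the preprocessing is baked into the model's pipeline, the raw values in the snippet above are what the endpoint expects.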