Sudoku Rating Modelling

Aims

We want to model sudoku solver rating and difficulty. We start with classic sudoku, with the intention of extending it to different sudoku variants.

Requirements

To follow this README you will need Python with Pipenv, DVC and Guild.AI installed. The deployment section additionally requires Kind, kubectl and Helm.

Getting started

Set up the Python environment and install the dependencies:

pipenv install
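
To work inside that environment, you can either activate it or prefix commands with pipenv run. This is standard Pipenv usage rather than anything specific to this repository:

pipenv shell   # or: pipenv run <command>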

Pull the data from the DVC remote storage (DagsHub):

dvc pull

If you want to push data to DagsHub, you need to set up authentication:

dvc remote modify origin --local auth basic
dvc remote modify origin --local user <DagsHub-user-name>
dvc remote modify origin --local password <Token>
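
With authentication configured, pushing data follows the usual DVC workflow. For example (the data path below is hypothetical, for illustration only, and is not a file tracked by this repository):

dvc add data/raw/sudoku-puzzles.csv   # hypothetical file, for illustration only
dvc push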

Training the linear model

The linear model training code lives in models/hello-world. It can be run with Guild.AI to track the experiment:

guild run train

If you want to override hyperparameters (e.g. alpha), you can specify them as command-line arguments:

guild run train alpha=0.1
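
For reference, Guild resolves the train operation and its flags from the project's Guild configuration. A minimal guild.yml for an operation like this could look roughly as follows; this is only a sketch, and the actual file in models/hello-world may differ:

# sketch only: a "train" operation running train.py with an "alpha" flag
train:
  description: Train the linear rating model
  main: train
  flags:
    alpha: 0.1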

For more details on hyperparameter optimisation with Guild.AI, see the Guild.AI documentation.

Experiment tracking

You can view experiments by running the following command (it will open a browser window):

guild view

You can pull runs performed on other machines from S3:

guild pull REMOTE_NAME

You can push runs performed on your machine to the remote, so they are available to your collaborators:

guild push REMOTE_NAME

Guild.AI remote configuration

To push to and pull from the remote storage (AWS S3), you need to configure Guild with an S3 remote. An example configuration is as follows:

remotes:
  sudoku-s3:
    type: s3
    bucket: sudoku-rating-guildai
    root: runs
    region: eu-west-2

Adjust the configuration to reflect your environment. For more details, see the Guild.AI docs.
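
Guild reads remote definitions from your user configuration (typically ~/.guild/config.yml). With the example above, the remote name is sudoku-s3, so the push and pull commands shown earlier become:

guild push sudoku-s3
guild pull sudoku-s3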

Deployment

Setting up Seldon Core locally (optional)

Note: for other ways of installing Seldon Core, see the installation documentation.

First, install Kind and create a cluster:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
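
Once the cluster is created, you can confirm it is reachable. By default Kind names the cluster kind, so the kubeconfig context is kind-kind:

kubectl cluster-info --context kind-kind   # context name assumes the default cluster name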

We also need to install Ambassador to be used as an ingress controller:

kubectl apply -f https://github.com/datawire/ambassador-operator/releases/latest/download/ambassador-operator-crds.yaml

kubectl apply -n ambassador -f https://github.com/datawire/ambassador-operator/releases/latest/download/ambassador-operator-kind.yaml
kubectl wait --timeout=180s -n ambassador --for=condition=deployed ambassadorinstallations/ambassador

We can now install Seldon Core:

kubectl create namespace seldon-system

helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --set usageMetrics.enabled=true \
    --set ambassador.enabled=true \
    --namespace seldon-system

kubectl get pods -n seldon-system

kubectl port-forward -n ambassador svc/ambassador 8080:80

Model deployment

First, we need to upload the model artifact models/hello-world/linearregression.joblib to a compatible remote storage (e.g. Google Cloud Storage or S3). The filename in the remote storage must be model.joblib.

Next, we need to replace the modelUri in models/hello-world/seldon/model.yaml with the path to the directory containing the model artifact. The file in the repository already points to a public bucket containing a pre-trained model that you can use.
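
For orientation, a SeldonDeployment serving a scikit-learn artifact this way has roughly the following shape. This is a sketch rather than the repository's actual model.yaml: the names follow the URLs used below, and the bucket path is a placeholder.

# sketch only: names follow the URLs below, modelUri is a placeholder
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sudoku-model
  namespace: seldon
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: SKLEARN_SERVER
      modelUri: gs://<your-bucket>/hello-world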

Finally, we can deploy the model by running this command in the models/hello-world directory:

kubectl apply -f seldon/model.yaml
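
You can check that the deployment has rolled out before calling it (assuming the model lands in the seldon namespace, as the URLs below suggest):

kubectl get seldondeployments -n seldon   # namespace assumed from the URLs below
kubectl get pods -n seldon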

The Swagger UI should now be available at http://localhost/seldon/seldon/sudoku-model/api/v1.0/doc/

An inference request can be made as follows:

curl -X POST http://localhost/seldon/seldon/sudoku-model/api/v1.0/predictions \
    -H 'Content-Type: application/json' \
    -d '{ "data": { "ndarray": [[0,0,0,5,0,0,0,1,0,6,0,0,0,0,9,0,0,2,0,0,0,3,0,0,4,0,0,5,0,0,0,0,0,2,0,7,2,0,0,9,0,0,0,4,1,0,0,4,0,0,3,8,0,0,9,0,0,0,0,1,0,0,5,0,0,0,7,0,0,0,0,0,3,1,0,0,6,5,0,9,0]] } }'

Server monitoring

To add service monitoring with Prometheus and Grafana (for example, request and error rates):

helm install seldon-core-analytics seldon-core-analytics \
   --repo https://storage.googleapis.com/seldon-charts \
   --namespace seldon-system

Set up port-forwarding to the Grafana service as follows:

kubectl port-forward svc/seldon-core-analytics-grafana 3000:80 -n seldon-system

The Grafana dashboard will be available at http://localhost:3000/dashboard/db/prediction-analytics.
