In this project, we will create a simple stock prediction model and use it to predict Yahoo BSD stock. Once the model has been created, we will monitor it with an ML-monitoring tool called Evidently: the initial model is loaded and tested on a weekly basis to generate a data report.
Steps:
Evidently is an ML-monitoring tool that can be used to monitor our stock prediction model. It provides a variety of metrics for evaluating model performance, such as data drift, model quality, and test reports. It also provides a variety of visualizations that can help us understand the performance of our model in more detail.
To monitor our stock prediction model using Evidently, we will follow these steps:
By following these steps, we can create a simple stock prediction model and monitor it using Evidently to ensure that it is performing well over time.
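To make the monitoring idea concrete, here is a minimal stdlib-only sketch of what a data drift check conceptually does. This is an illustrative heuristic with an assumed threshold, not Evidently's actual method (Evidently applies proper statistical tests under the hood):

```python
from statistics import mean, stdev

def mean_shift_drift(reference, current, threshold=2.0):
    """Flag drift when the current mean moves more than `threshold`
    reference standard deviations away from the reference mean.
    Illustrative heuristic only; assumes the reference data has
    nonzero spread."""
    ref_mean = mean(reference)
    ref_std = stdev(reference)
    shift = abs(mean(current) - ref_mean) / ref_std
    return shift > threshold

# Reference week: prices hovering around 100
reference_prices = [99.5, 100.2, 100.8, 99.9, 100.1, 100.4, 99.7]
# Current week: prices jumped to ~120, so drift should be flagged
current_prices = [119.8, 120.5, 121.0, 120.2, 119.9, 120.7, 120.3]

print(mean_shift_drift(reference_prices, reference_prices))  # False
print(mean_shift_drift(reference_prices, current_prices))    # True
```

In the real project, this comparison is delegated to Evidently, which produces a full report rather than a single boolean.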
This repository uses the following directory structure:
```
├── README.md
├── Data_reports/
├── model/                # sample model
│   ├── conda.yml
│   ├── MLmodel
│   ├── model.pkl
│   └── python_env.yaml
├── generate_report.py
├── handler.py
├── config.ini
├── requirements.txt
└── data_report.ipynb
```
The `README.md` file is the main documentation file for the repository. It should contain information about the project, such as its purpose, how to use it, and how to contribute to it.
The `Data_reports/` directory contains the Jupyter notebook reports generated when the models are run weekly. The files are sorted by date and time, which makes it easy to find a specific report.
The `model/` directory contains a sample regression model. The code contains a model-creation function that loads the model to MLflow. But if you'd like to skip all that and use a previous model, try this.
The `generate_report.py` file contains the core logic for the project. It has functions to generate a model, launch a Streamlit app that interactively collects user credentials and a model URI, create various kinds of reports, and commit them to your repository.
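Since the generated reports are filed by date and time, a helper along these lines could build each notebook's path. This is a hypothetical sketch; the actual naming scheme used in `generate_report.py` may differ:

```python
from datetime import datetime
from pathlib import Path

def report_path(base_dir="Data_reports", when=None):
    """Build a timestamped report path such as
    Data_reports/data_report_2024-01-15_10-30-00.ipynb.
    Hypothetical helper; the real project's naming may differ."""
    when = when or datetime.now()
    stamp = when.strftime("%Y-%m-%d_%H-%M-%S")
    return Path(base_dir) / f"data_report_{stamp}.ipynb"

print(report_path(when=datetime(2024, 1, 15, 10, 30, 0)))
```

Using a sortable `YYYY-MM-DD_HH-MM-SS` stamp means an alphabetical listing of `Data_reports/` is also chronological.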
The `handler.py` file contains logic to load an MLflow experiment and a function to load data.
The `config.ini` file is a configuration file that contains your user and model information.
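The exact keys depend on the code, but a `config.ini` for this project might look something like the following. The section and field names here are illustrative assumptions, not the actual schema:

```ini
; illustrative example -- check config.ini in the repo for the real keys
[user]
username = your-dagshub-username
token = your-access-token

[model]
model_uri = runs:/<run_id>/model
```

Keep the access token out of version control if you fill this file in with real credentials.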
The `data_report.ipynb` file is a template Jupyter notebook used to create the weekly reports.
To replicate the project, follow these steps:
1. Clone the repository:

```
git clone https://dagshub.com/Nikitha-Narendra/ML-Monitoring.mlflow
```

2. Create a virtual environment and activate it:

```
python3 -m venv model_report
.\model_report\Scripts\activate
```

or, with conda:

```
conda create -n model_report python=3.11
conda activate model_report
```

3. Install the dependencies:

```
pip install -r requirements.txt
```
To run the demo:

a. In the terminal, go to the location of your cloned repository:

```
cd {location of directory}
```

b. Start the Streamlit app:

```
streamlit run generate_report.py
```

c. This will open a Streamlit server in your browser.

d. Fill out the form in the web app and click Submit.
A Jupyter notebook named with the current date and time will be logged in the DagsHub repository under the `Data_reports` folder.
Note: If you do not have a pretrained model on hand, leave the field blank and a default model will be loaded for you.