Level 1 - Setup
This level of the tutorial covers setting up our project.
This includes the following tasks:
- Creating an account and repo in DAGsHub.
- Cloning it to your local machine.
- Creating a virtual Python environment (venv) and installing the needed requirements.
If you are familiar with these steps, you can skip to the next level, where we run some experiments to find a good model.
Creating an account on DAGsHub is really easy. Just sign up.
Then, after logging in, create a new repo by clicking the plus sign and selecting Create Repository.
This opens a dialog, which should look somewhat familiar, where you can set the repository name, description, and a few other options.
Repo creation dialog
For this tutorial, fill in the name and description, and leave everything else in the default settings.
Done with repo creation. On to project initialization.
Setting up our project
Create a directory named dagshub_tutorial for the project somewhere on your computer.
Open a terminal and input the following:
cd path/to/folder/dagshub_tutorial
git init
Now, we will set the remote to our repo on DAGsHub. This can be done using the following command:
git remote add origin https://dagshub.com/<username>/<repo-name>.git
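To confirm the remote was registered correctly, you can list the configured remotes. You should see two lines, fetch and push, both pointing at your DAGsHub repo URL:

```shell
# From inside the project directory: list configured remotes.
# 'origin' should point at https://dagshub.com/<username>/<repo-name>.git
git remote -v
```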
Finally, let's create three folders, one for each of our main project components.
mkdir data models metrics
Creating a virtual Python environment
We assume you have a working Python 3 installation on your local system for the following explanations.
To ensure that you do, open a terminal and type in python3 --version.
Check that this command succeeds and that you get at least version 3.7 - if the version is lower or the command fails,
you should download the correct version for your operating system.
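For example, here is a quick one-liner that prints a human-readable verdict - a sketch that simply delegates the version comparison to Python itself:

```shell
# Print whether the installed Python is new enough (>= 3.7),
# and exit with a non-zero status if it is not
python3 -c 'import sys; ok = sys.version_info >= (3, 7); print("Python version OK" if ok else "Python too old"); sys.exit(0 if ok else 1)'
```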
Linux / Mac:

python3 -m venv .venv
echo .venv/ >> .gitignore
echo __pycache__/ >> .gitignore
source .venv/bin/activate

Windows:

python3 -m venv .venv
echo .venv/ >> .gitignore
echo __pycache__/ >> .gitignore
.venv\Scripts\activate.bat
The first command creates your virtual environment - a directory named .venv, located inside your project directory, where all the Python packages used by your project will be installed without affecting the rest of your computer.
The second and third commands make sure the virtual environment and __pycache__ directories are not tracked by Git.
The fourth command activates our virtual Python environment, which ensures that any Python packages we install don't contaminate our global Python installation.
The rest of this tutorial should be executed in the same shell session.
If you exit the shell session, or want to create another, make sure to activate the virtual environment in that shell session first.
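To verify that the environment is active in the current shell, you can check which interpreter is being used - on Linux/Mac it should resolve to a path inside .venv (a sketch; the Windows equivalent is `where python`):

```shell
# With the venv active, the interpreter should live inside .venv/
which python
# The activate script also sets the VIRTUAL_ENV variable
echo "$VIRTUAL_ENV"
```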
To install the requirements for the first part of this project, simply download this requirements.txt into your project folder.
Alternatively, you can create a file called requirements.txt and copy the following into it:
dagshub==0.0.2rc1
joblib==0.14.1
numpy==1.18.1
pandas==1.0.0
python-dateutil==2.8.1
pytz==2019.3
PyYAML==5.3
scikit-learn==0.22.1
scipy==1.4.1
six==1.14.0
sklearn==0.0
Now, to install them type:
pip install -r requirements.txt
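Once the install finishes, a quick way to confirm the key packages are importable is to try importing them - a sketch, with the module list an assumption based on the requirements above:

```python
# Report whether each key requirement can be imported in this environment
import importlib

for module in ["pandas", "numpy", "sklearn", "yaml", "joblib"]:
    try:
        importlib.import_module(module)
        print(f"{module}: OK")
    except ImportError:
        print(f"{module}: missing")
```

Any line that prints "missing" means the corresponding package failed to install.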
Downloading the data
We'll keep our data in a folder named, oddly enough, data.
It's also important to remember to add this folder to .gitignore -
we definitely don't want to accidentally commit large data files to Git.
The following commands should take care of everything:
mkdir -p data
echo /data/ >> .gitignore
wget https://dagshub-public.s3.us-east-2.amazonaws.com/tutorials/stackexchange/CrossValidated-Questions.csv -O data/CrossValidated-Questions.csv
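Once the download completes, you can sanity-check the file without any extra dependencies. This is a sketch using only the standard library; the path matches the wget command above:

```python
# Count the data rows (excluding the header) in a CSV file
import csv

def count_rows(path):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row
        return sum(1 for _ in reader)

# Example usage once the file exists:
# print(count_rows("data/CrossValidated-Questions.csv"))
```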
Committing progress to Git
Let's check the Git status of our project:
$ git status -s
?? .gitignore
?? requirements.txt
Now let's commit this to Git using the command line:
git add .
git commit -m "Initialized project"
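As an optional final check, the commit should now appear at the top of the log:

```shell
# Show the most recent commit; the message should be "Initialized project"
git log --oneline -1
```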
In the next level, we'll start the really interesting part - running experiments to try to optimize our classification model.