AgroEcoAnalytics

Analysis of agricultural production, economic, and climate data for fruits and vegetables in Mexico.

Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make virtualenv` to create a Python virtual environment. Skip if using conda or another env manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
  4. Run `make requirements` to install the required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc add data/raw`.
  7. Once the raw data is in place, run `make dataset` to generate the final dataset (the full command sequence is sketched after this list).
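Taken together, a typical first-time setup looks like the sketch below. The repo URL is a placeholder, and the `env/` virtualenv path is the default created by `make virtualenv`:

```bash
git clone <repo-url>        # placeholder: substitute this repository's URL
cd AgroEcoAnalytics
make dirs                   # create the missing directory structure
make virtualenv             # optional: skip if using conda or another env manager
source env/bin/activate
make requirements           # install the required Python packages
# copy the raw data files into data/raw, then:
dvc add data/raw            # save the raw data to the DVC cache
make dataset                # generate the final dataset
```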

TO-DO for next project stage:

  1. Process your data, then train and evaluate your model using `dvc repro` or `make reproduce` (a consolidated sketch follows this list).
  2. To install the pre-commit hooks, run `make pre-commit-install`.
  3. To set up the data validation tests, run `make setup-data-validation`.
  4. To run the data validation tests, run `make run-data-validation`.
  5. When you're happy with the result, commit files (including the `.dvc` files) to git.
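A consolidated sketch of those steps, assuming the make targets above exist as named. The files passed to `git add` (`dvc.lock`, `data/raw.dvc`) are examples taken from the project tree below, not an exhaustive list:

```bash
dvc repro                   # or: make reproduce (process data, train, evaluate)
make pre-commit-install     # install the pre-commit hooks
make setup-data-validation  # set up the data validation tests
make run-data-validation    # run the data validation tests
# When the results look right, commit code and DVC pointer files to git:
git add dvc.lock data/raw.dvc
git commit -m "Reproduce pipeline and validate data"
```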

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   ├── raw.dvc        <- DVC file that tracks the raw data
│   └── raw            <- The original, immutable data dump
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures        <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt    <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt    <- Relevant metrics from training the model.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes project pip installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── great_expectations  <- Folder containing data integrity check files
│   │   ├── make_dataset.py     <- Script to merge all raw data into final dataset
│   │   └── data_validation.py  <- Script to run data integrity checks
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
├── .pre-commit-config.yaml  <- pre-commit hooks file with selected hooks for the project.
├── dvc.lock           <- The version definition of each dependency, stage, and output from the 
│                         data pipeline.
└── dvc.yaml           <- Defines the data pipeline stages, dependencies, and outputs.

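For orientation, `dvc.yaml` stages can be created with `dvc stage add`, which appends the stage definition to that file. The stage names, dependencies, and outputs below are hypothetical, pieced together from the tree above; the authoritative pipeline lives in this repo's `dvc.yaml` and `dvc.lock`:

```bash
# Hypothetical stage definitions; the real ones are in dvc.yaml / dvc.lock.
dvc stage add -n make_dataset \
  -d src/data/make_dataset.py -d data/raw \
  -o data/processed \
  python src/data/make_dataset.py

dvc stage add -n train \
  -d src/models/train_model.py -d data/processed \
  -o models -M reports/training_metrics.txt \
  python src/models/train_model.py

dvc stage add -n evaluate \
  -d src/models/predict_model.py -d models \
  -M reports/metrics.txt \
  python src/models/predict_model.py
```

Once stages are defined, `dvc repro` re-runs only the stages whose dependencies have changed.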
Project based on the cookiecutter data science project template. #cookiecutterdatascience


To create a project like this, just go to https://dagshub.com/repo/create and select the Cookiecutter DVC project template.

Made with 🐶 by DAGsHub.
