
PoC_NER

This is a first PoC on DagsHub; it is about named entity recognition (NER).
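Named entity recognition tags spans of text with entity labels such as person, organization, or location. As a toy illustration of the task only (the lexicon, names, and function below are hypothetical — the project's actual model lives under `src/models`), a dictionary-based tagger might look like:

```python
# Toy dictionary-based NER tagger: finds known entity surface forms in text.
# Purely illustrative; the repo trains a statistical model instead.
ENTITY_LEXICON = {
    "Barcelona": "LOC",
    "Acme Corp": "ORG",
    "Ada": "PER",
}

def tag_entities(text):
    """Return (surface form, label) pairs for lexicon entries found in text,
    ordered by their position of first occurrence."""
    hits = [(text.find(s), s, label) for s, label in ENTITY_LEXICON.items() if s in text]
    return [(s, label) for _, s, label in sorted(hits)]

print(tag_entities("Ada works at Acme Corp in Barcelona."))
# → [('Ada', 'PER'), ('Acme Corp', 'ORG'), ('Barcelona', 'LOC')]
```

A real NER model generalizes beyond a fixed lexicon by learning from labeled examples, which is what the training pipeline in this repo is for.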

Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make virtualenv` to create a Python virtual environment. Skip if using conda or another env manager.
    1. Run `source env/bin/activate` to activate the virtualenv.
  4. Run `make requirements` to install the required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc commit raw_data.dvc`.
  7. Edit the code files to your heart's desire.
  8. Process your data, then train and evaluate your model with `dvc repro eval.dvc` or `make reproduce`.
  9. When you're happy with the result, commit the files (including the `.dvc` files) to Git.
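The steps above can be sketched as a single shell session. This assumes a Unix shell with `make`, `dvc`, and Python available, and that the target names match the template's Makefile:

```shell
# Sketch of the setup-and-reproduce workflow described above.
git clone <repo-url> && cd PoC_NER
make dirs                       # create the missing directories
make virtualenv                 # optional: create a virtualenv under env/
source env/bin/activate
make requirements               # install the required Python packages
cp /path/to/raw_data data/raw/  # place the raw data
dvc commit raw_data.dvc         # snapshot the raw data into the DVC cache
dvc repro eval.dvc              # run the full pipeline (or: make reproduce)
```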

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── eval.dvc           <- The end of the data pipeline - evaluates the trained model on the test dataset.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── process_data.dvc   <- Process the raw data and prepare it for training.
├── raw_data.dvc       <- Keeps the raw data versioned.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures        <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt    <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt <- Relevant metrics from training the model.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
├── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
└── train.dvc          <- Train a model on the processed data.

Project based on the cookiecutter data science project template. #cookiecutterdatascience
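For context on `reports/metrics.txt` in the tree above: the evaluation stage typically ends by dumping its scores to that file so DVC can track them as metrics. A minimal sketch, assuming a simple `name: value` text format (the function name and metric values here are hypothetical, not taken from `src/`):

```python
import os

def write_metrics(metrics, path="reports/metrics.txt"):
    """Write one 'name: value' line per metric so DVC can track the file.
    Hypothetical helper; the repo's evaluation code may differ."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as fh:
        for name, value in metrics.items():
            fh.write(f"{name}: {value:.4f}\n")

write_metrics({"precision": 0.91, "recall": 0.88})
```

Keeping metrics in a small plain-text (or JSON) file is what lets `dvc metrics show` compare runs across commits.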
