This repo is about classifying German news and fake news. The objective is to train a binary classifier capable of sorting news articles into fake and non-fake. This repo provides a stack for

All of these functionalities are provided as templates so you can kick off your own project or, even better, contribute to this repo ;-)
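To make the objective concrete, here is a minimal, illustrative sketch of such a binary fake-news classifier (TF-IDF features plus logistic regression). This is not the pipeline implemented in src/models; the example texts and labels are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy German examples; the real training data comes from the
# datasets described below, not from these hand-written strings.
texts = [
    "Die Regierung hat heute ein neues Gesetz verabschiedet.",
    "Forscher veröffentlichen Studie zur Luftqualität in Berlin.",
    "Aliens übernehmen heimlich das Kanzleramt, berichten Insider.",
    "Geheime Weltregierung verbietet ab morgen das Wetter.",
]
labels = [0, 0, 1, 1]  # 0 = non-fake, 1 = fake

# Bag-of-words TF-IDF vectors fed into a linear binary classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

pred = clf.predict(["Neue Studie zur Luftqualität erschienen."])
```

The real stack replaces the toy lists with the datasets below and tracks the resulting model with DVC.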
A few words regarding the data we used here to train our model(s):
For training a model we currently use two data sources. The first stems from Kaggle (https://www.kaggle.com/kerneler/starter-fake-news-dataset-german-9cc110a2-9/data). This dataset is a collection of news (non-fake) and fake news, where the fake news is drawn from satirical online outlets such as "Die Tagespresse" or "Der Postillion". Sarcastic news articles are therefore treated as fake news in this dataset (see the EDA on this dataset in notebooks/01-eda-german-fake-news.ipynb).

The second source of fake news is a dataset from Inna Vogel and Peter Jiang (2019)*. Every fake statement in the text was verified claim-by-claim against authoritative sources (e.g. local police authorities, scientific studies, the police press office, etc.). Most of the news falls into the time interval from December 2015 to March 2018.
*Fake News Detection with the New German Dataset "GermanFakeNC". In Digital Libraries for Open Knowledge - 23rd International Conference on Theory and Practice of Digital Libraries, TPDL 2019, Oslo, Norway, September 9-12, 2019, Proceedings (pp. 288–295).
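Since the two sources arrive in different formats, they have to be merged into one training table with a single binary label. The sketch below is illustrative only: the column names ("text", "fake") and the tiny in-memory frames are assumptions, not the actual schema of the Kaggle or GermanFakeNC files in data/raw:

```python
import pandas as pd

# Hypothetical stand-ins for the two sources; in the repo these would
# be read from the files under data/raw instead.
kaggle_df = pd.DataFrame(
    {"text": ["Echte Nachricht.", "Satirischer Artikel."], "fake": [0, 1]}
)
german_fake_nc_df = pd.DataFrame(
    {"text": ["Claim-by-claim widerlegte Behauptung."], "fake": [1]}
)

# Stack both sources into one table with a shared binary label column.
combined = pd.concat([kaggle_df, german_fake_nc_df], ignore_index=True)
```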
Run

```
anaconda-project prepare
```

in order to download and install the required packages into the conda env defined in anaconda-project.yml.

NOTE: Be sure to merge the latest from "upstream" before making a pull request!
For reproduction consider the following:
```
.
├── LICENSE
├── .azureml             <- Store Azure specific configurations
├── README.md            <- The top-level README for developers using this project.
├── anaconda-project.yml
├── bin
│   └── models           <- Trained and serialized models (model.pkl)
├── data
│   ├── external         <- Data from third party sources.
│   ├── interim          <- Intermediate data that has been transformed.
│   ├── processed        <- The final, canonical data sets for modeling.
│   └── raw              <- The original, immutable data dump.
├── dvc.lock
├── dvc.yaml
├── envs
│   ├── fake_news_env
│   └── inference_env
├── .env                 <- Env file to store env specific and/or private variables
├── metrics.csv
├── notebooks            <- Jupyter notebooks. Naming convention is a number
├── params.yml
├── references
├── reports              <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures          <- Generated graphics and figures to be used in reporting
└── src                  <- Source code for use in this project.
    ├── __init__.py      <- Makes src a Python module
    ├── data             <- Scripts to download or generate data
    ├── features         <- Scripts to turn raw data into features for modeling
    ├── models           <- Scripts to train, evaluate, test and deploy models
    └── visualization    <- Scripts to create exploratory and results oriented viz
```
Project based on the cookiecutter data science project template. #cookiecutterdatascience