
#572 new generated docs

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:feature/SG-000_new_generated_docs
Only showing up to 1000 lines per file, please use a local Git client to see the full diff.
Some lines were truncated since they exceed the maximum allowed length of 500, please use a local Git client to see the full diff.
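As the notices above say, the diff viewer truncates long lines and caps the output, so the full change is best inspected with a local Git client. A minimal sketch, using the branch names from the PR header; the repository URL is an assumption inferred from the Deci-AI/super-gradients links that appear throughout the diff:

```shell
# Clone the repository (URL assumed from links in the diff) and fetch both
# sides of the PR: the target branch and the feature branch named in the header.
git clone https://github.com/Deci-AI/super-gradients.git
cd super-gradients
git fetch origin master feature/SG-000_new_generated_docs

# Three-dot syntax shows the feature branch's changes relative to its
# merge base with master -- the same view GitHub renders for a PR.
git diff origin/master...origin/feature/SG-000_new_generated_docs
```

Appending a pathspec (e.g. `-- docs/`) limits the diff to the regenerated documentation files.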
@@ -1,390 +1,532 @@
 <!DOCTYPE html>
 <html class="writer-html5" lang="en" >
 <head>
-<meta charset="utf-8" /><meta name="generator" content="Docutils 0.17.1: http://docutils.sourceforge.net/" />
-
-<meta name="viewport" content="width=device-width, initial-scale=1.0" />
-<title>SuperGradients &mdash; SuperGradients 1.0 documentation</title>
-  <link rel="stylesheet" href="_static/pygments.css" type="text/css" />
-  <link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
-  <link rel="stylesheet" href="_static/graphviz.css" type="text/css" />
-<!--[if lt IE 9]>
-<script src="_static/js/html5shiv.min.js"></script>
-<![endif]-->
-
-    <script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
-    <script src="_static/jquery.js"></script>
-    <script src="_static/underscore.js"></script>
-    <script src="_static/doctools.js"></script>
-<script src="_static/js/theme.js"></script>
-<link rel="index" title="Index" href="genindex.html" />
-<link rel="search" title="Search" href="search.html" />
-<link rel="next" title="Common package" href="super_gradients.common.html" />
-<link rel="prev" title="Welcome to SuperGradients’s documentation!" href="index.html" />
+  <meta charset="utf-8" />
+  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+  <title>Version 3 is out! Notebooks have been updated! &mdash; SuperGradients 3.0.3 documentation</title>
+      <link rel="stylesheet" href="_static/pygments.css" type="text/css" />
+      <link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
+      <link rel="stylesheet" href="_static/graphviz.css" type="text/css" />
+      <link rel="stylesheet" href="_static/custom.css" type="text/css" />
+  <!--[if lt IE 9]>
+    <script src="_static/js/html5shiv.min.js"></script>
+  <![endif]-->
+  
+        <script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
+        <script src="_static/jquery.js"></script>
+        <script src="_static/underscore.js"></script>
+        <script src="_static/_sphinx_javascript_frameworks_compat.js"></script>
+        <script src="_static/doctools.js"></script>
+        <script src="_static/sphinx_highlight.js"></script>
+    <script src="_static/js/theme.js"></script>
+    <link rel="index" title="Index" href="genindex.html" />
+    <link rel="search" title="Search" href="search.html" />
+    <link rel="next" title="Common package" href="super_gradients.common.html" />
+    <link rel="prev" title="Welcome to SuperGradients’s documentation!" href="index.html" /> 
 </head>
 
-<body class="wy-body-for-nav">
-<div class="wy-grid-for-nav">
-<nav data-toggle="wy-nav-shift" class="wy-nav-side">
-  <div class="wy-side-scroll">
-    <div class="wy-side-nav-search" >
-        <a href="index.html" class="icon icon-home"> SuperGradients
-      </a>
+<body class="wy-body-for-nav"> 
+  <div class="wy-grid-for-nav">
+    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
+      <div class="wy-side-scroll">
+        <div class="wy-side-nav-search" >
+            <a href="index.html" class="icon icon-home"> SuperGradients
+          </a>
 <div role="search">
-<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
-<input type="text" name="q" placeholder="Search docs" />
-<input type="hidden" name="check_keywords" value="yes" />
-<input type="hidden" name="area" value="default" />
-</form>
-</div>
-    </div><div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="Navigation menu">
-          <p class="caption"><span class="caption-text">Welcome To SuperGradients</span></p>
+  <form id="rtd-search-form" class="wy-form" action="search.html" method="get">
+    <input type="text" name="q" placeholder="Search docs" />
+    <input type="hidden" name="check_keywords" value="yes" />
+    <input type="hidden" name="area" value="default" />
+  </form>
+</div>
+        </div><div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="Navigation menu">
+              <p class="caption" role="heading"><span class="caption-text">Welcome To SuperGradients</span></p>
 <ul class="current">
-<li class="toctree-l1 current"><a class="current reference internal" href="#">SuperGradients</a><ul>
-<li class="toctree-l2"><a class="reference internal" href="#introduction">Introduction</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="#why-use-supergradients">Why use SuperGradients?</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#documentation">Documentation</a></li>
+<li class="toctree-l1 current"><a class="current reference internal" href="#">Version 3 is out! Notebooks have been updated!</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#build-with-supergradients">Build with SuperGradients</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="#support-various-computer-vision-tasks">Support various computer vision tasks</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#ready-to-deploy-pre-trained-sota-models">Ready to deploy pre-trained SOTA models</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="#classification">Classification</a></li>
+<li class="toctree-l3"><a class="reference internal" href="#semantic-segmentation">Semantic Segmentation</a></li>
+<li class="toctree-l3"><a class="reference internal" href="#object-detection">Object Detection</a></li>
+</ul>
+</li>
+<li class="toctree-l2"><a class="reference internal" href="#easy-to-train-sota-models">Easy to train SOTA Models</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#plug-and-play-recipes">Plug and play recipes</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#production-readiness">Production readiness</a></li>
+</ul>
+</li>
+<li class="toctree-l1"><a class="reference internal" href="#quick-installation">Quick Installation</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#what-s-new">What’s New</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#coming-soon">Coming soon</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#table-of-content">Table of Content</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#getting-started">Getting Started</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="#start-training-with-just-1-command-line">Start Training with Just 1 Command Line</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#quickly-load-pre-trained-weights-for-your-desired-model-with-sota-performance">Quickly Load Pre-Trained Weights for Your Desired Model with SOTA Performance</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id1">Classification</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="#transfer-learning">Transfer Learning</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="#what-s-new">What’s New</a></li>
-<li class="toctree-l2"><a class="reference internal" href="#comming-soon">Comming soon</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="#table-of-content">Table of Content</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id2">Semantic Segmentation</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="#quick-start">Quick Start</a></li>
+<li class="toctree-l3"><a class="reference internal" href="#id3">Transfer Learning</a></li>
+<li class="toctree-l3"><a class="reference internal" href="#how-to-connect-custom-dataset">How to Connect Custom Dataset</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="#getting-started">Getting Started</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="#quick-start-notebook">Quick Start Notebook</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#supergradients-walkthrough-notebook">SuperGradients Walkthrough Notebook</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#transfer-learning-with-sg-notebook">Transfer Learning with SG Notebook</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id4">Object Detection</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="#id5">Transfer Learning</a></li>
+<li class="toctree-l3"><a class="reference internal" href="#id6">How to Connect Custom Dataset</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="#installation-methods">Installation Methods</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="#prerequisites">Prerequisites</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#quick-installation">Quick Installation</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#how-to-predict-using-pre-trained-model">How to Predict Using Pre-trained Model</a><ul>
+<li class="toctree-l3"><a class="reference internal" href="#segmentation-detection-and-classification-prediction">Segmentation, Detection and Classification Prediction</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="#computer-vision-models-pretrained-checkpoints">Computer Vision Models’ Pretrained Checkpoints</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="#pretrained-classification-pytorch-checkpoints">Pretrained Classification PyTorch Checkpoints</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#pretrained-object-detection-pytorch-checkpoints">Pretrained Object Detection PyTorch Checkpoints</a></li>
-<li class="toctree-l3"><a class="reference internal" href="#pretrained-semantic-segmentation-pytorch-checkpoints">Pretrained Semantic Segmentation PyTorch Checkpoints</a></li>
 </ul>
 </li>
-<li class="toctree-l2"><a class="reference internal" href="#contributing">Contributing</a></li>
-<li class="toctree-l2"><a class="reference internal" href="#citation">Citation</a></li>
-<li class="toctree-l2"><a class="reference internal" href="#community">Community</a></li>
-<li class="toctree-l2"><a class="reference internal" href="#license">License</a></li>
-<li class="toctree-l2"><a class="reference internal" href="#deci-lab">Deci Lab</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#advanced-features">Advanced Features</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="#knowledge-distillation-training">Knowledge Distillation Training</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#recipes">Recipes</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#using-ddp">Using DDP</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#easily-change-architectures-parameters">Easily change architectures parameters</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#using-phase-callbacks">Using phase callbacks</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#integration-to-weights-and-biases">Integration to Weights and Biases</a></li>
 </ul>
 </li>
+<li class="toctree-l1"><a class="reference internal" href="#installation-methods">Installation Methods</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="#prerequisites">Prerequisites</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id7">Quick Installation</a></li>
 </ul>
-<p class="caption"><span class="caption-text">Technical Documentation</span></p>
+</li>
+<li class="toctree-l1"><a class="reference internal" href="#implemented-model-architectures">Implemented Model Architectures</a><ul>
+<li class="toctree-l2"><a class="reference internal" href="#image-classification">Image Classification</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id8">Semantic Segmentation</a></li>
+<li class="toctree-l2"><a class="reference internal" href="#id9">Object Detection</a></li>
+</ul>
+</li>
+<li class="toctree-l1"><a class="reference internal" href="#documentation">Documentation</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#contributing">Contributing</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#citation">Citation</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#community">Community</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#license">License</a></li>
+<li class="toctree-l1"><a class="reference internal" href="#deci-platform">Deci Platform</a></li>
+</ul>
+<p class="caption" role="heading"><span class="caption-text">Technical Documentation</span></p>
 <ul>
 <li class="toctree-l1"><a class="reference internal" href="super_gradients.common.html">Common package</a></li>
 <li class="toctree-l1"><a class="reference internal" href="super_gradients.training.html">Training package</a></li>
-</ul>
-<p class="caption"><span class="caption-text">User Guide</span></p>
-<ul>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html">What is SuperGradients?</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#introducing-the-supergradients-library">Introducing the SuperGradients library</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#installation">Installation</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#integrating-your-training-code-complete-walkthrough">Integrating Your Training Code - Complete Walkthrough</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#training-parameters">Training Parameters</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#logs-and-checkpoints">Logs and Checkpoints</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#dataset-parameters">Dataset Parameters</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#network-architectures">Network Architectures</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#pretrained-models">Pretrained Models</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#how-to-reproduce-our-training-recipes">How To Reproduce Our Training Recipes</a></li>
-<li class="toctree-l1"><a class="reference internal" href="user_guide.html#supergradients-faq">SuperGradients FAQ</a></li>
 </ul>
 
-    </div>
-  </div>
-</nav>
-
-<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap"><nav class="wy-nav-top" aria-label="Mobile navigation menu" >
-      <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
-      <a href="index.html">SuperGradients</a>
-  </nav>
-
-  <div class="wy-nav-content">
-    <div class="rst-content">
-      <div role="navigation" aria-label="Page navigation">
-<ul class="wy-breadcrumbs">
-  <li><a href="index.html" class="icon icon-home"></a> &raquo;</li>
-  <li>SuperGradients</li>
-  <li class="wy-breadcrumbs-aside">
-        <a href="_sources/welcome.md.txt" rel="nofollow"> View page source</a>
-  </li>
-</ul>
-<hr/>
+        </div>
+      </div>
+    </nav>
+
+    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap"><nav class="wy-nav-top" aria-label="Mobile navigation menu" >
+          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
+          <a href="index.html">SuperGradients</a>
+      </nav>
+
+      <div class="wy-nav-content">
+        <div class="rst-content">
+          <div role="navigation" aria-label="Page navigation">
+  <ul class="wy-breadcrumbs">
+      <li><a href="index.html" class="icon icon-home"></a> &raquo;</li>
+      <li>Version 3 is out! Notebooks have been updated!</li>
+      <li class="wy-breadcrumbs-aside">
+            <a href="_sources/welcome.md.txt" rel="nofollow"> View page source</a>
+      </li>
+  </ul>
+  <hr/>
 </div>
-      <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
-       <div itemprop="articleBody">
-
-<div align="center">
-<img src="assets/SG_img/SG - Horizontal.png" width="600"/>
-<br/><br/>
-<p><strong>Easily train or fine-tune SOTA computer vision models with one open source training library</strong>
+          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
+           <div itemprop="articleBody">
+             
+  <div align="center">
+  <img src="assets/SG_img/SG - Horizontal Glow 2.png" width="600"/>
+ <br/><br/>
+<p><strong>Build, train, and fine-tune production-ready deep learning  SOTA vision models</strong>
 <a class="reference external" href="https://twitter.com/intent/tweet?text=Easily%20train%20or%20fine-tune%20SOTA%20computer%20vision%20models%20from%20one%20training%20repository&amp;url=https://github.com/Deci-AI/super-gradients&amp;via=deci_ai&amp;hashtags=AI,deeplearning,computervision,training,opensource"><img alt="Tweet" src="https://img.shields.io/twitter/url/http/shields.io.svg?style=social" /></a></p>
+<div class="section" id="version-3-is-out-notebooks-have-been-updated">
+<h1>Version 3 is out! Notebooks have been updated!<a class="headerlink" href="#version-3-is-out-notebooks-have-been-updated" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+  <p align="center">
+  <a href="https://www.supergradients.com/">Website</a> •
+  <a href="https://deci-ai.github.io/super-gradients/user_guide.html#introducing-the-supergradients-library">User Guide</a> •
+  <a href="https://deci-ai.github.io/super-gradients/super_gradients.common.html">Docs</a> •
+  <a href="#getting-started">Getting Started</a> •
+  <a href="#implemented-model-architectures">Pretrained Models</a> •
+  <a href="#community">Community</a> •
+  <a href="#license">License</a> •
+  <a href="#deci-platform">Deci Platform</a>
+</p>
+<p align="center">
+  <a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue" />
+  <a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/pytorch-1.9%20%7C%201.10-blue" />
+  <a href="https://pypi.org/project/super-gradients/"><img src="https://img.shields.io/pypi/v/super-gradients" />
+  <a href="https://github.com/Deci-AI/super-gradients#computer-vision-models-pretrained-checkpoints" ><img src="https://img.shields.io/badge/pre--trained%20models-34-brightgreen" />
+  <a href="https://github.com/Deci-AI/super-gradients/releases"><img src="https://img.shields.io/github/v/release/Deci-AI/super-gradients" />
+  <a href="https://join.slack.com/t/supergradients-comm52/shared_invite/zt-10vz6o1ia-b_0W5jEPEnuHXm087K~t8Q"><img src="https://img.shields.io/badge/slack-community-blueviolet" />
+  <a href="https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.md"><img src="https://img.shields.io/badge/license-Apache%202.0-blue" />
+  <a href="https://deci-ai.github.io/super-gradients/welcome.html"><img src="https://img.shields.io/badge/docs-sphinx-brightgreen" />
+</p>    
+</div>
+<p><a class="reference external" href="https://deci-ai.github.io/super-gradients/user_guide.html#introducing-the-supergradients-library"></a></p>
+</div>
+<div class="section" id="build-with-supergradients">
+<h1>Build with SuperGradients<a class="headerlink" href="#build-with-supergradients" title="Permalink to this heading"></a></h1>
 <hr class="docutils" />
-<p><a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue" />
-<a href="https://github.com/Deci-AI/super-gradients#prerequisites"><img src="https://img.shields.io/badge/pytorch-1.9%20%7C%201.10-blue" />
-<a href="https://pypi.org/project/super-gradients/"><img src="https://img.shields.io/pypi/v/super-gradients" />
-<a href="https://github.com/Deci-AI/super-gradients#computer-vision-models-pretrained-checkpoints" ><img src="https://img.shields.io/badge/pre--trained%20models-25-brightgreen" />
-<a href="https://github.com/Deci-AI/super-gradients/releases"><img src="https://img.shields.io/github/v/release/Deci-AI/super-gradients" />
-<a href="https://join.slack.com/t/supergradients-comm52/shared_invite/zt-10vz6o1ia-b_0W5jEPEnuHXm087K~t8Q"><img src="https://img.shields.io/badge/slack-community-blueviolet" />
-<a href="https://github.com/Deci-AI/super-gradients/blob/master/LICENSE.md"><img src="https://img.shields.io/badge/license-Apache%202.0-blue" />
-<a href="https://deci-ai.github.io/super-gradients/welcome.html"><img src="https://img.shields.io/badge/docs-sphinx-brightgreen" /></p>
-</div>
-<section class="tex2jax_ignore mathjax_ignore" id="supergradients">
-<h1>SuperGradients<a class="headerlink" href="#supergradients" title="Permalink to this headline"></a></h1>
-<section id="introduction">
-<h2>Introduction<a class="headerlink" href="#introduction" title="Permalink to this headline"></a></h2>
-<p>Welcome to SuperGradients, a free, open-source training library for PyTorch-based deep learning models.
-SuperGradients allows you to train or fine-tune SOTA pre-trained models for all the most commonly applied computer vision tasks with just one training library. We currently support object detection, image classification and semantic segmentation for videos and images.</p>
-<p>Docs and full user guide<span class="xref myst"></span></p>
-<section id="why-use-supergradients">
-<h3>Why use SuperGradients?<a class="headerlink" href="#why-use-supergradients" title="Permalink to this headline"></a></h3>
-<p><strong>Built-in SOTA Models</strong></p>
-<p>Easily load and fine-tune production-ready, <a class="reference external" href="https://github.com/Deci-AI/super-gradients#pretrained-classification-pytorch-checkpoints">pre-trained SOTA models</a> that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy.</p>
-<p><strong>Easily Reproduce our Results</strong></p>
-<p>Why do all the grind work, if we already did it for you? leverage tested and proven <a class="reference external" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/recipes">recipes</a> &amp; <a class="reference external" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples">code examples</a> for a wide range of computer vision models generated by our team of deep learning experts. Easily configure your own or use plug &amp; 
-<p><strong>Production Readiness and Ease of Integration</strong></p>
-<p>All SuperGradients models’ are production ready in the sense that they are compatible with deployment tools such as TensorRT (Nvidia) and OpenVino (Intel) and can be easily taken into production. With a few lines of code you can easily integrate the models into your codebase.</p>
+<div class="section" id="support-various-computer-vision-tasks">
+<h2>Support various computer vision tasks<a class="headerlink" href="#support-various-computer-vision-tasks" title="Permalink to this heading"></a></h2>
 <div align="center">
-<img src="./assets/SG_img/detection-demo.png" width="600px">
+<img src="./assets/SG_img/Segmentation 1500x900 .png" width="250px">
+<img src="./assets/SG_img/Object detection 1500X900.png" width="250px">
+<img src="./assets/SG_img/Classification 1500x900.png" width="250px">
 </div>
-</section>
-<section id="documentation">
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-<h3>Documentation<a class="headerlink" href="#documentation" title="Permalink to this headline"></a></h3>
-<p>Check SuperGradients <a class="reference external" href="https://deci-ai.github.io/super-gradients/welcome.html">Docs</a> for full documentation, user guide, and examples.</p>
-</section>
-</section>
+</div>
+<div class="section" id="ready-to-deploy-pre-trained-sota-models">
+<h2>Ready to deploy pre-trained SOTA models<a class="headerlink" href="#ready-to-deploy-pre-trained-sota-models" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Load model with pretrained weights</span>
+<span class="n">model</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;yolox_s&quot;</span><span class="p">,</span> <span class="n">pretrained_weights</span><span class="o">=</span><span class="s2">&quot;coco&quot;</span><span class="p">)</span>
+</pre></div>
+</div>
+<div class="section" id="classification">
+<h3>Classification<a class="headerlink" href="#classification" title="Permalink to this heading"></a></h3>
+<div align="center">
+<img src="./assets/SG_img/Classification@2xDark.png" width="800px">
+</div>
+</div>
+<div class="section" id="semantic-segmentation">
+<h3>Semantic Segmentation<a class="headerlink" href="#semantic-segmentation" title="Permalink to this heading"></a></h3>
+<div align="center">
+<img src="./assets/SG_img/Semantic Segmentation@2xDark.png" width="800px">
+</div>
+</div>
+<div class="section" id="object-detection">
+<h3>Object Detection<a class="headerlink" href="#object-detection" title="Permalink to this heading"></a></h3>
+<div align="center">
+<img src="./assets/SG_img/Object Detection@2xDark.png" width="800px">
+</div>
+<p>All Computer Vision Models - Pretrained Checkpoints can be found <span class="xref myst">here</span></p>
+</div>
+</div>
+<div class="section" id="easy-to-train-sota-models">
+<h2>Easy to train SOTA Models<a class="headerlink" href="#easy-to-train-sota-models" title="Permalink to this heading"></a></h2>
+<p>Easily load and fine-tune production-ready, pre-trained SOTA models that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy.
+For more information on how to do it go to <span class="xref myst">Getting Started</span></p>
+</div>
+<div class="section" id="plug-and-play-recipes">
+<h2>Plug and play recipes<a class="headerlink" href="#plug-and-play-recipes" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">python</span> <span class="o">-</span><span class="n">m</span> <span class="n">super_gradients</span><span class="o">.</span><span class="n">train_from_recipe</span> <span class="o">--</span><span class="n">config</span><span class="o">-</span><span class="n">name</span><span class="o">=</span><span class="n">imagenet_regnetY</span> <span class="n">architecture</span><span class="o">=</span><span 
+</pre></div>
+</div>
+<p>More examples of how and why to use recipes can be found in <span class="xref myst">Recipes</span></p>
+</div>
+<div class="section" id="production-readiness">
+<h2>Production readiness<a class="headerlink" href="#production-readiness" title="Permalink to this heading"></a></h2>
+<p>All SuperGradients models are production ready in the sense that they are compatible with deployment tools such as TensorRT (Nvidia) and OpenVINO (Intel) and can be easily taken into production. With a few lines of code you can easily integrate the models into your codebase.</p>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Load model with pretrained weights</span>
+<span class="n">model</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;yolox_s&quot;</span><span class="p">,</span> <span class="n">pretrained_weights</span><span class="o">=</span><span class="s2">&quot;coco&quot;</span><span class="p">)</span>
+
+<span class="c1"># Prepare model for conversion</span>
+<span class="c1"># Input size is in format of [Batch x Channels x Width x Height] where 640 is the standard COCO dataset dimensions</span>
+<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
+<span class="n">model</span><span class="o">.</span><span class="n">prep_model_for_conversion</span><span class="p">(</span><span class="n">input_size</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">640</span><span class="p">,</span> <span class="mi">640</span><span class="p">])</span>
+    
+<span class="c1"># Create dummy_input</span>
+
+<span class="c1"># Convert model to onnx</span>
+<span class="n">torch</span><span class="o">.</span><span class="n">onnx</span><span class="o">.</span><span class="n">export</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">dummy_input</span><span class="p">,</span>  <span class="s2">&quot;yolox_s.onnx&quot;</span><span class="p">)</span>
+</pre></div>
+</div>
+<p>More information on how to take your model to production can be found in <span class="xref myst">Getting Started</span> notebooks</p>
+</div>
+</div>
+<div class="section" id="quick-installation">
+<h1>Quick Installation<a class="headerlink" href="#quick-installation" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip install super-gradients
+</pre></div>
+</div>
+</div>
+<div class="section" id="what-s-new">
+<h1>What’s New<a class="headerlink" href="#what-s-new" title="Permalink to this heading"></a></h1>
 <hr class="docutils" />
-<section id="table-of-content">
-<h3>Table of Content<a class="headerlink" href="#table-of-content" title="Permalink to this headline"></a></h3>
-<details>
-<summary>See Table </summary>
-<!-- toc -->
 <ul class="simple">
-<li><p><a class="reference external" href="#getting-started">Getting Started</a></p>
-<ul>
-<li><p><a class="reference external" href="#quick-start-notebook">Quick Start Notebook</a></p></li>
-<li><p><a class="reference external" href="#supergradients-walkthrough-notebook">Walkthrough Notebook</a></p></li>
-<li><p><a class="reference external" href="#transfer-learning-with-sg-notebook">Transfer Learning with SG Notebook</a></p></li>
+<li><p>【06/9/2022】 PP-LiteSeg - new pre-trained <span class="xref myst">checkpoints</span>  for Cityscapes with SOTA mIoU scores (~1.5% above paper)🎯</p></li>
+<li><p>【07/08/2022】DDRNet23 -  new pre-trained <span class="xref myst">checkpoints</span> and <a class="reference external" href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes">recipes</a> for Cityscapes with SOTA mIoU scores (~1% above paper)🎯</p></li>
+<li><p>【27/07/2022】YOLOX models (object detection) - recipes and pre-trained checkpoints.</p></li>
+<li><p>【07/07/2022】SSD Lite MobileNet V2,V1 - Training <a class="reference external" href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/coco_ssd_lite_mobilenet_v2.yaml">recipes</a> and pre-trained <span class="xref myst">checkpoints</span> on COCO - Tailored for edge devices! 📱</p></li>
+<li><p>【07/07/2022】 STDC  - new pre-trained <span class="xref myst">checkpoints</span> and <a class="reference external" href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes">recipes</a> for Cityscapes with super SOTA mIoU scores (~2.5% above paper)🎯</p></li>
 </ul>
-</li>
-<li><p><a class="reference external" href="#installation-methods">Installation Methods</a></p>
-<ul>
-<li><p><a class="reference external" href="#prerequisites">Prerequisites</a></p></li>
-<li><p><a class="reference external" href="#quick-installation">Quick Installation</a></p></li>
+<p>Check out the full SG <a class="reference external" href="https://github.com/Deci-AI/super-gradients/releases">release notes</a>.</p>
+</div>
+<div class="section" id="coming-soon">
+<h1>Coming soon<a class="headerlink" href="#coming-soon" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<ul class="simple">
+<li><p>[ ] PP-LiteSeg recipes for Cityscapes with SOTA mIoU scores (~1.5% above paper)🎯</p></li>
+<li><p>[ ] Single-class detectors (recipes, pre-trained checkpoints) for edge-device deployment.</p></li>
+<li><p>[ ] Single-class segmentation (recipes, pre-trained checkpoints) for edge-device deployment.</p></li>
+<li><p>[ ] QAT capabilities (Quantization Aware Training).</p></li>
+<li><p>[ ] Integration with more professional tools.</p></li>
 </ul>
-</li>
-<li><p><a class="reference external" href="#computer-vision-models-pretrained-checkpoints">Computer Vision Models’ Pretrained Checkpoints</a></p>
+</div>
+<div class="section" id="table-of-content">
+<h1>Table of Contents<a class="headerlink" href="#table-of-content" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<!-- toc -->
+<ul class="simple">
+<li><p><span class="xref myst">Getting Started</span></p></li>
+<li><p><span class="xref myst">Advanced Features</span></p></li>
+<li><p><span class="xref myst">Installation Methods</span></p>
 <ul>
-<li><p><a class="reference external" href="#pretrained-classification-pytorch-checkpoints">Pretrained Classification PyTorch Checkpoints</a></p></li>
-<li><p><a class="reference external" href="#pretrained-object-detection-pytorch-checkpoints">Pretrained Object Detection PyTorch Checkpoints</a></p></li>
-<li><p><a class="reference external" href="#pretrained-semantic-segmentation-pytorch-checkpoints">Pretrained Semantic Segmentation PyTorch Checkpoints</a></p></li>
+<li><p><span class="xref myst">Prerequisites</span></p></li>
+<li><p><span class="xref myst">Quick Installation</span></p></li>
 </ul>
 </li>
-<li><p><a class="reference external" href="#contributing">Contributing</a></p></li>
-<li><p><a class="reference external" href="#citation">Citation</a></p></li>
-<li><p><a class="reference external" href="#community">Community</a></p></li>
-<li><p><a class="reference external" href="#license">License</a></p></li>
-<li><p><a class="reference external" href="#deci-lab">Deci Lab</a></p></li>
+<li><p><span class="xref myst">Implemented Model Architectures</span></p></li>
+<li><p><span class="xref myst">Contributing</span></p></li>
+<li><p><span class="xref myst">Citation</span></p></li>
+<li><p><span class="xref myst">Community</span></p></li>
+<li><p><span class="xref myst">License</span></p></li>
+<li><p><span class="xref myst">Deci Platform</span></p></li>
 </ul>
 <!-- tocstop -->
-</details>
-</section>
-</section>
-<section id="getting-started">
-<h2>Getting Started<a class="headerlink" href="#getting-started" title="Permalink to this headline"></a></h2>
-<section id="quick-start-notebook">
-
-
-
-
-
-
+</div>
+<div class="section" id="getting-started">
+<h1>Getting Started<a class="headerlink" href="#getting-started" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<div class="section" id="start-training-with-just-1-command-line">
+<h2>Start Training with Just 1 Command Line<a class="headerlink" href="#start-training-with-just-1-command-line" title="Permalink to this heading"></a></h2>
+<p>The simplest and most straightforward way to start training SOTA models is with SuperGradients' reproducible recipes: just define your dataset path and where you want your checkpoints saved, and you are good to go from your terminal!</p>
+<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>python -m super_gradients.train_from_recipe --config-name<span class="o">=</span>imagenet_regnetY <span class="nv">architecture</span><span class="o">=</span>regnetY800 dataset_interface.data_dir<span class="o">=</span>&lt;YOUR_Imagenet_LOCAL_PATH&gt; <span class="nv">ckpt_root_dir</span><span class="o">=</span>&lt;CHECKPOINT_DIRECTORY&gt;
+</pre></div>
+</div>
+</div>
+<div class="section" id="quickly-load-pre-trained-weights-for-your-desired-model-with-sota-performance">
+<h2>Quickly Load Pre-Trained Weights for Your Desired Model with SOTA Performance<a class="headerlink" href="#quickly-load-pre-trained-weights-for-your-desired-model-with-sota-performance" title="Permalink to this heading"></a></h2>
+<p>Want to try our pre-trained models on your machine? Import SuperGradients, initialize your Trainer, and load your desired architecture and pre-trained weights from our <span class="xref myst">SOTA model zoo</span>.</p>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># The pretrained_weights argument loads weights that were pre-trained on the given dataset</span>
+<span class="kn">from</span> <span class="nn">super_gradients.training</span> <span class="kn">import</span> <span class="n">models</span>
 
 
+<span class="n">model</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;model-name&quot;</span><span class="p">,</span> <span class="n">pretrained_weights</span><span class="o">=</span><span class="s2">&quot;pretrained-model-name&quot;</span><span class="p">)</span>
 
 
-<h3>Quick Start Notebook - Classification<a class="headerlink" href="#quick-start-notebook-classification" title="Permalink to this headline"></a></h3>
-<p>Get started with our quick start notebook for image classification tasks on Google Colab for a quick and easy start using free GPU hardware.</p>
-<table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3ufnsgT"><img src="./assets/SG_img/colab_logo.png" />Classification Quick Start in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_quickstart_classification.ipynb"><img src="./assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
+</pre></div>
+</div>
+</div>
+<div class="section" id="id1">
+<h2>Classification<a class="headerlink" href="#id1" title="Permalink to this heading"></a></h2>
+<div class="section" id="transfer-learning">
+<h3>Transfer Learning<a class="headerlink" href="#transfer-learning" title="Permalink to this heading"></a></h3>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">  
+  <a target="_blank" href="https://bit.ly/3xzIutb"><img src="./assets/SG_img/colab_logo.png" /> Classification Transfer Learning</a>
+  </td>
+ <td width="200">    
+ <a target="_blank" href="https://bit.ly/3xwYEn1"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
 </table>
-</br></br>
-</section>
-<section id="quick-start-notebook-semantic-segmentation">
-<h3>Quick Start Notebook - Semantic Segmentation<a class="headerlink" href="#quick-start-notebook-semantic-segmentation" title="Permalink to this headline"></a></h3>
-<p>Get started with our quick start notebook for semantic segmentation tasks on Google Colab for a quick and easy start using free GPU hardware.</p>
+ </br></br>
+</div>
+</div>
+<div class="section" id="id2">
+<h2>Semantic Segmentation<a class="headerlink" href="#id2" title="Permalink to this heading"></a></h2>
+<div class="section" id="quick-start">
+<h3>Quick Start<a class="headerlink" href="#quick-start" title="Permalink to this heading"></a></h3>
 <table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3Jp7w1U"><img src="./assets/SG_img/colab_logo.png" />Segmentation Quick Start in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_quickstart_segmentation.ipynb"><img src="./assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
+ <td width="500">
+<a target="_blank" href="https://bit.ly/3qKx9m8"><img src="./assets/SG_img/colab_logo.png" /> Segmentation Quick Start</a>
+ </td>
+ <td width="200">
+<a target="_blank" href="https://bit.ly/3qJjxYq"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source </a>
+ </td>
 </table>
-</br></br>
-<!--
-### Quick Start Notebook - Object Detection
-
-Get started with our quick start notebook for object detection tasks on Google Colab for a quick and easy start using free GPU hardware.
-
+ </br></br>
+</div>
+<div class="section" id="id3">
+<h3>Transfer Learning<a class="headerlink" href="#id3" title="Permalink to this heading"></a></h3>
 <table class="tfo-notebook-buttons" align="left">
-<table class="tfo-notebok-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3wqMsEM"><img src="./docs/assets/SG_img/colab_logo.png" />Detection Quick Start in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_quickstart_detection.ipynb"><img src="./docs/assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./docs/assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
+ <td width="500">
+<a target="_blank" href="https://bit.ly/3qKwMbe"><img src="./assets/SG_img/colab_logo.png" /> Segmentation Transfer Learning</a>
+ </td>
+ <td width="200">
+<a target="_blank" href="https://bit.ly/3ShJlXn"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
 </table>
-</br></br>
-
-### Quick Start Notebook - Upload your model to Deci Platform
-
-Get Started with an example of how to upload your trained model to Deci Platform for runtime optimization and compilation to your target deployment HW.
-<table class="tfo-notebook-buttons" align="left">
-<tbody>
- <tr>
-   <td vertical-align="middle">
-     <img src="./docs/assets/SG_img/colab_logo.png" />
-     <a target="_blank" href="https://bit.ly/3cAkoXG">
-       Upload to Deci Platform in Google Colab
-     </a>
-   </td>
-   <td vertical-align="middle">
-     <img src="./docs/assets/SG_img/download_logo.png" />
-     <a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_quickstart_model_upload_deci_lab.ipynb">
-       Download notebook
-     </a>
+ </br></br>
+</div>
+<div class="section" id="how-to-connect-custom-dataset">
+<h3>How to Connect Custom Dataset<a class="headerlink" href="#how-to-connect-custom-dataset" title="Permalink to this heading"></a></h3>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500"> 
+<a target="_blank" href="https://bit.ly/3QQBVJp"><img src="./assets/SG_img/colab_logo.png" /> Segmentation How to Connect Custom Dataset</a>
    </td>
-   <td>
-     <img src="./docs/assets/SG_img/GitHub_logo.png" />
-     <a target="_blank" href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/deci_lab_export_example/deci_lab_export_example.py">
-       View source on GitHub
-     </a>
+ <td width="200">
+ <a target="_blank" href="https://bit.ly/3Us2WGi"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
+</table>
+ </br></br>
+</div>
+</div>
+<div class="section" id="id4">
+<h2>Object Detection<a class="headerlink" href="#id4" title="Permalink to this heading"></a></h2>
+<div class="section" id="id5">
+<h3>Transfer Learning<a class="headerlink" href="#id5" title="Permalink to this heading"></a></h3>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">   
+<a target="_blank" href="https://bit.ly/3SkMohx"><img src="./assets/SG_img/colab_logo.png" /> Detection Transfer Learning</a>
    </td>
- </tr>
-</tbody>
+ <td width="200">   
+<a target="_blank" href="https://bit.ly/3DF8siG"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
+</table>
+ </br></br>
+</div>
+<div class="section" id="id6">
+<h3>How to Connect Custom Dataset<a class="headerlink" href="#id6" title="Permalink to this heading"></a></h3>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">  
+  <a target="_blank" href="https://bit.ly/3dqDlg3"><img src="./assets/SG_img/colab_logo.png" /> Detection How to Connect Custom Dataset</a>
+  </td>
+ <td width="200">      
+<a target="_blank" href="https://bit.ly/3xBlcmq"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
+</table>
+ </br></br>
+</div>
+</div>
+<div class="section" id="how-to-predict-using-pre-trained-model">
+<h2>How to Predict Using Pre-trained Model<a class="headerlink" href="#how-to-predict-using-pre-trained-model" title="Permalink to this heading"></a></h2>
+<div class="section" id="segmentation-detection-and-classification-prediction">
+<h3>Segmentation, Detection and Classification Prediction<a class="headerlink" href="#segmentation-detection-and-classification-prediction" title="Permalink to this heading"></a></h3>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">    
+<a target="_blank" href="https://bit.ly/3f4mssd"><img src="./assets/SG_img/colab_logo.png" /> How to Predict Using Pre-trained Model</a>
+  </td>
+ <td width="200">   
+<a target="_blank" href="https://bit.ly/3Sf59Tr"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
 </table>
-</br></br>
+ </br></br>
+</div>
+</div>
+</div>
+<div class="section" id="advanced-features">
+<h1>Advanced Features<a class="headerlink" href="#advanced-features" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<div class="section" id="knowledge-distillation-training">
+<h2>Knowledge Distillation Training<a class="headerlink" href="#knowledge-distillation-training" title="Permalink to this heading"></a></h2>
+<p>Knowledge Distillation is a training technique that uses a large model (the teacher) to improve the performance of a smaller model (the student).
+Learn more about knowledge distillation training in SuperGradients with our example notebook, which distills a pre-trained BEiT-base teacher into a ResNet18 student on CIFAR10, on Google Colab using free GPU hardware.</p>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">   
+   <a target="_blank" href="https://bit.ly/3BLA5oR"><img src="./assets/SG_img/colab_logo.png" /> Knowledge Distillation Training</a>
+  </td>
+ <td width="200">   
+<a target="_blank" href="https://bit.ly/3S9UlG4"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
+</table>
+ </br></br>
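+<p>The idea behind the distillation objective: the teacher's logits are softened with a temperature, and the student is trained to match that distribution. The snippet below is a minimal plain-Python sketch of the soft-target term only (helper names are ours, not the SuperGradients API; SG's actual KD loss also combines the ordinary hard-label term):</p>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import math
+
+def softmax(logits, temperature=1.0):
+    # Higher temperature yields a flatter, "softer" distribution
+    scaled = [z / temperature for z in logits]
+    m = max(scaled)
+    exps = [math.exp(z - m) for z in scaled]
+    total = sum(exps)
+    return [e / total for e in exps]
+
+def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
+    # Cross-entropy between the softened teacher and student distributions;
+    # minimized when the student reproduces the teacher's soft targets
+    p_teacher = softmax(teacher_logits, temperature)
+    p_student = softmax(student_logits, temperature)
+    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))
+</pre></div>
+</div>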
+</div>
+<div class="section" id="recipes">
+<h2>Recipes<a class="headerlink" href="#recipes" title="Permalink to this heading"></a></h2>
+<p>To train a model, four main components must be configured: dataset, architecture, training, and checkpoint parameters. These components are aggregated into a single &ldquo;main&rdquo; recipe <code class="docutils literal notranslate"><span class="pre">.yaml</span></code> file that inherits them. It is also possible (and recommended for flexibility) to override the default settings with custom ones.
+All recipes can be found <span class="xref myst">here</span>.</p>
+  <table class="tfo-notebook-buttons" align="left">
+ <td width="500">   
+   <a target="_blank" href="https://bit.ly/3UiY5ab"><img src="./assets/SG_img/colab_logo.png" /> How to Use Recipes</a>
+  </td>
+ <td width="200">  
+<a target="_blank" href="https://bit.ly/3QSrHbm"><img src="./assets/SG_img/GitHub_logo.png" /> GitHub source</a>
+ </td>
+</table>
+ </br></br>
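+<p>Concretely, a &ldquo;main&rdquo; recipe is a Hydra-style config that pulls the four components in through a <code class="docutils literal notranslate"><span class="pre">defaults</span></code> list, after which any inherited field can be overridden. The fragment below is illustrative only (the group and file names are hypothetical, not taken from an actual SG recipe):</p>
+<div class="highlight-yaml notranslate"><div class="highlight"><pre><span></span>defaults:
+  - training_hyperparams: my_train_params      # training params
+  - dataset_params: my_dataset_params          # dataset params
+  - arch_params: my_arch_params                # architecture params
+  - checkpoint_params: default_checkpoint_params
+
+training_hyperparams:
+  max_epochs: 250   # custom override of an inherited default
+</pre></div>
+</div>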
+</div>
+<div class="section" id="using-ddp">
+<h2>Using DDP<a class="headerlink" href="#using-ddp" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">super_gradients</span> <span class="kn">import</span> <span class="n">init_trainer</span>
+<span class="kn">from</span> <span class="nn">super_gradients.common</span> <span class="kn">import</span> <span class="n">MultiGPUMode</span>
+<span class="kn">from</span> <span class="nn">super_gradients.training.utils.distributed_training_utils</span> <span class="kn">import</span> <span class="n">setup_gpu_mode</span>
 
 
-### SuperGradients Complete Walkthrough Notebook
+<span class="c1"># Initialize the environment</span>
+<span class="n">init_trainer</span><span class="p">()</span>
 
 
-Learn more about SuperGradients training components with our walkthrough notebook on Google Colab for an easy to use tutorial using free GPU hardware
+<span class="c1"># Launch DDP on 1 device (node) of 4 GPU&#39;s</span>
+<span class="n">setup_gpu_mode</span><span class="p">(</span><span class="n">gpu_mode</span><span class="o">=</span><span class="n">MultiGPUMode</span><span class="o">.</span><span class="n">DISTRIBUTED_DATA_PARALLEL</span><span class="p">,</span> <span class="n">num_gpus</span><span class="o">=</span><span class="mi">4</span><span class="p">)</span>
 
 
-<table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3JspSPF"><img src="./docs/assets/SG_img/colab_logo.png" />SuperGradients Walkthrough in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_Walkthrough.ipynb"><img src="./docs/assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./docs/assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
-</table>
-</br></br>
+<span class="c1"># Define the objects</span>
 
 
-### Transfer Learning with SG Notebook - Object Detection
+<span class="c1"># The trainer will now run with DDP, with no other changes needed</span>
+</pre></div>
+</div>
+</div>
+<div class="section" id="easily-change-architectures-parameters">
+<h2>Easily change architecture parameters<a class="headerlink" href="#easily-change-architectures-parameters" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">super_gradients.training</span> <span class="kn">import</span> <span class="n">models</span>
 
 
-Learn more about SuperGradients transfer learning or fine tuning abilities with our COCO pre-trained YoloX nano fine tuning into a sub-dataset of PASCAL VOC example notebook on Google Colab for an easy to use tutorial using free GPU hardware
+<span class="c1"># instantiate default pretrained resnet18</span>
+<span class="n">default_resnet18</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;resnet18&quot;</span><span class="p">,</span> <span class="n">num_classes</span><span class="o">=</span><span class="mi">100</span><span class="p">,</span> <span class="n">pretrained_weights</span><span class="o">=</span><span class="s2">&quot;imagenet&quot
 
 
-<table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3iGvnP7"><img src="./docs/assets/SG_img/colab_logo.png" />Detection Transfer Learning in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_transfer_learning_object_detection.ipynb"><img src="./docs/assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./docs/assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
-</table>
-</br></br>
--->
-</section>
-</section>
-<section id="transfer-learning">
-<h2>Transfer Learning<a class="headerlink" href="#transfer-learning" title="Permalink to this headline"></a></h2>
-<section id="transfer-learning-with-sg-notebook-semantic-segmentation">
-<h3>Transfer Learning with SG Notebook - Semantic Segmentation<a class="headerlink" href="#transfer-learning-with-sg-notebook-semantic-segmentation" title="Permalink to this headline"></a></h3>
-<p>Learn more about SuperGradients transfer learning or fine tuning abilities with our Citiscapes pre-trained RegSeg48 fine tuning into a sub-dataset of Supervisely example notebook on Google Colab for an easy to use tutorial using free GPU hardware</p>
-<table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/37P04PN"><img src="./assets/SG_img/colab_logo.png" />Segmentation Transfer Learning in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_transfer_learning_semantic_segmentation.ipynb"><img src="./assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
-</table>
-</br></br>
-</section>
-</section>
-<section id="knowledge-distillation-training">
-<h2>Knowledge Distillation Training<a class="headerlink" href="#knowledge-distillation-training" title="Permalink to this headline"></a></h2>
-<section id="knowledge-distillation-training-quick-start-with-sg-notebook-resnet18-example">
-<h3>Knowledge Distillation Training Quick Start with SG Notebook - ResNet18 example<a class="headerlink" href="#knowledge-distillation-training-quick-start-with-sg-notebook-resnet18-example" title="Permalink to this headline"></a></h3>
-<p>Knowledge Distillation is a training technique that uses a large model, teacher model, to improve the performance of a smaller model, the student model.
-Learn more about SuperGradients knowledge distillation training with our pre-trained BEiT base teacher model and Resnet18 student model on CIFAR10 example notebook on Google Colab for an easy to use tutorial using free GPU hardware</p>
-<table class="tfo-notebook-buttons" align="left">
-<td>
-<a target="_blank" href="https://bit.ly/3HQvbsg"><img src="./assets/SG_img/colab_logo.png" />KD Training in Google Colab</a>
-</td>
-<td>
-<a href="https://minhaskamal.github.io/DownGit/#/home?url=https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/examples/SG_knowledge_distillation_quickstart.ipynb"><img src="./assets/SG_img/download_logo.png" />Download notebook</a>
-</td>
-<td>
-<a target="_blank" href="https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/examples"><img src="./assets/SG_img/GitHub_logo.png" />View source on GitHub</a>
-</td>
-</table>
-</br></br>
-</section>
-</section>
+<span class="c1"># instantiate pretrained resnet18, turning DropPath on with probability 0.5</span>
+<span class="n">droppath_resnet18</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;resnet18&quot;</span><span class="p">,</span> <span class="n">arch_params</span><span class="o">=</span><span class="p">{</span><span class="s2">&quot;droppath_prob&quot;</span><span class="p">:</span> <span class="mf">0.5</span><span class="p">},</span> <
+
+<span class="c1"># instantiate pretrained resnet18, without classifier head. Output will be from the last stage before global pooling</span>
+<span class="n">backbone_resnet18</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;resnet18&quot;</span><span class="p">,</span> <span class="n">arch_params</span><span class="o">=</span><span class="p">{</span><span class="s2">&quot;backbone_mode&quot;</span><span class="p">:</span> <span class="kc">True</span><span class="p">},</span> 
+</pre></div>
+</div>
+</div>
+<div class="section" id="using-phase-callbacks">
+<h2>Using phase callbacks<a class="headerlink" href="#using-phase-callbacks" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">super_gradients</span> <span class="kn">import</span> <span class="n">Trainer</span>
+<span class="kn">from</span> <span class="nn">torch.optim.lr_scheduler</span> <span class="kn">import</span> <span class="n">ReduceLROnPlateau</span>
+<span class="kn">from</span> <span class="nn">super_gradients.training.utils.callbacks</span> <span class="kn">import</span> <span class="n">Phase</span><span class="p">,</span> <span class="n">LRSchedulerCallback</span>
+<span class="kn">from</span> <span class="nn">super_gradients.training.metrics.classification_metrics</span> <span class="kn">import</span> <span class="n">Accuracy</span>
 
 
-<section id="installation-methods">
-<h2>Installation Methods<a class="headerlink" href="#installation-methods" title="Permalink to this headline"></a></h2>
-<section id="prerequisites">
-<h3>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this headline"></a></h3>
+<span class="c1"># define PyTorch train and validation loaders and optimizer</span>
+
+<span class="c1"># define what to be called in the callback</span>
+<span class="n">rop_lr_scheduler</span> <span class="o">=</span> <span class="n">ReduceLROnPlateau</span><span class="p">(</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">mode</span><span class="o">=</span><span class="s2">&quot;max&quot;</span><span class="p">,</span> <span class="n">patience</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span> <span class="n">verbose</span><span class="o">=</span><span class="kc">True</span><span clas
+
+<span class="c1"># define phase callbacks, they will fire as defined in Phase</span>
+<span class="n">phase_callbacks</span> <span class="o">=</span> <span class="p">[</span><span class="n">LRSchedulerCallback</span><span class="p">(</span><span class="n">scheduler</span><span class="o">=</span><span class="n">rop_lr_scheduler</span><span class="p">,</span>
+                                       <span class="n">phase</span><span class="o">=</span><span class="n">Phase</span><span class="o">.</span><span class="n">VALIDATION_EPOCH_END</span><span class="p">,</span>
+                                       <span class="n">metric_name</span><span class="o">=</span><span class="s2">&quot;Accuracy&quot;</span><span class="p">)]</span>
+
+<span class="c1"># create a trainer object; see the declaration for more parameters</span>
+<span class="n">trainer</span> <span class="o">=</span> <span class="n">Trainer</span><span class="p">(</span><span class="s2">&quot;experiment_name&quot;</span><span class="p">)</span>
+
+<span class="c1"># define phase_callbacks as part of the training parameters</span>
+<span class="n">train_params</span> <span class="o">=</span> <span class="p">{</span><span class="s2">&quot;phase_callbacks&quot;</span><span class="p">:</span> <span class="n">phase_callbacks</span><span class="p">}</span>
+</pre></div>
+</div>
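+<p>Conceptually, a phase callback is just an object bound to a training phase; the training loop invokes every callback registered for a phase when that boundary is reached. A toy sketch of the mechanism (illustrative plain Python, not SuperGradients' actual classes):</p>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>from enum import Enum, auto
+
+class Phase(Enum):
+    TRAIN_EPOCH_END = auto()
+    VALIDATION_EPOCH_END = auto()
+
+class RecordMetricCallback:
+    # Toy callback: records a metric every time its phase fires
+    def __init__(self, phase, metric_name):
+        self.phase = phase
+        self.metric_name = metric_name
+        self.history = []
+
+    def __call__(self, context):
+        self.history.append(context[self.metric_name])
+
+def fire(callbacks, phase, context):
+    # Called by the training loop at each phase boundary
+    for cb in callbacks:
+        if cb.phase == phase:
+            cb(context)
+</pre></div>
+</div>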
+</div>
+<div class="section" id="integration-to-weights-and-biases">
+<h2>Integration to Weights and Biases<a class="headerlink" href="#integration-to-weights-and-biases" title="Permalink to this heading"></a></h2>
+<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">super_gradients</span> <span class="kn">import</span> <span class="n">Trainer</span>
+
+<span class="c1"># create a trainer object; see the declaration for more parameters</span>
+<span class="n">trainer</span> <span class="o">=</span> <span class="n">Trainer</span><span class="p">(</span><span class="s2">&quot;experiment_name&quot;</span><span class="p">)</span>
+
+<span class="n">train_params</span> <span class="o">=</span> <span class="p">{</span> <span class="o">...</span> <span class="c1"># training parameters</span>
+                <span class="s2">&quot;sg_logger&quot;</span><span class="p">:</span> <span class="s2">&quot;wandb_sg_logger&quot;</span><span class="p">,</span> <span class="c1"># Weights&amp;Biases Logger, see class WandBSGLogger for details</span>
+                <span class="s2">&quot;sg_logger_params&quot;</span><span class="p">:</span> <span class="c1"># parameters that will be passed to __init__ of the logger</span>
+                  <span class="p">{</span>
+                    <span class="s2">&quot;project_name&quot;</span><span class="p">:</span> <span class="s2">&quot;project_name&quot;</span><span class="p">,</span> <span class="c1"># W&amp;B project name</span>
+                    <span class="s2">&quot;save_checkpoints_remote&quot;</span><span class="p">:</span> <span class="kc">True</span><span class="p">,</span>
+                    <span class="s2">&quot;save_tensorboard_remote&quot;</span><span class="p">:</span> <span class="kc">True</span><span class="p">,</span>
+                    <span class="s2">&quot;save_logs_remote&quot;</span><span class="p">:</span> <span class="kc">True</span>
+                  <span class="p">}</span> 
+               <span class="p">}</span>
+</pre></div>
+</div>
+</div>
+</div>
+<div class="section" id="installation-methods">
+<h1>Installation Methods<a class="headerlink" href="#installation-methods" title="Permalink to this heading"></a></h1>
+<hr class="docutils" />
+<div class="section" id="prerequisites">
+<h2>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this heading"></a></h2>
 <details>
 <summary>General requirements</summary>
 <ul class="simple">
@@ -405,9 +547,9 @@ Learn more about SuperGradients knowledge distillation training with our pre-tra
 <li><p>Nvidia Driver with CUDA &gt;= 11.2 support (≥460.x)</p></li>
 </ul>
 </ul>
 </details>
 </details>
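A quick way to verify the driver prerequisite on a given machine is to look for `nvidia-smi`, which ships with the Nvidia driver (a minimal sketch; the helper name is ours, and presence of the binary is only a cheap proxy for a working driver):

```python
import shutil

def has_nvidia_driver() -> bool:
    """Cheap proxy check: nvidia-smi is installed together with the Nvidia driver."""
    return shutil.which("nvidia-smi") is not None

# Run `nvidia-smi` itself to see the exact driver and CUDA versions it reports.
print("Nvidia driver detected" if has_nvidia_driver() else "No Nvidia driver found")
```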
-</section>
-<section id="quick-installation">
-<h3>Quick Installation<a class="headerlink" href="#quick-installation" title="Permalink to this headline"></a></h3>
+</div>
+<div class="section" id="id7">
+<h2>Quick Installation<a class="headerlink" href="#id7" title="Permalink to this heading"></a></h2>
 <details>
 <summary>Install stable version using PyPi</summary>
 <p>See in <a class="reference external" href="https://pypi.org/project/super-gradients/">PyPi</a></p>
@@ -421,540 +563,136 @@ Learn more about SuperGradients knowledge distillation training with our pre-tra
 <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip install git+https://github.com/Deci-AI/super-gradients.git@stable
 </pre></div>
 </div>
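After installing via either route, a small post-install check confirms the package is importable (a sketch using the stdlib `importlib.metadata`; the helper name is ours and nothing here is SuperGradients-specific):

```python
from importlib import metadata

def installed_version(pkg: str):
    """Return the installed version of pkg, or None if it is not installed."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("super-gradients") or "super-gradients is not installed")
```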
-</details>
-</section>
-</section>
-<section id="computer-vision-models-pretrained-checkpoints">
-
-
-
-<h2>Computer Vision Models - Pretrained Checkpoints<a class="headerlink" href="#computer-vision-models-pretrained-checkpoints" title="Permalink to this headline"></a></h2>
-<section id="pretrained-classification-pytorch-checkpoints">
-<h3>Pretrained Classification PyTorch Checkpoints<a class="headerlink" href="#pretrained-classification-pytorch-checkpoints" title="Permalink to this headline"></a></h3>
-<table class="colwidths-auto docutils align-default">
-<thead>
-<tr class="row-odd"><th class="head"><p>Model</p></th>
-<th class="head"><p>Dataset</p></th>
-<th class="head"><p>Resolution</p></th>
-<th class="head"><p>Top-1</p></th>
-<th class="head"><p>Top-5</p></th>
-<th class="head"><p>Latency (HW)*<sub>T4</sub></p></th>
-<th class="head"><p>Latency (Production)**<sub>T4</sub></p></th>
-<th class="head"><p>Latency (HW)*<sub>Jetson Xavier NX</sub></p></th>
-<th class="head"><p>Latency (Production)**<sub>Jetson Xavier NX</sub></p></th>
-<th class="text-center head"><p>Latency <sub>Cascade Lake</sub></p></th>
-</tr>
-</thead>
-<tbody>
-<tr class="row-even"><td><p>ViT base</p></td>
-<td><p>ImageNet21K</p></td>
-<td><p>224x224</p></td>
-<td><p>84.15</p></td>
-<td><p>-</p></td>
-<td><p><strong>4.46ms</strong></p></td>
-<td><p><strong>4.60ms</strong></p></td>
-<td><p><strong>-</strong> *</p></td>
-<td><p><strong>-</strong></p></td>
-<td class="text-center"><p><strong>57.22ms</strong></p></td>
-</tr>
-<tr class="row-odd"><td><p>ViT large</p></td>
-<td><p>ImageNet21K</p></td>
-<td><p>224x224</p></td>
-<td><p>85.64</p></td>
-<td><p>-</p></td>
-<td><p><strong>12.81ms</strong></p></td>
-<td><p><strong>13.19ms</strong></p></td>
-<td><p><strong>-</strong> *</p></td>
-<td><p><strong>-</strong></p></td>
-<td class="text-center"><p><strong>187.22ms</strong></p></td>
-</tr>
-<tr class="row-even"><td><p>BEiT</p></td>
-<td><p>ImageNet21K</p></td>
-<td><p>224x224</p></td>
-<td><p>-</p></td>
-<td><p>-</p></td>
-<td><p><strong>-ms</strong></p></td>
-<td><p><strong>-ms</strong></p></td>
-<td><p><strong>-</strong> *</p></td>
-<td><p><strong>-</strong></p></td>
-<td class="text-center"><p><strong>-ms</strong></p></td>
-</tr>
-<tr class="row-odd"><td><p>EfficientNet B0</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>77.62</p></td>
-<td><p>93.49</p></td>
-<td><p><strong>0.93ms</strong></p></td>
-<td><p><strong>1.38ms</strong></p></td>
-<td><p><strong>-</strong> *</p></td>
-<td><p><strong>-</strong></p></td>
-<td class="text-center"><p><strong>3.44ms</strong></p></td>
-</tr>
-<tr class="row-even"><td><p>RegNet Y200</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>70.88</p></td>
-<td><p>89.35</p></td>
-<td><p><strong>0.63ms</strong></p></td>
-<td><p><strong>1.08ms</strong></p></td>
-<td><p><strong>2.16ms</strong></p></td>
-<td><p><strong>2.47ms</strong></p></td>
-<td class="text-center"><p><strong>2.06ms</strong></p></td>
-</tr>
-<tr class="row-odd"><td><p>RegNet Y400</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>74.74</p></td>
-<td><p>91.46</p></td>
-<td><p><strong>0.80ms</strong></p></td>
-<td><p><strong>1.25ms</strong></p></td>
-<td><p><strong>2.62ms</strong></p></td>
-<td><p><strong>2.91ms</strong></p></td>
-<td class="text-center"><p><strong>2.87ms</strong></p></td>
-</tr>
-<tr class="row-even"><td><p>RegNet Y600</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>76.18</p></td>
-<td><p>92.34</p></td>
-<td><p><strong>0.77ms</strong></p></td>
-<td><p><strong>1.22ms</strong></p></td>
-<td><p><strong>2.64ms</strong></p></td>
-<td><p><strong>2.93ms</strong></p></td>
-<td class="text-center"><p><strong>2.39ms</strong></p></td>
-</tr>
-<tr class="row-odd"><td><p>RegNet Y800</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>77.07</p></td>
-<td><p>93.26</p></td>
-<td><p><strong>0.74ms</strong></p></td>
-<td><p><strong>1.19ms</strong></p></td>
-<td><p><strong>2.77ms</strong></p></td>
-<td><p><strong>3.04ms</strong></p></td>
-<td class="text-center"><p><strong>2.81ms</strong></p></td>
-</tr>
-<tr class="row-even"><td><p>ResNet 18</p></td>
-<td><p>ImageNet</p></td>
-<td><p>224x224</p></td>
-<td><p>70.6</p></td>
-<td><p>89.64</p></td>
-<td><p><strong>0.52ms</strong></p></td>