#315 updating docs using script + change welcome.html manually

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:update-docs
Only showing up to 1000 lines per file, please use a local Git client to see the full diff.
Some lines were truncated since they exceed the maximum allowed length of 500, please use a local Git client to see the full diff.
@@ -38,6 +38,7 @@
         </div><div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="Navigation menu">
               <p class="caption"><span class="caption-text">Welcome To SuperGradients</span></p>
 <ul>
+<li class="toctree-l1"><a class="reference internal" href="welcome.html">Fill out our 4-question quick survey! We will raffle free SuperGradients swag among those who participate -&gt; Fill Survey</a></li>
 <li class="toctree-l1"><a class="reference internal" href="welcome.html#supergradients">SuperGradients</a></li>
 </ul>
 <p class="caption"><span class="caption-text">Technical Documentation</span></p>
@@ -167,9 +168,16 @@
 
 </dd></dl>
 
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.ContextSgMethods">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">ContextSgMethods</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">methods</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gr
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
+<p>Class for delegating SgModel’s methods, so that only the relevant ones (“phase wise”) are accessible.</p>
+</dd></dl>
+
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.PhaseContext">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">PhaseContext</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">epoch</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><spa
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">PhaseContext</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">epoch</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><spa
 <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
 <p>Represents the input for phase callbacks, and is constantly updated after callback calls.</p>
 <dl class="py method">
@@ -187,7 +195,7 @@
 
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.ModelConversionCheckCallback">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">ModelConversionCheckCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_meta_data</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs<
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">ModelConversionCheckCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_meta_data</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">deci_lab_client.models.model_metadat
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseCallback" title="super_gradients.training.utils.callbacks.PhaseCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.PhaseCallback</span></code></a></p>
 <p>Pre-training callback that verifies model conversion to onnx given specified conversion parameters.</p>
 <p>The model is converted, then inference is applied with onnx runtime.</p>
@@ -207,7 +215,7 @@
 
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.DeciLabUploadCallback">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">DeciLabUploadCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">email</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">model_meta_data</span></span></em>, <em class="sig-param"><span class="n">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">DeciLabUploadCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_meta_data</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">optimization_request_form</span></span></em>, <em class="sig-par
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseCallback" title="super_gradients.training.utils.callbacks.PhaseCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.PhaseCallback</span></code></a></p>
 <p>Post-training callback for uploading and optimizing a model.</p>
 <dl class="py attribute">
@@ -245,6 +253,40 @@
 <span class="sig-name descname"><span class="pre">The</span> <span class="pre">following</span> <span class="pre">parameters</span> <span class="pre">may</span> <span class="pre">be</span> <span class="pre">passed</span> <span class="pre">as</span> <span class="pre">kwargs</span> <span class="pre">in</span> <span class="pre">order</span> <span class="pre">to</span> <span class="pre">control</span> <span class="pre">the</span> <span class="pre">conversion</span> <span class="pre">to</span> <span
 <dd></dd></dl>
 
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.DeciLabUploadCallback.log_optimization_failed">
+<em class="property"><span class="pre">static</span> </em><span class="sig-name descname"><span class="pre">log_optimization_failed</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#DeciLabUploadCallback.log_optimization_failed"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.DeciLabUpl
+<dd></dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.DeciLabUploadCallback.upload_model">
+<span class="sig-name descname"><span class="pre">upload_model</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#DeciLabUploadCallback.upload_model"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.DeciLabUpl
+<dd><p>This function will upload the trained model to the Deci Lab</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><p><strong>model</strong> – The resulting model from the training process</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.DeciLabUploadCallback.get_optimization_status">
+<span class="sig-name descname"><span class="pre">get_optimization_status</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">optimized_model_name</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">str</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#DeciLabUploadCallback.get_optimization_status"><span class="v
+<dd><p>Fetches the optimized version of the trained model and checks its benchmark status.
The status is checked against the server every 30 seconds; the process either times out
after 30 minutes or logs the successful optimization, whichever happens first.
+:param optimized_model_name: Optimized model name
+:type optimized_model_name: str</p>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>whether or not the optimized model has been benchmarked</p>
+</dd>
+<dt class="field-even">Return type</dt>
+<dd class="field-even"><p>bool</p>
+</dd>
+</dl>
+</dd></dl>
+
 </dd></dl>
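The polling behaviour documented for <code>get_optimization_status</code> (probe every 30 seconds, time out after 30 minutes) can be sketched as a generic loop. Here <code>check_fn</code> is a hypothetical stand-in for the Deci Lab status query, and the injectable <code>sleep</code>/<code>clock</code> parameters exist only to make the sketch testable; none of this is the library's actual implementation.

```python
import time

def poll_until_benchmarked(check_fn, interval_s=30, timeout_s=30 * 60,
                           sleep=time.sleep, clock=time.monotonic):
    # Probe every `interval_s` seconds until `check_fn` reports success
    # or `timeout_s` elapses, mirroring the documented 30 s / 30 min rule.
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check_fn():
            return True   # optimized model benchmarked before the deadline
        sleep(interval_s)
    return False          # timed out without a successful benchmark
```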
 
 <dl class="py class">
@@ -252,6 +294,21 @@
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">LRCallbackBase</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">phase</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">initial_lr</span></span></em>, <em class="sig-param"><span class="n"><span class=
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseCallback" title="super_gradients.training.utils.callbacks.PhaseCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.PhaseCallback</span></code></a></p>
 <p>Base class for hard coded learning rate scheduling regimes, implemented as callbacks.</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.LRCallbackBase.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseContext" title="super_gradients.training.utils.callbacks.PhaseContext"><span class="pre">super_gradients.training.utils.callbacks.PhaseContext</span></a>
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.LRCallbackBase.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseContext" title="super_gradients.training.utils.callbacks.PhaseContext"><span class="pre">super_gradients.training.utils.callbacks.PhaseContext</span></a></span
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
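The two methods above define a simple contract: <code>is_lr_scheduling_enabled</code> is a predicate and <code>perform_scheduling</code> does the actual update. A minimal sketch of that contract follows; the toy subclass is hypothetical (not part of SuperGradients), a plain dict stands in for <code>PhaseContext</code>, and the real <code>LRCallbackBase</code> also tracks phases and exposes <code>update_lr()</code>.

```python
class LRCallbackSketch:
    """Sketch of the predicate/perform contract described in the docs."""

    def __init__(self, initial_lr):
        self.initial_lr = initial_lr
        self.lr = initial_lr

    def is_lr_scheduling_enabled(self, context):
        raise NotImplementedError  # subclasses decide when to schedule

    def perform_scheduling(self, context):
        raise NotImplementedError  # subclasses compute the new lr

    def __call__(self, context):
        # Only schedule when the predicate says the current phase allows it.
        if self.is_lr_scheduling_enabled(context):
            self.perform_scheduling(context)


class HalveEachEpoch(LRCallbackSketch):
    """Toy subclass for illustration only (hypothetical)."""

    def is_lr_scheduling_enabled(self, context):
        return context.get("epoch", 0) > 0

    def perform_scheduling(self, context):
        self.lr = self.initial_lr * 0.5 ** context["epoch"]
```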
+
 <dl class="py method">
 <dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.LRCallbackBase.update_lr">
 <span class="sig-name descname"><span class="pre">update_lr</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">optimizer</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">epoch</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">batch_idx</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</sp
@@ -268,12 +325,21 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 <blockquote>
 <div><p>initial_lr/(1+warmup_epochs).</p>
 </div></blockquote>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.WarmupLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#WarmupLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.W
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.WarmupLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#WarmupLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
 </dd></dl>
 
-<dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.YoloV5WarmupLRCallback">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">YoloWarmupLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super
-<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
 </dd></dl>
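The warmup rule described above (LR climbing in even steps from <code>warmup_initial_lr</code> to <code>initial_lr</code>, with the documented default of <code>initial_lr/(1+warmup_epochs)</code>) can be sketched as a pure function. The exact placement of the steps is an assumption, not the library's code.

```python
def warmup_lr(epoch, initial_lr, warmup_epochs, warmup_initial_lr=None):
    # Documented default start value when none is given explicitly.
    if warmup_initial_lr is None:
        warmup_initial_lr = initial_lr / (1 + warmup_epochs)
    if epoch >= warmup_epochs:
        return initial_lr                     # warmup finished
    step = (initial_lr - warmup_initial_lr) / warmup_epochs
    return warmup_initial_lr + step * epoch   # even steps toward initial_lr
```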
 
 <dl class="py class">
@@ -281,6 +347,43 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">StepLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">lr_updates</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">lr_decay_factor</span></span></em>, <em class="sig-param"><span class="n"><s
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
 <p>Hard coded step learning rate scheduling (i.e. at specific milestones).</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.StepLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#StepLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.Ste
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.StepLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#StepLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.c
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
+</dd></dl>
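Milestone-based step decay as described above can be sketched as follows. The rule (multiply by <code>lr_decay_factor</code> once for every milestone in <code>lr_updates</code> that has been passed) is an assumption inferred from the signature, not the library's exact code.

```python
def step_lr(epoch, initial_lr, lr_updates, lr_decay_factor):
    # Apply one decay for every milestone that has already been reached.
    n_decays = sum(1 for milestone in lr_updates if epoch >= milestone)
    return initial_lr * lr_decay_factor ** n_decays
```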
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.ExponentialLRCallback">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">ExponentialLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">lr_decay_factor</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em>, <em class="sig-pa
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
+<p>Exponential decay learning rate scheduling. Decays the learning rate by <cite>lr_decay_factor</cite> every epoch.</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.ExponentialLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#ExponentialLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callba
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.ExponentialLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#ExponentialLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
 </dd></dl>
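The exponential rule above ("decays the learning rate by <code>lr_decay_factor</code> every epoch") corresponds to a simple closed form; this is a sketch of that arithmetic, not the library's implementation.

```python
def exponential_lr(epoch, initial_lr, lr_decay_factor):
    # lr after `epoch` whole epochs of per-epoch multiplicative decay.
    return initial_lr * lr_decay_factor ** epoch
```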
 
 <dl class="py class">
@@ -288,6 +391,21 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">PolyLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">max_epochs</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
 <p>Hard coded polynomial decay learning rate scheduling (i.e. at specific milestones).</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.PolyLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#PolyLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.Pol
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.PolyLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#PolyLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.c
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
 </dd></dl>
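Polynomial decay above can be sketched with the common form <code>initial_lr * (1 - epoch/max_epochs)**power</code>. The exponent value (0.9 here, a frequent default in segmentation work) is an assumption, since the signature above only exposes <code>max_epochs</code> and kwargs.

```python
def poly_lr(epoch, max_epochs, initial_lr, power=0.9):
    # Decay smoothly from initial_lr at epoch 0 down to 0 at max_epochs.
    # The exponent `power` is an assumed value, not taken from the docs.
    return initial_lr * (1 - epoch / max_epochs) ** power
```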
 
 <dl class="py class">
@@ -295,6 +413,21 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">CosineLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">max_epochs</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">cosine_final_lr_ratio</span></span></em>, <em class="sig-param"><span clas
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
 <p>Hard coded step Cosine anealing learning rate scheduling.</p>
 <p>Hard coded step Cosine anealing learning rate scheduling.</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.CosineLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#CosineLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks.C
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.CosineLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#CosineLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
 </dd></dl>
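The cosine schedule that `perform_scheduling` applies can be sketched in a few lines. This is a minimal standalone sketch of standard cosine annealing, not the exact SuperGradients implementation; `initial_lr` is an assumed argument, and `final_lr_ratio` mirrors the documented `cosine_final_lr_ratio` parameter:

```python
import math

def cosine_lr(initial_lr: float, epoch: int, max_epochs: int, final_lr_ratio: float) -> float:
    """Anneal the LR from initial_lr down to initial_lr * final_lr_ratio
    over max_epochs, following a cosine curve (illustrative sketch)."""
    final_lr = initial_lr * final_lr_ratio
    # Cosine factor goes 1 -> 0 as epoch goes 0 -> max_epochs
    cosine_term = 0.5 * (1 + math.cos(math.pi * epoch / max_epochs))
    return final_lr + (initial_lr - final_lr) * cosine_term

print(cosine_lr(0.1, 0, 100, 0.01))    # starts at initial_lr
print(cosine_lr(0.1, 100, 100, 0.01))  # bottoms out near initial_lr * final_lr_ratio
```

The `is_lr_scheduling_enabled` predicate documented above would gate calls like these so that annealing only runs in the post-warmup epochs.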
 
 
 <dl class="py class">
@@ -302,6 +435,21 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">FunctionLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">max_epochs</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">lr_schedule_function</span></span></em>, <em class="sig-param"><span cla
 <dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.LRCallbackBase" title="super_gradients.training.utils.callbacks.LRCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.LRCallbackBase</span></code></a></p>
 <p>Hard coded rate scheduling for a user-defined lr scheduling function.</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.FunctionLRCallback.is_lr_scheduling_enabled">
+<span class="sig-name descname"><span class="pre">is_lr_scheduling_enabled</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#FunctionLRCallback.is_lr_scheduling_enabled"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.uti
+<dd><p>Predicate that controls whether to perform lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.
+&#64;return: bool, whether to apply lr scheduling or not.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.FunctionLRCallback.perform_scheduling">
+<span class="sig-name descname"><span class="pre">perform_scheduling</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#FunctionLRCallback.perform_scheduling"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.callbacks
+<dd><p>Performs lr scheduling based on values in context.</p>
+<p>&#64;param context: PhaseContext: current phase’s context.</p>
+</dd></dl>
+
 </dd></dl>
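`FunctionLRCallback` defers the schedule to a user-supplied `lr_schedule_function`. Its exact expected signature is truncated in this excerpt, so the arguments below are illustrative only:

```python
def my_lr_schedule(initial_lr: float, epoch: int, max_epochs: int) -> float:
    """Hypothetical user-defined schedule: linear decay from initial_lr to 0."""
    return initial_lr * (1.0 - epoch / max_epochs)

# Driving the schedule once per epoch, the way an LR callback would:
lrs = [my_lr_schedule(0.1, epoch, 10) for epoch in range(10)]
print(lrs[0])  # 0.1 at epoch 0
```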
 
 
 <dl class="py exception">
@@ -412,6 +560,57 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 
 
 </dd></dl>
 
 
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">TrainingStageSwitchCallbackBase</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">next_stage_start_epoch</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">int</span></span></em><spa
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseCallback" title="super_gradients.training.utils.callbacks.PhaseCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.PhaseCallback</span></code></a></p>
+<p>TrainingStageSwitchCallback</p>
+<p>A phase callback that is called at a specific epoch (epoch start) to support multi-stage training.
+It does so by manipulating the objects inside the context.</p>
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase.next_stage_start_epoch">
+<span class="sig-name descname"><span class="pre">next_stage_start_epoch</span></span><a class="headerlink" href="#super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase.next_stage_start_epoch" title="Permalink to this definition"></a></dt>
+<dd><p>int, the epoch idx to apply the stage change.</p>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase.apply_stage_change">
+<span class="sig-name descname"><span class="pre">apply_stage_change</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseContext" title="super_gradients.training.utils.callbacks.PhaseContext"><span class="pre">super_gradients.training.utils.callbacks.PhaseContext</span></a></span
+<dd><dl class="simple">
+<dt>This method is called when the callback is fired on the next_stage_start_epoch,</dt><dd><p>and holds the stage change logic that should be applied to the context’s objects.</p>
+</dd>
+</dl>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><p><strong>context</strong> – PhaseContext, context of current phase</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.YoloXTrainingStageSwitchCallback">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">YoloXTrainingStageSwitchCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">next_stage_start_epoch</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">int</span></span> <span c
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase" title="super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.TrainingStageSwitchCallbackBase</span></code></a></p>
+<p>Training stage switch for YoloX training.
+Disables mosaic, and manipulates YoloX loss to use L1.</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.YoloXTrainingStageSwitchCallback.apply_stage_change">
+<span class="sig-name descname"><span class="pre">apply_stage_change</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">context</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseContext" title="super_gradients.training.utils.callbacks.PhaseContext"><span class="pre">super_gradients.training.utils.callbacks.PhaseContext</span></a></span
+<dd><dl class="simple">
+<dt>This method is called when the callback is fired on the next_stage_start_epoch,</dt><dd><p>and holds the stage change logic that should be applied to the context’s objects.</p>
+</dd>
+</dl>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><p><strong>context</strong> – PhaseContext, context of current phase</p>
+</dd>
+</dl>
+</dd></dl>
+
+</dd></dl>
+
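The contract described above (fire once at `next_stage_start_epoch`, mutate the context in `apply_stage_change`) can be re-created with stand-in classes. Everything here is a sketch: `PhaseContext`, its `mosaic_enabled` attribute, and the `DisableMosaicCallback` subclass are hypothetical stand-ins, not the real SuperGradients API:

```python
class PhaseContext:
    """Stand-in for super_gradients' PhaseContext (illustrative only)."""
    def __init__(self, epoch: int):
        self.epoch = epoch
        self.mosaic_enabled = True  # hypothetical attribute

class TrainingStageSwitchCallbackBase:
    """Minimal re-creation of the documented behaviour: apply the stage
    change once, when the current epoch reaches next_stage_start_epoch."""
    def __init__(self, next_stage_start_epoch: int):
        self.next_stage_start_epoch = next_stage_start_epoch

    def __call__(self, context: PhaseContext) -> None:
        if context.epoch == self.next_stage_start_epoch:
            self.apply_stage_change(context)

    def apply_stage_change(self, context: PhaseContext) -> None:
        raise NotImplementedError

class DisableMosaicCallback(TrainingStageSwitchCallbackBase):
    """Hypothetical subclass in the spirit of YoloXTrainingStageSwitchCallback."""
    def apply_stage_change(self, context: PhaseContext) -> None:
        context.mosaic_enabled = False

cb = DisableMosaicCallback(next_stage_start_epoch=285)
ctx = PhaseContext(epoch=285)
cb(ctx)
print(ctx.mosaic_enabled)  # False
```

Subclasses only override `apply_stage_change`; the base class owns the "when", the subclass owns the "what".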
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.CallbackHandler">
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">CallbackHandler</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">callbacks</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#CallbackHan
@@ -425,6 +624,17 @@ LR climbs from warmup_initial_lr with even steps to initial lr. When warmup_init
 
 
 </dd></dl>
 
 
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.callbacks.TestLRCallback">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.callbacks.</span></span><span class="sig-name descname"><span class="pre">TestLRCallback</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">lr_placeholder</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/callbacks.html#TestLRC
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.callbacks.PhaseCallback" title="super_gradients.training.utils.callbacks.PhaseCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.callbacks.PhaseCallback</span></code></a></p>
+<dl class="simple">
+<dt>Phase callback that collects the learning rates in lr_placeholder at the end of each epoch (used for testing). In</dt><dd><p>the case of multiple parameter groups (i.e multiple learning rates) the learning rate is collected from the first
+one. The phase is VALIDATION_EPOCH_END to ensure all lr updates have been performed before calling this callback.</p>
+</dd>
+</dl>
+</dd></dl>
+
 </section>
 <section id="module-super_gradients.training.utils.checkpoint_utils">
 <span id="super-gradients-training-utils-checkpoint-utils-module"></span><h2>super_gradients.training.utils.checkpoint_utils module<a class="headerlink" href="#module-super_gradients.training.utils.checkpoint_utils" title="Permalink to this headline"></a></h2>
@@ -472,7 +682,7 @@ YOUR_REPO_ROOT/super_gradients/checkpoints/experiment_name/ckpt_name if such fil
 
 
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.checkpoint_utils.adapt_state_dict_to_fit_model_layer_names">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.checkpoint_utils.</span></span><span class="sig-name descname"><span class="pre">adapt_state_dict_to_fit_model_layer_names</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_state_dict</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">dict</span></span></em>, <em class="sig-param"><span class="n"><span cla
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.checkpoint_utils.</span></span><span class="sig-name descname"><span class="pre">adapt_state_dict_to_fit_model_layer_names</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_state_dict</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">dict</span></span></em>, <em class="sig-param"><span class="n"><span cla
 <dd><p>Given a model state dict and source checkpoints, the method tries to correct the keys in the model_state_dict to fit
 the ckpt in order to properly load the weights into the model. If unsuccessful - returns None</p>
 <blockquote>
@@ -484,8 +694,16 @@ the ckpt in order to properly load the weights into the model. If unsuccessful -
 <dd class="field-even"><p>checkpoint dict</p>
 </dd>
 </dl>
-<p>:exclude                  optional list for excluded layers
-:return: renamed checkpoint dict (if possible)</p>
+<p>:param exclude:                 optional list for excluded layers
+:param solver:                  callable with signature (ckpt_key, ckpt_val, model_key, model_val)</p>
+<blockquote>
+<div><p>that returns a desired weight for ckpt_val.</p>
+</div></blockquote>
+<dl class="field-list simple">
+<dt class="field-odd">return</dt>
+<dd class="field-odd"><p>renamed checkpoint dict (if possible)</p>
+</dd>
+</dl>
 </div></blockquote>
 </dd></dl>
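The newly documented `solver` parameter is "a callable with signature (ckpt_key, ckpt_val, model_key, model_val) that returns a desired weight for ckpt_val". The sketch below shows that idea in a standalone re-keying helper; it is illustrative only (matching by order, plain lists instead of tensors), not the actual super_gradients implementation:

```python
def adapt_state_dict(model_state_dict: dict, ckpt_state_dict: dict,
                     exclude=(), solver=None):
    """Illustrative sketch: re-key a checkpoint so its entries line up
    (by order) with the model's layer names; None if no 1:1 match."""
    model_items = [(k, v) for k, v in model_state_dict.items()
                   if not any(e in k for e in exclude)]
    ckpt_items = list(ckpt_state_dict.items())
    if len(model_items) != len(ckpt_items):
        return None  # cannot match entries one-to-one
    adapted = {}
    for (m_key, m_val), (c_key, c_val) in zip(model_items, ckpt_items):
        # The solver may transform the checkpoint value chosen for this slot.
        adapted[m_key] = solver(c_key, c_val, m_key, m_val) if solver else c_val
    return adapted

# Hypothetical keys: a checkpoint saved with a "module." prefix.
model_sd = {"backbone.conv1.weight": [0.0], "head.fc.weight": [0.0]}
ckpt_sd = {"module.conv1.weight": [1.0], "module.fc.weight": [2.0]}
print(adapt_state_dict(model_sd, ckpt_sd))
```

Passing a `solver` lets you adjust each weight (e.g. transpose or slice it) while the renaming happens.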
 
 
@@ -534,38 +752,65 @@ and enhances the exception_msg if loading the checkpoint_dict via the conversion
 </section>
 <section id="module-super_gradients.training.utils.detection_utils">
 <span id="super-gradients-training-utils-detection-utils-module"></span><h2>super_gradients.training.utils.detection_utils module<a class="headerlink" href="#module-super_gradients.training.utils.detection_utils" title="Permalink to this headline"></a></h2>
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.base_detection_collate_fn">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">base_detection_collate_fn</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">batch</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#base_detection_collate_fn"><span class="viewcode-l
-<dd><p>Batch Processing helper function for detection training/testing.
-stacks the lists of images and targets into tensors and adds the image index to each target (so the targets could
-later be associated to the correct images)</p>
-<blockquote>
-<div><dl class="field-list simple">
-<dt class="field-odd">param batch</dt>
-<dd class="field-odd"><p>Input batch from the Dataset __get_item__ method</p>
-</dd>
-<dt class="field-even">return</dt>
-<dd class="field-even"><p>batch with the transformed values</p>
-</dd>
-</dl>
-</div></blockquote>
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">DetectionTargetsFormat</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">value</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.h
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">enum.Enum</span></code></p>
+<p>Enum class for the different detection output formats</p>
+<p>When NORMALIZED is not specified, the format refers to unnormalized image coordinates (of the bboxes).</p>
+<p>For example:
+LABEL_NORMALIZED_XYXY means [class_idx,x1,y1,x2,y2]</p>
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_XYXY">
+<span class="sig-name descname"><span class="pre">LABEL_XYXY</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'LABEL_XYXY'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_XYXY" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.XYXY_LABEL">
+<span class="sig-name descname"><span class="pre">XYXY_LABEL</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'XYXY_LABEL'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.XYXY_LABEL" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_NORMALIZED_XYXY">
+<span class="sig-name descname"><span class="pre">LABEL_NORMALIZED_XYXY</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'LABEL_NORMALIZED_XYXY'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_NORMALIZED_XYXY" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.NORMALIZED_XYXY_LABEL">
+<span class="sig-name descname"><span class="pre">NORMALIZED_XYXY_LABEL</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'NORMALIZED_XYXY_LABEL'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.NORMALIZED_XYXY_LABEL" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_CXCYWH">
+<span class="sig-name descname"><span class="pre">LABEL_CXCYWH</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'LABEL_CXCYWH'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_CXCYWH" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.CXCYWH_LABEL">
+<span class="sig-name descname"><span class="pre">CXCYWH_LABEL</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'CXCYWH_LABEL'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.CXCYWH_LABEL" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_NORMALIZED_CXCYWH">
+<span class="sig-name descname"><span class="pre">LABEL_NORMALIZED_CXCYWH</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'LABEL_NORMALIZED_CXCYWH'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.LABEL_NORMALIZED_CXCYWH" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionTargetsFormat.NORMALIZED_CXCYWH_LABEL">
+<span class="sig-name descname"><span class="pre">NORMALIZED_CXCYWH_LABEL</span></span><em class="property"> <span class="pre">=</span> <span class="pre">'NORMALIZED_CXCYWH_LABEL'</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat.NORMALIZED_CXCYWH_LABEL" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
 </dd></dl>

 <dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.convert_xyxy_bbox_to_xywh">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">convert_xyxy_bbox_to_xywh</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">input_bbox</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#convert_xyxy_bbox_to_xywh"><span class="viewc
-<dd><dl class="simple">
-<dt>convert_xyxy_bbox_to_xywh - Converts bounding box format from [x1, y1, x2, y2] to [x, y, w, h]</dt><dd><dl class="field-list simple">
-<dt class="field-odd">param input_bbox</dt>
-<dd class="field-odd"><p>input bbox</p>
-</dd>
-<dt class="field-even">return</dt>
-<dd class="field-even"><p>Converted bbox</p>
-</dd>
-</dl>
-</dd>
-</dl>
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.get_cls_posx_in_target">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">get_cls_posx_in_target</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">target_format</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="#super_gradients.training.utils.detection_utils.DetectionTargetsFormat" title
+<dd><p>Get the position of the class id in a target of a given format
+:param target_format:   Representation of the target (ex: LABEL_XYXY)
+:return:                Position of the class id in a bbox</p>
+<blockquote>
+<div><p>ex: 0 if bbox of format label_xyxy | -1 if bbox of format xyxy_label</p>
+</div></blockquote>
 </dd></dl>
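The documented behaviour of `get_cls_posx_in_target` ("0 if bbox of format label_xyxy | -1 if bbox of format xyxy_label") follows directly from the `DetectionTargetsFormat` names. A sketch over a subset of the enum (the real function may differ in detail):

```python
from enum import Enum

class DetectionTargetsFormat(Enum):
    """Subset of the documented enum; values mirror the docs above."""
    LABEL_XYXY = "LABEL_XYXY"
    XYXY_LABEL = "XYXY_LABEL"
    LABEL_NORMALIZED_XYXY = "LABEL_NORMALIZED_XYXY"
    NORMALIZED_CXCYWH_LABEL = "NORMALIZED_CXCYWH_LABEL"

def get_cls_posx_in_target(target_format: DetectionTargetsFormat) -> int:
    """Sketch: 0 when the format name starts with LABEL (class id first),
    -1 when it ends with LABEL (class id last)."""
    name = target_format.value
    if name.startswith("LABEL"):
        return 0
    if name.endswith("LABEL"):
        return -1
    raise ValueError(f"Format {name} does not contain a LABEL position")

print(get_cls_posx_in_target(DetectionTargetsFormat.LABEL_XYXY))  # 0
print(get_cls_posx_in_target(DetectionTargetsFormat.XYXY_LABEL))  # -1
```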
 
 
 <dl class="py function">
@@ -585,23 +830,6 @@ boxes of a batch of images)</p>
 </dl>
 </dd></dl>

-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.calculate_wh_iou">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">calculate_wh_iou</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">box1</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">box2</span></span></em><span class="sig-paren">)</span> &#x2192; <span class="pre">float</span><a class="reference internal" href="
-<dd><dl class="simple">
-<dt>calculate_wh_iou - Gets the Intersection over Union of the w,h values of the bboxes</dt><dd><dl class="field-list simple">
-<dt class="field-odd">param box1</dt>
-<dd class="field-odd"><p></p></dd>
-<dt class="field-even">param box2</dt>
-<dd class="field-even"><p></p></dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p>IOU</p>
-</dd>
-</dl>
-</dd>
-</dl>
-</dd></dl>
-
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.calculate_bbox_iou_matrix">
 <span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">calculate_bbox_iou_matrix</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">box1</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">box2</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">x1y1x2y2</span></span><span class="o"><sp
@@ -624,28 +852,6 @@ boxes of a batch of images)</p>
 </dl>
 </dd></dl>
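The IoU helpers in this module (`calculate_bbox_iou_matrix`, `calc_bbox_iou_matrix`) all reduce to the same pairwise computation. A plain-Python sketch for one pair of `[x1, y1, x2, y2]` boxes, assuming well-formed boxes with positive area (the library versions operate on batched torch tensors):

```python
def bbox_iou(box1, box2):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    # Intersection rectangle (empty when the boxes do not overlap)
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

print(bbox_iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, about 0.143
```

The matrix variants evaluate this for every (prediction, target) pair at once via broadcasting.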
 
 
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.calculate_bbox_iou_elementwise">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">calculate_bbox_iou_elementwise</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">box1</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">box2</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">x1y1x2y2</span></span><span class="o
-<dd><dl class="simple">
-<dt>calculate elementwise iou of two bbox tensors</dt><dd><dl class="field-list simple">
-<dt class="field-odd">param box1</dt>
-<dd class="field-odd"><p>a 2D tensor of boxes (shape N x 4)</p>
-</dd>
-<dt class="field-even">param box2</dt>
-<dd class="field-even"><p>a 2D tensor of boxes (shape N x 4)</p>
-</dd>
-<dt class="field-odd">param x1y1x2y2</dt>
-<dd class="field-odd"><p>boxes format is x1y1x2y2 (True) or xywh where xy is the center (False)</p>
-</dd>
-<dt class="field-even">return</dt>
-<dd class="field-even"><p>a 1D iou tensor (shape N)</p>
-</dd>
-</dl>
-</dd>
-</dl>
-</dd></dl>
-
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.calc_bbox_iou_matrix">
 <span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">calc_bbox_iou_matrix</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">pred</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_module
@@ -661,83 +867,11 @@ boxes of a batch of images)</p>
 </dl>
 </dd></dl>

-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.build_detection_targets">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">build_detection_targets</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">detection_net</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.nn.modules.module.Module</span></span></em>, <em class="sig-param"><span class="n"><span
-<dd><dl>
-<dt>build_detection_targets - Builds the outputs of the Detection NN</dt><dd><blockquote>
-<div><p>This function filters all of the targets that don’t have a sufficient iou coverage
-of the Model’s pre-trained k-means anchors
-The iou_threshold is a parameter of the NN Model</p>
-</div></blockquote>
-<dl class="field-list simple">
-<dt class="field-odd">param detection_net</dt>
-<dd class="field-odd"><p>The nn.Module of the Detection Algorithm</p>
-</dd>
-<dt class="field-even">param targets</dt>
-<dd class="field-even"><p>targets (labels)</p>
-</dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p></p></dd>
-</dl>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.yolo_v3_non_max_suppression">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">yolo_v3_non_max_suppression</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">prediction</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">conf_thres</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="
-<dd><dl>
-<dt>non_max_suppression - Removes detections with lower object confidence score than ‘conf_thres’</dt><dd><blockquote>
-<div><p>Non-Maximum Suppression to further filter detections.</p>
-</div></blockquote>
-<dl class="field-list simple">
-<dt class="field-odd">param prediction</dt>
-<dd class="field-odd"><p>the raw prediction as produced by the yolo_v3 network</p>
-</dd>
-<dt class="field-even">param conf_thres</dt>
-<dd class="field-even"><p>confidence threshold - only prediction with confidence score higher than the threshold
-will be considered</p>
-</dd>
-<dt class="field-odd">param nms_thres</dt>
-<dd class="field-odd"><p>IoU threshold for the nms algorithm</p>
-</dd>
-<dt class="field-even">param device</dt>
-<dd class="field-even"><p>the device to move all output tensors into</p>
-</dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p>(x1, y1, x2, y2, object_conf, class_conf, class)</p>
-</dd>
-</dl>
-</dd>
-</dl>
-</dd></dl>
-
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.change_bbox_bounds_for_image_size">
 <span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">change_bbox_bounds_for_image_size</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">boxes</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">img_shape</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gr
 <dd></dd></dl>
 
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.rescale_bboxes_for_image_size">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">rescale_bboxes_for_image_size</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">current_image_shape</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">bbox</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">original_image_shape</
-<dd><dl class="simple">
-<dt>rescale_bboxes_for_image_size - Changes the bboxes to fit the original image</dt><dd><dl class="field-list simple">
-<dt class="field-odd">param current_image_shape</dt>
-<dd class="field-odd"><p></p></dd>
-<dt class="field-even">param bbox</dt>
-<dd class="field-even"><p></p></dd>
-<dt class="field-odd">param original_image_shape</dt>
-<dd class="field-odd"><p></p></dd>
-<dt class="field-even">param ratio_pad</dt>
-<dd class="field-even"><p></p></dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p></p></dd>
-</dl>
-</dd>
-</dl>
-</dd></dl>
-
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback">
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">DetectionPostPredictionCallback</span></span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#DetectionPostPredictionCallback"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href
@@ -766,34 +900,6 @@ with shape: nx6 (x1, y1, x2, y2, confidence, class) where x and y are in range [
 
 
 </dd></dl>
 
-<dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.YoloV3NonMaxSuppression">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">YoloV3NonMaxSuppression</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">conf</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span> <span class="o"><span clas
-<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback" title="super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback</span></code></a></p>
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.YoloV3NonMaxSuppression.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">x</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">device</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">str</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#YoloV3N
-<dd><dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>x</strong> – the output of your model</p></li>
-<li><p><strong>device</strong> – the device to move all output tensors into</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>a list with length batch_size, each item in the list is a detections
-with shape: nx6 (x1, y1, x2, y2, confidence, class) where x and y are in range [0,1]</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.YoloV3NonMaxSuppression.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.YoloV3NonMaxSuppression.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-</dd></dl>
-
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.IouThreshold">
 <em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">IouThreshold</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">value</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#IouThr
@@ -814,27 +920,11 @@ with shape: nx6 (x1, y1, x2, y2, confidence, class) where x and y are in range [
 <span class="sig-name descname"><span class="pre">is_range</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#IouThreshold.is_range"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.detection_utils.IouThreshold.is_range" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
-</dd></dl>
-
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.scale_img">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">scale_img</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">img</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">ratio</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">1.0</span></span></em>, <
-<dd><p>Scales the image by ratio (image dims is (batch_size, channels, height, width)
-Taken from Yolov5 Ultralitics repo</p>
-</dd></dl>
-
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.fuse_conv_and_bn">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">fuse_conv_and_bn</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">conv</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">bn</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/de
-<dd><p>Fuse convolution and batchnorm layers <a class="reference external" href="https://tehnokv.com/posts/fusing-batchnorm-and-conv/">https://tehnokv.com/posts/fusing-batchnorm-and-conv/</a>
-Taken from Yolov5 Ultralitics repo</p>
-</dd></dl>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.IouThreshold.to_tensor">
+<span class="sig-name descname"><span class="pre">to_tensor</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#IouThreshold.to_tensor"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.utils.detection_utils.IouThreshold.to_tensor" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
 
 
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.check_anchor_order">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">check_anchor_order</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">m</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#check_anchor_order"><span class="viewcode-link"><span class="
-<dd><p>Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
-Taken from Yolov5 Ultralitics repo</p>
 </dd></dl>
 
 <dl class="py function">
@@ -858,12 +948,11 @@ Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
 <dd class="field-even"><p>iou (Tensor[N, M])</p>
 </dd>
 </dl>
-<p>Taken from Yolov5 Ultralitics repo</p>
 </dd></dl>
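The pairwise-IoU entry above takes two box sets in (x1, y1, x2, y2) format and returns an N x M IoU matrix. A minimal pure-Python sketch of that computation (lists of lists stand in for the documented tensors; the function name is illustrative, not the library API):

```python
def box_iou_sketch(boxes1, boxes2):
    """Pairwise IoU for boxes in (x1, y1, x2, y2) format.

    Returns an N x M nested list, mirroring the documented Tensor[N, M].
    """
    def area(b):
        return max(b[2] - b[0], 0) * max(b[3] - b[1], 0)

    iou = []
    for a in boxes1:
        row = []
        for b in boxes2:
            # intersection rectangle of a and b
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
            union = area(a) + area(b) - inter
            row.append(inter / union if union > 0 else 0.0)
        iou.append(row)
    return iou

# identical boxes -> IoU 1.0; disjoint boxes -> 0.0
print(box_iou_sketch([[0, 0, 2, 2]], [[0, 0, 2, 2], [3, 3, 4, 4]]))  # [[1.0, 0.0]]
```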
 
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.non_max_suppression">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">non_max_suppression</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">prediction</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">conf_thres</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0.1
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">non_max_suppression</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">prediction</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">conf_thres</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0.1
 <dd><dl class="simple">
 <dt>Performs Non-Maximum Suppression (NMS) on inference results</dt><dd><dl class="field-list simple">
 <dt class="field-odd">param prediction</dt>
@@ -875,17 +964,17 @@ Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
 <dt class="field-odd">param iou_thres</dt>
 <dd class="field-odd"><p>IoU threshold for the nms algorithm</p>
 </dd>
-<dt class="field-even">param merge</dt>
-<dd class="field-even"><p>Merge boxes using weighted mean</p>
-</dd>
-<dt class="field-odd">param classes</dt>
-<dd class="field-odd"><p>(optional list) filter by class</p>
+<dt class="field-even">param multi_label_per_box</dt>
+<dd class="field-even"><p>whether to re-use each box with all possible labels
+(instead of keeping only the maximum-confidence label, all class confidences
+above the threshold are sent to NMS); defaults to True</p>
 </dd>
-<dt class="field-even">param agnostic</dt>
-<dd class="field-even"><p>Determines if is class agnostic. i.e. may display a box with 2 predictions</p>
+<dt class="field-odd">param with_confidence</dt>
+<dd class="field-odd"><p>whether to multiply the objectness score by the class score;
+usually applicable only to YOLO models.</p>
 </dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p>(x1, y1, x2, y2, object_conf, class_conf, class)</p>
+<dt class="field-even">return</dt>
+<dd class="field-even"><p>(x1, y1, x2, y2, object_conf, class_conf, class)</p>
 </dd>
 </dl>
 </dd>
@@ -900,31 +989,6 @@ Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
 </dl>
 </dd></dl>
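`non_max_suppression` above first drops detections below `conf_thres` and then suppresses overlapping boxes above `iou_thres`. A self-contained sketch of the classic greedy procedure it is built on (pure Python over (x1, y1, x2, y2, conf) tuples; this is an illustration, not the batched tensor implementation in SuperGradients):

```python
def greedy_nms(detections, conf_thres=0.1, iou_thres=0.6):
    """detections: list of (x1, y1, x2, y2, conf) tuples.
    Returns the surviving detections, highest confidence first."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    # 1) drop low-confidence detections, 2) sort by descending confidence
    cand = sorted((d for d in detections if d[4] >= conf_thres),
                  key=lambda d: d[4], reverse=True)
    keep = []
    for det in cand:
        # keep a box only if it does not overlap any kept box above iou_thres
        if all(iou(det, k) < iou_thres for k in keep):
            keep.append(det)
    return keep
```

With `iou_thres=0.6`, a 0.8-confidence box almost identical to a kept 0.9-confidence box is suppressed, while a disjoint box survives.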
 
 
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.check_img_size_divisibilty">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">check_img_size_divisibilty</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">img_size</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">int</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">stride</span></s
-<dd><dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>img_size</strong> – Int, the size of the image (H or W).</p></li>
-<li><p><strong>stride</strong> – Int, the number to check if img_size is divisible by.</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>(True, None) if img_size is divisble by stride, (False, Suggestions) if it’s not.
-Note: Suggestions are the two closest numbers to img_size that <em>are</em> divisible by stride.
-For example if img_size=321, stride=32, it will return (False,(352, 320)).</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.make_divisible">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">make_divisible</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">x</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">divisor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">ceil</span></span><span class="o"><span class="pre">
-<dd><p>Returns x evenly divisible by divisor.
-If ceil=True it will return the closest larger number to the original x, and ceil=False the closest smaller number.</p>
-</dd></dl>
-
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.matrix_non_max_suppression">
 <span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">matrix_non_max_suppression</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">pred</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">conf_thres</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span><
@@ -966,55 +1030,6 @@ where each item format is (x, y, w, h, object_conf, class_conf, … 80 classes s
 
 
 </dd></dl>
 
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.calc_batch_prediction_accuracy">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">calc_batch_prediction_accuracy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">output</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targe
-<dd><dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>output</strong> – list (of length batch_size) of Tensors of shape (num_detections, 6)
-format:     (x1, y1, x2, y2, confidence, class_label) where x1,y1,x2,y2 are according to image size</p></li>
-<li><p><strong>targets</strong> – targets for all images of shape (total_num_targets, 6)
-format:     (image_index, x, y, w, h, label) where x,y,w,h are in range [0,1]</p></li>
-<li><p><strong>height</strong><strong>,</strong><strong>width</strong> – dimensions of the image</p></li>
-<li><p><strong>iou_thres</strong> – Threshold to compute the mAP</p></li>
-<li><p><strong>device</strong> – ‘cuda’’cpu’ - where the computations are made</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p></p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.AnchorGenerator">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">AnchorGenerator</span></span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#AnchorGenerator"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.util
-<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.AnchorGenerator.logger">
-<span class="sig-name descname"><span class="pre">logger</span></span><em class="property"> <span class="pre">=</span> <span class="pre">&lt;Logger</span> <span class="pre">super_gradients.training.utils.detection_utils</span> <span class="pre">(INFO)&gt;</span></em><a class="headerlink" href="#super_gradients.training.utils.detection_utils.AnchorGenerator.logger" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-</dd></dl>
-
-<dl class="py function">
-<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.plot_coco_datasaet_images_with_detections">
-<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">plot_coco_datasaet_images_with_detections</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">data_loader</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">num_images_to_plot</span></span><span class="o"><span class="pre">=</span></span><span class="defau
-<dd><dl class="simple">
-<dt>plot_coco_images</dt><dd><dl class="field-list simple">
-<dt class="field-odd">param data_loader</dt>
-<dd class="field-odd"><p></p></dd>
-<dt class="field-even">param num_images_to_plot</dt>
-<dd class="field-even"><p></p></dd>
-<dt class="field-odd">return</dt>
-<dd class="field-odd"><p></p></dd>
-</dl>
-</dd>
-</dl>
-<p>#</p>
-</dd></dl>
-
 <dl class="py function">
 <dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.undo_image_preprocessing">
 <span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">undo_image_preprocessing</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">im_tensor</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em><span class="sig-paren">)</span> &#x2192; <span class="pre">numpy.
@@ -1108,6 +1123,321 @@ e.g. incoming images are (320x320), use scale = 2. to preview in (640x640)</p></
 
 
 </dd></dl>
 
 
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.xyxy2cxcywh">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">xyxy2cxcywh</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">bboxes</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#xyxy2cxcywh"><span class="viewcode-link"><span class="pre">[sou
+<dd><p>Transforms bboxes from xyxy (x1, y1, x2, y2) format to center-xy-wh (cx, cy, w, h) format
+:param bboxes: array, shaped (nboxes, 4)
+:return: modified bboxes</p>
+</dd></dl>
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.cxcywh2xyxy">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">cxcywh2xyxy</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">bboxes</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#cxcywh2xyxy"><span class="viewcode-link"><span class="pre">[sou
+<dd><p>Transforms bboxes from center-xy-wh (cx, cy, w, h) format to xyxy (x1, y1, x2, y2) format
+:param bboxes: array, shaped (nboxes, 4)
+:return: modified bboxes</p>
+</dd></dl>
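`xyxy2cxcywh` and `cxcywh2xyxy` above are inverses of each other. A minimal sketch of the arithmetic on a single plain tuple (the real functions operate element-wise on (nboxes, 4) arrays; the `_sketch` names are illustrative):

```python
def xyxy2cxcywh_sketch(box):
    """(x1, y1, x2, y2) -> (cx, cy, w, h): corner pair to center + size."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def cxcywh2xyxy_sketch(box):
    """(cx, cy, w, h) -> (x1, y1, x2, y2): center + size back to corners."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

print(xyxy2cxcywh_sketch((0, 0, 4, 2)))  # (2.0, 1.0, 4, 2)
print(cxcywh2xyxy_sketch((2, 1, 4, 2)))  # (0.0, 0.0, 4.0, 2.0)
```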
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.get_mosaic_coordinate">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">get_mosaic_coordinate</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">mosaic_index</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">xc</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">yc</span></span></em>, <em class="sig-p
+<dd><p>Returns the mosaic coordinates of final mosaic image according to mosaic image index.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>mosaic_index</strong> – (int) mosaic image index</p></li>
+<li><p><strong>xc</strong> – (int) center x coordinate of the entire mosaic grid.</p></li>
+<li><p><strong>yc</strong> – (int) center y coordinate of the entire mosaic grid.</p></li>
+<li><p><strong>w</strong> – (int) width of bbox</p></li>
+<li><p><strong>h</strong> – (int) height of bbox</p></li>
+<li><p><strong>input_h</strong> – (int) image input height (should be 1/2 of the final mosaic output image height).</p></li>
+<li><p><strong>input_w</strong> – (int) image input width (should be 1/2 of the final mosaic output image width).</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>(x1, y1, x2, y2), (x1s, y1s, x2s, y2s) where (x1, y1, x2, y2) are the coordinates in the final mosaic
+output image, and (x1s, y1s, x2s, y2s) are the coordinates in the placed image.</p>
+</dd>
+</dl>
+</dd></dl>
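As a concrete illustration of the contract documented above, here is a sketch of the top-left tile only (`mosaic_index == 0`); the other three quadrants follow symmetrically. This is an assumption-laden illustration of the returned rectangles, not the library code:

```python
def mosaic_top_left(xc, yc, w, h):
    """Place a (w, h) image in the top-left quadrant of a mosaic whose grid
    center is (xc, yc).  Returns the destination rectangle in the mosaic and
    the matching source rectangle inside the placed image."""
    # destination: clipped so the image never spills past the mosaic origin
    x1, y1, x2, y2 = max(xc - w, 0), max(yc - h, 0), xc, yc
    # source: the bottom-right part of the image that actually fits
    x1s, y1s, x2s, y2s = w - (x2 - x1), h - (y2 - y1), w, h
    return (x1, y1, x2, y2), (x1s, y1s, x2s, y2s)
```

For an oversized 400x400 image and grid center (320, 320), the destination is the full quadrant (0, 0, 320, 320) and the source crop starts at (80, 80) inside the image.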
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.adjust_box_anns">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">adjust_box_anns</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">bbox</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">scale_ratio</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">padw</span></span></em>, <em class="sig-para
+<dd><p>Adjusts the bbox annotations of a rescaled, padded image.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>bbox</strong> – (np.array) bbox to modify.</p></li>
+<li><p><strong>scale_ratio</strong> – (float) scale ratio between rescale output image and original one.</p></li>
+<li><p><strong>padw</strong> – (int) width padding size.</p></li>
+<li><p><strong>padh</strong> – (int) height padding size.</p></li>
+<li><p><strong>w_max</strong> – (int) width border.</p></li>
+<li><p><strong>h_max</strong> – (int) height border.</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>modified bbox (np.array)</p>
+</dd>
+</dl>
+</dd></dl>
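The behaviour documented for `adjust_box_anns` (scale by `scale_ratio`, shift by the padding, clip to the borders) can be sketched in a few lines. A pure-Python stand-in over (x1, y1, x2, y2) tuples, assuming the documented parameters; the real function works on a NumPy array:

```python
def adjust_box_anns_sketch(bboxes, scale_ratio, padw, padh, w_max, h_max):
    """Scale, shift, and clip (x1, y1, x2, y2) boxes as described above."""
    out = []
    for x1, y1, x2, y2 in bboxes:
        out.append((
            min(max(x1 * scale_ratio + padw, 0), w_max),  # clip x to [0, w_max]
            min(max(y1 * scale_ratio + padh, 0), h_max),  # clip y to [0, h_max]
            min(max(x2 * scale_ratio + padw, 0), w_max),
            min(max(y2 * scale_ratio + padh, 0), h_max),
        ))
    return out
```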
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.DetectionCollateFN">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">DetectionCollateFN</span></span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#DetectionCollateFN"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.trainin
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p>
+<p>Collate function for Yolox training</p>
+</dd></dl>
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.CrowdDetectionCollateFN">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">CrowdDetectionCollateFN</span></span><a class="reference internal" href="_modules/super_gradients/training/utils/detection_utils.html#CrowdDetectionCollateFN"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradien
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.utils.detection_utils.DetectionCollateFN" title="super_gradients.training.utils.detection_utils.DetectionCollateFN"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.utils.detection_utils.DetectionCollateFN</span></code></a></p>
+<p>Collate function for Yolox training with additional_batch_items that include crowd targets</p>
+</dd></dl>
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.compute_box_area">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">compute_box_area</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">box</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em><span class="sig-paren">)</span> &#x2192; <span class="pre">torch.Tensor</span><
+<dd><dl class="simple">
+<dt>Compute the area of one or many boxes.</dt><dd><dl class="field-list simple">
+<dt class="field-odd">param box</dt>
+<dd class="field-odd"><p>One or many boxes, shape = (4, ?), each box in format (x1, y1, x2, y2)</p>
+</dd>
+</dl>
+</dd>
+</dl>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p>Area of every box, shape = (1, ?)</p>
+</dd>
+</dl>
+</dd></dl>
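With the column-wise layout above (shape = (4, N), rows x1, y1, x2, y2), the area reduces to an elementwise width times height. A minimal NumPy sketch of the same arithmetic (the documented function operates on torch.Tensor):

```python
import numpy as np

def box_area(box: np.ndarray) -> np.ndarray:
    """Area of one or many boxes of shape (4, N), rows (x1, y1, x2, y2)."""
    return (box[2] - box[0]) * (box[3] - box[1])

# Two boxes as columns: a 4x5 box and a 10x20 box.
boxes = np.array([[0.0, 10.0],
                  [0.0, 10.0],
                  [4.0, 20.0],
                  [5.0, 30.0]])
areas = box_area(boxes)  # areas 20.0 and 200.0
```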
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.crowd_ioa">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">crowd_ioa</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">det_box</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">crowd_box</span></span><s
+<dd><p>Return intersection-over-detection-area of boxes, used for crowd ground truths.
+Both sets of boxes are expected to be in (x1, y1, x2, y2) format.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>det_box</strong> (<em>Tensor[N, 4]</em>) – detection boxes</p></li>
+<li><p><strong>crowd_box</strong> (<em>Tensor[M, 4]</em>) – crowd ground-truth boxes</p></li>
+</ul>
+</dd>
+</dl>
+<dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p><dl class="simple">
+<dt>the NxM matrix containing the pairwise</dt><dd><p>IoA values for every element in det_box and crowd_box</p>
+</dd>
+</dl>
+</p>
+</dd>
+<dt class="field-even">Return type</dt>
+<dd class="field-even"><p>crowd_ioa (Tensor[N, M])</p>
+</dd>
+</dl>
+</dd></dl>
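IoA differs from IoU only in the denominator: the intersection is divided by the detection's own area rather than the union, so a detection lying fully inside a crowd region scores 1 regardless of how large the crowd box is. A NumPy sketch of the pairwise computation (the documented function takes torch tensors; broadcasting logic is the same):

```python
import numpy as np

def crowd_ioa(det_box: np.ndarray, crowd_box: np.ndarray) -> np.ndarray:
    """Pairwise intersection-over-detection-area, boxes as (x1, y1, x2, y2).

    det_box: (N, 4), crowd_box: (M, 4) -> returns (N, M).
    """
    det_area = (det_box[:, 2] - det_box[:, 0]) * (det_box[:, 3] - det_box[:, 1])
    # Broadcast every detection against every crowd box.
    lt = np.maximum(det_box[:, None, :2], crowd_box[None, :, :2])  # top-left
    rb = np.minimum(det_box[:, None, 2:], crowd_box[None, :, 2:])  # bottom-right
    wh = np.clip(rb - lt, 0.0, None)      # zero width/height when disjoint
    inter = wh[..., 0] * wh[..., 1]       # (N, M) intersection areas
    return inter / det_area[:, None]
```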
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.compute_detection_matching">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">compute_detection_matching</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">output</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</
+<dd><p>Match predictions (NMS output) and the targets (ground truth) with respect to IoU and confidence score.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>output</strong> – list (of length batch_size) of Tensors of shape (num_predictions, 6)
+format:     (x1, y1, x2, y2, confidence, class_label) where x1,y1,x2,y2 are according to image size</p></li>
+<li><p><strong>targets</strong> – targets for all images of shape (total_num_targets, 6)
+format:     (index, x, y, w, h, label) where x,y,w,h are in range [0,1]</p></li>
+<li><p><strong>height</strong> – height of the image</p></li>
+<li><p><strong>width</strong> – width of the image</p></li>
+<li><p><strong>iou_thresholds</strong> – IoU thresholds at which to compute the mAP</p></li>
+<li><p><strong>device</strong> – Device</p></li>
+<li><p><strong>crowd_targets</strong> – crowd targets for all images of shape (total_num_crowd_targets, 6)
+format:     (index, x, y, w, h, label) where x,y,w,h are in range [0,1]</p></li>
+<li><p><strong>top_k</strong> – Number of predictions to keep per class, ordered by confidence score</p></li>
+<li><p><strong>denormalize_targets</strong> – If True, denormalize the targets and crowd_targets</p></li>
+<li><p><strong>return_on_cpu</strong> – If True, the output will be returned on “CPU”, otherwise it will be returned on “device”</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p><p>list of the following tensors, for every image:
+:preds_matched:     Tensor of shape (num_img_predictions, n_iou_thresholds)</p>
+<blockquote>
+<div><p>True when prediction (i) is matched with a target with respect to the (j)th IoU threshold</p>
+</div></blockquote>
+<dl class="field-list simple">
+<dt class="field-odd">preds_to_ignore</dt>
+<dd class="field-odd"><p>Tensor of shape (num_img_predictions, n_iou_thresholds)
+True when prediction (i) is matched with a crowd target with respect to the (j)th IoU threshold</p>
+</dd>
+<dt class="field-even">preds_scores</dt>
+<dd class="field-even"><p>Tensor of shape (num_img_predictions), confidence score for every prediction</p>
+</dd>
+<dt class="field-odd">preds_cls</dt>
+<dd class="field-odd"><p>Tensor of shape (num_img_predictions), predicted class for every prediction</p>
+</dd>
+<dt class="field-even">targets_cls</dt>
+<dd class="field-even"><p>Tensor of shape (num_img_targets), ground truth class for every target</p>
+</dd>
+</dl>
+</p>
+</dd>
+</dl>
+</dd></dl>
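At each IoU threshold, mAP-style matching greedily assigns predictions, visited in descending confidence order, to at most one still-unmatched target. A simplified sketch of that inner loop, given a precomputed (num_preds, num_targets) IoU matrix (a hypothetical helper that ignores class labels and crowd handling for brevity; not the library's implementation):

```python
import numpy as np

def greedy_match(ious: np.ndarray, preds_scores: np.ndarray,
                 iou_thresholds) -> np.ndarray:
    """Return a (num_preds, num_thresholds) boolean matrix: True where a
    prediction is matched to a previously unmatched target at that threshold.
    Predictions are visited in descending confidence order."""
    order = np.argsort(-preds_scores)
    matched = np.zeros((len(preds_scores), len(iou_thresholds)), dtype=bool)
    for j, thr in enumerate(iou_thresholds):
        used = set()
        for p in order:
            # Best still-unmatched target at or above the threshold, if any.
            candidates = [(ious[p, g], g) for g in range(ious.shape[1])
                          if g not in used and ious[p, g] >= thr]
            if candidates:
                _, best = max(candidates)
                used.add(best)
                matched[p, j] = True
    return matched
```

Because each target can absorb only one prediction per threshold, lower-confidence duplicates of an already-matched target are left unmatched and count as false positives in the mAP computation.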
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.compute_img_detection_matching">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">compute_img_detection_matching</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">preds</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">target
+<dd><p>Match predictions (NMS output) and the targets (ground truth) with respect to IoU and confidence score
+for a given image.</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>preds</strong> – Tensor of shape (num_img_predictions, 6)
+format:     (x1, y1, x2, y2, confidence, class_label) where x1,y1,x2,y2 are according to image size</p></li>
+<li><p><strong>targets</strong> – targets for this image of shape (num_img_targets, 6)
+format:     (index, x, y, w, h, label) where x,y,w,h are in range [0,1]</p></li>
+<li><p><strong>height</strong> – height of the image</p></li>
+<li><p><strong>width</strong> – width of the image</p></li>
+<li><p><strong>iou_thresholds</strong> – IoU thresholds at which to compute the mAP</p></li>
+<li><p><strong>crowd_targets</strong> – crowd targets for all images of shape (total_num_crowd_targets, 6)
+format:     (index, x, y, w, h, label) where x,y,w,h are in range [0,1]</p></li>
+<li><p><strong>top_k</strong> – Number of predictions to keep per class, ordered by confidence score</p></li>
+<li><p><strong>device</strong> – Device</p></li>
+<li><p><strong>denormalize_targets</strong> – If True, denormalize the targets and crowd_targets</p></li>
+<li><p><strong>return_on_cpu</strong> – If True, the output will be returned on “CPU”, otherwise it will be returned on “device”</p></li>
+</ul>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p><dl class="field-list simple">
+<dt class="field-odd">preds_matched</dt>
+<dd class="field-odd"><p>Tensor of shape (num_img_predictions, n_iou_thresholds)
+True when prediction (i) is matched with a target with respect to the (j)th IoU threshold</p>
+</dd>
+<dt class="field-even">preds_to_ignore</dt>
+<dd class="field-even"><p>Tensor of shape (num_img_predictions, n_iou_thresholds)
+True when prediction (i) is matched with a crowd target with respect to the (j)th IoU threshold</p>
+</dd>
+<dt class="field-odd">preds_scores</dt>
+<dd class="field-odd"><p>Tensor of shape (num_img_predictions), confidence score for every prediction</p>
+</dd>
+<dt class="field-even">preds_cls</dt>
+<dd class="field-even"><p>Tensor of shape (num_img_predictions), predicted class for every prediction</p>
+</dd>
+<dt class="field-odd">targets_cls</dt>
+<dd class="field-odd"><p>Tensor of shape (num_img_targets), ground truth class for every target</p>
+</dd>
+</dl>
+</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="py function">
+<dt class="sig sig-object py" id="super_gradients.training.utils.detection_utils.get_top_k_idx_per_cls">
+<span class="sig-prename descclassname"><span class="pre">super_gradients.training.utils.detection_utils.</span></span><span class="sig-name descname"><span class="pre">get_top_k_idx_per_cls</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">preds_scores</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.Tensor</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">preds_cl
+<dd><p>Get the indexes of all the top k predictions for every class</p>
+<dl class="field-list simple">