#315 updating docs using script + change welcome.html manually

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:update-docs
Some lines were truncated because they exceed the maximum allowed length of 500 characters; use a local Git client to see the full diff.
@@ -38,6 +38,7 @@
         </div><div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="Navigation menu">
               <p class="caption"><span class="caption-text">Welcome To SuperGradients</span></p>
 <ul>
+<li class="toctree-l1"><a class="reference internal" href="welcome.html">Fill our 4-question quick survey! We will raffle free SuperGradients swag between those who will participate -&gt; Fill Survey</a></li>
 <li class="toctree-l1"><a class="reference internal" href="welcome.html#supergradients">SuperGradients</a></li>
 </ul>
 <p class="caption"><span class="caption-text">Technical Documentation</span></p>
@@ -147,11 +148,6 @@ registered hooks while the latter silently ignores them.</p>
 </div>
 </dd></dl>
 
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.focal_loss.FocalLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.focal_loss.FocalLoss.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
 </dd></dl>
 
 </section>
@@ -201,34 +197,62 @@ registered hooks while the latter silently ignores them.</p>
 <span class="sig-name descname"><span class="pre">label_smoothing</span></span><em class="property"><span class="pre">:</span> <span class="pre">float</span></em><a class="headerlink" href="#super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.label_smoothing" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
+</dd></dl>
+
+</section>
+<section id="module-super_gradients.training.losses.ohem_ce_loss">
+<span id="super-gradients-training-losses-ohem-ce-loss-module"></span><h2>super_gradients.training.losses.ohem_ce_loss module<a class="headerlink" href="#module-super_gradients.training.losses.ohem_ce_loss" title="Permalink to this headline"></a></h2>
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ohem_ce_loss.</span></span><span class="sig-name descname"><span class="pre">OhemLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">threshold</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span clas
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
+<p>OhemLoss - Online Hard Example Mining Cross Entropy Loss</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">logits</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">labels</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ohem_ce_loss.html#OhemLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="heade
+<dd><p>Defines the computation performed at every call.</p>
+<p>Should be overridden by all subclasses.</p>
+<div class="admonition note">
+<p class="admonition-title">Note</p>
+<p>Although the recipe for forward pass needs to be defined within
+this function, one should call the <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code> instance afterwards
+instead of this since the former takes care of running the
+registered hooks while the latter silently ignores them.</p>
+</div>
+</dd></dl>
+
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.weight">
-<span class="sig-name descname"><span class="pre">weight</span></span><em class="property"><span class="pre">:</span> <span class="pre">Optional</span><span class="p"><span class="pre">[</span></span><span class="pre">Tensor</span><span class="p"><span class="pre">]</span></span></em><a class="headerlink" href="#super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.weight" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemLoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
+</dd></dl>
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ohem_ce_loss.</span></span><span class="sig-name descname"><span class="pre">OhemCELoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">threshold</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span cl
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.losses.ohem_ce_loss.OhemLoss" title="super_gradients.training.losses.ohem_ce_loss.OhemLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.losses.ohem_ce_loss.OhemLoss</span></code></a></p>
+<p>OhemLoss - Online Hard Example Mining Cross Entropy Loss</p>
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.reduction" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemCELoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss.training" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss.training">
+<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemCELoss.training" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 </dd></dl>
 
-</section>
-<section id="module-super_gradients.training.losses.ohem_ce_loss">
-<span id="super-gradients-training-losses-ohem-ce-loss-module"></span><h2>super_gradients.training.losses.ohem_ce_loss module<a class="headerlink" href="#module-super_gradients.training.losses.ohem_ce_loss" title="Permalink to this headline"></a></h2>
 <dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ohem_ce_loss.</span></span><span class="sig-name descname"><span class="pre">OhemCELoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">threshold</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span cl
-<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>OhemCELoss - Online Hard Example Mining Cross Entropy Loss</p>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemBCELoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ohem_ce_loss.</span></span><span class="sig-name descname"><span class="pre">OhemBCELoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">threshold</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span c
+<dd><p>Bases: <a class="reference internal" href="#super_gradients.training.losses.ohem_ce_loss.OhemLoss" title="super_gradients.training.losses.ohem_ce_loss.OhemLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">super_gradients.training.losses.ohem_ce_loss.OhemLoss</span></code></a></p>
+<p>OhemBCELoss - Online Hard Example Mining Binary Cross Entropy Loss</p>
 <dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">logits</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">labels</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ohem_ce_loss.html#OhemCELoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="hea
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemBCELoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">logits</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">labels</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ohem_ce_loss.html#OhemBCELoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="he
 <dd><p>Defines the computation performed at every call.</p>
 <p>Should be overridden by all subclasses.</p>
 <div class="admonition note">
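The `OhemLoss` added in this hunk documents Online Hard Example Mining cross-entropy: compute a per-element loss in `forward(logits, labels)` and back-propagate only the hardest examples. A minimal sketch of the idea, not SuperGradients' exact implementation; the loss-valued `threshold` semantics and the `min_kept` floor are assumptions:

```python
import torch
import torch.nn.functional as F

def ohem_ce_loss(logits, labels, threshold=0.7, min_kept=16, ignore_index=-100):
    """Online Hard Example Mining CE (sketch).

    Keeps only the hardest examples: those whose per-element CE
    exceeds `threshold`, but never fewer than `min_kept` of them.
    """
    per_elem = F.cross_entropy(
        logits, labels, reduction="none", ignore_index=ignore_index
    ).view(-1)
    sorted_loss, _ = torch.sort(per_elem, descending=True)
    if sorted_loss.numel() > min_kept and sorted_loss[min_kept] > threshold:
        kept = sorted_loss[sorted_loss > threshold]   # enough hard examples
    else:
        kept = sorted_loss[:min_kept]                 # fall back to top-k
    return kept.mean()
```

Because only the largest per-element losses are averaged, the mined loss is never smaller than the plain mean cross-entropy over the same batch.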
@@ -241,8 +265,13 @@ registered hooks while the latter silently ignores them.</p>
 </dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemCELoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemCELoss.reduction" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemBCELoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemBCELoss.reduction" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.ohem_ce_loss.OhemBCELoss.training">
+<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.ohem_ce_loss.OhemBCELoss.training" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 </dd></dl>
@@ -275,11 +304,6 @@ The corresponding lables</p>
 <span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.r_squared_loss.RSquaredLoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.r_squared_loss.RSquaredLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.r_squared_loss.RSquaredLoss.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
 </dd></dl>
 
 </section>
@@ -347,80 +371,21 @@ registered hooks while the latter silently ignores them.</p>
 <span class="sig-name descname"><span class="pre">label_smoothing</span></span><em class="property"><span class="pre">:</span> <span class="pre">float</span></em><a class="headerlink" href="#super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.label_smoothing" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.weight">
-<span class="sig-name descname"><span class="pre">weight</span></span><em class="property"><span class="pre">:</span> <span class="pre">Optional</span><span class="p"><span class="pre">[</span></span><span class="pre">Tensor</span><span class="p"><span class="pre">]</span></span></em><a class="headerlink" href="#super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.weight" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
 </dd></dl>
 
 </section>
 <section id="module-super_gradients.training.losses.ssd_loss">
 <span id="super-gradients-training-losses-ssd-loss-module"></span><h2>super_gradients.training.losses.ssd_loss module<a class="headerlink" href="#module-super_gradients.training.losses.ssd_loss" title="Permalink to this headline"></a></h2>
 <dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ssd_loss.</span></span><span class="sig-name descname"><span class="pre">SSDLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">dboxes</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.html#super_gradie
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.HardMiningCrossEntropyLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ssd_loss.</span></span><span class="sig-name descname"><span class="pre">HardMiningCrossEntropyLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">neg_pos_ratio</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">float</span></span></em><span class="sig
 <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>Implements the loss as the sum of the followings:
-1. Confidence Loss: All labels, with hard negative mining
-2. Localization Loss: Only on positive labels</p>
+<p>L_cls = [CE of all positives] + [CE of the hardest backgrounds]
+where the second term is built from [neg_pos_ratio * positive pairs] background cells with the highest CE
+(the hardest background cells)</p>
 <dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.match_dboxes">
-<span class="sig-name descname"><span class="pre">match_dboxes</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#SSDLoss.match_dboxes"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.SSDLoss.match_dboxes" 
-<dd><p>convert ground truth boxes into a tensor with the same size as dboxes. each gt bbox is matched to every
-destination box which overlaps it over 0.5 (IoU). so some gt bboxes can be duplicated to a few destination boxes
-:param targets: a tensor containing the boxes for a single image. shape [num_boxes, 5] (x,y,w,h,label)
-:return: two tensors</p>
-<blockquote>
-<div><p>boxes - shape of dboxes [4, num_dboxes] (x,y,w,h)
-labels - sahpe [num_dboxes]</p>
-</div></blockquote>
-</dd></dl>
-
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#SSDLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="head
-<dd><dl class="simple">
-<dt>Compute the loss</dt><dd><p>:param predictions - predictions tensor coming from the network. shape [N, num_classes+4, num_dboxes]
-were the first four items are (x,y,w,h) and the rest are class confidence
-:param targets - targets for the batch. [num targets, 6] (index in batch, label, x,y,w,h)</p>
-</dd>
-</dl>
-</dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.SSDLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.SSDLoss.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-</dd></dl>
-
-</section>
-<section id="module-super_gradients.training.losses.yolo_v3_loss">
-<span id="super-gradients-training-losses-yolo-v3-loss-module"></span><h2>super_gradients.training.losses.yolo_v3_loss module<a class="headerlink" href="#module-super_gradients.training.losses.yolo_v3_loss" title="Permalink to this headline"></a></h2>
-<dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.yolo_v3_loss.</span></span><span class="sig-name descname"><span class="pre">YoLoV3DetectionLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.nn.modules.module.Module</span></span></em>, 
-<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>YoLoV3DetectionLoss - Loss Class for Object Detection</p>
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_output</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/yolo_v3_loss.html#YoLoV3DetectionLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span><
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.HardMiningCrossEntropyLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">pred_labels</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">target_labels</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#HardMiningCrossEntropyLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span>
 <dd><p>Defines the computation performed at every call.</p>
 <p>Should be overridden by all subclasses.</p>
 <div class="admonition note">
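The new `HardMiningCrossEntropyLoss` docstring in this hunk states L_cls = [CE of all positives] + [CE of the hardest backgrounds], where at most `neg_pos_ratio * num_positives` background cells with the highest CE contribute. A hedged sketch of that mining rule; treating label 0 as background and the sum reduction are assumptions:

```python
import torch
import torch.nn.functional as F

def hard_mining_ce(pred_logits, target_labels, neg_pos_ratio=3.0):
    """L_cls = CE of all positives + CE of the hardest backgrounds (sketch)."""
    ce = F.cross_entropy(pred_logits, target_labels, reduction="none")
    pos_mask = target_labels > 0                  # assumption: label 0 = background
    num_neg = int(neg_pos_ratio * pos_mask.sum().item())
    neg_ce = ce.masked_fill(pos_mask, 0.0)        # mine among backgrounds only
    hardest_neg, _ = neg_ce.topk(min(num_neg, neg_ce.numel()))
    return ce[pos_mask].sum() + hardest_neg.sum()
```

Zeroing the positive entries before `topk` ensures backgrounds never displace positives; every positive always contributes, while backgrounds compete for a fixed number of slots.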
@@ -433,131 +398,77 @@ registered hooks while the latter silently ignores them.</p>
 </dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss.training" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.HardMiningCrossEntropyLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.HardMiningCrossEntropyLoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 </dd></dl>
 
-</section>
-<section id="module-super_gradients.training.losses.yolo_v5_loss">
-<span id="super-gradients-training-losses-yolo-v5-loss-module"></span><h2>super_gradients.training.losses.yolo_v5_loss module<a class="headerlink" href="#module-super_gradients.training.losses.yolo_v5_loss" title="Permalink to this headline"></a></h2>
 <dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.yolo_v5_loss.</span></span><span class="sig-name descname"><span class="pre">YoLoV5DetectionLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">anchors</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.ssd_loss.</span></span><span class="sig-name descname"><span class="pre">SSDLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">dboxes</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.html#super_gradie
 <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>Calculate YOLO V5 loss:
-L = L_objectivness + L_boxes + L_classification</p>
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_output</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/yolo_v5_loss.html#YoLoV5DetectionLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span><
-<dd><p>Defines the computation performed at every call.</p>
-<p>Should be overridden by all subclasses.</p>
-<div class="admonition note">
-<p class="admonition-title">Note</p>
-<p>Although the recipe for forward pass needs to be defined within
-this function, one should call the <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code> instance afterwards
-instead of this since the former takes care of running the
-registered hooks while the latter silently ignores them.</p>
-</div>
-</dd></dl>
-
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.build_targets">
-<span class="sig-name descname"><span class="pre">build_targets</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">List</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</
-<dd><dl>
-<dt>Assign targets to anchors to use in L_boxes &amp; L_classification calculation:</dt><dd><ul class="simple">
-<li><p>each target can be assigned to a few anchors,</p></li>
+<blockquote>
+<div><p>Implements the loss as the sum of the following:
+1. Confidence Loss: All labels, with hard negative mining
+2. Localization Loss: Only on positive labels</p>
+</div></blockquote>
+<dl class="simple">
+<dt>L = (2 - alpha) * L_l1 + alpha * L_cls, where</dt><dd><ul class="simple">
+<li><p>L_cls is HardMiningCrossEntropyLoss</p></li>
+<li><p>L_l1 = [SmoothL1Loss for all positives]</p></li>
 </ul>
-<p>all anchors that are within [1/self.anchor_threshold, self.anchor_threshold] times target size range
-* each anchor can be assigned to a few targets</p>
 </dd>
 </dl>
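As a plain-Python sketch of the weighting above (a simplified stand-in for the torch implementation; the function and argument names are illustrative, and `alpha=1.0` is a default chosen only for the example):

```python
def ssd_total_loss(l1_loss, cls_loss, alpha=1.0):
    """Combine the two SSD terms: L = (2 - alpha) * L_l1 + alpha * L_cls."""
    return (2 - alpha) * l1_loss + alpha * cls_loss

# With alpha = 1.0 the two terms are weighted equally:
print(ssd_total_loss(0.5, 2.0))  # 2.5
```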
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.match_dboxes">
+<span class="sig-name descname"><span class="pre">match_dboxes</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#SSDLoss.match_dboxes"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.SSDLoss.match_dboxes" 
+<dd><p>Creates tensors with target boxes and labels for each dbox, i.e. with the same length as dboxes.</p>
+<ul class="simple">
+<li><p>Each GT is assigned the grid cell with the highest IoU; this creates a pair for each GT and some of the cells;</p></li>
+<li><p>The remaining grid cells are assigned to the GT with the highest IoU, provided it is &gt; self.iou_thresh;
+if this condition is not met the grid cell is marked as background</p></li>
+</ul>
+<p>GT-wise: one to many
+Grid-cell-wise: one to one</p>
 <dl class="field-list simple">
 <dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>predictions</strong> – Yolo predictions</p></li>
-<li><p><strong>targets</strong> – ground truth targets</p></li>
-</ul>
+<dd class="field-odd"><p><strong>targets</strong> – a tensor containing the boxes for a single image;
+shape [num_boxes, 6] (image_id, label, x, y, w, h)</p>
 </dd>
 <dt class="field-even">Returns</dt>
-<dd class="field-even"><p><p>each of 4 outputs contains one element for each Yolo output,
-correspondences are raveled over the whole batch and all anchors:</p>
-<blockquote>
-<div><ul class="simple">
-<li><p>classes of the targets;</p></li>
-<li><p>boxes of the targets;</p></li>
-<li><p>image id in a batch, anchor id, grid y, grid x coordinates;</p></li>
-<li><p>anchor sizes.</p></li>
-</ul>
-</div></blockquote>
-<p>All the above can be indexed in parallel to get the selected correspondences</p>
-</p>
+<dd class="field-even"><p>two tensors
+boxes - with the shape of dboxes: [4, num_dboxes] (x, y, w, h)
+labels - shape [num_dboxes]</p>
 </dd>
 </dl>
 </dd></dl>
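The two matching rules can be illustrated with a minimal plain-Python sketch. It is not the library implementation: boxes are in corner format (x1, y1, x2, y2) rather than (x, y, w, h), a single list of GT indices stands in for the boxes/labels tensors, and all names are illustrative:

```python
def iou(a, b):
    # Boxes in corner format (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_dboxes_sketch(dboxes, gts, iou_thresh=0.5):
    """Return one GT index per default box, or -1 for background."""
    matched = [-1] * len(dboxes)
    # 1) each GT grabs its highest-IoU default box
    for g, gt in enumerate(gts):
        best = max(range(len(dboxes)), key=lambda d: iou(dboxes[d], gt))
        matched[best] = g
    # 2) remaining boxes take their best GT only if IoU clears the threshold
    for d, db in enumerate(dboxes):
        if matched[d] == -1:
            ious = [iou(db, gt) for gt in gts]
            g = max(range(len(gts)), key=lambda i: ious[i])
            if ious[g] > iou_thresh:
                matched[d] = g
    return matched
```

Boxes that clear neither rule stay at -1, i.e. are marked as background.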
 
 <dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.compute_loss">
-<span class="sig-name descname"><span class="pre">compute_loss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">List</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</s
-<dd><p>L = L_objectivness + L_boxes + L_classification
-where:</p>
-<blockquote>
-<div><ul class="simple">
-<li><p>L_boxes and L_classification are calculated only between anchors and targets that suit them;</p></li>
-<li><p>L_objectivness is calculated on all anchors.</p></li>
-</ul>
-</div></blockquote>
-<dl>
-<dt>L_classification:</dt><dd><p>for anchors that have suitable ground truths in their grid locations add BCEs
-to force max probability for each GT class in a multi-label way
-Coef: self.cls_loss_gain</p>
-</dd>
-<dt>L_boxes:</dt><dd><p>for anchors that have suitable ground truths in their grid locations
-add (1 - IoU), IoU between a predicted box and each GT box, force maximum IoU
-Coef: self.box_loss_gain</p>
-</dd>
-<dt>L_objectness:</dt><dd><p>for each anchor add BCE to force a prediction of (1 - giou_loss_ratio) + giou_loss_ratio * IoU,
-IoU between a predicted box and random GT in it
-Coef: self.obj_loss_gain, loss from each YOLO grid is additionally multiplied by balance = [4.0, 1.0, 0.4]</p>
-<blockquote>
-<div><p>to balance different contributions coming from different numbers of grid cells</p>
-</div></blockquote>
-</dd>
-</dl>
-<dl class="field-list simple">
-<dt class="field-odd">Parameters</dt>
-<dd class="field-odd"><ul class="simple">
-<li><p><strong>predictions</strong> – output from all Yolo levels, each of shape
-[Batch x Num_Anchors x GridSizeY x GridSizeX x (4 + 1 + Num_classes)]</p></li>
-<li><p><strong>targets</strong> – [Num_targets x (4 + 2)], values on dim 1 are: image id in a batch, class, box x y w h</p></li>
-<li><p><strong>giou_loss_ratio</strong> – a coef in L_objectness defining what should be predicted as objecness
-in a call with a target: can be a value in [IoU, 1] range</p></li>
-</ul>
-</dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>loss, all losses separately in a detached tensor</p>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">Tuple</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#
+<dd><dl class="simple">
+<dt>Compute the loss</dt><dd><p>:param predictions - predictions tensor coming from the network,
+tuple with shapes ([Batch Size, 4, num_dboxes], [Batch Size, num_classes + 1, num_dboxes])
+where predictions have logprobs for background and other classes
+:param targets - targets for the batch. [num targets, 6] (index in batch, label, x,y,w,h)</p>
 </dd>
 </dl>
 </dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
-
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss.training" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.ssd_loss.SSDLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.ssd_loss.SSDLoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 </dd></dl>
 
+</section>
+<section id="super-gradients-training-losses-yolo-v3-loss-module">
+<h2>super_gradients.training.losses.yolo_v3_loss module<a class="headerlink" href="#super-gradients-training-losses-yolo-v3-loss-module" title="Permalink to this headline"></a></h2>
+</section>
+<section id="super-gradients-training-losses-yolo-v5-loss-module">
+<h2>super_gradients.training.losses.yolo_v5_loss module<a class="headerlink" href="#super-gradients-training-losses-yolo-v5-loss-module" title="Permalink to this headline"></a></h2>
 </section>
 <section id="module-super_gradients.training.losses">
 <span id="module-contents"></span><h2>Module contents<a class="headerlink" href="#module-super_gradients.training.losses" title="Permalink to this headline"></a></h2>
@@ -715,143 +626,236 @@ registered hooks while the latter silently ignores them.</p>
 </dd></dl>
 
 <dl class="py class">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV3DetectionLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">YoLoV3DetectionLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.nn.modules.module.Module</span></span></em>, <em class="si
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">YoloXDetectionLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">strides</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">list</span></span></em>, <em class="sig-param"><span class="n">
 <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>YoLoV3DetectionLoss - Loss Class for Object Detection</p>
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV3DetectionLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_output</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/yolo_v3_loss.html#YoLoV3DetectionLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span><
-<dd><p>Defines the computation performed at every call.</p>
-<p>Should be overridden by all subclasses.</p>
-<div class="admonition note">
-<p class="admonition-title">Note</p>
-<p>Although the recipe for forward pass needs to be defined within
-this function, one should call the <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code> instance afterwards
-instead of this since the former takes care of running the
-registered hooks while the latter silently ignores them.</p>
-</div>
+<p>Calculate YOLOX loss:
+L = L_objectness + L_iou + L_classification + 1[use_l1]*L_l1</p>
+<dl>
+<dt>where:</dt><dd><ul class="simple">
+<li><p>L_iou, L_classification and L_l1 are calculated only between cells and targets that suit them;</p></li>
+<li><p>L_objectness is calculated for all cells.</p></li>
+</ul>
+<dl class="simple">
+<dt>L_classification:</dt><dd><p>for cells that have suitable ground truths in their grid locations add BCEs
+to force a prediction of IoU with a GT in a multi-label way
+Coef: 1.</p>
+</dd>
+<dt>L_iou:</dt><dd><p>for cells that have suitable ground truths in their grid locations
+add (1 - IoU^2), IoU between a predicted box and each GT box, force maximum IoU
+Coef: 5.</p>
+</dd>
+<dt>L_l1:</dt><dd><p>for cells that have suitable ground truths in their grid locations
+l1 distance between the logits and GTs in “logits” format (the inverse of “logits to predictions” ops)
+Coef: 1[use_l1]</p>
+</dd>
+<dt>L_objectness:</dt><dd><p>for each cell add BCE with a label of 1 if there is GT assigned to the cell
+Coef: 1</p>
+</dd>
+</dl>
+</dd>
+</dl>
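Using the coefficients quoted above (5 for the IoU term, 1 elsewhere, and the use_l1 indicator), the combination can be sketched as follows (illustrative names, not the library API):

```python
def yolox_total_loss(l_obj, l_iou, l_cls, l_l1=0.0, use_l1=False):
    """L = L_objectness + 5 * L_iou + L_classification + [use_l1] * L_l1."""
    return l_obj + 5.0 * l_iou + l_cls + (l_l1 if use_l1 else 0.0)

print(yolox_total_loss(1.0, 1.0, 1.0))                         # 7.0
print(yolox_total_loss(1.0, 1.0, 1.0, l_l1=2.0, use_l1=True))  # 9.0
```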
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.strides">
+<span class="sig-name descname"><span class="pre">strides</span></span><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.strides" title="Permalink to this definition"></a></dt>
+<dd><p>list: List of the Yolo levels' output grid strides (e.g. [8, 16, 32]).</p>
 </dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV3DetectionLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.YoLoV3DetectionLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.num_classes">
+<span class="sig-name descname"><span class="pre">num_classes</span></span><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.num_classes" title="Permalink to this definition"></a></dt>
+<dd><p>int: Number of classes.</p>
+</dd></dl>
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV3DetectionLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.YoLoV3DetectionLoss.training" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.use_l1">
+<span class="sig-name descname"><span class="pre">use_l1</span></span><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.use_l1" title="Permalink to this definition"></a></dt>
+<dd><p>bool: Controls the L_l1 Coef as discussed above (default=False).</p>
+</dd></dl>
 
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.center_sampling_radius">
+<span class="sig-name descname"><span class="pre">center_sampling_radius</span></span><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.center_sampling_radius" title="Permalink to this definition"></a></dt>
+<dd><p>float: Sampling radius used for center sampling when creating the fg mask (default=2.5).</p>
 </dd></dl>
 </dd></dl>
 
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">YoLoV5DetectionLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">anchors</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.html#super_gr
-<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>Calculate YOLO V5 loss:
-L = L_objectivness + L_boxes + L_classification</p>
-<dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_output</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/yolo_v5_loss.html#YoLoV5DetectionLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span><
-<dd><p>Defines the computation performed at every call.</p>
-<p>Should be overridden by all subclasses.</p>
-<div class="admonition note">
-<p class="admonition-title">Note</p>
-<p>Although the recipe for forward pass needs to be defined within
-this function, one should call the <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code> instance afterwards
-instead of this since the former takes care of running the
-registered hooks while the latter silently ignores them.</p>
-</div>
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.iou_type">
+<span class="sig-name descname"><span class="pre">iou_type</span></span><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.iou_type" title="Permalink to this definition"></a></dt>
+<dd><p>str: IoU loss type, one of [“iou”,”giou”] (default=”iou”).</p>
 </dd></dl>
 
 <dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss.build_targets">
-<span class="sig-name descname"><span class="pre">build_targets</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">List</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</
-<dd><dl>
-<dt>Assign targets to anchors to use in L_boxes &amp; L_classification calculation:</dt><dd><ul class="simple">
-<li><p>each target can be assigned to a few anchors,</p></li>
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">model_output</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">Union</span><span class="p"><span class="pre">[</span></span><span class="pre">list</span><span class="p"><span class="pre">,</span> </span><span class="pre">Tuple</span><span class="p"><span class="pre">[</span></span><span class
+<dd><dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>model_output</strong> – <p>Union[list, Tuple[torch.Tensor, List]]:
+When a list:</p>
+<blockquote>
+<div><p>output from all Yolo levels, each of shape [Batch x 1 x GridSizeY x GridSizeX x (4 + 1 + Num_classes)]</p>
+</div></blockquote>
+<p>When a tuple, the second item is the described list (the first item is discarded)</p>
+</p></li>
+<li><p><strong>targets</strong> – torch.Tensor: [Num_targets x (4 + 2)], values on dim 1 are: image id in a batch, class, box x y w h</p></li>
 </ul>
-<p>all anchors that are within [1/self.anchor_threshold, self.anchor_threshold] times target size range
-* each anchor can be assigned to a few targets</p>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>loss, all losses separately in a detached tensor</p>
 </dd>
 </dl>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.prepare_predictions">
+<span class="sig-name descname"><span class="pre">prepare_predictions</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">List</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em><span class="sig-paren">)</span> &#x2192; <span class="pre">T
+<dd><p>Convert raw outputs of the network into a format that merges outputs from all levels
+:param predictions:     output from all Yolo levels, each of shape</p>
+<blockquote>
+<div><p>[Batch x 1 x GridSizeY x GridSizeX x (4 + 1 + Num_classes)]</p>
+</div></blockquote>
 <dl class="field-list simple">
+<dt class="field-odd">Returns</dt>
+<dd class="field-odd"><p><p>5 tensors representing predictions:
+* x_shifts: shape [1 x num_cells x 1],
+<blockquote>
+<div><p>where num_cells = grid1X * grid1Y + grid2X * grid2Y + grid3X * grid3Y,
+x coordinate on the grid cell the prediction is coming from</p>
+</div></blockquote>
+<ul class="simple">
+<li><p>y_shifts: shape [1 x  num_cells x 1],
+y coordinate on the grid cell the prediction is coming from</p></li>
+<li><p>expanded_strides: shape [1 x num_cells x 1],
+stride of the output grid the prediction is coming from</p></li>
+<li><p>transformed_outputs: shape [batch_size x num_cells x (num_classes + 5)],
+predictions with boxes in real coordinates and log-probabilities</p></li>
+<li><p>raw_outputs: shape [batch_size x num_cells x (num_classes + 5)],
+raw predictions with boxes and confidences as logits</p></li>
+</ul>
+</p>
+</dd>
+</dl>
+</dd></dl>
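The per-cell bookkeeping described above (x_shifts, y_shifts, expanded_strides, with num_cells summed over all levels) can be sketched for small grids; this is an illustrative reconstruction, not the library code:

```python
def grid_shifts(grid_sizes, strides):
    """Flatten several output grids into per-cell x/y indices and strides,
    so num_cells = sum(gridX * gridY) over all levels."""
    xs, ys, ss = [], [], []
    for (gy, gx), stride in zip(grid_sizes, strides):
        for y in range(gy):
            for x in range(gx):
                xs.append(x)
                ys.append(y)
                ss.append(stride)
    return xs, ys, ss

# Two levels: a 2x2 grid with stride 8 and a 1x1 grid with stride 16.
xs, ys, ss = grid_shifts([(2, 2), (1, 1)], [8, 16])
print(xs)  # [0, 1, 0, 1, 0]
print(ss)  # [8, 8, 8, 8, 16]
```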
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.get_l1_target">
+<span class="sig-name descname"><span class="pre">get_l1_target</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">l1_target</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">gt</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">stride</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">x_shifts</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">y
+<dd><dl class="field-list simple">
 <dt class="field-odd">Parameters</dt>
 <dd class="field-odd"><ul class="simple">
-<li><p><strong>predictions</strong> – Yolo predictions</p></li>
-<li><p><strong>targets</strong> – ground truth targets</p></li>
+<li><p><strong>l1_target</strong> – tensor of zeros of shape [Num_cell_gt_pairs x 4]</p></li>
+<li><p><strong>gt</strong> – targets in coordinates [Num_cell_gt_pairs x (4 + 1 + num_classes)]</p></li>
 </ul>
 </dd>
 <dt class="field-even">Returns</dt>
-<dd class="field-even"><p><p>each of 4 outputs contains one element for each Yolo output,
-correspondences are raveled over the whole batch and all anchors:</p>
-<blockquote>
-<div><ul class="simple">
-<li><p>classes of the targets;</p></li>
-<li><p>boxes of the targets;</p></li>
-<li><p>image id in a batch, anchor id, grid y, grid x coordinates;</p></li>
-<li><p>anchor sizes.</p></li>
-</ul>
-</div></blockquote>
-<p>All the above can be indexed in parallel to get the selected correspondences</p>
-</p>
+<dd class="field-even"><p>targets in the format corresponding to logits</p>
 </dd>
 </dl>
 </dd></dl>
 
 <dl class="py method">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss.compute_loss">
-<span class="sig-name descname"><span class="pre">compute_loss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">List</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</s
-<dd><p>L = L_objectivness + L_boxes + L_classification
-where:</p>
-<blockquote>
-<div><ul class="simple">
-<li><p>L_boxes and L_classification are calculated only between anchors and targets that suit them;</p></li>
-<li><p>L_objectivness is calculated on all anchors.</p></li>
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.get_assignments">
+<span class="sig-name descname"><span class="pre">get_assignments</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">image_idx</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">num_gt</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">total_num_anchors</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">gt_bboxes_per_image</span></span></em>, <em class="sig-param"><span c
+<dd><dl class="simple">
+<dt>Match cells to ground truth:</dt><dd><ul class="simple">
+<li><p>at most 1 GT per cell</p></li>
+<li><p>dynamic number of cells per GT</p></li>
 </ul>
-</div></blockquote>
-<dl>
-<dt>L_classification:</dt><dd><p>for anchors that have suitable ground truths in their grid locations add BCEs
-to force max probability for each GT class in a multi-label way
-Coef: self.cls_loss_gain</p>
 </dd>
-<dt>L_boxes:</dt><dd><p>for anchors that have suitable ground truths in their grid locations
-add (1 - IoU), IoU between a predicted box and each GT box, force maximum IoU
-Coef: self.box_loss_gain</p>
+</dl>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>outside_boxes_and_center_cost_coeff</strong> – float: Cost coefficient for cells outside the radius and the bbox of GTs in dynamic
+matching (default=100000).</p></li>
+<li><p><strong>ious_loss_cost_coeff</strong> – float: Cost coefficient for iou loss in dynamic matching (default=3).</p></li>
+<li><p><strong>image_idx</strong> – int: Image index in batch.</p></li>
+<li><p><strong>num_gt</strong> – int: Number of ground truth targets in the image.</p></li>
+<li><p><strong>total_num_anchors</strong> – int: Total number of possible bboxes = sum of all grid cells.</p></li>
+<li><p><strong>gt_bboxes_per_image</strong> – torch.Tensor: Tensor of gt bboxes for the image, shape: (num_gt, 4).</p></li>
+<li><p><strong>gt_classes</strong> – torch.Tensor: Tensor of the gt classes in the image, shape: (num_gt).</p></li>
+<li><p><strong>bboxes_preds_per_image</strong> – torch.Tensor: Tensor of predicted bboxes in the image, shape: (num_preds, 4).</p></li>
+<li><p><strong>expanded_strides</strong> – torch.Tensor: Stride of the output grid the prediction is coming from,
+shape (1 x num_cells x 1).</p></li>
+<li><p><strong>x_shifts</strong> – torch.Tensor: X’s in cell coordinates, shape (1,num_cells,1).</p></li>
+<li><p><strong>y_shifts</strong> – torch.Tensor: Y’s in cell coordinates, shape (1,num_cells,1).</p></li>
+<li><p><strong>cls_preds</strong> – torch.Tensor: Class predictions in all cells, shape (batch_size, num_cells).</p></li>
+<li><p><strong>obj_preds</strong> – torch.Tensor: Objectness predictions in all cells, shape (batch_size, num_cells).</p></li>
+<li><p><strong>mode</strong> – str: One of [“gpu”,”cpu”], controls the device on which the assignment operation takes place (default=”gpu”)</p></li>
+</ul>
 </dd>
-<dt>L_objectness:</dt><dd><p>for each anchor add BCE to force a prediction of (1 - giou_loss_ratio) + giou_loss_ratio * IoU,
-IoU between a predicted box and random GT in it
-Coef: self.obj_loss_gain, loss from each YOLO grid is additionally multiplied by balance = [4.0, 1.0, 0.4]</p>
-<blockquote>
-<div><p>to balance different contributions coming from different numbers of grid cells</p>
-</div></blockquote>
+</dl>
+</dd></dl>
+
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.get_in_boxes_info">
+<span class="sig-name descname"><span class="pre">get_in_boxes_info</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">gt_bboxes_per_image</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">expanded_strides</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">x_shifts</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">y_shifts</span></span></em>, <em class="sig-param"><span
+<dd><dl>
+<dt>Create a mask for all cells, mask in only foreground: cells that have a center located:</dt><dd><ul class="simple">
+<li><p>within a GT box;</p></li>
+</ul>
+<p>OR
+* within a fixed radius around a GT box (center sampling);</p>
 </dd>
 </dl>
 <dl class="field-list simple">
 <dt class="field-odd">Parameters</dt>
 <dd class="field-odd"><ul class="simple">
-<li><p><strong>predictions</strong> – output from all Yolo levels, each of shape
-[Batch x Num_Anchors x GridSizeY x GridSizeX x (4 + 1 + Num_classes)]</p></li>
-<li><p><strong>targets</strong> – [Num_targets x (4 + 2)], values on dim 1 are: image id in a batch, class, box x y w h</p></li>
-<li><p><strong>giou_loss_ratio</strong> – a coef in L_objectness defining what should be predicted as objecness
-in a call with a target: can be a value in [IoU, 1] range</p></li>
+<li><p><strong>num_gt</strong> – int: Number of ground truth targets in the image.</p></li>
+<li><p><strong>total_num_anchors</strong> – int: Sum of all grid cells.</p></li>
+<li><p><strong>gt_bboxes_per_image</strong> – torch.Tensor: Tensor of gt bboxes for  the image, shape: (num_gt, 4).</p></li>
+<li><p><strong>expanded_strides</strong> – torch.Tensor: Stride of the output grid the prediction is coming from,
+shape (1 x num_cells x 1).</p></li>
+<li><p><strong>x_shifts</strong> – torch.Tensor: X’s in cell coordinates, shape (1,num_cells,1).</p></li>
+<li><p><strong>y_shifts</strong> – torch.Tensor: Y’s in cell coordinates, shape (1,num_cells,1).</p></li>
 </ul>
 </dd>
-<dt class="field-even">Returns</dt>
-<dd class="field-even"><p>loss, all losses separately in a detached tensor</p>
+</dl>
+<dl class="simple">
+<dt>:return is_in_boxes_anchor, is_in_boxes_and_center</dt><dd><dl class="simple">
+<dt>where:</dt><dd><ul class="simple">
+<li><dl class="simple">
+<dt>is_in_boxes_anchor masks the cells whose center is either inside a gt bbox or within</dt><dd><p>self.center_sampling_radius cells of a gt center, shape (num_cells)</p>
+</dd>
+</dl>
+</li>
+<li><dl class="simple">
+<dt>is_in_boxes_and_center masks the cells whose center is both inside a gt bbox and within</dt><dd><p>self.center_sampling_radius cells of a gt center, without reduction (i.e. shape=(num_gts, num_fgs))</p>
+</dd>
+</dl>
+</li>
+</ul>
+</dd>
+</dl>
 </dd>
 </dl>
 </dd></dl>
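A simplified single-GT version of this foreground test, OR-combining the in-box and center-radius conditions (the real method evaluates all GTs at once and also returns the AND-combined variant; the function and argument names are illustrative):

```python
def fg_mask_sketch(cell_centers, gt_box, stride, radius=2.5):
    """Mark cells whose center falls inside the GT box OR within
    `radius` cells (radius * stride) of the GT center (center sampling)."""
    x1, y1, x2, y2 = gt_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    mask = []
    for px, py in cell_centers:
        in_box = x1 <= px <= x2 and y1 <= py <= y2
        in_center = abs(px - cx) <= radius * stride and abs(py - cy) <= radius * stride
        mask.append(in_box or in_center)
    return mask
```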
 
-<dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss.reduction">
-<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.YoLoV5DetectionLoss.reduction" title="Permalink to this definition"></a></dt>
-<dd></dd></dl>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.dynamic_k_matching">
+<span class="sig-name descname"><span class="pre">dynamic_k_matching</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">cost</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">pair_wise_ious</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">gt_classes</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">num_gt</span></span></em>, <em class="sig-param"><span class="n"><span
+<dd><dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><ul class="simple">
+<li><p><strong>cost</strong> – pairwise cost, [num_FGs x num_GTs]</p></li>
+<li><p><strong>pair_wise_ious</strong> – pairwise IoUs, [num_FGs x num_GTs]</p></li>
+<li><p><strong>gt_classes</strong> – class of each GT</p></li>
+<li><p><strong>num_gt</strong> – number of GTs</p></li>
+</ul>
+</dd>
+</dl>
+<dl class="simple">
+<dt>:return num_fg, (number of foregrounds)</dt><dd><p>gt_matched_classes, (the classes that have been matched with fgs)
+pred_ious_this_matching
+matched_gt_inds</p>
+</dd>
+</dl>
+</dd></dl>
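A hedged sketch of the dynamic-k idea the parameters describe (SimOTA-style matching): each GT's k is derived from the sum of its top candidate IoUs, the k lowest-cost foregrounds are taken per GT, and a foreground matched to several GTs keeps only its cheapest one. The function name, the candidate count, and the returned boolean matrix are illustrative assumptions, not the library implementation:

```python
import torch

def dynamic_k_matching_sketch(cost, pair_wise_ious, n_candidate_k=10):
    """cost / pair_wise_ious: (num_FGs, num_GTs). Returns a boolean
    (num_FGs, num_GTs) matching matrix."""
    num_fg, num_gt = cost.shape
    matching = torch.zeros_like(cost, dtype=torch.bool)
    # each gt's dynamic k = sum of its top candidate IoUs, at least 1
    topk_ious, _ = pair_wise_ious.topk(min(n_candidate_k, num_fg), dim=0)
    dynamic_ks = topk_ious.sum(0).int().clamp(min=1)
    for g in range(num_gt):
        # take the dynamic_ks[g] lowest-cost foreground candidates for gt g
        _, idx = cost[:, g].topk(int(dynamic_ks[g]), largest=False)
        matching[idx, g] = True
    # a fg matched to several gts keeps only its cheapest gt
    multi = matching.sum(1) > 1
    if multi.any():
        best_gt = cost[multi].argmin(1)
        matching[multi] = False
        matching[multi, best_gt] = True
    return matching
```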
 
 
 <dl class="py attribute">
-<dt class="sig sig-object py" id="super_gradients.training.losses.YoLoV5DetectionLoss.training">
-<span class="sig-name descname"><span class="pre">training</span></span><em class="property"><span class="pre">:</span> <span class="pre">bool</span></em><a class="headerlink" href="#super_gradients.training.losses.YoLoV5DetectionLoss.training" title="Permalink to this definition"></a></dt>
+<dt class="sig sig-object py" id="super_gradients.training.losses.YoloXDetectionLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.YoloXDetectionLoss.reduction" title="Permalink to this definition"></a></dt>
 <dd></dd></dl>
 
 
 </dd></dl>
@@ -890,30 +894,51 @@ The corresponding lables</p>
 
 
 <dl class="py class">
 <dt class="sig sig-object py" id="super_gradients.training.losses.SSDLoss">
-<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">SSDLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">dboxes</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.html#super_gradients.train
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">SSDLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">dboxes</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><a class="reference internal" href="super_gradients.training.utils.html#super_gradients.train
 <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
-<p>Implements the loss as the sum of the followings:
+<blockquote>
+<div><p>Implements the loss as the sum of the following:
 1. Confidence Loss: All labels, with hard negative mining
 2. Localization Loss: Only on positive labels</p>
+</div></blockquote>
+<dl class="simple">
+<dt>L = (2 - alpha) * L_l1 + alpha * L_cls, where</dt><dd><ul class="simple">
+<li><p>L_cls is HardMiningCrossEntropyLoss</p></li>
+<li><p>L_l1 = [SmoothL1Loss for all positives]</p></li>
+</ul>
+</dd>
+</dl>
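The weighted sum above can be written out directly. A minimal sketch, assuming the hard-negative-mined cross entropy (`L_cls`) is computed separately and passed in, and that the SmoothL1 term is restricted to positives upstream:

```python
import torch
import torch.nn.functional as F

def ssd_loss_sketch(loc_preds, loc_targets, cls_loss, alpha=1.0):
    """L = (2 - alpha) * L_l1 + alpha * L_cls, as stated above.
    `cls_loss` stands in for HardMiningCrossEntropyLoss (assumption:
    it is a ready scalar); loc tensors hold positive-matched boxes."""
    l1 = F.smooth_l1_loss(loc_preds, loc_targets)
    return (2 - alpha) * l1 + alpha * cls_loss
```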
 <dl class="py method">
 <dt class="sig sig-object py" id="super_gradients.training.losses.SSDLoss.match_dboxes">
 <span class="sig-name descname"><span class="pre">match_dboxes</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#SSDLoss.match_dboxes"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#super_gradients.training.losses.SSDLoss.match_dboxes" title="Pe
-<dd><p>convert ground truth boxes into a tensor with the same size as dboxes. each gt bbox is matched to every
-destination box which overlaps it over 0.5 (IoU). so some gt bboxes can be duplicated to a few destination boxes
-:param targets: a tensor containing the boxes for a single image. shape [num_boxes, 5] (x,y,w,h,label)
-:return: two tensors</p>
-<blockquote>
-<div><p>boxes - shape of dboxes [4, num_dboxes] (x,y,w,h)
+<dd><p>Creates tensors with target boxes and labels for each dbox, i.e. with the same length as dboxes.</p>
+<ul class="simple">
+<li><p>Each GT is assigned with a grid cell with the highest IoU, this creates a pair for each GT and some cells;</p></li>
+<li><p>The rest of grid cells are assigned to a GT with the highest IoU, assuming it’s &gt; self.iou_thresh;
+If this condition is not met the grid cell is marked as background</p></li>
+</ul>
+<p>GT-wise: one to many
+Grid-cell-wise: one to one</p>
+<dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><p><strong>targets</strong> – a tensor containing the boxes for a single image;
+shape [num_boxes, 6] (image_id, label, x, y, w, h)</p>
+</dd>
+<dt class="field-even">Returns</dt>
+<dd class="field-even"><p>two tensors
+boxes - shape of dboxes [4, num_dboxes] (x,y,w,h)
 labels - shape [num_dboxes]</p>
-</div></blockquote>
+</dd>
+</dl>
 </dd></dl>
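The two assignment stages described above (per-dbox thresholded matching, then a forced best-dbox per GT) can be sketched from an IoU matrix alone. Labels use `0` as background here, which is an assumption of this illustration, not the library's convention:

```python
import torch

def match_dboxes_sketch(gt_labels, iou, iou_thresh=0.5):
    """iou: (num_gt, num_dboxes) IoU matrix; gt_labels: (num_gt,).
    Returns per-dbox labels, 0 = background (assumption)."""
    num_gt, num_db = iou.shape
    labels = torch.zeros(num_db, dtype=torch.long)
    # stage 1: each dbox takes its best gt, if the overlap clears the threshold
    best_gt_iou, best_gt = iou.max(0)
    pos = best_gt_iou > iou_thresh
    labels[pos] = gt_labels[best_gt[pos]]
    # stage 2: every gt keeps at least its best-overlapping dbox (one-to-many
    # GT-wise, one-to-one grid-cell-wise, as stated above)
    best_db = iou.argmax(1)
    labels[best_db] = gt_labels
    return labels
```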
 
 
 <dl class="py method">
 <dt class="sig sig-object py" id="super_gradients.training.losses.SSDLoss.forward">
-<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#SSDLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="head
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">predictions</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">Tuple</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">targets</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/ssd_loss.html#
 <dd><dl class="simple">
-<dt>Compute the loss</dt><dd><p>:param predictions - predictions tensor coming from the network. shape [N, num_classes+4, num_dboxes]
-were the first four items are (x,y,w,h) and the rest are class confidence
+<dt>Compute the loss</dt><dd><p>:param predictions - predictions tensor coming from the network,
+tuple with shapes ([Batch Size, 4, num_dboxes], [Batch Size, num_classes + 1, num_dboxes])
+where predictions have logprobs for background and other classes
 :param targets - targets for the batch. [num targets, 6] (index in batch, label, x,y,w,h)</p>
 </dd>
 </dl>
@@ -962,6 +987,54 @@ were the first four items are (x,y,w,h) and the rest are class confidence
 
 
 </dd></dl>
 
 
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.losses.KDLogitsLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">KDLogitsLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">task_loss_fn</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">torch.nn.modules.loss._Loss</span></span></em>, <em class="sig-p
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
+<p>Knowledge distillation loss, wraps the task loss and distillation loss</p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.KDLogitsLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">kd_module_output</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">target</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/super_gradients/training/losses/kd_losses.html#KDLogitsLoss.forward"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a c
+<dd><p>Defines the computation performed at every call.</p>
+<p>Should be overridden by all subclasses.</p>
+<div class="admonition note">
+<p class="admonition-title">Note</p>
+<p>Although the recipe for forward pass needs to be defined within
+this function, one should call the <code class="xref py py-class docutils literal notranslate"><span class="pre">Module</span></code> instance afterwards
+instead of this since the former takes care of running the
+registered hooks while the latter silently ignores them.</p>
+</div>
+</dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.KDLogitsLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.KDLogitsLoss.reduction" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+</dd></dl>
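A common way to realize such a wrapper is a weighted sum of the task loss and a temperature-softened KL term between student and teacher logits. The coefficient and temperature names below are illustrative assumptions, not the class's actual signature:

```python
import torch
import torch.nn.functional as F

def kd_logits_loss_sketch(student_logits, teacher_logits, target,
                          task_loss_fn, distillation_loss_coeff=0.5, T=1.0):
    """Sketch: (1 - c) * task_loss + c * KL(student_T || teacher_T).
    Parameter names are assumptions for illustration."""
    task = task_loss_fn(student_logits, target)
    # KL between temperature-softened distributions, scaled by T^2
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - distillation_loss_coeff) * task + distillation_loss_coeff * kd
```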
+
+<dl class="py class">
+<dt class="sig sig-object py" id="super_gradients.training.losses.DiceCEEdgeLoss">
+<em class="property"><span class="pre">class</span> </em><span class="sig-prename descclassname"><span class="pre">super_gradients.training.losses.</span></span><span class="sig-name descname"><span class="pre">DiceCEEdgeLoss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">num_classes</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">int</span></span></em>, <em class="sig-param"><span class="n"><
+<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.nn.modules.loss._Loss</span></code></p>
+<dl class="py method">
+<dt class="sig sig-object py" id="super_gradients.training.losses.DiceCEEdgeLoss.forward">
+<span class="sig-name descname"><span class="pre">forward</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">preds</span></span><span class="p"><span class="pre">:</span></span> <span class="n"><span class="pre">Tuple</span><span class="p"><span class="pre">[</span></span><span class="pre">torch.Tensor</span><span class="p"><span class="pre">]</span></span></span></em>, <em class="sig-param"><span class="n"><span class="pre">target</span></span>
+<dd><dl class="field-list simple">
+<dt class="field-odd">Parameters</dt>
+<dd class="field-odd"><p><strong>preds</strong> – Model output predictions, must be in the following format:
+[Main-feats, Aux-feats[0], …, Aux-feats[num_auxs-1], Detail-feats[0], …, Detail-feats[num_details-1]]</p>
+</dd>
+</dl>
+</dd></dl>
+
+<dl class="py attribute">
+<dt class="sig sig-object py" id="super_gradients.training.losses.DiceCEEdgeLoss.reduction">
+<span class="sig-name descname"><span class="pre">reduction</span></span><em class="property"><span class="pre">:</span> <span class="pre">str</span></em><a class="headerlink" href="#super_gradients.training.losses.DiceCEEdgeLoss.reduction" title="Permalink to this definition"></a></dt>
+<dd></dd></dl>
+
+</dd></dl>
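The `preds` layout described above can be unpacked positionally. A small sketch, assuming `num_auxs` and `num_details` are known to the caller (names are illustrative):

```python
def split_preds(preds, num_auxs, num_details):
    """Splits [main, aux_0..aux_{num_auxs-1},
    detail_0..detail_{num_details-1}] into its three groups."""
    main = preds[0]
    aux = preds[1:1 + num_auxs]
    details = preds[1 + num_auxs:1 + num_auxs + num_details]
    return main, aux, details
```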
+
 </section>
 </section>
 
 