#818 Feature/sg 750 fix links doc

Merged
Ghost merged 1 commit into Deci-AI:master from deci-ai:feature/SG-750-fix_links_doc
@@ -17,7 +17,8 @@ OSError: .../lib/python3.8/site-packages/nvidia/cublas/lib/libcublas.so.11: unde
 ```
 
 
 This may indicate a CUDA conflict between libraries (when torchvision and torch are installed for different CUDA versions) or the absence of CUDA support in your torch version.
-To fix this you can:
+To fix this you can
+
 - Uninstall both torch and torchvision: `pip uninstall torch torchvision`
 - Install the torch version that matches your **os** & **compute platform**, following the instructions at https://pytorch.org/
 
 
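As an aside (not part of the PR), the mismatch described above can be spotted directly from the wheel versions: PyTorch wheels encode the compute platform as a local version suffix such as `+cu118` or `+cpu`. A minimal sketch, assuming that naming convention:

```python
# Sketch: detect a CUDA-version mismatch between torch and torchvision wheels.
# PyTorch wheels encode the compute platform as a local-version suffix,
# e.g. "2.0.1+cu118" (built for CUDA 11.8) or "2.0.1+cpu" (no CUDA support).

def cuda_tag(version: str) -> str:
    """Return the compute-platform suffix of a wheel version ("" if none)."""
    _, sep, tag = version.partition("+")
    return tag if sep else ""

def builds_conflict(torch_version: str, torchvision_version: str) -> bool:
    """True when the two wheels target different compute platforms."""
    return cuda_tag(torch_version) != cuda_tag(torchvision_version)

# A CUDA 11.8 torch next to a CPU-only torchvision is exactly the
# broken setup that produces the libcublas symbol error above.
print(builds_conflict("2.0.1+cu118", "0.15.2+cpu"))    # mismatched builds
print(builds_conflict("2.0.1+cu118", "0.15.2+cu118"))  # matching builds
```

In a live environment the version strings would come from `torch.__version__` and `torchvision.__version__`; they are hard-coded here so the sketch stays self-contained.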
@@ -31,7 +32,8 @@ It is pretty common to run out of memory when using GPU. This is shown with foll
 CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 4.29 GiB already allocated; 10.12 MiB free; 4.46 GiB reserved in total by PyTorch)
 ```
 
 
-To reduce memory usage, try the following:
+To reduce memory usage, try the following
+
 - Decrease the batch size (`dataset_params.train_dataloader_params.batch_size` and `dataset_params.val_dataloader_params.batch_size`)
 - Adjust the number of batch accumulation steps (`training_hyperparams.batch_accumulate`) and/or the number of GPUs (if you are using [DDP](device.md)) to keep the effective batch size the same: `effective_batch_size = num_gpus * batch_size * batch_accumulate`
 
 
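The effective-batch-size relation above can be sanity-checked with a quick sketch (the numbers below are hypothetical; the parameter names mirror the config keys mentioned in the doc):

```python
# Sketch: keep the effective batch size constant while lowering the per-GPU
# batch size, compensating via batch accumulation
# (training_hyperparams.batch_accumulate).

def effective_batch_size(num_gpus: int, batch_size: int, batch_accumulate: int) -> int:
    return num_gpus * batch_size * batch_accumulate

# Original setup: 2 GPUs, batch size 32, no accumulation.
original = effective_batch_size(num_gpus=2, batch_size=32, batch_accumulate=1)

# After a CUDA OOM: halve the batch size, double the accumulation steps.
reduced = effective_batch_size(num_gpus=2, batch_size=16, batch_accumulate=2)

assert original == reduced  # training dynamics stay comparable
```

Gradient accumulation trades memory for wall-clock time: each optimizer step now spans `batch_accumulate` forward/backward passes, but peak activation memory follows the smaller per-GPU batch.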