Combining two pretrained PyTorch Lightning modules into a single model fails during `Trainer.fit`. The script imports `DataLoader` and `TensorDataset` from `torch.utils.data`, imports `pytorch_lightning as pl`, and defines a `BaseModule(pl.LightningModule)` with a `training_step(self, batch, batch_idx)` and a `configure_optimizers` that ends in `return optimizer`. Two pretrained submodels, `pretrainA` and `pretrainB` (the former an instance of `PretrainModelA(BaseModule)`), are passed to a `CombineModel` whose `__init__(self, pretrainA, pretrainB)` stores them and whose forward pass combines the two outputs along `dim=1` (`..., pretrainB(x)), dim=1)`). The training data is a `TensorDataset` (`ds = TensorDataset(torch.` …) served through a `DataLoader`. The combined model is built on the line marked as the source of the error:

    # Error here
    combine = CombineModel(pretrainA, pretrainB)

Fitting the combined model prints the usual startup warnings, the last line of the model summary, and the first epoch's progress bar:

    /home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: GPU available but not used. Set the gpus flag in your trainer `Trainer(gpus=1)` or script `--gpus=1`.
    /home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:69: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
    0.000     Total estimated model params size (MB)
    Epoch 0: 100%|█████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00

It then raises an exception whose traceback runs from the trainer into the TensorBoard logger's `save`:

    File "/home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
    File "/home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 753, in _run
    File "/home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 785, in pre_dispatch
    File "/home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 49, in wrapped_fn
    File "/home/usr/project/env/lib/python3.6/site-packages/pytorch_lightning/loggers/tensorboard.py", line 238, in save
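For context, a minimal, self-contained sketch of the setup described above. It is an assumption-heavy reconstruction rather than the original script: the `nn.Linear` layer sizes, the MSE loss, the Adam optimizer, the random `TensorDataset` contents, and the separate pre-training `Trainer` runs are placeholders; only the class names, the `__init__(self, pretrainA, pretrainB)` signature, the `dim=1` combination of the two outputs, and the `TensorDataset`/`DataLoader` pipeline come from the post. The `torch.cat` call is likewise assumed, and the `# Error here` marker is carried over from the original.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class BaseModule(pl.LightningModule):
    # Shared training logic; the MSE loss and Adam optimizer are placeholders.
    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self(x), y)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer


class PretrainModelA(BaseModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 1)  # layer sizes are made up

    def forward(self, x):
        x = self.net(x)
        return x


class PretrainModelB(BaseModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(4, 1)  # layer sizes are made up

    def forward(self, x):
        x = self.net(x)
        return x


class CombineModel(BaseModule):
    # Wraps the two pretrained modules and merges their outputs.
    def __init__(self, pretrainA, pretrainB):
        super().__init__()
        self.pretrainA = pretrainA
        self.pretrainB = pretrainB
        self.head = nn.Linear(2, 1)  # 1 + 1 concatenated features -> 1 output

    def forward(self, x):
        # The fragment ends in `..., pretrainB(x)), dim=1)`; torch.cat is assumed.
        x = torch.cat((self.pretrainA(x), self.pretrainB(x)), dim=1)
        return self.head(x)


# Dummy data: 100 samples with 4 features and a scalar target, in batches of 10.
ds = TensorDataset(torch.randn(100, 4), torch.randn(100, 1))
dl = DataLoader(ds, batch_size=10)

pretrainA = PretrainModelA()
pretrainB = PretrainModelB()
pl.Trainer(max_epochs=1).fit(pretrainA, dl)
pl.Trainer(max_epochs=1).fit(pretrainB, dl)

# Error here
combine = CombineModel(pretrainA, pretrainB)
pl.Trainer(max_epochs=1).fit(combine, dl)
```

The traceback ends in `pytorch_lightning/loggers/tensorboard.py`, in `save`, reached through the `rank_zero_only` wrapper in `utilities/distributed.py`; in Lightning 1.x that method writes the logged hyperparameters to `hparams.yaml`. So one plausible culprit is a hyperparameter that cannot be serialized to YAML, for example the two `nn.Module` instances passed to `CombineModel.__init__` if `save_hyperparameters()` is called there, though the exception message itself is not shown above.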
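The two `UserWarning`s each state their own remedy. A sketch of both applied to the reconstruction above, assuming the single idle GPU and the 8 CPU cores that the warning texts mention:

```python
# Follow the warnings: train on the available GPU and give the DataLoader
# more worker processes (the warning suggests trying 8 on this machine).
dl = DataLoader(ds, batch_size=10, num_workers=8)
trainer = pl.Trainer(gpus=1, max_epochs=1)
trainer.fit(combine, dl)
```

Neither setting is related to the traceback itself; they only address the performance warnings.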