I "discovered" the PyTorch-Lightning library twice. The first time, it felt heavy and hard to learn, and I did not think I had any use for it. But as my projects started making slightly more advanced demands, I found myself repeatedly spending large amounts of time on similar engineering code, and most of my debugging time went into that code as well. A contradiction gradually emerged: if I wanted more and better features, such as TensorBoard support, early stopping, LR schedulers, distributed training, and quick test runs, the code inevitably grew longer and messier, and the core training logic got buried under all the engineering code. Is there a better solution, ideally one that solves all of these problems in one go?
And so I discovered PyTorch-Lightning for the second time.
But problems remained. The framework did not become easier to learn just because it is good. The official tutorials are rich, and you can tell the developers have put in real effort, but many related pieces of knowledge are scattered across different sections, and some key points of understanding are not emphasized but only mentioned in passing. That made me want to write an accessible tutorial that covers every concept I found important while learning, the parameters that are actually useful, the caveats and pitfalls, plenty of example code snippets, and a focused discussion of the core questions.
Finally, the third part presents a template I distilled that works well for large projects, migrates easily, and is easy to reuse. If you are interested, try it out on GitHub: https://github.com/miracleyoo/pytorch-lightning-template.
One of the defining ideas of PyTorch-Lightning is to separate the model from the system. A model is a pure model such as ResNet18 or an RNN, while a system defines how a group of models interact, e.g. a GAN (generator network and discriminator network), Seq2Seq (encoder and decoder networks), or BERT. When a problem involves only a single model, the system can be a generic one that describes how the model is used and can be reused across many other projects. The core design philosophy of PyTorch-Lightning is "self-containment": each network also carries its own training procedure, testing procedure, optimizer definition, and so on.
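To make the model/system split concrete, here is a minimal sketch (my own, not from the official docs) of a Seq2Seq-style "system" wrapping two hypothetical plain nn.Modules, an encoder and a decoder; the LightningModule only describes how they interact and how they are trained:

import torch
from torch import nn
import torch.nn.functional as F
import pytorch_lightning as pl

class Seq2SeqSystem(pl.LightningModule):
    def __init__(self, encoder: nn.Module, decoder: nn.Module, lr: float = 1e-3):
        super().__init__()
        # the "models" are ordinary nn.Modules passed in from outside
        self.encoder = encoder
        self.decoder = decoder
        self.lr = lr

    def forward(self, x):
        # inference: how the two models interact
        return self.decoder(self.encoder(x))

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.mse_loss(y_hat, y)  # loss choice is just an example
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

The same system class can be reused with different encoder/decoder models, which is exactly the point of the separation.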
This part comes first because the article is long, and putting it at the end would make it easy to miss. PyTorch-Lightning is a great library, or rather an abstraction and wrapper around PyTorch. Its strengths are strong reusability, easy maintenance, and clear logic. Its drawback is equally obvious: there is quite a lot to learn and understand; in other words, it is heavy. If you write code directly from the official template, a small project is fine, but a large project with several models and datasets to tune and validate becomes awkward, or even more troublesome than before. After a few days of exploring and tweaking, I arrived at the template below, which you can think of as a further abstraction on top of PyTorch-Lightning.
Feel free to try this code style. Once you get used to it, it is quite convenient to reuse, and you are less likely to abandon it halfway.
root-
    |-data
        |-__init__.py
        |-data_interface.py
        |-xxxdataset1.py
        |-xxxdataset2.py
        |-...
    |-model
        |-__init__.py
        |-model_interface.py
        |-xxxmodel1.py
        |-xxxmodel2.py
        |-...
    |-main.py
If you turn every model directly into a Lightning module, converting an existing project or someone else's code takes a lot of time. You would also have to add similar code, such as training_step and validation_step, to every model. That is clearly not what we want; doing so would not only be hard to maintain, it could make things even messier. Likewise, converting every dataset class directly into a pl DataModule runs into the same problem. With that in mind, I recommend the architecture above:
Put an __init__.py file into each of the data and model folders to make them packages, which makes importing easier. The two init files are, respectively, from .data_interface import DInterface and from .model_interface import MInterface.
In data_interface, create a class DInterface(pl.LightningDataModule): as the interface to all dataset files. Import the corresponding Dataset class in __init__(), instantiate it in setup(), and dutifully add the required train_dataloader, val_dataloader, and test_dataloader functions. These functions are usually similar, and a few input args can control the parts that differ.
Similarly, in model_interface create a class MInterface(pl.LightningModule): as the intermediate interface for models. Import the corresponding model class in __init__(), then dutifully add configure_optimizers, training_step, validation_step, and the other functions, so that a single interface class controls all models, with the differing parts driven by input arguments.
main.py is only responsible for: defining the parser and adding parse arguments; picking the desired callbacks; and instantiating MInterface, DInterface, and Trainer. A sketch of this structure follows.
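A minimal sketch of what the data-side interface class might look like (the dynamic-import and naming conventions here are my assumptions; the real template on GitHub is more complete):

import importlib
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class DInterface(pl.LightningDataModule):
    def __init__(self, dataset='xxxdataset1', class_name='XxxDataset1',
                 batch_size=32, num_workers=4, **kwargs):
        super().__init__()
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.kwargs = kwargs
        # dynamically load the Dataset class from data/<dataset>.py
        # (module/class naming convention is an assumption)
        self.data_class = getattr(
            importlib.import_module('.' + dataset, package='data'), class_name)

    def setup(self, stage=None):
        # instantiate the train/val/test sets; how the split is selected
        # depends on your own Dataset's arguments
        if stage in ('fit', None):
            self.trainset = self.data_class(split='train', **self.kwargs)
            self.valset = self.data_class(split='val', **self.kwargs)
        if stage in ('test', None):
            self.testset = self.data_class(split='test', **self.kwargs)

    def train_dataloader(self):
        return DataLoader(self.trainset, batch_size=self.batch_size,
                          num_workers=self.num_workers, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.valset, batch_size=self.batch_size,
                          num_workers=self.num_workers)

    def test_dataloader(self):
        return DataLoader(self.testset, batch_size=self.batch_size,
                          num_workers=self.num_workers)

MInterface follows the same pattern on the model side: it dynamically imports the chosen model class in __init__() and implements training_step, validation_step, and configure_optimizers once for all models.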
The complete template can be found on GitHub: https://github.com/miracleyoo/pytorch-lightning-template.
Introduction
Docs: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html
outs = []
for batch in data:
    out = training_step(batch)
    outs.append(out)
training_epoch_end(outs)

The equivalent Lightning code:

def training_step(self, batch, batch_idx):
    prediction = ...
    return prediction

def training_epoch_end(self, training_step_outputs):
    for prediction in training_step_outputs:
        # do something with these
        ...
Components and functions. API page: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html%23lightningmodule-api
The components a PyTorch-Lightning model must contain are:
init: initialization, including the definition of the model and the system.
training_step(self, batch, batch_idx): the function that processes each batch.
Parameters:
batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.
batch_idx (int) – Integer displaying index of this batch
optimizer_idx (int) – When using multiple optimizers, this argument will also be present.
hiddens (Tensor) – Passed in if truncated_bptt_steps > 0.
Return:
dict – A dictionary. Can include any keys, but must include the key 'loss'
None – Training will skip to the next batch
Whatever you return, it must contain a loss value; if you return a dict, it must contain the 'loss' key. Without a loss, the batch is simply skipped. Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    ...
    out, hiddens = self.lstm(data, hiddens)
    ...
    return {'loss': loss, 'hiddens': hiddens}
configure_optimizers: defines the optimizers. It returns one optimizer, several optimizers, or two lists (optimizers, schedulers). For example:
# most cases
def configure_optimizers(self):
    opt = Adam(self.parameters(), lr=1e-3)
    return opt

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    return generator_opt, discriminator_opt

# example with learning rate schedulers
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)
    return [generator_opt, discriminator_opt], [discriminator_sched]

# example with step-based learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    gen_sched = {'scheduler': ExponentialLR(gen_opt, 0.99),
                 'interval': 'step'}  # called after each training step
    dis_sched = CosineAnnealing(dis_opt, T_max=10)  # called every epoch
    return [gen_opt, dis_opt], [gen_sched, dis_sched]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )
forward: same as in a regular nn.Module, used for inference. Internally you call it as y = self(batch).
training_step_end: only needed when training on multiple nodes/devices and the result requires a joint computation over all outputs, such as a softmax over the full batch. Likewise validation_step_end / test_step_end.
training_epoch_end: called at the end of a training epoch; input: a list containing what each training_step() returned; returns: None.
validation_step(self, batch, batch_idx) / test_step(self, batch, batch_idx): no constraints on the return value; you do not have to output a val_loss.
validation_epoch_end / test_epoch_end
A small sketch of a validation step follows.
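A minimal sketch (my own, not from the official docs) of what a validation_step / validation_epoch_end pair might look like, assuming a classification setup with a self.model attribute; these methods live inside your LightningModule:

import torch
import torch.nn.functional as F

# inside your LightningModule:
def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    val_loss = F.cross_entropy(y_hat, y)
    acc = (y_hat.argmax(dim=1) == y).float().mean()
    # in validation, logged values are accumulated per epoch by default
    self.log('val_loss', val_loss, prog_bar=True)
    self.log('val_acc', acc, prog_bar=True)
    return {'val_loss': val_loss, 'val_acc': acc}

def validation_epoch_end(self, outputs):
    # `outputs` is the list of dicts returned by each validation_step call
    avg_loss = torch.stack([o['val_loss'] for o in outputs]).mean()
    self.print(f'avg val_loss: {avg_loss:.4f}')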
freeze: freezes all weights for prediction. Use it only when training is finished and you will only be testing afterwards.
print: the built-in print also works, but if the program runs on a distributed system it will print multiple times, whereas self.print() prints only once.
log: for loggers such as TensorBoard, every logged scalar has a corresponding x-axis value, which may be the batch number or the epoch number. on_step means the logged value's x-axis is the current batch, while on_epoch means the value is accumulated over the whole epoch and logged with the current epoch as the x-axis. (The same defaults also apply to the test loop.)
Parameters of log:
prog_bar (bool) – if True logs to the progress bar
logger (bool) – if True logs to the logger
on_step (Optional[bool]) – if True logs at this step. None auto-logs at the training_step but not validation/test_step
on_epoch (Optional[bool]) – if True logs epoch accumulated metrics. None auto-logs at the val/test step but not training_step
reduce_fx (Callable) – reduction function over step values for end of epoch. torch.mean by default
tbptt_reduce_fx (Callable) – function to reduce on truncated back prop
tbptt_pad_token (int) – token to use for padding
enable_graph (bool) – if True, will not auto detach the graph
sync_dist (bool) – if True, reduces the metric across GPUs/TPUs
sync_dist_op (Union[Any, str]) – the op to sync across GPUs/TPUs
sync_dist_group (Optional[Any]) – the ddp group
log_dict: the only difference from log is that the name and value are replaced by a dictionary, so several values are logged at once. For example:
values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
self.log_dict(values)
save_hyperparameters: stores all the hyperparameters passed to init. They can later be accessed as self.hparams.argX, and the hyperparameter table is also saved to file.
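A minimal sketch of save_hyperparameters in use (the argument names lr and hidden_dim are just placeholders of mine):

import torch
from torch import nn
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self, lr=1e-3, hidden_dim=128):
        super().__init__()
        # stores lr and hidden_dim into self.hparams and into the saved hparams file
        self.save_hyperparameters()
        self.layer = nn.Linear(self.hparams.hidden_dim, 10)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)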
device: you can use self.device to build device-agnostic tensors, e.g. z = torch.rand(2, 3, device=self.device).
Key point: if you plan to use DataParallel, call forward inside training_step via z = self(x).
Template:

class LitModel(pl.LightningModule):
    def __init__(...):
    def forward(...):
    def training_step(...)
    def training_step_end(...)
    def training_epoch_end(...)
    def validation_step(...)
    def validation_step_end(...)
    def validation_epoch_end(...)
    def test_step(...)
    def test_step_end(...)
    def test_epoch_end(...)
    def configure_optimizers(...)
    def any_extra_hook(...)
Basic usage
model = MyLightningModule()
trainer = Trainer()
trainer.fit(model, train_dataloader, val_dataloader)
If you do not even have a validation_step, you can drop val_dataloader as well.
Pseudocode and hooks. Hooks page: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html%23hooks
def fit(...):
    on_fit_start()
    if global_rank == 0:
        # prepare data is called on GLOBAL_ZERO only
        prepare_data()
    for gpu/tpu in gpu/tpus:
        train_on_device(model.copy())
    on_fit_end()

def train_on_device(model):
    # setup is called PER DEVICE
    setup()
    configure_optimizers()
    on_pretrain_routine_start()
    for epoch in epochs:
        train_loop()
    teardown()

def train_loop():
    on_train_epoch_start()
    train_outs = []
    for train_batch in train_dataloader():
        on_train_batch_start()
        # ----- train_step methods -------
        out = training_step(batch)
        train_outs.append(out)
        loss = out.loss
        backward()
        on_after_backward()
        optimizer_step()
        on_before_zero_grad()
        optimizer_zero_grad()
        on_train_batch_end(out)
        if should_check_val:
            val_loop()
    # end training epoch
    logs = training_epoch_end(outs)

def val_loop():
    model.eval()
    torch.set_grad_enabled(False)
    on_validation_epoch_start()
    val_outs = []
    for val_batch in val_dataloader():
        on_validation_batch_start()
        # -------- val step methods -------
        out = validation_step(val_batch)
        val_outs.append(out)
        on_validation_batch_end(out)
    validation_epoch_end(val_outs)
    on_validation_epoch_end()
    # set up for train
    model.train()
    torch.set_grad_enabled(True)
Recommended flags. Flag overview (with videos): https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html%23trainer-flags  Class definition and default arguments: https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html%23trainer-class-api
default_root_dir: default storage location. All experiment artifacts and weights are saved under this folder. The recommendation is one separate folder per model; each new training run creates a new version_x subfolder inside it.
max_epochs: maximum number of training epochs. trainer = Trainer(max_epochs=1000)
min_epochs: minimum number of training epochs, useful together with early stopping.
auto_scale_batch_size: automatically pick a suitable batch size before any training starts.
# default used by the Trainer (no scaling of batch size)
trainer = Trainer(auto_scale_batch_size=None)

# run batch size scaling, result overrides hparams.batch_size
trainer = Trainer(auto_scale_batch_size='binsearch')

# call tune to find the batch size
trainer.tune(model)
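The other flags above are plain constructor arguments; a minimal sketch (the path is just an example of mine):

from pytorch_lightning import Trainer

# one folder per model; each run creates a new version_x subfolder inside it
trainer = Trainer(
    default_root_dir='./logs/my_model',  # example path
    max_epochs=1000,
    min_epochs=10,
)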
auto_select_gpus: automatically pick suitable GPUs, which is especially useful when some GPUs are in exclusive mode.
auto_lr_find: automatically find a good initial learning rate, using the technique from https://arxiv.org/abs/1506.01186. It only takes effect when you run trainer.tune(model).
# run learning rate finder, results override hparams.learning_rate
trainer = Trainer(auto_lr_find=True)

# run learning rate finder, results override hparams.my_lr_arg
trainer = Trainer(auto_lr_find='my_lr_arg')

# call tune to find the lr
trainer.tune(model)

precision: numerical precision. The normal value is 32; using 16 reduces memory consumption and allows larger batches.

# default used by the Trainer
trainer = Trainer(precision=32)

# 16-bit precision
trainer = Trainer(precision=16, gpus=1)
val_check_interval: how often to run validation. The normal value is 1.0; to validate 4 times per training epoch use 0.25; to validate every 1000 batches use 1000.
Use a float to check within a training epoch: the value is a fraction of an epoch, i.e. validate every that fraction of the epoch. Use an int to check every n steps (batches): validate every n batches.
# default used by the Trainer
trainer = Trainer(val_check_interval=1.0)

# check validation set 4 times during a training epoch
trainer = Trainer(val_check_interval=0.25)

# check validation set every 1000 training batches
# use this when using iterableDataset and your dataset has no length
# (ie: production cases with streaming data)
trainer = Trainer(val_check_interval=1000)
gpus: controls how many GPUs are used. When set to None, the CPU is used.
# default used by the Trainer (ie: train on CPU)
trainer = Trainer(gpus=None)

# equivalent
trainer = Trainer(gpus=0)

# int: train on 2 gpus
trainer = Trainer(gpus=2)

# list: train on GPUs 1, 4 (by bus ordering)
trainer = Trainer(gpus=[1, 4])
trainer = Trainer(gpus='1, 4')  # equivalent

# -1: train on all gpus
trainer = Trainer(gpus=-1)
trainer = Trainer(gpus='-1')  # equivalent

# combine with num_nodes to train on multiple GPUs across nodes
# uses 8 gpus in total
trainer = Trainer(gpus=2, num_nodes=4)

# train only on GPUs 1 and 4 across nodes
trainer = Trainer(gpus=[1, 4], num_nodes=4)
limit_train_batches: use only a fraction of the training data, useful when you have too much data or are debugging. A float value lies between 0 and 1. There are also limit_test_batches and limit_val_batches; see the sketch after the example below.
# default used by the Trainer
trainer = Trainer(limit_train_batches=1.0)

# run through only 25% of the training set each epoch
trainer = Trainer(limit_train_batches=0.25)

# run through only 10 batches of the training set each epoch
trainer = Trainer(limit_train_batches=10)
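limit_val_batches and limit_test_batches work the same way; a minimal sketch:

from pytorch_lightning import Trainer

# use 50% of the validation set and only 5 test batches
trainer = Trainer(limit_val_batches=0.5, limit_test_batches=5)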
fast_dev_run: a bool. If True, only a single batch of train, val and test is run and the program then ends. For debugging only.
Setting this argument will disable tuner, checkpoint callbacks, early stopping callbacks, loggers and logger callbacks like LearningRateLogger and runs for only 1 epoch
# default used by the Trainer
trainer = Trainer(fast_dev_run=False)

# runs 1 train, val, test batch and program ends
trainer = Trainer(fast_dev_run=True)

# runs 7 train, val, test batches and program ends
trainer = Trainer(fast_dev_run=7)
The .fit() function. Trainer.fit(model, train_dataloader=None, val_dataloaders=None, datamodule=None): the first argument is always the model; after that you can pass either a LightningDataModule or a regular train DataLoader. If you defined a val step, you also need a val DataLoader.
datamodule (Optional[LightningDataModule]) – An instance of LightningDataModule.
model (LightningModule) – Model to fit.
train_dataloader (Optional[DataLoader]) – A PyTorch DataLoader with training samples. If the model has a predefined train_dataloader method this will be skipped.
val_dataloaders (Union[DataLoader, List[DataLoader], None]) – Either a single PyTorch DataLoader or a list of them, specifying validation samples. If the model has a predefined val_dataloaders method this will be skipped.
Other points: .test() does not run unless you call it explicitly, i.e. trainer.test(). model.eval() and torch.no_grad() are called automatically during testing. A short sketch follows.
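A minimal sketch (my own) of fitting with a DataModule and then testing; MyLightningModule and MyDataModule are assumed placeholders:

from pytorch_lightning import Trainer

model = MyLightningModule()   # assumed LightningModule
dm = MyDataModule()           # assumed LightningDataModule
trainer = Trainer(gpus=1, max_epochs=10)

# the DataModule supplies the train/val/test dataloaders
trainer.fit(model, datamodule=dm)

# .test() only runs when called explicitly; eval mode and no_grad are handled for you
trainer.test(datamodule=dm)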
Usage examples
1. Manually define the command-line arguments you need:

from argparse import ArgumentParser

def main(hparams):
    model = LightningModule()
    trainer = Trainer(gpus=hparams.gpus)
    trainer.fit(model)

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('--gpus', default=None)
    args = parser.parse_args()
    main(args)
2. Automatically add all the command-line arguments the Trainer uses:
from argparse import ArgumentParser

def main(args):
    model = LightningModule()
    trainer = Trainer.from_argparse_args(args)
    trainer.fit(model)

if __name__ == '__main__':
    parser = ArgumentParser()
    parser = Trainer.add_argparse_args(
        # group the Trainer arguments together
        parser.add_argument_group(title="pl.Trainer args")
    )
    args = parser.parse_args()
    main(args)
3. A hybrid: use the Trainer arguments together with some custom arguments, such as model hyperparameters:
from argparse import ArgumentParser
import pytorch_lightning as pl
from pytorch_lightning import LightningModule, Trainer

def main(args):
    model = LightningModule()
    trainer = Trainer.from_argparse_args(args)
    trainer.fit(model)

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('--batch_size', default=32, type=int)
    parser.add_argument('--hidden_dim', type=int, default=128)
    parser = Trainer.add_argparse_args(
        # group the Trainer arguments together
        parser.add_argument_group(title="pl.Trainer args")
    )
    args = parser.parse_args()
    main(args)
All arguments:

Trainer.__init__(logger=True, checkpoint_callback=True, callbacks=None, default_root_dir=None, gradient_clip_val=0, process_position=0, num_nodes=1, num_processes=1, gpus=None, auto_select_gpus=False, tpu_cores=None, log_gpu_memory=None, progress_bar_refresh_rate=None, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=1, max_epochs=None, min_epochs=None, max_steps=None, min_steps=None, limit_train_batches=1.0, limit_val_batches=1.0, limit_test_batches=1.0, limit_predict_batches=1.0, val_check_interval=1.0, flush_logs_every_n_steps=100, log_every_n_steps=50, accelerator=None, sync_batchnorm=False, precision=32, weights_summary='top', weights_save_path=None, num_sanity_val_steps=2, truncated_bptt_steps=None, resume_from_checkpoint=None, profiler=None, benchmark=False, deterministic=False, reload_dataloaders_every_epoch=False, auto_lr_find=False, replace_sampler_ddp=True, terminate_on_nan=False, auto_scale_batch_size=False, prepare_data_per_node=True, plugins=None, amp_backend='native', amp_level='O2', distributed_backend=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', stochastic_weight_avg=False)

What log and returning the loss actually do
To add a training loop use the training_step method.
class LitClassifier(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = F.cross_entropy(y_hat, y)
        return loss
Whether it is training_step, validation_step, or test_step, the return value is the loss, and the returned losses are collected into a list.
Under the hood, Lightning does the following (pseudocode):
# put model in train mode
model.train()
torch.set_grad_enabled(True)

losses = []
for batch in train_dataloader:
    # forward
    loss = training_step(batch)
    losses.append(loss.detach())

    # backward
    loss.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()
Training epoch-level metrics
If you want to calculate epoch-level metrics and log them, use the .log method.
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)

    # logs metrics for each training_step,
    # and the average across the epoch, to the progress bar and logger
    self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
    return loss
If you call .log() inside an x_step function, that value is recorded step by step. Every logged variable is collected; each step produces a dict of logged values, and each epoch gathers those dicts into a list of dicts.
The .log object automatically reduces the requested metrics across the full epoch. Here’s the pseudocode of what it does under the hood:
outs = []
for batch in train_dataloader:
    # forward
    out = training_step(batch)
    outs.append(out)

    # backward
    loss.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()

epoch_metric = torch.mean(torch.stack([x['train_loss'] for x in outs]))
Train epoch-level operations
If you need to do something with all the outputs of each training_step, override training_epoch_end yourself.
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)
    preds = ...
    return {'loss': loss, 'other_stuff': preds}

def training_epoch_end(self, training_step_outputs):
    for pred in training_step_outputs:
        # do something
        ...
The matching pseudocode is:
outs = []
for batch in train_dataloader:
    # forward
    out = training_step(batch)
    outs.append(out)

    # backward
    loss.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()

training_epoch_end(outs)
Docs: https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html
Introduction
First of all, a DataModule does not conflict at all with the Dataset you wrote before. The former is a wrapper around the latter, and that wrapper can be reused across multiple torch Datasets. In my view, its biggest benefit is that the repetitive code for train/val/test splitting and DataLoader initialization can be reused simply by putting it in a wrapper class.
Processing instructions
Train dataloader: the training set DataLoader
Val dataloader(s): the validation set DataLoader(s)
Test dataloader(s): the test set DataLoader(s)
Second, pl.LightningDataModule is essentially a torch Dataset with extra capabilities, which include:
prepare_data(): at the very beginning, perform operations that must run exactly once no matter how many GPUs there are, such as downloads that write to disk or tokenization. Since it is called in a single process only, do not do assignments like self.x = y here. If the code is only for your own use rather than for distribution, you may not need this function at all, since you can simply preprocess the data in advance.
setup(stage=None): instantiate the Dataset(s) and do the related work, such as counting classes and splitting the train/val/test sets. The stage argument indicates whether we are in the fit stage or the test stage; in the fit stage both the train and val datasets need to be built. setup needs no return value; just assign the initialized train/val/test sets to self.
train_dataloader/val_dataloader/test_dataloader: return the corresponding DataLoader.
Example
class MNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str = './', batch_size: int = 64, num_workers: int = 8):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])

        # self.dims is returned when you call dm.size()
        # Setting default dims here because we know them.
        # Could optionally be assigned dynamically in dm.setup()
        self.dims = (1, 28, 28)
        self.num_classes = 10

    def prepare_data(self):
        # download
        MNIST(self.data_dir, train=True, download=True)
        MNIST(self.data_dir, train=False, download=True)

    def setup(self, stage=None):
        # Assign train/val datasets for use in dataloaders
        if stage == 'fit' or stage is None:
            mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
            self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])

        # Assign test dataset for use in dataloader(s)
        if stage == 'test' or stage is None:
            self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform)

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=self.batch_size, num_workers=self.num_workers)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=self.batch_size, num_workers=self.num_workers)

    def test_dataloader(self):
        return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=self.num_workers)

Key point: if you define a self.dims variable in the DataModule, you can later call dm.size() to retrieve it.
Docs: https://pytorch-lightning.readthedocs.io/en/latest/common/weights_loading.html
Saving
ModelCheckpoint docs: https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.ModelCheckpoint.html%23pytorch_lightning.callbacks.ModelCheckpoint
ModelCheckpoint: the callback module for automatic saving. By default, training only keeps the latest model and its parameters, but with this module you can customize the behavior, e.g. monitor a quantity such as val_loss, keep the top 3 models, and also save the model from the last epoch. Example:
from pytorch_lightning.callbacks import ModelCheckpoint

# saves a file like: my/path/sample-mnist-epoch=02-val_loss=0.32.ckpt
checkpoint_callback = ModelCheckpoint(
    monitor='val_loss',
    filename='sample-mnist-{epoch:02d}-{val_loss:.2f}',
    save_top_k=3,
    mode='min',
    save_last=True
)

trainer = pl.Trainer(gpus=1, max_epochs=3, progress_bar_refresh_rate=20, callbacks=[checkpoint_callback])
You can also save a checkpoint manually: trainer.save_checkpoint("example.ckpt")
In the ModelCheckpoint callback, if save_weights_only=True, only the model's weights are stored (the equivalent of model.save_weights(filepath)); otherwise the whole model is stored (the equivalent of model.save(filepath)), as in the sketch below.
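A minimal sketch of a weights-only checkpoint callback, monitoring val_loss as above:

from pytorch_lightning.callbacks import ModelCheckpoint

# keep only the best weights by val_loss, without optimizer/trainer state
weights_ckpt = ModelCheckpoint(monitor='val_loss', mode='min',
                               save_top_k=1, save_weights_only=True)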
Loading
Load a model, including its weights, biases, and hyperparameters:

model = MyLightningModule.load_from_checkpoint(PATH)

print(model.learning_rate)
# prints the learning_rate you used in this checkpoint

model.eval()
y_hat = model(x)
class LitModel(LightningModule):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.save_hyperparameters()
        self.l1 = nn.Linear(self.hparams.in_dim, self.hparams.out_dim)

# if you train and save the model like this it will use these values when loading
# the weights. But you can overwrite this
LitModel(in_dim=32, out_dim=10)

# uses in_dim=32, out_dim=10
model = LitModel.load_from_checkpoint(PATH)

# uses in_dim=128, out_dim=10
model = LitModel.load_from_checkpoint(PATH, in_dim=128, out_dim=10)

Restore the full training state: load everything about the model plus everything related to training, such as model, epoch, step, LR schedulers, apex, etc.
model = LitModel()
trainer = Trainer(resume_from_checkpoint='some/path/to/my_checkpoint.ckpt')

# automatically restores model, epoch, step, LR schedulers, apex, etc...
trainer.fit(model)
A Callback is a self-contained program that can be interleaved with the training flow without polluting the main research logic.
Callbacks are not only invoked at the end of an epoch. pytorch-lightning provides dozens of hooks (interfaces, i.e. call sites) to choose from, and you can also write custom callbacks to implement whatever module you want. The recommended split is: operations that change with the problem and the project go into the LightningModule itself, while relatively independent, auxiliary pieces that need to be reused are better defined as separate callback modules that can be conveniently plugged in and out later. A sketch of a custom callback follows.
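As a minimal sketch (my own, not from the docs) of such a custom, plug-in module: a callback that prints how long each training epoch took, using two of the available hooks:

import time
import pytorch_lightning as pl

class EpochTimer(pl.Callback):
    def on_train_epoch_start(self, trainer, pl_module):
        self.start = time.time()

    # *args absorbs the extra `outputs` argument present in some PL versions
    def on_train_epoch_end(self, trainer, pl_module, *args):
        pl_module.print(f'epoch {trainer.current_epoch} took {time.time() - self.start:.1f}s')

# plug it in alongside other callbacks
trainer = pl.Trainer(callbacks=[EpochTimer()])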
Recommended built-in Callbacks: https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html%23built-in-callbacks
EarlyStopping(monitor='early_stop_on', min_delta=0.0, patience=3, verbose=False, mode='min', strict=True): stop training early when a monitored quantity has not improved for a number of epochs.
monitor (str) – quantity to be monitored. Default: 'early_stop_on'.
min_delta (float) – minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement. Default: 0.0.
patience (int) – number of validation epochs with no improvement after which training will be stopped. Default: 3.
verbose (bool) – verbosity mode. Default: False.
mode (str) – one of 'min', 'max'. In 'min' mode, training will stop when the quantity monitored has stopped decreasing and in 'max' mode it will stop when the quantity monitored has stopped increasing.
strict (bool) – whether to crash the training if monitor is not found in the validation metrics. Default: True.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

early_stopping = EarlyStopping('val_loss')
trainer = Trainer(callbacks=[early_stopping])
ModelCheckpoint: see Saving and Loading above.
PrintTableMetricsCallback: print a summary table of results at the end of every epoch.
from pl_bolts.callbacks import PrintTableMetricsCallback

callback = PrintTableMetricsCallback()
trainer = pl.Trainer(callbacks=[callback])
trainer.fit(...)

# ------------------------------
# at the end of every epoch it will print
# ------------------------------
# loss│train_loss│val_loss│epoch
# ──────────────────────────────
# 2.2541470527648926│2.2541470527648926│2.2158432006835938│0
Logging: the default logger is TensorBoard, but you can specify any of the mainstream logger frameworks, such as Comet.ml, MLflow, Neptune, or a plain CSV file. You can also use several loggers at the same time.
from pytorch_lightning import loggers as pl_loggers

# Default
tb_logger = pl_loggers.TensorBoardLogger(
    save_dir=os.getcwd(),
    version=None,
    name='lightning_logs'
)
trainer = Trainer(logger=tb_logger)

# Or use the same format as others
tb_logger = pl_loggers.TensorBoardLogger('logs/')

# One Logger
comet_logger = pl_loggers.CometLogger(save_dir='logs/')
trainer = Trainer(logger=comet_logger)

# Save code snapshot
logger = pl_loggers.TestTubeLogger('logs/', create_git_tag=True)

# Multiple Loggers
tb_logger = pl_loggers.TensorBoardLogger('logs/')
comet_logger = pl_loggers.CometLogger(save_dir='logs/')
trainer = Trainer(logger=[tb_logger, comet_logger])
By default, logging happens every 50 batches; this can be changed via the corresponding Trainer flag, as sketched below.
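log_every_n_steps (listed in the Trainer signature above, together with flush_logs_every_n_steps) controls this interval; a minimal sketch:

from pytorch_lightning import Trainer

# log every 10 training steps instead of the default 50
trainer = Trainer(log_every_n_steps=10)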
If you want to log something that is not a scalar, such as images, text, or histograms, you can call self.logger.experiment.add_xxx() directly to do whatever you need.
def training_step(...):
    ...
    # the logger you used (in this case tensorboard)
    tensorboard = self.logger.experiment
    tensorboard.add_image()
    tensorboard.add_histogram(...)
    tensorboard.add_figure(...)
Viewing the logs: for TensorBoard, run tensorboard --logdir ./lightning_logs. In a Jupyter Notebook you can use:
# Start tensorboard.
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
Tip: if TensorBoard is started inside a LAN, add the flag --bind_all so you can access it via the host name:
tensorboard --logdir lightning_logs --bind_all  ->  http://SERVER-NAME:6006/
Docs: https://pytorch-lightning.readthedocs.io/en/latest/starter/introduction_guide.html%23transfer-learning
import torchvision.models as models

class ImagenetTransferLearning(LightningModule):
    def __init__(self):
        super().__init__()

        # init a pretrained resnet
        backbone = models.resnet50(pretrained=True)
        num_filters = backbone.fc.in_features
        layers = list(backbone.children())[:-1]
        self.feature_extractor = nn.Sequential(*layers)

        # use the pretrained model to classify cifar-10 (10 image classes)
        num_target_classes = 10
        self.classifier = nn.Linear(num_filters, num_target_classes)

    def forward(self, x):
        self.feature_extractor.eval()
        with torch.no_grad():
            representations = self.feature_extractor(x).flatten(1)
        x = self.classifier(representations)
        ...
LightningModules know what device they are on! Construct tensors on the device directly to avoid CPU->Device transfer.
# bad
t = torch.rand(2, 2).cuda()

# good (self is LightningModule)
t = torch.rand(2, 2, device=self.device)
For tensors that need to be model attributes, it is best practice to register them as buffers in the module's __init__ method:
# bad
self.t = torch.rand(2, 2, device=self.device)

# good
self.register_buffer("t", torch.rand(2, 2))
If you have an intermediate pl.LightningModule that instantiates an ordinary nn.Module, and that inner model needs to create tensors internally (for example the per-channel mean and std of an image), then passing self.device down from the pl.LightningModule does not help: at the beginning, self.device is always cpu. So if you initialize those tensors in the inner nn.Module's __init__(), whether you call to(device) or do nothing at all, they end up permanently on the cpu.
However, experiments show that although self.device is still cpu during the pl.LightningModule's __init__() stage, it switches to cuda as soon as training_step() is entered. So for submodules the best approach is to take a tensor passed into forward, such as x, as a reference variable, and use type_as to put any tensors created inside the model onto the same device as that reference.
class RDNFuse(nn.Module):
    ...

    def init_norm_func(self, ref):
        self.mean = torch.tensor(np.array(self.mean_sen), dtype=torch.float32).type_as(ref)

    def forward(self, x):
        if not hasattr(self, 'mean'):
            self.init_norm_func(x)
pl.seed_everything(1234): fix the seed of every relevant source of randomness.
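For more reproducible runs, seed_everything is typically combined with the Trainer's deterministic flag (which appears in the full Trainer signature above); a minimal sketch:

import pytorch_lightning as pl
from pytorch_lightning import Trainer

pl.seed_everything(1234)
# also ask PyTorch/cuDNN to use deterministic algorithms
trainer = Trainer(deterministic=True)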
When using an LR scheduler, you do not call .step() yourself; the Trainer handles that automatically too.
Related page: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html%3Fhighlight%3Dscheduler%23
# Single optimizer
for epoch in epochs:
    for batch in data:
        loss = model.training_step(batch, batch_idx, ...)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    for scheduler in schedulers:
        scheduler.step()

# Multiple optimizers
for epoch in epochs:
    for batch in data:
        for opt in optimizers:
            disable_grads_for_other_optimizers()
            train_step(opt)
            opt.step()

    for scheduler in schedulers:
        scheduler.step()
On splitting a dataset into train and val sets: this has nothing to do with PL, but it comes up often. Two examples:
random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42))
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST

mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
dataset (https://pytorch.org/docs/stable/data.html%23torch.utils.data.Dataset) – Dataset to be split
lengths – lengths of splits to be produced
generator (https://pytorch.org/docs/stable/generated/torch.Generator.html%23torch.Generator) – Generator used for the random permutation.