STYLE: Fix lightning deprecation warnings
I'm fixing a few issues that would prevent upgrading the pytorch-lightning package. Trainer.lr_schedulers is being replaced with Trainer.lr_scheduler_configs. In future versions, accessing trainer.logger when multiple loggers are attached will return only the first one, so I'm iterating through trainer.loggers instead. And LightningLoggerBase.close is being replaced with LightningLoggerBase.finalize. Closes #751
This commit is contained in:
Parent: 2877002d50
Commit: cc7d754687
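For context, a minimal sketch of the multi-logger situation the message describes, assuming pytorch-lightning 1.6 or newer, where Trainer accepts a list of loggers and exposes trainer.loggers; the specific logger classes and paths are illustrative only:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

# Illustrative setup: two loggers attached to one Trainer. In future
# pytorch-lightning versions, trainer.logger returns only the first of
# these, so code that goes through "the" logger silently skips CSVLogger.
trainer = Trainer(logger=[TensorBoardLogger("logs/tb"), CSVLogger("logs/csv")])
print(len(trainer.loggers))  # 2: both loggers are reachable this way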
@@ -395,5 +395,5 @@ class InnerEyeLightning(LightningModule):
         assert isinstance(self.trainer, Trainer)
         self.log_on_epoch(MetricType.LOSS, loss, is_training)
         if is_training:
-            learning_rate = self.trainer.lr_schedulers[0]['scheduler'].get_last_lr()[0]
+            learning_rate = self.trainer.lr_scheduler_configs[0].scheduler.get_last_lr()[0]  # type: ignore
         self.log_on_epoch(MetricType.LEARNING_RATE, learning_rate, is_training)
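A version-tolerant way to read the last learning rate, sketched under the assumption that pytorch-lightning 1.6+ exposes Trainer.lr_scheduler_configs (a list of config objects with a .scheduler attribute) while older versions only have the deprecated Trainer.lr_schedulers list of dicts; get_last_learning_rate is a hypothetical helper, not part of this commit:

from pytorch_lightning import Trainer

def get_last_learning_rate(trainer: Trainer) -> float:
    # New-style access: config objects with attribute access (PL >= 1.6).
    if hasattr(trainer, "lr_scheduler_configs"):
        scheduler = trainer.lr_scheduler_configs[0].scheduler
    else:
        # Deprecated dict-style access on older versions.
        scheduler = trainer.lr_schedulers[0]["scheduler"]
    return scheduler.get_last_lr()[0]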
@@ -266,7 +266,8 @@ def model_train(checkpoint_path: Optional[Path],
     logging.info("Starting training")

     trainer.fit(lightning_model, datamodule=data_module)
-    trainer.logger.close()  # type: ignore
+    for logger in trainer.loggers:
+        logger.finalize("success")

     world_size = getattr(trainer, "world_size", 0)
     is_azureml_run = not is_offline_run_context(RUN_CONTEXT)
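One way the new cleanup could be wrapped, sketched under the assumption that finalize accepts a status string such as "success" or "failed"; fit_and_finalize is a hypothetical helper and the try/finally handling is an illustration, not part of this commit:

def fit_and_finalize(trainer, lightning_model, data_module) -> None:
    # Hypothetical wrapper around the pattern in the diff above: run
    # training, then finalize every attached logger with a matching
    # status instead of calling the deprecated trainer.logger.close().
    try:
        trainer.fit(lightning_model, datamodule=data_module)
        status = "success"
    except Exception:
        status = "failed"
        raise
    finally:
        for logger in trainer.loggers:
            logger.finalize(status)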