diff --git a/README.md b/README.md
index 2d68fae..d1ba36a 100644
--- a/README.md
+++ b/README.md
@@ -58,7 +58,7 @@ It makes your code neatly organized and provides lots of useful features, like a
 - **Hyperparameter Search**: made easier with Hydra built in plugins like [Optuna Sweeper](https://hydra.cc/docs/next/plugins/optuna_sweeper)
 - **Best Practices**: a couple of recommended tools, practices and standards for efficient workflow and reproducibility (see [#Best Practices](#best-practices))
 - **Extra Features**: optional utilities to make your life easier (see [#Extra Features](#extra-features))
-- **Tests**: unit tests and smoke tests (see [#Best Practices](#best-practices))
+- **Tests**: unit tests and smoke tests (see [#Tests](#tests))
 - **Workflow**: comes down to 4 simple steps (see [#Workflow](#workflow))
@@ -354,13 +354,11 @@ docker pull nvcr.io/nvidia/pytorch:21.03-py3
 # run container from image with GPUs enabled
 docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:21.03-py3
-# # run container with mounted volume
-# docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:21.03-py3
-```
-```yaml
-# alternatively build image by yourself using Dockerfile from the mentioned template branch
-docker build -t lightning-hydra .
 ```
+

@@ -464,7 +462,7 @@ Experiment configurations allow you to overwrite parameters from main project co
 **Simple example**
 ```yaml
 # to execute this experiment run:
-# python run.py +experiment=example_simple
+# python run.py experiment=example_simple
 defaults:
     - override /trainer: minimal.yaml
@@ -500,7 +498,7 @@ datamodule:
 ```yaml
 # to execute this experiment run:
-# python run.py +experiment=example_full
+# python run.py experiment=example_full
 defaults:
     - override /trainer: null
@@ -556,7 +554,7 @@ logger:
 1. Write your PyTorch Lightning model (see [mnist_model.py](src/models/mnist_model.py) for example)
 2. Write your PyTorch Lightning datamodule (see [mnist_datamodule.py](src/datamodules/mnist_datamodule.py) for example)
 3. Write your experiment config, containing paths to your model and datamodule
-4. Run training with chosen experiment config: `python run.py +experiment=experiment_name`
+4. Run training with chosen experiment config: `python run.py experiment=experiment_name`

 ### Logs
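As a rough illustration of step 1 in the workflow above (not the template's actual mnist_model.py), a minimal LightningModule could look like the sketch below; the class name, layer sizes and logged metric name are made up for the example:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Illustrative minimal LightningModule (hypothetical names and sizes)."""

    def __init__(self, input_size: int = 784, num_classes: int = 10, lr: float = 0.001):
        super().__init__()
        self.save_hyperparameters()  # stores init args in self.hparams and in logger hparams
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(input_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
```

Step 3 then only needs to point `_target_` of the model config at such a class and set its init parameters.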
@@ -617,6 +615,27 @@ Take a look at [inference_example.py](src/utils/inference_example.py).

+
+### Tests
+The template comes with example tests implemented with the pytest library.
+To execute them, simply run:
+```bash
+# run all tests
+pytest
+
+# run tests from specific file
+pytest tests/smoke_tests/test_commands.py
+
+# run all tests except the ones using wandb
+pytest -k "not wandb"
+```
+I often run into bugs that show up only in edge cases or on some specific hardware/environment. To speed up development, I constantly execute tests that run a couple of quick 1-epoch experiments, like overfitting to 10 batches or training on 25% of the data. These kinds of tests don't check for any specific output - they simply verify that executing certain commands doesn't raise exceptions. You can find them implemented in the [tests/smoke_tests](tests/smoke_tests) folder.
+
+You can easily modify the commands in the scripts for your use case. If even 1 epoch is too much for your model, you can make it run for just a couple of batches instead (by using the right trainer flags).
+
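As an illustration of what such a smoke test can boil down to, here is a hedged sketch that launches a single-batch debug run via `subprocess` and only asserts a clean exit. The template's own smoke tests go through their `run_command` helper and their own overrides instead; `fast_dev_run` is a standard Lightning Trainer flag, but whether the override needs Hydra's `+` prefix depends on your trainer config:

```python
import subprocess


def test_fast_dev_run_does_not_crash():
    """Run a single-batch debug experiment and only assert that it exits cleanly."""
    # `trainer.fast_dev_run=true` assumes the flag exists in your trainer config;
    # if it doesn't, Hydra requires adding it with `+trainer.fast_dev_run=true`.
    result = subprocess.run(
        ["python", "run.py", "trainer.fast_dev_run=true"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
```

Running something like this alongside the other smoke tests fails fast whenever a config change breaks the training entrypoint.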

+
+
+
 ### Callbacks
 Template contains example callbacks enabling better Weights&Biases integration, which you can use as a reference for writing your own callbacks (see [wandb_callbacks.py](src/callbacks/wandb_callbacks.py)).
 To support reproducibility:
@@ -664,20 +683,20 @@ List of extra utilities available in the template:
 - forcing debug friendly configuration
 - forcing multi-gpu friendly configuration
 - method for logging hyperparameters to loggers
-- (TODO) resuming latest run
+
 You can easily remove any of those by modifying [run.py](run.py) and [src/train.py](src/train.py).
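As a reference point for the Callbacks section above, here is a minimal sketch of a custom callback in the same spirit as the ones in wandb_callbacks.py (not a copy of them); it assumes `wandb` is installed and that a W&B run has already been initialized by the logger:

```python
import wandb
from pytorch_lightning import Callback, LightningModule, Trainer


class WatchModelSketch(Callback):
    """Log gradients and parameters to Weights & Biases once training starts."""

    def __init__(self, log: str = "gradients", log_freq: int = 100):
        self.log_type = log
        self.log_freq = log_freq

    def on_train_start(self, trainer: Trainer, pl_module: LightningModule):
        # wandb.watch hooks into the model to periodically log gradients/weights
        wandb.watch(pl_module, log=self.log_type, log_freq=self.log_freq)
```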

-
+
 ## Best Practices
-
+
 Use Miniconda
@@ -890,8 +909,16 @@ from project_name.datamodules.mnist_datamodule import MNISTDataModule
 Automatic activation of virtual environment and tab completion when entering folder
-Create a new file called `.autoenv` (this name is excluded from version control in .gitignore).
-Add to the file the following lines:
+
+Create a new file called `.autoenv` (this name is excluded from version control in .gitignore).
+You can use it to automatically execute shell commands when entering the folder.
+
+To set up this automation for bash, execute the following line:
+```bash
+echo "autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi } ; cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv" >> ~/.bashrc
+```
+
+Now you can add any commands to your `.autoenv` file, e.g. activation of a virtual environment and hydra tab completion:
 ```bash
 # activate conda environment
 conda activate myenv
@@ -899,19 +926,17 @@ conda activate myenv
 # initialize hydra tab completion for bash
 eval "$(python run.py -sc install=bash)"
 ```
+(these commands will be executed whenever you open a terminal in, or switch to, a folder containing the `.autoenv` file)
-You can use it to automatically execute shell commands when entering folder.
-To setup this automation for bash, execute the following line:
-```bash
-echo "autoenv() { [[ -f \"\$PWD/.autoenv\" ]] && source .autoenv ; } ; cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv" >> ~/.bashrc
+Lastly, add execution privileges to your `.autoenv` file:
+```bash
+chmod +x .autoenv
 ```
-Keep in mind this will modify your `.bashrc` file to always run `.autoenv` file whenever it's present in the current folder, which means it creates a potential security issue.

 **Explanation**
-The mentioned line appends your `.bashrc` file with 3 commands:
-1. `autoenv() { [[ -f \"\$PWD/.autoenv\" ]] && source .autoenv ; }` - this declares the `autoenv()` function, which executes `.autoenv` file if it exists in current work dir
-2. `cd() { builtin cd \"\$@\" ; autoenv ; }` - this extends behaviour of `cd` command, to make it execute `autoenv()` function each time you change folder in terminal
-3. `autoenv` this is just to ensure the function will also be called when directly openning terminal in any folder
+The mentioned line appends 2 commands to your `.bashrc` file:
+1. `autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi }` - this declares the `autoenv()` function, which executes the `.autoenv` file if it exists in the current working directory and has execution privileges
+2. `cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv` - this extends the behaviour of the `cd` command, making it execute the `autoenv()` function each time you change folder in the terminal or open a new terminal
diff --git a/tests/smoke_tests/test_wandb.py b/tests/smoke_tests/test_wandb.py
index 88041d6..a47cf27 100644
--- a/tests/smoke_tests/test_wandb.py
+++ b/tests/smoke_tests/test_wandb.py
@@ -26,6 +26,7 @@ def test_wandb_optuna_sweep():
     run_command(command)


+@pytest.mark.wandb
 def test_wandb_callbacks():
     """Test wandb callbacks."""
     command = [
diff --git a/tests/unit_tests/test_sth.py b/tests/unit_tests/test_sth.py
index fc905a0..e1d8698 100644
--- a/tests/unit_tests/test_sth.py
+++ b/tests/unit_tests/test_sth.py
@@ -1,5 +1,7 @@
 import pytest

+from tests.helpers.runif import RunIf
+

 def test_something1():
     """Some test description."""
@@ -9,3 +11,16 @@ def test_something1():
 def test_something2():
     """Some test description."""
     assert 1 + 1 == 2
+
+
+@pytest.mark.parametrize("arg1", [0.5, 1.0, 2.0])
+def test_something3(arg1: float):
+    """Some test description."""
+    assert arg1 > 0
+
+
+# use RunIf to skip execution of some tests, e.g. when running on Windows or when no GPUs are available
+@RunIf(skip_windows=True, min_gpus=1)
+def test_something4():
+    """Some test description."""
+    assert True is True
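Since the hunk above starts tagging tests with `@pytest.mark.wandb`, the custom marker should be registered somewhere so pytest does not warn about unknown marks. A possible sketch for a `tests/conftest.py` is shown below; it assumes the marker is not already declared in the project's pytest configuration (e.g. setup.cfg):

```python
# tests/conftest.py (hypothetical location)


def pytest_configure(config):
    """Register custom markers so `pytest --strict-markers` and warning output stay clean."""
    config.addinivalue_line("markers", "wandb: tests that require a Weights & Biases login")
```

With the marker registered, `pytest -m "not wandb"` becomes an alternative to the name-based `pytest -k "not wandb"` filter shown in the Tests section.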