seismic-deeplearning/cgmanifest.json

final public release PR (#125)

* Merged PR 42: Python package structure
* Merged PR 50: Röth-Tarantola generative model for velocities
  - Created Python package structure for generative models for velocities
  - Implemented the [Röth-Tarantola model](https://doi.org/10.1029/93JB01563)
* Merged PR 51: Isotropic AWE forward modelling using Devito. Implemented forward modelling for the isotropic acoustic wave equation using [Devito](https://www.devitoproject.org/)
* Merged PR 52: PRNG seed. Exposed the PRNG seed in the generative models for velocities
* Merged PR 53: Docs update
  - Updated LICENSE
  - Added Microsoft Open Source Code of Conduct
  - Added Contributing section to README
* Merged PR 54: CLI for velocity generators
* Merged PR 69: CLI subpackage using Click. Reimplemented the CLI as a subpackage using Click
* Merged PR 70: VS Code settings
* Merged PR 73: CLI for forward modelling
* Merged PR 76: Unit fixes
  - Changed to use km/s instead of m/s for velocities
  - Fixed CLI interface
* Merged PR 78: Forward modelling CLI fix
* Merged PR 85: Version 0.1.0
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* adding cgmanifest to staging
* adding a yml file with CG build task
* added prelim NOTICE file
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Merged PR 126: updated notice file with previously excluded components
* Updates
* 3D SEG: restyled batch file, moving onto others.
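PR 50 above references the Röth-Tarantola generative model for velocities, and PR 52 exposes its PRNG seed. The published parameterization is not reproduced here; the following is only a toy stand-in showing what a seeded, layered velocity generator of this general shape might look like. The layer count, depth trend, and jitter scale are invented for illustration, not the repo's values:

```python
import random


def generate_velocity_profile(n_layers=8, v0=1.5, dv=0.25, jitter=0.1, seed=None):
    """Toy layered velocity model in km/s (cf. PR 76: km/s, not m/s).

    NOT the Roth-Tarantola parameterization -- just an illustrative
    stand-in: velocity grows linearly with layer index, plus uniform
    random jitter, reproducible via an exposed PRNG seed (cf. PR 52).
    """
    rng = random.Random(seed)
    return [v0 + i * dv + rng.uniform(-jitter, jitter) for i in range(n_layers)]


# Same seed -> same profile, which is the point of exposing the seed.
profile = generate_velocity_profile(seed=42)
assert profile == generate_velocity_profile(seed=42)
```

The design choice worth noting is threading a `random.Random(seed)` instance through the generator rather than seeding the global PRNG, so independent generators stay reproducible in isolation.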
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes them take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
  - SectionLoader now swaps the H and W dims. When loading test data in patch scripts, this line can be removed (and tested) from test.py:
    h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* Merged PR 220: Adds Horovod and fixes
  - Adds a Horovod training script
  - Updates dependencies in the Horovod docker file
  - Removes hard-coding of a path in data.py
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders.
  This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:

  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```

  or

  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on the F3 data set. Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in the prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* Merged PR 361: VOXEL: fixes to the original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and that the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script. Related work items: #18264
* Merged PR 405: minor mods to notebook, more documentation. A very small PR: just a few more lines of documentation in the notebook, to improve clarity.
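The PR 236 loader cleanup described above (a loader keeps only the split it owns, so the old `train_set.patches[train_set.split]` indexing is no longer needed) can be sketched with a minimal stand-in. `TrainPatchLoader` and `ALL_PATCHES` here are hypothetical toys, not the repo's actual classes or data:

```python
# Toy illustration of the PR 236 behaviour change: after the cleanup,
# a loader stores only the patches belonging to its own split, so the
# attribute is already filtered and is accessed directly.
ALL_PATCHES = {"train": ["p0", "p1"], "val": ["p2"], "test": ["p3"]}


class TrainPatchLoader:  # hypothetical stand-in, not the repo class
    def __init__(self, split="train"):
        self.split = split
        # post-PR-236: keep only this split's patches, not all splits
        self.patches = ALL_PATCHES[split]


train_set = TrainPatchLoader("train")
# old code `train_set.patches[train_set.split]` assumed a dict of all
# splits; now the attribute itself is the filtered list:
patches = train_set.patches
assert patches == ["p0", "p1"]
```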
  Related work items: #17432
* Merged PR 368: Adds penobscot. Adds the following for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion: I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion:
  - Do we want to change any of the default settings in the dotenv files, such as the line lengths or the error messages we exclude or include?
  - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
  - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
  Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* added cela copyright headers to all non-empty .py files (#3)
* switched to ACR instead of docker hub (#4)
* sdk.v1.0.69, plus switched to ACR push.
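For reference, a minimal `.pre-commit-config.yaml` of the kind PR 512 describes (black for formatting, flake8 for style checking) might look like the following sketch; the repository URLs and `rev` pins are placeholders, not the repo's actual settings:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: stable        # placeholder pin, not the repo's actual rev
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 3.7.9         # placeholder pin, not the repo's actual rev
    hooks:
      - id: flake8
```

Hooks of this kind are installed once per clone with `pre-commit install`, after which they run on the files staged in each commit.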
ACR pull coming next * full acr use, push and pull, and use in Estimator * temp fix for dcker image bug * fixed the az acr login --username and --password issue * full switch to ACR for docker image storage * Vapaunic/metrics (#1) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified 
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * BUILD: added build setup files. (#5) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do we want to change any of the default settings in these config files - like the line lengths, or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.md file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?
Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite
  Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph
  Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics
  This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches); I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on the tests, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files
  Related work items: #18350
* Merged PR 586: Purging unused files and experiments
  Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments
  A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite
  Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable is available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic
  Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* Update README.md
* added pytest to environment, and pytest job to the main build (#18)
* Update main_build.yml for Azure Pipelines
* minor stylistic changes (#19)
* Update main_build.yml for Azure Pipelines
  Added template for integration tests for scripts and experiments; added setup and env; increased job timeout; added complete set of tests
* BUILD: placeholder for Azure pipelines for notebooks build.
  BUILD: added notebooks job placeholders.
  BUILD: added GitHub badges for notebook builds
* CLEANUP: moved non-release items to contrib (#20)
* Updates HRNet notebook 🚀 (#25)
* Modifies pre-commit hook to modify output
* Modifies the HRNet notebook to use the Penobscot dataset
  Adds parameters to limit iterations; adds parameters meta tag for papermill
* Fixing merge peculiarities
* Updates environment.yaml (#21)
* Pins main libraries
  Adds cudatoolkit version based on issues faced during workshop
* removing files
* Updates Readme (#22)
* Adds model instructions to readme
* Update README.md (#24)
  I have collected pointers to all of our BP repos into this central place. We are trying to create links between everything to draw people from one to the other. Can we please add a pointer here to the readme? I have spoken with Max and will be adding Deep Seismic there once you have gone public.
* CONTRIB: cleanup for imaging. (#28)
* Create Unit Test Build.yml (#29)
  Adding Unit Test Build.
* Update README.md
* Update README.md
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* TESTS: added notebook integration tests. (#65)
* TEST: typo in env name
* Addressing a number of minor issues with README and broken links (#67)
* Update main_build.yml for Azure Pipelines
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
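The numpy dataset loader and demo pipeline mentioned above (#7) can be pictured with a minimal sketch; the class, names, and shapes here are illustrative only, not the repo's actual reader:

```python
import numpy as np

# Hypothetical sketch of a numpy-backed seismic dataset reader:
# a 3D volume stored as an array, sliced into 2D patches for training.
class NumpyDataset:
    def __init__(self, volume, patch_size):
        self.volume = volume          # 3D array: (inline, xline, depth)
        self.patch_size = patch_size

    def __len__(self):
        # One item per inline section in this toy version.
        return self.volume.shape[0]

    def __getitem__(self, i):
        ps = self.patch_size
        # Take the top-left patch of section i (a real loader would sample).
        return self.volume[i, :ps, :ps]

vol = np.zeros((10, 64, 64), dtype=np.float32)
ds = NumpyDataset(vol, patch_size=32)
print(len(ds), ds[0].shape)  # → 10 (32, 32)
```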
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py
  This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py
  Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders.
  - SectionLoader now swaps H and W dims.
When loading test data in patch, this line can be removed (and tested) from test.py:
```
h, w = img.shape[-2], img.shape[-1]  # height and width
```
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes
  Add Horovod training script; updates dependencies in Horovod docker file; removes hard coding of path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders
  @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:
  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```
  or
  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```
* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing
  This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model
  Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set
  Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation
  Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script
  Related work items: #17681
* Merged PR 315: Removing voxel exp
  Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
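The attribute change PR 236 describes can be sketched in plain Python; the class and data names here are made up for illustration, not the repo's real loaders:

```python
# Before the cleanup, loaders stored every split's patches; after, only
# the patches for the loader's own split.
ALL_SPLITS = {"train": ["p0", "p1"], "val": ["p2"], "test": ["p3"]}

class OldTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_SPLITS          # every split, unnecessarily

class NewTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_SPLITS[split]   # only this loader's split

old = OldTrainPatchLoader("train")
new = NewTrainPatchLoader("train")
# Code written against the old behaviour indexed by split, which is why
# `train_set.patches[train_set.split]` breaks after the change:
assert old.patches[old.split] == new.patches == ["p0", "p1"]
```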
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fix for seyviewer and mkdir splits in README + broken link in F3 notebook * issue edits to README * download complete message * Added Yacs info to README.md (#69) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in CONTRIBUTING.md.
- Once the hooks are installed, they only affect the files you commit from then on. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR depends on the tests created in the previous branch !333; that is why it merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on the tests or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - README instructions on how to download and process the Penobscot and F3 2D data sets
  - moved the prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic: add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* added info on yacs files
* MODEL.PRETRAINED key missing in default.py (#70)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
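The ignite-based common metrics that recur throughout this log are built on a per-class confusion matrix. Below is a minimal NumPy sketch of that idea; the function names and toy labels are hypothetical illustrations, not the repo's actual implementation.

```python
import numpy as np

def confusion_matrix(pred, target, n_classes):
    """Accumulate an n_classes x n_classes confusion matrix (rows: target, cols: pred)."""
    valid = (target >= 0) & (target < n_classes)  # drop ignore_index-style labels
    idx = n_classes * target[valid] + pred[valid]
    return np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)

def class_iou(cm):
    """Per-class intersection-over-union from a confusion matrix."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    return intersection / np.maximum(union, 1)

# Toy labels for a 3-class segmentation problem
pred = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(pred, target, n_classes=3)
miou = class_iou(cm).mean()
```

The "modified metrics with ignore_index" commits suggest labels outside the valid class range are masked out before accumulation, which the `valid` mask above mimics.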
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in the prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train/test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders
  - SectionLoader now swaps the H and W dims; when loading test data in patch mode, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes. Adds a Horovod training script, updates dependencies in the Horovod docker file, and removes hard-coding of a path in data.py.
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* training/testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on the F3 data set. Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in the prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR: just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion: I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of the formatting/linting settings in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
* added MODEL.PRETRAINED key to default.py
* Update README.md (#59)
* Update README.md (#58)
* MINOR: addressing broken F3 download link (#73)
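The PR 236 loader clean-up described earlier in this log changes what the `sections`/`patches` attributes hold: each loader keeps only the IDs for its own split. Below is a hypothetical before/after sketch; the split table, IDs, and class names suffixed `Before`/`After` are made up for illustration, and only the attribute behaviour follows the PR description.

```python
# Toy split table; the IDs and structure are illustrative only.
ALL_SPLITS = {
    "train": ["patch_0", "patch_1", "patch_2"],
    "val": ["patch_3"],
}

class TrainPatchLoaderBefore:
    """Pre-clean-up behaviour: every loader carried all splits."""
    def __init__(self, split="train"):
        self.split = split
        self.patches = ALL_SPLITS  # dict of every split

class TrainPatchLoaderAfter:
    """Post-clean-up behaviour: only the requested split is kept."""
    def __init__(self, split="train"):
        self.split = split
        self.patches = list(ALL_SPLITS[split])  # just this split's IDs

# The old access pattern and the new attribute agree on the contents:
old_set = TrainPatchLoaderBefore("train")
new_set = TrainPatchLoaderAfter("train")
assert old_set.patches[old_set.split] == new_set.patches
```

Experiments that did `train_set.patches[train_set.split]` would drop the indexing after this change; the same applies to `sections` on the section loaders.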
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
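PR 512 above wires black and flake8 into git via pre-commit, configured through .pre-commit-config.yaml. A hypothetical minimal version of such a config is shown below; the hook repo URLs and pinned revisions are illustrative, not the project's actual pins.

```yaml
# Illustrative only: real revisions should be pinned to whatever the repo tested.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
```

Contributors install the hooks once with `pip install pre-commit` and `pre-commit install`, after which they run on every commit; `pre-commit run --all-files` addresses the "retrospective run" question from the PR discussion by applying the hooks to the entire codebase in one large change.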
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate if/else blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created a separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1]  # height and width` * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training/testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for Penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training/testing for sections works
* minor changes
* reverting changes on the dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on the F3 data set Notebook and associated files + a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in the prepare_data script Related work items: #17681
* Merged PR 315: Removing voxel exp Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
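The confusion-matrix-based metrics mentioned in the bullets above follow a standard pattern; here is a minimal NumPy sketch of the idea (the function names and toy labels are illustrative, not the repo's or ignite's actual API):

```python
import numpy as np

def confusion_matrix(pred, target, n_classes):
    """Accumulate a confusion matrix: rows = true class, cols = predicted."""
    idx = target * n_classes + pred
    return np.bincount(idx.ravel(), minlength=n_classes ** 2).reshape(n_classes, n_classes)

def iou_per_class(cm):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c), read off the confusion matrix."""
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)

# Toy example: 4 pixels, 2 classes
target = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
cm = confusion_matrix(pred, target, n_classes=2)
print(iou_per_class(cm))  # class 0: 0.5, class 1: 2/3
```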
Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
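For reference, a minimal `.pre-commit-config.yaml` along these lines could look like the following sketch; the repository URLs and `rev` pins are illustrative placeholders, not the versions actually used in this repo:

```yaml
# Illustrative sketch only -- revs are placeholders, not this repo's pins.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0          # placeholder revision
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9            # placeholder revision
    hooks:
      - id: flake8
```

Running `pre-commit install` once per clone then makes the hooks run before each commit.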
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on these tests, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350
* Merged PR 586: Purging unused files and experiments Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue) - removed config files that were not tested or
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable is available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved the logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * Adds premium storage (#79) * Adds premium storage method * update test.py for section based approach to use command line arguments (#76) * added README documentation per bug bush feedback (#78) * sdk 1.0.76; tested conda env vs docker image; extented readme * removed reference to imaging * minor md formatting * minor md formatting * https://github.com/microsoft/DeepSeismic/issues/71 (#80) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * 
Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) 
* Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. 
Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified 
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries we have listed in the conda yaml file. Also updated the download script. Related work items: #18264

* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK version 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches); I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on the tests, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or
working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694

* Merged PR 605: added common metrics to the Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names. (#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic: add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* addressing multiple issues from first bug bash (#81)
* added README documentation per bug bash feedback
* DOC: added HRNET download info to README
* added hrnet download script and tested it
* added legal headers to a few scripts.
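The jupytext hook mentioned above is registered in the repo's `.pre-commit-config.yaml`. A minimal sketch of such an entry, assuming the standard hook shipped by jupytext (the `rev` pin shown is illustrative, not necessarily the one used here):

```
# Hypothetical excerpt of .pre-commit-config.yaml; rev is illustrative.
repos:
  - repo: https://github.com/mwouts/jupytext
    rev: v1.3.0
    hooks:
      - id: jupytext
        args: [--sync]  # keep paired notebook/script representations in sync
```

With this in place, `pre-commit run jupytext --all-files` re-syncs every paired notebook before a commit lands.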
* changed /data to ~data in the main README
* added Troubleshooting section to the README
* Dciborow/build bug (#68)
* Update unit_test_steps.yml
* Update environment.yml
* Update setup_step.yml
* Update unit_test_steps.yml
* Update setup_step.yml
* Adds AzureML libraries (#82)
* Adds azure dependencies
* Adds AzureML components
* Fixes download script (#84)
* Fixes download script
* Updates readme
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* modified hrnet notebook, addressing bug bash issues (#95)
* Update environment.yml (#93)
* Update environment.yml
* tested both yml conda env and docker; updated conda yml to have docker sdk
* tested both yml conda env and docker; updated conda yml to have docker sdk; added
* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* notebook integration tests complete (#106)
* added README documentation per bug bash feedback
* HRNet notebook works with tests now
* removed debug material from the notebook
* corrected duplicate build names
* conda init fix
* changed setup deps
* fixed F3 notebook - merge conflict and pytorch bug
* main and notebook builds have functional setup now
* Mat/test (#105)
* added README documentation per bug bash feedback
* Modifies scripts to run for only a few iterations when in debug/test mode
* Updates training scripts and build
* Making names unique
* Fixes conda issue
* HRNet notebook works with tests now
* removed debug material from the notebook
* corrected duplicate build names
* conda init fix
* Adds docstrings to training script
* Testing something out
* testing
* test
* adds seresnet
* Modifies to work outside of git env
* test
* Fixes typo in DATASET
* reducing steps
* test
* fixes the argument
* Altering batch size to fit k80
* reducing batch size further
* test
* fixes distributed
* test
* adds missing import
* Adds further tests
* test
* updates
* test
* Fixes section script
* test
* testing everything once through
* Final run for badge
* changed setup deps, fixed F3 notebook
* Adds missing tests (#111)
* added missing tests
* Adding fixes for test
* reinstating all tests
* Maxkaz/issues (#110)
* added README documentation per bug bash feedback
* added missing tests
* closing out multiple post bug bash issues with single PR
* Addressed comments
* minor change
* Adds Readme information to experiments (#112)
* Adds readmes to experiments
* Updates instructions based on feedback
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
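The numpy dataset loader and demo pipeline from (#7) are not shown in this log. As a rough illustration of the patch-style access pattern such a loader exposes, here is a minimal, hypothetical sketch (the class and parameter names are invented for illustration and are not the repo's actual API):

```python
import numpy as np

class NumpyPatchDataset:
    """Illustrative patch dataset over a 3D numpy seismic volume.

    volume has shape (inlines, height, width); each item is one
    non-overlapping patch_size x patch_size patch from one inline slice.
    """

    def __init__(self, volume, patch_size):
        self.volume = volume
        self.patch_size = patch_size
        # Precompute (inline, row, col) corners of all patches.
        self.index = [
            (i, r, c)
            for i in range(volume.shape[0])
            for r in range(0, volume.shape[1] - patch_size + 1, patch_size)
            for c in range(0, volume.shape[2] - patch_size + 1, patch_size)
        ]

    def __len__(self):
        return len(self.index)

    def __getitem__(self, k):
        i, r, c = self.index[k]
        return self.volume[i, r : r + self.patch_size, c : c + self.patch_size]

# 4 inlines of 64x64 sections, split into 32x32 patches -> 4 * 2 * 2 = 16 patches
ds = NumpyPatchDataset(np.zeros((4, 64, 64), dtype=np.float32), patch_size=32)
```

Wrapping such a class in a framework dataloader is then a thin adapter, since only `__len__` and `__getitem__` are required.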
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders.
  - SectionLoader now swaps the H and W dims.
When loading test data in patch mode, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1] # height and width`
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
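The .pre-commit-config.yaml described in PR 512 might look roughly like the following sketch (repo URLs and revs here are illustrative, not the exact pins used in the PR):

```yaml
# Illustrative .pre-commit-config.yaml wiring up black and flake8;
# the revs actually pinned in the repo may differ.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
```

With such a file in place, `pre-commit install` registers the git hook so both tools run on staged files before each commit.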
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk; added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
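The section- vs patch-based splitting that prepare_data.py enables can be illustrated with a small, self-contained sketch (function names here are hypothetical, not the script's actual API):

```python
# Hypothetical illustration of section- vs patch-based splitting of a 2D
# slice; the repo's prepare_data.py works on real seismic volumes.
def split_into_sections(volume):
    # A "section" is a whole row of the slice (e.g. one inline/crossline).
    return [row[:] for row in volume]

def split_into_patches(volume, patch_h, patch_w):
    # Tile the slice into non-overlapping patch_h x patch_w windows.
    patches = []
    for i in range(0, len(volume) - patch_h + 1, patch_h):
        for j in range(0, len(volume[0]) - patch_w + 1, patch_w):
            patches.append([row[j:j + patch_w] for row in volume[i:i + patch_h]])
    return patches

volume = [[r * 4 + c for c in range(4)] for r in range(4)]
print(len(split_into_sections(volume)))      # 4 sections
print(len(split_into_patches(volume, 2, 2)))  # 4 patches
```

Sections keep full spatial context per sample, while patches yield many more, smaller training samples from the same slice.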
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified 
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for Penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns. Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on the tests, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test scripts for section-based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1] # height and width` * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training/testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland-based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on the F3 data set Notebook and associated files + a minor change in the patch_deconvnet_skip.py model file.
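The PR 236 change quoted above can be illustrated with a minimal sketch. This is a hypothetical standalone class, not the repo's actual implementation; only the `TrainPatchLoader` name and the `patches`/`split` attributes come from the PR text, everything else is invented for illustration:

```python
# Hypothetical sketch of the loader cleanup described in PR 236.
class TrainPatchLoader:
    def __init__(self, patches_by_split, split="train"):
        self.split = split
        # Old behaviour: self.patches held ALL splits, so callers had to
        # index it by split name: train_set.patches[train_set.split]
        # New behaviour: keep only the patches for this loader's own split.
        self.patches = patches_by_split[split]

all_patches = {"train": ["p1", "p2"], "val": ["p3"], "test": ["p4"]}
train_set = TrainPatchLoader(all_patches, split="train")

# After the change, the split lookup is no longer needed:
patches = train_set.patches
print(patches)  # ['p1', 'p2']
```

Under this sketch, the old `train_set.patches[train_set.split]` pattern would no longer apply, which is why the PR asked downstream experiments to check their attribute access.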
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in the prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
Related work items: #17432 * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74)
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * Remove related projects on AI Labs * Added a reference to Azure machine learning (#115) Added a reference to Azure machine learning to show how folks can get started with using Azure Machine Learning * Update README.md * update fork from upstream (#4) * fixed merge conflict resolution in LICENSE * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * 
merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
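Several commits in this log enable splitting the dataset into sections rather than only patches. A rough sketch of the two modes, using hypothetical helpers and a toy volume rather than prepare_data.py's actual API, might look like:

```python
import numpy as np

def to_sections(volume):
    """Split an (inline, crossline, depth) volume into 2D inline sections."""
    return [volume[i] for i in range(volume.shape[0])]

def to_patches(section, patch_size, stride):
    """Slide a square window over a 2D section to extract patches."""
    h, w = section.shape
    return [
        section[r : r + patch_size, c : c + patch_size]
        for r in range(0, h - patch_size + 1, stride)
        for c in range(0, w - patch_size + 1, stride)
    ]

volume = np.zeros((10, 64, 48))            # toy stand-in for a seismic cube
sections = to_sections(volume)             # 10 sections of shape (64, 48)
patches = to_patches(sections[0], 32, 16)  # overlapping 32x32 patches
```

Section-based training feeds whole 2D slices to the model, while patch-based training samples fixed-size windows; the stride controls patch overlap.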
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
* fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) 
* Update AUTHORS.md (#117) * Update AUTHORS.md (#118) * pre-release items (#119) * added README documentation per bug bash feedback * added missing tests * closing out multiple post-bug-bash issues with single PR * new badges in README * cleared notebook output * notebooks links * fixed bad merge * forked branch name is misleading. 
(#116) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

  This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py

  Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders
  - SectionLoader now swaps the H and W dims; when loading test data in patch mode, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1]  # height and width`

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes

  - Add Horovod training script
  - Updates dependencies in Horovod docker file
  - Removes hard coding of path in data.py

* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders

  @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is to the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:

  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```

  or

  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```

* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing

  This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

  Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

  Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file.

  Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

  Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

  Related work items: #17681

* Merged PR 315: Removing voxel exp

  Related work items: #17702

* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

  Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries we have listed in the conda yaml file. Also updated the download script.

  Related work items: #18264

* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation

  A very small PR - just a few more lines of documentation in the notebook, to improve clarity.

  Related work items: #17432

* Merged PR 368: Adds penobscot

  Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard

  Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators

  Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

  Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns.

  Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

  Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter

  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
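The PR 512 description above names the added config files without showing their contents. As a rough sketch only (not the repo's actual file - the hook ids are the standard black and flake8 ones, and the `rev` pins are placeholders), a minimal `.pre-commit-config.yaml` along these lines wires both tools into git's pre-commit stage:

```yaml
# Hypothetical sketch of .pre-commit-config.yaml; rev values are placeholders,
# not the versions pinned in this repo.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0          # black formatter, runs on staged .py files
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9            # flake8 linter, reads settings from .flake8
    hooks:
      - id: flake8
```

Contributors would then run `pre-commit install` once per clone; after that, black and flake8 run automatically against the staged files of every commit.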
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check out if this PR will affect your experiments.
The main change is in the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74)
* Minor fix: broken links in README (#120) * fully-run notebooks links and fixed contrib voxel models (#123) * added README documentation per bug bash feedback * added missing tests * - added notebook links - made sure original voxel2pixel code runs * update ignite port of texturenet * resolved merge conflict * formatting change * Adds reproduction instructions to readme (#122)
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * Updates notebook to use itkwidgets for interactive visualisation * Further updates * Fixes merge conflicts * removing files * Adding reproduction experiment instructions to readme * checking in 
ablation study from ilkarman (#124) tests pass but final results aren't communicated to github. No way to trigger another commit other than to do a dummy commit
2019-12-17 15:14:43 +03:00
{
  "Registrations": [
    {
      "component": {
        "type": "git",
        "git": {
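The manifest snippet above is truncated, so the fields inside the `git` object are not shown. As a rough illustration of the general shape of a Component Governance manifest (`cgmanifest.json`), the sketch below builds one `git`-type registration and round-trips it through JSON. The `commitHash` and `repositoryUrl` values are placeholders invented for this example, not values from the actual file, and the exact schema should be checked against the Component Governance documentation.

```python
import json

# Hypothetical example of a complete "git" registration, following the shape
# of the truncated snippet above. commitHash and repositoryUrl are placeholder
# values, not taken from the real manifest.
manifest = {
    "Registrations": [
        {
            "component": {
                "type": "git",
                "git": {
                    "commitHash": "0000000000000000000000000000000000000000",
                    "repositoryUrl": "https://github.com/example/dependency",
                },
            }
        }
    ],
    "Version": 1,
}

# Serialize and re-parse to confirm the structure round-trips as valid JSON.
text = json.dumps(manifest, indent=2)
parsed = json.loads(text)
print(parsed["Registrations"][0]["component"]["type"])
```

Each third-party component the build pulls in would get its own entry in the `Registrations` array.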
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * BUILD: added build setup files. (#5) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
  - ran black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
  - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py:
    h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes
  - Add Horovod training script
  - Updates dependencies in Horovod docker file
  - Removes hard coding of path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similar for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set. Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432
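The loader clean-up described in PR 236 can be illustrated with a minimal sketch. The classes below are hypothetical stand-ins, not the repo's actual loaders: before the change, a loader stored the patch lists for every split and callers indexed by `train_set.split`; after, each loader keeps only the patches for its own split, so the attribute is accessed directly.

```python
# Illustrative sketch only; class names and split layout are assumptions,
# not the repository's real TrainPatchLoader API.

class PatchLoaderBefore:
    """Pre-cleanup behaviour: stores patches for *all* splits."""
    def __init__(self, all_patches, split):
        self.split = split
        self.patches = all_patches          # dict: split name -> patch list

class PatchLoaderAfter:
    """Post-cleanup behaviour: keeps only this split's patches."""
    def __init__(self, all_patches, split):
        self.split = split
        self.patches = all_patches[split]   # plain list

all_patches = {"train": ["p0", "p1"], "val": ["p2"]}

old = PatchLoaderBefore(all_patches, "train")
new = PatchLoaderAfter(all_patches, "train")

# The old access pattern from the PR note vs. the new direct attribute:
assert old.patches[old.split] == new.patches == ["p0", "p1"]
```

Code that used the `train_set.patches[train_set.split]` pattern would therefore need to drop the extra indexing after this PR.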
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names. (#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic. Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* Update README.md
* added pytest to environment, and pytest job to the main build (#18)
* Update main_build.yml for Azure Pipelines
* minor stylistic changes (#19)
* Update main_build.yml for Azure Pipelines. Added template for integration tests for scripts and experiments. Added setup and env. Increased job timeout. Added complete set of tests.
* BUILD: placeholder for Azure pipelines for notebooks build. BUILD: added notebooks job placeholders. BUILD: added github badges for notebook builds
* CLEANUP: moved non-release items to contrib (#20)
* Updates HRNet notebook 🚀 (#25)
* Modifies pre-commit hook to modify output
* Modifies the HRNet notebook to use Penobscot dataset. Adds parameters to limit iterations. Adds parameters meta tag for papermill.
* Fixing merge peculiarities
* Updates environment.yaml (#21)
* Pins main libraries. Adds cudatoolkit version based on issues faced during workshop.
* removing files
* Updates Readme (#22)
* Adds model instructions to readme
* Update README.md (#24). I have collected points to all of our BP repos into this central place. We are trying to create links between everything to draw people from one to the other. Can we please add a pointer here to the readme? I have spoken with Max and will be adding Deep Seismic there once you have gone public.
* CONTRIB: cleanup for imaging. (#28)
* Create Unit Test Build.yml (#29). Adding Unit Test Build.
* Update README.md
* Update README.md
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* TESTS: added notebook integration tests. (#65)
* TEST: typo in env name
* Addressing a number of minor issues with README and broken links (#67)
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, such as the line lengths or the error messages we exclude or include?
- Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.md file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR depends on the tests created in the previous branch !333; that's why it merges tests into the vapaunic/metrics branch (so the changed files below only include the diff between the two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are part of that library; I imagine we will have tests under deepseismic_interpretation/ and a top-level /tests for integration testing. Let me know if you have any comments on the tests or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes: added README instructions on how to download and process the Penobscot and F3 2D data sets; moved the prepare_data scripts to the scripts/ directory; fixed a weird issue with a class method in the Penobscot data loader; fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue); removed config files that were not tested or working in the Penobscot experiments; modified default.py so it works if train.py runs without a config file. Related work items: #20694
* Merged PR 605: added common metrics to the Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable is available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic: add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* fix for seyviewer and mkdir splits in README + broken link in F3 notebook
* issue edits to README
* download complete message
* Added Yacs info to README.md (#69)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as a demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes them take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in the prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train/test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes: added README instructions for running f3dutch experiments; prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues (there are no changes to the patch-based splitting logic); ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts: get_train_loader() in train.py should be changed to get_patch_loader() (I created separate functions to load section and patch loaders); SectionLoader now swaps the H and W dims. When loading test data in patch mode, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes. Adds a Horovod training script, updates dependencies in the Horovod docker file, and removes hard-coding of a path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* training/testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland-based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on the F3 data set. Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in the prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot: a dataset reader, a training script, a testing script, section depth augmentation, patch depth augmentation, and inline visualisation for Tensorboard. Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb; all other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: .pre-commit-config.yaml (defines the git hooks to be installed), .flake8 (settings for the flake8 linter), and pyproject.toml (settings for the black formatter). The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
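For orientation, a minimal sketch of what a .pre-commit-config.yaml wiring up black and flake8 might look like (the hook repository URLs and pinned revisions here are illustrative placeholders, not this repo's actual settings, which live in the files the PR adds):

```
repos:
  - repo: https://github.com/psf/black
    rev: stable        # pin to a specific tag in practice
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9         # illustrative version
    hooks:
      - id: flake8
```

After a one-time `pre-commit install`, the hooks run automatically on staged files at each commit; `pre-commit run --all-files` applies them retrospectively to the whole codebase, which is the large one-off formatting PR the discussion questions anticipate.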
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added info on yacs files * MODEL.PRETRAINED key missing in default.py (#70) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added MODEL.PRETRAINED key to default.py * Update README.md (#59) * Update README.md (#58) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train/test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py
  This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py
  Changes in this PR will affect patch scripts as well. The required changes in patch scripts are:
  - get_train_loader() in train.py should be changed to get_patch_loader(); separate functions now load section and patch loaders
  - SectionLoader now swaps the H and W dims
  - when loading test data in patch scripts, this line can be removed (and tested) from test.py:
    h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new structure
* Merged PR 220: Adds Horovod and fixes
  - Add Horovod training script
  - Updates dependencies in Horovod docker file
  - Removes hard-coding of path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train/test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders
  @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g.
  if you have something like this in your experiments:
  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```
  or
  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```
* training/testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing
  This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model
  Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set
  Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.
  Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation
  Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script
  Related work items: #17681
* Merged PR 315: Removing voxel exp
  Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
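The loader-attribute change from PR 236 above can be sketched with stand-in classes. The class names and structure here are hypothetical, purely to contrast the old and new access patterns; they are not the repo's actual loaders:

```python
# Hypothetical sketch of the PR 236 loader cleanup (names assumed).
# Before: a loader held the patch lists for *every* split, and callers
# indexed by split name. After: each loader holds only its own split.

class OldStylePatchLoader:
    """Stores every split's patches; callers must index by split."""
    def __init__(self, all_patches, split):
        self.split = split
        self.patches = all_patches            # dict: split -> list of patch ids

class CleanedPatchLoader:
    """Stores only the patches belonging to its own split."""
    def __init__(self, all_patches, split):
        self.split = split
        self.patches = all_patches[split]     # just this split's list

all_patches = {"train": [0, 1, 2], "val": [3], "test": [4, 5]}

old = OldStylePatchLoader(all_patches, "train")
new = CleanedPatchLoader(all_patches, "train")

# Old access pattern (breaks after the cleanup):
assert old.patches[old.split] == [0, 1, 2]
# New access pattern:
assert new.patches == [0, 1, 2]
```

Code that indexed `train_set.patches[train_set.split]` therefore needs to drop the split indexing after the cleanup.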
  Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script.
  Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation
  A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
  Related work items: #17432
* Merged PR 368: Adds penobscot
  Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators
  Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml
  Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns.
  Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking
  Opening this PR to start the discussion - I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
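A `.pre-commit-config.yaml` of the kind described in PR 512 would look roughly like the following; the repository URLs and `rev` pins here are illustrative assumptions, not the repo's actual values:

```yaml
# Illustrative sketch only -- hook revisions and URLs are assumed.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0        # pin to whatever black release the repo standardises on
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
```

Contributors run `pre-commit install` once so the hooks fire on every commit; `pre-commit run --all-files` applies them to the whole codebase retrospectively, which is the option raised in the PR discussion.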
  Some questions to start the discussion:
  - Do we want to change any of the default settings in these dotfiles, like the line lengths or the error codes we exclude or include?
  - Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.md file.
  - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
  Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite
  Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph
  Changes:
  1) Updated demo notebook with the 3D visualization
  2) Formatting changes due to new black/flake8 git hook
  Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics
  This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches); however, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and a top-level /tests directory for integration testing. Let me know if you have any comments on these tests, or the structure. As agreed, I'm using pytest.
  Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files
  Related work items: #18350
* Merged PR 586: Purging unused files and experiments
  Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments
  A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue)
  - removed config files that were not tested or
working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* added system requirements to readme
* merge upstream into my fork (#1)
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * Adds premium storage (#79) * Adds premium storage method * update test.py for section based approach to use command line arguments (#76) * added README documentation per bug bush feedback (#78) * sdk 1.0.76; tested conda env vs docker image; extented readme * removed reference to imaging * minor md formatting * minor md formatting * https://github.com/microsoft/DeepSeismic/issues/71 (#80) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * 
Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) 
* Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. 
Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns. Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotfiles - like the line lengths, the error messages we exclude or include, or anything like that? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, they will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch scripts, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1]  # height and width` * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * addressing multiple issues from first bug bash (#81) * added README documentation per bug bash feedback * DOC: added HRNET download info to README * added hrnet download script and tested it * added legal headers to a few scripts. 
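The PR 236 loader change described above (each loader now holds only its own split, rather than all train/val splits) can be illustrated with a small self-contained sketch. `TrainPatchLoaderMock`, `ALL_PATCHES`, and the patch names below are hypothetical stand-ins for illustration only, not the repo's actual classes or data:

```python
# Hypothetical mock of the PR 236 behaviour change: a train loader keeps only
# the patch list for its own split, instead of a dict of every split.
ALL_PATCHES = {"train": ["p0", "p1", "p2"], "val": ["p3"], "test": ["p4"]}

class TrainPatchLoaderMock:
    def __init__(self, split="train"):
        self.split = split
        # After the change, only this loader's split is assigned here.
        self.patches = {split: list(ALL_PATCHES[split])}

# The access pattern quoted in the PR note keeps working unchanged:
train_set = TrainPatchLoaderMock(split="train")
patches = train_set.patches[train_set.split]
print(patches)  # ['p0', 'p1', 'p2']
```

The practical consequence is that `train_set.patches` no longer carries the other splits' entries, so any code that iterated over all keys of `patches` now sees only the loader's own split.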
* changed /data to ~data in the main README * added Troubleshooting section to the README * Dciborow/build bug (#68) * Update unit_test_steps.yml * Update environment.yml * Update setup_step.yml * Update setup_step.yml * Update unit_test_steps.yml * Update setup_step.yml * Adds AzureML libraries (#82) * Adds azure dependencies * Adds AzureML components * Fixes download script (#84) * Fixes download script * Updates readme * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * modified hrnet notebook, addressing bug bash issues (#95) * Update environment.yml (#93) * Update environment.yml * Update environment.yml * tested both yml conda env and docker; updated conda yml to have docker sdk * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * notebook integration tests complete (#106) * added README documentation per bug bash feedback * HRNet notebook works with tests now * removed debug material from the notebook * corrected duplicate build names * conda init fix * changed setup deps * fixed F3 notebook - merge conflict and pytorch bug * main and notebook builds have functional setup now * Mat/test (#105) * added README documentation per bug bash feedback * Modifies scripts to run for only a few iterations when in debug/test mode * Updates training scripts and build * Making names unique * Fixes conda issue * HRNet notebook works with tests now * removed debug material from the notebook * corrected duplicate build names * conda init fix * Adds docstrings to training script * Testing something out * testing * test * test * test * test * test * test * test * test * test * test * test * adds seresnet * Modifies to work outside of git env * test * test * Fixes typo in DATASET * reducing steps * test * test * fixes the 
argument * Altering batch size to fit k80 * reducing batch size further * test * test * test * test * fixes distributed * test * test * adds missing import * Adds further tests * test * updates * test * Fixes section script * test * testing everything once through * Final run for badge * changed setup deps, fixed F3 notebook * Adds missing tests (#111) * added missing tests * Adding fixes for test * reinstating all tests * Maxkaz/issues (#110) * added README documentation per bug bash feedback * added missing tests * closing out multiple post bug bash issues with single PR * Addressed comments * minor change * Adds Readme information to experiments (#112) * Adds readmes to experiments * Updates instructions based on feedback * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
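The confusion-matrix metrics with ignore_index mentioned in the entries above can be sketched roughly as follows. This is an illustrative standalone sketch, not the repo's actual ignite-based implementation; the function names and the ignore value of 255 are assumptions.

```python
import numpy as np

# Hedged sketch of a confusion-matrix based mean-IoU metric with an
# ignore_index, in the spirit of the ignite-based metrics above.
# Names and the default ignore value are illustrative assumptions.

def confusion_matrix(y_true, y_pred, num_classes, ignore_index=255):
    """Build a num_classes x num_classes confusion matrix,
    skipping pixels labelled with ignore_index."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    keep = y_true != ignore_index
    y_true, y_pred = y_true[keep], y_pred[keep]
    return np.bincount(
        num_classes * y_true + y_pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)

def mean_iou(cm):
    """Per-class IoU = TP / (TP + FP + FN); classes absent from both
    prediction and ground truth are excluded from the mean."""
    tp = np.diag(cm).astype(float)
    denom = cm.sum(axis=1) + cm.sum(axis=0) - tp
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return float(np.nanmean(iou))
```

In the experiments themselves, such computations are wrapped as ignite Metric objects and attached to the training/evaluation Engine rather than called directly.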
* Updating README.md with introduction material (#10): adds intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11): updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Adds demo notebook for HRNet (#13): adding TF 2.0 to allow for tensorboard vis in notebooks; modifies hrnet config for notebook; adds HRNet notebook for demo; updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17): update it to include sections for imaging
* Update README.md
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7): finished version of numpy data loader; working training script for demo; adds the new metrics; fixes docstrings and adds header; removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders.
  - SectionLoader now swaps the H and W dims.
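For context on the section/patch loader split referenced in PR 209 and PR 174: a section is a full 2D inline/crossline slice, while a patch loader serves fixed-size windows cut from sections. A minimal sketch of patch extraction from a 2D section follows; the function names, patch size, and stride are illustrative assumptions, not the repo's prepare_data logic.

```python
import numpy as np

# Hedged sketch of patch extraction from a 2D seismic section, to
# illustrate the section-vs-patch distinction. Names and the
# patch_size/stride defaults are illustrative assumptions.

def extract_patch_origins(section, patch_size=99, stride=50):
    """Slide a patch_size x patch_size window over a 2D section and
    return every top-left (row, col) origin that fits entirely."""
    h, w = section.shape
    return [
        (i, j)
        for i in range(0, h - patch_size + 1, stride)
        for j in range(0, w - patch_size + 1, stride)
    ]

def get_patch(section, origin, patch_size=99):
    """Cut one square patch out of the section at the given origin."""
    i, j = origin
    return section[i : i + patch_size, j : j + patch_size]
```

A patch-based split stores these origins per section, so a patch loader can index into the volume lazily instead of materialising every window up front.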
When loading test data in patch, this line can be removed (and tested) from test.py:

```
h, w = img.shape[-2], img.shape[-1]  # height and width
```

* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings, plus typo (#89)
* tested both yml conda env and docker; updated conda yml to have docker sdk
* added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
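The pre-commit setup discussed in PR 512, later extended with a jupytext hook (#12), centres on a `.pre-commit-config.yaml` at the repo root. A purely illustrative sketch of what such a config can look like - the repo URLs follow each tool's published pre-commit hooks, but the `rev` pins are placeholders, not the repo's actual configuration:

```yaml
# Illustrative sketch only; rev values are placeholders to be pinned properly.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0          # placeholder revision
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9            # placeholder revision
    hooks:
      - id: flake8
  - repo: https://github.com/mwouts/jupytext
    rev: v1.3.0           # placeholder revision
    hooks:
      - id: jupytext
        args: [--sync]    # keep paired notebook/script representations in sync
```

With this in place, `pre-commit install` registers the git hook and `pre-commit run --all-files` applies it retrospectively to the whole codebase, which is the "significant-looking PR" scenario raised in the PR 512 discussion.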
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders
  - SectionLoader now swaps the H and W dims
  - when loading test data in patch, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes. Adds Horovod training script; updates dependencies in the Horovod docker file; removes hard-coding of a path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader (similarly for test loaders). This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model
Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set. Notebook and associated files + a minor change in the patch_deconvnet_skip.py model file.
Related work items: #17432
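For readers affected by the PR 236 loader clean-up described above: since each loader now keeps only the data for its own split, code that indexed `train_set.patches[train_set.split]` would instead read `train_set.patches` directly. A minimal hypothetical sketch of that change - the class and attribute names mirror the snippet in PR 236, but the constructor signature here is illustrative, not the repo's actual API:

```python
# Hypothetical sketch of the PR 236 loader clean-up; signature is illustrative.
class TrainPatchLoader:
    def __init__(self, split_to_patches, split="train"):
        # split_to_patches: dict mapping split name -> list of patch ids
        self.split = split
        # Old behaviour (pre-PR 236): every split was stored on every loader,
        # forcing callers to index with train_set.patches[train_set.split].
        # New behaviour: keep only the patches belonging to this loader's split.
        self.patches = split_to_patches[split]

    def __len__(self):
        return len(self.patches)


splits = {"train": ["patch0", "patch1", "patch2"], "val": ["patch3"]}
train_set = TrainPatchLoader(splits, split="train")
print(train_set.patches)  # direct access; no [train_set.split] indexing
print(len(train_set))
```

The same before/after pattern applies to `TrainSectionLoader` and its `sections` attribute.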
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* tested both yml conda env and docker; updated conda yml to have docker sdk
* tested both yml conda env and docker; updated conda yml to have docker sdk; added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
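Many of the commits in this log describe moving the experiments onto confusion-matrix-based metrics with an ignore_index. As a concept sketch only - the repo's actual metrics are built on ignite.metrics and expose a different API - here is the underlying idea in plain Python: accumulate a per-pixel confusion matrix, skip ignored labels, and derive per-class IoU from it:

```python
# Concept sketch: confusion-matrix metrics with ignore_index, plain Python.
# Not the repo's implementation, which builds on ignite.metrics.

def confusion_matrix(preds, targets, num_classes, ignore_index=None):
    """Accumulate cm[true][pred] counts, skipping ignored target labels."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(preds, targets):
        if t == ignore_index:
            continue
        cm[t][p] += 1
    return cm

def class_iou(cm):
    """Per-class intersection-over-union derived from the confusion matrix."""
    n = len(cm)
    ious = []
    for c in range(n):
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                       # truly c, predicted other
        fp = sum(cm[r][c] for r in range(n)) - tp  # predicted c, truly other
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious

preds = [0, 0, 1, 1, 1]
targets = [0, 1, 1, 1, 255]  # 255 marks pixels to ignore
cm = confusion_matrix(preds, targets, num_classes=2, ignore_index=255)
ious = class_iou(cm)
print(cm)    # [[1, 0], [1, 2]]
print(ious)  # class 0: 0.5, class 1: 2/3
```

Mean IoU, as used across the experiments, is then just the average of `ious`.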
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until a stable release is available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
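The metrics refactor described above centres on confusion-matrix-based scores. As a plain-Python sketch of the idea (not the actual cv_lib/ignite API — function names and the 2-class setup are assumptions), per-class IoU can be read off an accumulated confusion matrix:

```python
# Sketch: accumulate a confusion matrix over predictions, then derive
# per-class IoU from it. Plain-Python stand-in, not the ignite API.

def confusion_matrix(preds, labels, num_classes):
    """cm[i][j] counts positions with true class i predicted as class j."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(preds, labels):
        cm[t][p] += 1
    return cm

def iou_per_class(cm):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) for each class c."""
    n = len(cm)
    ious = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp  # predicted c, true other
        fn = sum(cm[c]) - tp                       # true c, predicted other
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious
```

The advantage of this formulation, which the ignite-based metrics share, is that one confusion matrix accumulated over batches yields IoU, mean IoU, and pixel/class accuracy without re-scanning the data.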
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes them take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in the prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate if/else blocks * refactored prepare_data.py * added scripts for section train/test * section train/test works for single-channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran the black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test scripts for section-based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect the patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders. - SectionLoader now swaps the H and W dims. 
When loading test data in patch mode, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Adds a Horovod training script, updates dependencies in the Horovod docker file, and removes hard-coding of a path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on the F3 data set Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in the prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries that we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk * tested both yml conda env and docker; updated conda yml to have docker sdk; added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * Remove related projects on AI Labs * Added a reference to Azure Machine Learning (#115) to show how folks can get started with using Azure Machine Learning * Update README.md * update fork from upstream (#4) * fixed merge conflict resolution in LICENSE * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * 
merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotfiles, such as the line lengths or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in CONTRIBUTING.md.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333; that's why the PR merges tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and a top-level /tests for integration testing. Let me know if you have any comments on the tests or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved the prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file

  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic: add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders.
  - SectionLoader now swaps the H and W dims. When loading test data in patch mode, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes. Adds a Horovod training script, updates dependencies in the Horovod docker file, and removes hard-coding of a path in data.py.
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing. This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model. Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set. Notebook and associated files, plus a minor change in the patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR: just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard

  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion: I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines the git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter

  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
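As a sketch of the setup PR 512 describes, a minimal .pre-commit-config.yaml wiring black and flake8 into git hooks might look like the following (the `rev` pins are illustrative, not the repo's actual values):

```yaml
# Hypothetical pre-commit configuration; pin `rev` to the versions the repo standardises on.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0        # black reads its settings from pyproject.toml
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9          # flake8 reads its settings from .flake8
    hooks:
      - id: flake8
```

After `pre-commit install`, the hooks run on staged files at each commit; `pre-commit run --all-files` applies them to the entire codebase, which is one way to handle the retrospective-reformatting question raised below.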
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extented readme * removed reference to imaging * minor md formatting * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; udated conda yml to have docker sdk * tested both yml conda env and docker; udated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * Remove related projects on AI Labs * Added a reference to Azure machine learning (#115) Added a reference to Azure machine learning to show how folks can get started with using Azure Machine Learning * Update README.md * Update AUTHORS.md (#117) * Update AUTHORS.md (#118) * pre-release items (#119) * added README documentation per bug bush feedback * added missing tests * closing out multiple post bug bash issues with single PR * new badges in README * cleared notebook output * notebooks links * fixed bad merge * forked branch name is misleading. 
(#116) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others.
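The numpy data loader and demo pipeline mentioned above boil down to slicing a seismic volume's 2D sections into training patches. A minimal sketch of that idea, assuming a hypothetical `extract_patches` helper (the repo's actual loaders such as TrainPatchLoader have their own API):

```python
import numpy as np

def extract_patches(section, patch_size, stride):
    """Tile a 2D section (e.g. a seismic inline) into square patches.

    Hypothetical helper for illustration only -- not the repo's API.
    """
    h, w = section.shape
    patches = [
        section[i:i + patch_size, j:j + patch_size]
        for i in range(0, h - patch_size + 1, stride)
        for j in range(0, w - patch_size + 1, stride)
    ]
    return np.stack(patches)

# A toy 8x8 "section"; patch_size=4 with stride=4 yields a 2x2 grid of patches.
section = np.arange(64, dtype=np.float32).reshape(8, 8)
patches = extract_patches(section, patch_size=4, stride=4)
print(patches.shape)  # (4, 4, 4)
```

A section-based experiment would instead feed whole inlines/crosslines to the model; the patch-based variant slides a window over each section as above.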
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created a separate function to load section and patch loaders. - SectionLoader now swaps H and W dims.
When loading test data in patch, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extented readme * removed reference to imaging * minor md formatting * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; udated conda yml to have docker sdk * tested both yml conda env and docker; udated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2) * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * azureml sdk 1.0.74; foxed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) 
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified 
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created a separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch scripts, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training and testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
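The PR 236 loader cleanup described above means the per-split dict lookup is no longer needed: each loader now holds only its own split. A minimal illustrative sketch of the before/after access pattern (these mock classes are made up for illustration and are not the repo's actual loaders):

```python
class TrainPatchLoaderOld:
    """Old behaviour: all splits stored; callers index by split."""
    def __init__(self, split):
        self.split = split
        # every split's patch IDs were kept on every loader
        self.patches = {"train": ["p0", "p1"], "val": ["p2"]}

class TrainPatchLoaderNew:
    """New behaviour: only the requested split's patches are kept."""
    def __init__(self, split):
        self.split = split
        all_patches = {"train": ["p0", "p1"], "val": ["p2"]}
        self.patches = all_patches[split]  # just this loader's split

old = TrainPatchLoaderOld("train")
new = TrainPatchLoaderNew("train")
# old code: old.patches[old.split]   new code: new.patches
print(old.patches[old.split] == new.patches)  # True
```

So code that accessed `train_set.patches[train_set.split]` would drop the indexing and read `train_set.patches` directly after this change.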
Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds the following for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
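The "modified metrics with ignore_index" commits above refer to segmentation metrics that skip a designated label. A minimal sketch of the idea in plain NumPy (this is illustrative only, not the repo's ignite-based implementation; the function name and default value are made up):

```python
import numpy as np

def pixelwise_accuracy(pred, target, ignore_index=255):
    """Fraction of correctly classified pixels, skipping any pixel
    whose ground-truth label equals ignore_index."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    mask = target != ignore_index      # keep only scored pixels
    if mask.sum() == 0:
        return 0.0                     # nothing to score
    return float((pred[mask] == target[mask]).mean())

# last pixel is ignored, so 2 of the 3 scored pixels are correct
print(pixelwise_accuracy([1, 2, 2, 0], [1, 2, 0, 255]))
```

The same masking trick applies when accumulating a confusion matrix: ignored pixels are simply never counted in any cell.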
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * DOC: forking disclaimer and new build names. 
(#9) * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * BUILD: added build stat… * Minor fix: broken links in README (#120) * fully-run notebooks links and fixed contrib voxel models (#123) * added README documentation per bug bash feedback * added missing tests * - added notebook links - made sure original voxel2pixel code runs * update ignite port of texturenet * resolved merge conflict * formatting change * Adds reproduction instructions to readme (#122) * Updates notebook to use itkwidgets for interactive visualisation * Further updates * Fixes merge conflicts * removing files * Adding reproduction experiment instructions to readme * checking in 
ablation study from ilkarman (#124) tests pass but final results aren't communicated to github. No way to trigger another commit other than to do a dummy commit * minor bug in 000 nb; sdk.v1.0.79; FROM continuumio/miniconda3:4.7.12 (#126) * Added download script for dutch F3 dataset. Also adding Sharat/WH as authors. (#129) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * Using env variable during dutchF3 splits * Improvements to dutchf3 (#128) * Adds padding to distributed training pipeline * Adds exception if supplied weights file is not found * Fixes hrnet location * Removes unecessary config * Ghiordan/azureml devito04 (#130) * exported conda env .yml file for AzureML control plane * both control plane and experimentation docker images use azure_ml sdk 1.0.81 * making model snapshots more verbose / friendly (#152) * added scripts which reproduce results * build error fix * modified all local training runs to use model_dir for model name * extended model naming to distributed setup as well * added pillow breakage fix too * removing execution scripts from this PR * upgrading pytorch version to keep up with torchvision to keep up with Pillow * reduced validation batch size for deconvnets to combat OOM with pyTorch 1.4.0 * notebook enhancements from sharatsc (#153) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * Fixes a few typos, and links to the troubleshooting section when running the conda command * Readme update Fixes a few typos, and links to the troubleshooting section when running the conda command (#160) * scripts to reproduce model results (#155) * added scripts which reproduce results * build error fix * modified all local training runs to use model_dir for model name * extended model naming to distributed setup as well * added pillow breakage fix too * removing execution scripts from this PR * upgrading pytorch version to keep up with torchvision to keep up with Pillow * initial checkin of the run scripts to reproduce results * edited version of run_all to run all jobs to match presentation/github results * fixed typos in main run_all launch script * final version of the scripts which reproduce repo results * added README description which reproduces the results * Fix data path in the README (#167) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * Update F3_block_training_and_evaluation_local.ipynb (#163) Minor fix to figure axes Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * Maxkaz/test fixes (#168) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * fixed test links in the README * addressed PR comments Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * add data tests for download and preprocessing; resolve preprocessing bugs (#175) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * added dataset download and preprocessing tests * changed data dir to not break master, added separate data prep script for builds * modified README to reflect code changes; added license header * adding fixes to data download script for the builds * forgot to add the readme fix to data preprocessing script * fixes Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * Adding content to interpretation README (#171) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset.
(it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example * Adding more content to interpretation README * Update README.md * Update HRNet_Penobscot_demo_notebook.ipynb Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * added utility to validate paths * better handling of AttributeErrors * fixing paths in configs to match those in readme.md * minor formatting improvements * add validate_config_paths to notebooks * adding generic and absolute paths in the config + minor cleanup * better format for validate_config_paths() * added dummy path in hrnet config * modified HRNet notebook * added missing validate_config_paths() * Updates to prepare dutchf3 (#185) * updating patch to patch_size when we are using it as an integer * modifying the range function in the prepare_dutchf3 script to get all of our data * updating path to logging.config so the script can locate it * manually reverting back log path to troubleshoot build tests * updating patch to patch_size for testing on preprocessing scripts * updating patch to patch_size where applicable in ablation.sh * reverting back changes on ablation.sh to validate build pass * update patch to patch_size in ablation.sh (#191) Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * closes https://github.com/microsoft/seismic-deeplearning/issues/181 (#187) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * added ability to load pretrained HRNet model on the build server from custom location * fixed build failure * another fix Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * read parameters from papermill * fixes to test_all script to reproduce model results (#201) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * updated model test script to work on master and staging branches to reproduce results Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * Solves issue #54: check/validate config to make sure datapath and model path are valid (#198) Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * Adding dockerfile to solve issue #146 (#204) * Fixes a few typos, and links to the troubleshooting section when running the conda command * added draft dockerfile and readme * fixes to dockerfile * minor improvements * add code to download the datasets * Update Dockerfile * use miniconda *
activate jupyter kernel * comment out code to download data * Updates to Dockerfile to activate conda env * updating the Dockerfile * change branch to staging (bug fix) * download the datasets * Update README.md * final modifications to Dockerfile * Updated the README file * Updated the README to use --mount instead of --volume --volume has the disadvantage of requiring the mount point to exist prior to running the docker image. Otherwise, it will create an empty directory. --mount however allows us to mount files directly to any location. * Update the hrnet.yml to match the mount point in the docker image * Update the dutchf3 paths in the config files * Update the dockerfile to prepare the datasets for training * support for nvidia-docker * fix gitpython bug * fixing the "out of shared memory" bug Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * remove duplicate code for validation (#208) * Update README.md * updating readme metrics; adding runtimes (#210) * adds ability to specify cwd for notebook tests (#207) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. 
(it is downloaded into $dir/data by the script) * added scripts which reproduce results * Adding readme text for the notebooks and checking if config is correctly setup * build error fix * fixed notebook build error * removed scripts from older pull * fixing pip version for DS VM bug fix * fixing pip version for DS VM bug fix * notebook fixes * fixes to benchmark test script * explicitly setting backbone model location in tests * addressed PR comments * updated model test script to work on master and staging branches to reproduce results * enables ability to change notebook execution dir * fix Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * 138 download pretrained models to hrnet notebook (#213) * removed duplicate code in the notebooks * initial draft * done with download_pretrained_model * updated notebook and utils * updating model dir in config * updates to util * update to notebook * model download fixes and HRNet pre-trained model demo run * fix to non-existent model_dir directory on the build server * typo Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * 226 (#231) * added ability to use pre-trained models on Dutch F3 dataset * moved black notebook formatter instructions to README * finished Dutch F3 notebook training - pre-trained model runtime is down to 1 minute; starting on test set performance * finished dutch f3 notebook * fixed Docker not running out-of-the-box with the given parameters * cleaned up other notebooks and files which are not scoped for this release * tweaks to notebook from Docker * fixed Docker instructions and port 9000 for TB * notebook build fixes * small Dockerfile fix * notebook build fixes * increased max_iterations in tests * finished tweaking the notebook to get the tests to pass * more fixes for build tests * dummy commit to re-trigger the builds * addressed PR comments * reverting back data.Subset to toolz.take * added docker image test build (#242) * added docker image test build * increased Docker image
build timeout * update notebook seeds (#247) * re-wrote experiment test builds to run in parallel on single 4-GPU VM (#246) * re-wrote experiment test builds to run in parallel on single 4-GPU VM * fixed yaml typo * fixed another yaml typo * added more descriptive build names * fixed another yaml typo * changed build names and added tee log splitting * added wait -n * added wait termination condition * fixed path typo * added code to manually block on PIDs * added ADO fixes to collect PIDs for wait; changed component governance build pool * added manual handling of return codes * fixed parallel distributed tests * build typo * correctness branch setup (#251) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * reducing scope further (#258) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * reducing scope of the correctness branch further * added branch triggers * hotfixing correctness - broken DropBox download link * 214 Ignite 0.3.0 upgrade (#261) * upgraded to Ignite 0.3.0 and fixed upgrade compatibility * added seeds and modified notebook for ignite 0.3.0 * updated code and tests to work with ignite 0.3.0 * made code consistent with Ignite 0.3.0 as much as possible * fixed iterator epoch_length bug by subsetting validation set * applied same fix to the notebook * bugfix in distributed train.py * increased distributed tests to 2 batches - hoping for one batch per GPU * resolved rebase conflict * added seeds and modified notebook for ignite 0.3.0 * updated code and tests to work with ignite 0.3.0 * made code consistent with Ignite 0.3.0 as much as possible * fixed iterator epoch_length bug by subsetting validation set * applied same fix to the notebook * bugfix in distributed train.py * increased distributed tests to 2 batches - hoping for one batch per GPU * update
docker readme (#262) Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * tagged all TODOs with issues on github (and created issues) (#278) * created correctness branch, trimmed experiments to Dutch F3 only * trivial change to re-trigger build * dummy PR to re-trigger malfunctioning builds * resolved merge conflict * flagged all non-contrib TODO with github issues * resolved rebase conflict * resolved merge conflict * cleaned up archaic voxel code * Refactoring train.py, removing OpenCV, adding training results to Tensorboard, bug fixes (#264) I think moving forward, we'll use smaller PRs. But here are the changes in this one: Fixes issue #236 that involves rewriting a big portion of train.py such that: All the tensorboard event handlers are organized in tensorboard_handlers.py and only called in train.py to log training and validation results in Tensorboard The code logs the same results for training and validation. Also, it adds the class IoU score as well. All single-use functions (e.g. _select_max, _tensor_to_numpy, _select_pred_and_mask) are lambda functions now The code is organized into more meaningful "chunks", e.g. all the optimizer-related code should be together if possible, same thing for logging, configuration, loaders, tensorboard, etc. In addition: Fixed a visualization bug where the seismic images were not normalized correctly. This solves Issue #217. Fixed a visualization bug where the predictions were not masked where the input image was padded. This improves the ability to visually inspect and evaluate the results. This solves Issue #230. Fixes a potential issue where Tensorboard can crash when a large training batchsize is used. Now the number of images visualized in Tensorboard from every batch has an upper limit. Completely removed OpenCV as a dependency from the DeepSeismic Repo. It was only used in a small part of the code where it wasn't really necessary, and OpenCV is a huge library.
Fixes Issue #218 where the epoch number for the images in Tensorboard was always logged as 1 (therefore, not allowing us to see the epoch number of the different results in Tensorboard). Removes the HorovodLRScheduler class since it's no longer used. Removes toolz.take from Debug mode, and uses PyTorch's native Subset() dataset class. Changes default patch size for the HRNet model to 256. In addition to several other minor changes Co-authored-by: Yazeed Alaudah <yalaudah@users.noreply.github.com> Co-authored-by: Ubuntu <yazeed@yaalauda-dsvm-nd24.jsxrnelwp15e1jpgk5vvfmbzyb.bx.internal.cloudapp.net> Co-authored-by: Max Kaznady <maxkaz@microsoft.com> * Fixes training/validation overlap #143, #233, #253, and #259 (#282) * Correctness single GPU switch (#290) * resolved rebase conflict * resolved merge conflict * resolved rebase conflict * resolved merge conflict * reverted multi-GPU builds to run on single GPU * 249r3 (#283) * resolved rebase conflict * resolved merge conflict * resolved rebase conflict * resolved merge conflict * wrote the bulk of checkerboard example * finished checkerboard generator * resolved merge conflict * resolved rebase conflict * got binary dataset to run * finished first implementation mockup - commit before rebase * made sure rebase went well manually * added new files * resolved PR comments and made tests work * fixed build error * fixed build VM errors * more fixes to get the test to pass * fixed n_classes issue in data.py * fixed notebook as well * cleared notebook run cell * trivial commit to restart builds * addressed PR comments * moved notebook tests to main build pipeline * fixed checkerboard label precision * relaxed performance tests for now * resolved merge conflict * resolved merge conflict * fixed build error * resolved merge conflicts * fixed another merge mistake * enabling development on docker (#291) * 289: correctness metrics and tighter tests (#293) * resolved rebase conflict * resolved merge conflict * resolved rebase
conflict * resolved merge conflict * wrote the bulk of checkerboard example * finished checkerboard generator * resolved merge conflict * resolved rebase conflict * got binary dataset to run * finished first implementation mockup - commit before rebase * made sure rebase went well manually * added new files * resolved PR comments and made tests work * fixed build error * fixed build VM errors * more fixes to get the test to pass * fixed n_classes issue in data.py * fixed notebook as well * cleared notebook run cell * trivial commit to restart builds * addressed PR comments * moved notebook tests to main build pipeline * fixed checkerboard label precision * relaxed performance tests for now * resolved merge conflict * resolved merge conflict * fixed build error * resolved merge conflicts * fixed another merge mistake * resolved rebase conflict * resolved rebase 2 * resolved merge conflict * resolved merge conflict * adding new logging * added better logging - cleaner - debugged metrics on checkerboard dataset * resolved rebase conflict * resolved merge conflict * resolved merge conflict * resolved merge conflict * resolved rebase 2 * resolved merge conflict * updated notebook with the changes * addressed PR comments * addressed another PR comment * uniform colormap and correctness tests (#295) * correctness code good for PR review * addressed PR comments * V0.2 release README update (#300) * updated readme for v0.2 release * bug fix (#296) Co-authored-by: Gianluca Campanella <gianluca.campanella@microsoft.com> Co-authored-by: msalvaris <msalvaris@users.noreply.github.com> Co-authored-by: Vanja Paunic <vapaunic@microsoft.com> Co-authored-by: Vanja Paunic <Vanja.Paunic@microsoft.com> Co-authored-by: Mathew Salvaris <masalvar@microsoft.com> Co-authored-by: George Iordanescu <ghiordan@microsoft.com> Co-authored-by: vapaunic <15053814+vapaunic@users.noreply.github.com> Co-authored-by: Sharat Chikkerur <sharat.chikkerur@microsoft.com> Co-authored-by: Wee Hyong Tok 
<weehyong@hotmail.com> Co-authored-by: Daniel Ciborowski <dciborow@microsoft.com> Co-authored-by: George Iordanescu <george.iordanescu@gmail.com> Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> Co-authored-by: Ubuntu <yazeed@yaalauda-dsvm-one.lyvrisn14kmuzl2vzryuh5txbh.bx.internal.cloudapp.net> Co-authored-by: Yazeed Alaudah <yalaudah@users.noreply.github.com> Co-authored-by: Yazeed Alaudah <yalaudah@gmail.com> Co-authored-by: kirasoderstrom <kirasoderstrom@gmail.com> Co-authored-by: yalaudah <yazeed.alaudah@microsoft.com> Co-authored-by: Ubuntu <yazeed@yaalauda-dsvm-nd24.jsxrnelwp15e1jpgk5vvfmbzyb.bx.internal.cloudapp.net>
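The 249r3 and correctness-metrics commits above mention a synthetic checkerboard dataset used to sanity-check segmentation metrics (a model that learns the pattern should score near-perfect IoU). The repo's actual generator is not shown here; the following is a minimal sketch of the idea, with a hypothetical function name and signature:

```python
import numpy as np

def make_checkerboard(n_inlines, n_xlines, n_depth, box_size):
    """Hypothetical sketch: binary label volume of alternating class blocks.

    Each axis index is bucketed into boxes of `box_size`; the parity of the
    summed box coordinates yields the familiar 3D checkerboard pattern.
    """
    i, x, d = np.indices((n_inlines, n_xlines, n_depth))
    return ((i // box_size + x // box_size + d // box_size) % 2).astype(np.uint8)

# Small example volume: 8x8x8 voxels, 2-voxel boxes
labels = make_checkerboard(8, 8, 8, 2)
```

Against such a volume, the repo's pixel accuracy and class IoU computations can be checked for exact expected values, which is what makes the dataset useful for "tighter tests".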
2020-05-20 20:27:19 +03:00
"repositoryUrl": "https://github.com/yalaudah/facies_classification_benchmark",
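The `repositoryUrl` line above is a fragment of this repo's cgmanifest.json, which registers third-party components for Component Governance scanning. For context, a registration entry generally takes the following shape in the standard Component Detection format; the `commitHash` value below is a placeholder, not the repo's actual pin:

```json
{
  "Registrations": [
    {
      "component": {
        "type": "git",
        "git": {
          "repositoryUrl": "https://github.com/yalaudah/facies_classification_benchmark",
          "commitHash": "<commit-hash-placeholder>"
        }
      }
    }
  ],
  "Version": 1
}
```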
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * Merged PR 222: Moves cv_lib into repo and updates setup instructions * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. 
This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity.
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * added cela copyright headers to all non-empty .py files (#3) * switched to ACR instead of docker hub (#4) * sdk.v1.0.69, plus switched to ACR push. 
ACR pull coming next * full acr use, push and pull, and use in Estimator * temp fix for dcker image bug * fixed the az acr login --username and --password issue * full switch to ACR for docker image storage * Vapaunic/metrics (#1) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
  Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation. Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in the prepare_data script. Related work items: #17681
* Merged PR 315: Removing voxel exp. Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR: just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion. I added the required configuration files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines git hooks to be installed
  - .flake8 - settings for the flake8 linter
  - pyproject.toml - settings for the black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks are enforced prior to the commit, ensuring consistency among contributors.
  Some questions to start the discussion:
  - Do you want to change any of the default settings in these files, like the line lengths or the error messages we exclude or include?
  - Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
  - Once you have the hooks installed, they only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
  Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333; that's why the PR merges tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on these tests, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* BUILD: added build setup files. (#5)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
  - Finished version of numpy data loader
  - Working training script for demo
  - Adds the new metrics
  - Fixes docstrings and adds header
  - Removing extra setup.py
* Log config file now experiment specific (#8)
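The "(#7) dataloader for numpy datasets" item above refers to a dataset reader backed by plain numpy arrays. As a rough illustration only (the class name, array shapes, and channel convention here are hypothetical, not the repo's actual API), such a loader can be as simple as:

```python
import numpy as np

class NumpyDataset:
    """Hypothetical sketch: pairs a stack of seismic patches with label masks."""

    def __init__(self, images: np.ndarray, labels: np.ndarray):
        # images: (N, H, W) float patches; labels: (N, H, W) integer class masks
        assert images.shape == labels.shape, "images and labels must align"
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # add a channel dimension so downstream models see (C, H, W)
        return self.images[idx][np.newaxis, ...], self.labels[idx]

# usage
images = np.random.rand(10, 99, 99).astype(np.float32)
labels = np.zeros((10, 99, 99), dtype=np.int64)
ds = NumpyDataset(images, labels)
x, y = ds[0]
print(x.shape)  # (1, 99, 99)
```

This mirrors the `__len__`/`__getitem__` protocol that PyTorch-style training loops expect, which is presumably why a numpy-backed reader slots into the demo pipeline.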
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names. (#9)
* Updating README.md with introduction material (#10)
  - Update README with introduction to DeepSeismic
  - Adding logo file
  - Adding image to readme
  - Update README.md
* Updates the 3D visualisation to use itkwidgets (#11): updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Adds demo notebook for HRNet (#13)
  - Adding TF 2.0 to allow for tensorboard vis in notebooks
  - Modifies hrnet config for notebook
  - Add HRNet notebook for demo
  - Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
  - Update it to include sections for imaging
  - Update README.md
  - Update README.md
* added pytest to environment, and pytest job to the main build (#18)
* Update main_build.yml for Azure Pipelines
* minor stylistic changes (#19)
* Update main_build.yml for Azure Pipelines: added template for integration tests for scripts and experiments; added setup and env; increased job timeout; added complete set of tests
* BUILD: placeholder for Azure pipelines for notebooks build; added notebooks job placeholders; added GitHub badges for notebook builds
* CLEANUP: moved non-release items to contrib (#20)
* Updates HRNet notebook 🚀 (#25)
  - Modifies pre-commit hook to modify output
  - Modifies the HRNet notebook to use the Penobscot dataset
  - Adds parameters to limit iterations
  - Adds parameters meta tag for papermill
  - Fixing merge peculiarities
* Updates environment.yaml (#21): pins main libraries; adds cudatoolkit version based on issues faced during workshop
* removing files
* Updates Readme (#22): adds model instructions to readme
* Update README.md (#24): I have collected pointers to all of our BP repos in this central place. We are trying to create links between everything to draw people from one to the other. Can we please add a pointer here to the readme?
  I have spoken with Max and will be adding Deep Seismic there once you have gone public.
* CONTRIB: cleanup for imaging. (#28)
* Create Unit Test Build.yml (#29): adding Unit Test Build
* Update README.md
* Update README.md
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* TESTS: added notebook integration tests. (#65)
  - TEST: fixed a typo in the env name
* Addressing a number of minor issues with README and broken links (#67)
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names. (#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fix for segyviewer and mkdir splits in README + broken link in F3 notebook * issue edits to README * download complete message * Added Yacs info to README.md (#69) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as a demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment-specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
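The patch-vs-section splitting referred to throughout the log can be illustrated with a small sketch (hypothetical names; the repo's actual loaders add labels, split files, and augmentation on top of this indexing):

```python
import numpy as np

def extract_patches(section, patch_size, stride):
    """Slide a square window over a 2D section (an inline/crossline slice).

    Illustrative only -- the repo's patch loaders do this with label
    handling and augmentation; here we only show the indexing.
    """
    h, w = section.shape
    patches = [
        section[i:i + patch_size, j:j + patch_size]
        for i in range(0, h - patch_size + 1, stride)
        for j in range(0, w - patch_size + 1, stride)
    ]
    return np.stack(patches)

volume = np.arange(4 * 8 * 8, dtype=np.float32).reshape(4, 8, 8)  # toy cube
section = volume[0]                     # section-based training: whole slice
patches = extract_patches(section, patch_size=4, stride=4)
print(patches.shape)                    # (4, 4, 4): four 4x4 patches
```

Section-based experiments feed whole slices like `section` to the model; patch-based ones feed the windows returned here.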
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes them take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in the prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate if/else blocks * refactored prepare_data.py * added scripts for section train/test * section train/test works for single-channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran the black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section-based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders. - SectionLoader now swaps the H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new structure * Merged PR 220: Adds Horovod and fixes Adds Horovod training script Updates dependencies in Horovod docker file Removes hard-coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train/test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. 
That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * added info on yacs files * MODEL.PRETRAINED key missing in default.py (#70) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches); I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on the tests or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350
* Merged PR 586: Purging unused files and experiments Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* added MODEL.PRETRAINED key to default.py
* Update README.md (#59)
* Update README.md (#58)
* MINOR: addressing broken F3 download link (#73)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* Finished version of numpy data loader
* Working training script for demo
* Adds the new metrics
* Fixes docstrings and adds header
* Removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push - need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
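The numpy dataset loader with a demo pipeline mentioned above (#7) is not shown in this log; as a rough illustration of the idea, a minimal patch-serving dataset over a 3D numpy seismic volume might look like the sketch below. The class name, attributes, and patch/stride conventions here are all hypothetical, not the repo's actual implementation.

```python
import numpy as np

class NumpyPatchDataset:
    """Minimal sketch: serve 2D patches from a 3D (inline, H, W) numpy
    volume with aligned labels. Illustrative only, not repo code."""

    def __init__(self, volume, labels, patch_size=64, stride=64):
        assert volume.shape == labels.shape
        self.volume, self.labels = volume, labels
        self.patch_size = patch_size
        # Precompute the top-left corner of every full patch per inline slice.
        n_inlines, height, width = volume.shape
        self.index = [
            (i, y, x)
            for i in range(n_inlines)
            for y in range(0, height - patch_size + 1, stride)
            for x in range(0, width - patch_size + 1, stride)
        ]

    def __len__(self):
        return len(self.index)

    def __getitem__(self, k):
        i, y, x = self.index[k]
        p = self.patch_size
        return (self.volume[i, y:y + p, x:x + p],
                self.labels[i, y:y + p, x:x + p])
```

A class with `__len__`/`__getitem__` like this can be wrapped directly by a PyTorch `DataLoader`, which is presumably how such a dataset would feed the demo training pipeline.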
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes them take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate if/else blocks
* refactored prepare_data.py
* added scripts for section train/test
* section train/test works for single-channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders. - SectionLoader now swaps the H and W dims.
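The H/W swap described in PR 209 amounts to transposing the last two axes of each loaded section; a minimal numpy sketch (shapes illustrative, not taken from the repo):

```python
import numpy as np

# A section read as (H, W); a loader that "swaps H and W dims"
# returns it with the two spatial axes exchanged, i.e. (W, H).
section = np.arange(12).reshape(3, 4)      # H=3, W=4
swapped = np.swapaxes(section, -2, -1)     # shape (4, 3)

# Downstream code should therefore take height/width from the array
# itself rather than assuming a fixed order:
h, w = swapped.shape[-2], swapped.shape[-1]
```

This is why, as noted below, test scripts no longer need their own hard-coded `h, w` bookkeeping once the loader handles orientation.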
- When loading test data in patch mode, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1]  # height and width`
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard-coding of path in data.py
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train/test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes, e.g.
if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```
or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
* training/testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland-based voxel loaders and TextureNet model Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681
* Merged PR 315: Removing voxel exp Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
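The PR 236 loader clean-up quoted above can be illustrated with a toy before/after; the two classes here are hypothetical stand-ins showing only the attribute change, not the repo's actual loaders:

```python
# Toy illustration of the PR 236 clean-up described above.
# Before: every loader held a dict of ALL splits and call sites
# indexed it by split name. After: each loader keeps only the
# items belonging to its own split.
ALL_PATCHES = {"train": ["p0", "p1"], "val": ["p2"], "test": ["p3"]}

class OldTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_PATCHES          # every split, wastefully

class NewTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_PATCHES[split]   # only this loader's split

old = OldTrainPatchLoader("train")
new = NewTrainPatchLoader("train")
# Old call sites wrote loader.patches[loader.split];
# new ones use loader.patches directly.
assert old.patches[old.split] == new.patches
```

So code that indexed `train_set.patches[train_set.split]` must drop the extra `[...]` after the change.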
Realized there was one bug in the code, and the rest of the functions did not work with the library versions we have listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot Adds, for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
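The "modified metrics with ignore_index" and ignite confusion-matrix work mentioned above boils down to dropping ignored pixels before accumulating the confusion matrix. A framework-free numpy sketch of that idea (function names and the `ignore_index=255` default are illustrative, not the repo's implementation):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes, ignore_index=255):
    """Confusion matrix that skips pixels labelled `ignore_index`.
    Sketch of the ignore_index idea only, not repo code."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    keep = y_true != ignore_index            # drop ignored pixels entirely
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true[keep], y_pred[keep]), 1)
    return cm

def mean_iou(cm):
    """Per-class IoU = diag / (row + col - diag), averaged over classes."""
    diag = np.diag(cm).astype(float)
    denom = cm.sum(1) + cm.sum(0) - diag
    return np.nanmean(np.where(denom > 0, diag / denom, np.nan))
```

In the repo itself these metrics are built on `ignite.metrics` (e.g. a confusion-matrix metric that segmentation metrics derive from), but the masking step is the same.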
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* added system requirements to readme
* merge upstream into my fork (#1)
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * refactored prepare_data.py * added scripts for section train test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created a separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
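The H/W swap PR 209 describes can be illustrated with a standalone numpy sketch (this is not the repo's SectionLoader, just a fake section showing the two dims trading places and why the downstream bookkeeping becomes redundant):

```python
import numpy as np

# A fake inline "section" with H = 3 and W = 4 (not real seismic data).
section = np.arange(12, dtype=np.float32).reshape(3, 4)

# What the PR describes: the loader now returns the section with H and W
# swapped, so downstream code no longer needs to re-derive them itself.
loaded = section.T

h, w = loaded.shape[-2], loaded.shape[-1]  # the bookkeeping test.py can drop
assert (h, w) == (4, 3)
assert np.array_equal(loaded.T, section)
```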
When loading test data in patch, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1] # height and width` * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or 
working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * DOC: forking disclaimer and new build names. 
(#9) * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) 
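PR 236, described earlier in this log, stops assigning every train/val split to every loader: after the cleanup, a loader's `patches`/`sections` attribute holds only its own split. A hypothetical before/after mock of the access pattern (these classes are stand-ins for illustration, not the real `TrainPatchLoader` from the repo):

```python
# Hypothetical stand-ins for the change PR 236 describes; the real
# TrainPatchLoader/TrainSectionLoader live in the repo's loader code.
class TrainPatchLoaderBefore:
    def __init__(self, splits, split):
        self.split = split
        self.patches = splits            # every split, keyed by name

class TrainPatchLoaderAfter:
    def __init__(self, splits, split):
        self.split = split
        self.patches = splits[split]     # only this loader's own split

splits = {"train": ["p0", "p1"], "val": ["p2"]}

old_loader = TrainPatchLoaderBefore(splits, "train")
new_loader = TrainPatchLoaderAfter(splits, "train")

# Old access pattern quoted in the PR description:
assert old_loader.patches[old_loader.split] == ["p0", "p1"]
# After the cleanup, the extra indexing step goes away:
assert new_loader.patches == ["p0", "p1"]
```

Code that indexed `train_set.patches[train_set.split]` is exactly what the PR warns will break.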
* DOC: forking disclaimer and new build names. 
(#9) * Adds premium storage (#79) * Adds premium storage method * update test.py for section based approach to use command line arguments (#76) * added README documentation per bug bush feedback (#78) * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * https://github.com/microsoft/DeepSeismic/issues/71 (#80) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing 
Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) 
* Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. 
Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files, like the line lengths, or the error messages we exclude or include? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.
Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified
section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until a stable release is available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update
* added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * addressing multiple issues from first bug bash (#81) * added README documentation per bug bash feedback * DOC: added HRNET download info to README * added hrnet download script and tested it * added legal headers to a few scripts.
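Several commits above move the experiment metrics onto ignite's confusion-matrix-based implementations (with an ignore_index for unlabeled pixels). As a rough, framework-free illustration of what such metrics compute — a plain-numpy sketch, not the repo's cv_lib code — per-class IoU falls out of the confusion matrix like this:

```python
import numpy as np

def confusion_matrix(pred, target, n_classes, ignore_index=None):
    # Accumulate an n_classes x n_classes matrix: rows = target, cols = prediction.
    pred, target = pred.ravel(), target.ravel()
    if ignore_index is not None:
        keep = target != ignore_index   # drop pixels labeled with ignore_index
        pred, target = pred[keep], target[keep]
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (target, pred), 1)    # unbuffered scatter-add of (row, col) pairs
    return cm

def iou_per_class(cm):
    # IoU_c = TP_c / (TP_c + FP_c + FN_c), i.e. diagonal over row+column union.
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return np.where(union > 0, tp / np.maximum(union, 1), 0.0)

target = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
cm = confusion_matrix(pred, target, n_classes=2)
iou = iou_per_class(cm)  # class 0: 1/2, class 1: 2/3
```

Mean IoU, class accuracy, and pixel accuracy are all similar reductions over the same accumulated matrix, which is why basing the metrics on a shared confusion matrix simplified the experiments.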
* changed /data to ~data in the main README * added Troubleshooting section to the README * Dciborow/build bug (#68) * Update unit_test_steps.yml * Update environment.yml * Update setup_step.yml * Update setup_step.yml * Update unit_test_steps.yml * Update setup_step.yml * Adds AzureML libraries (#82) * Adds azure dependencies * Adds AzureML components * Fixes download script (#84) * Fixes download script * Updates readme * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * modified hrnet notebook, addressing bug bash issues (#95) * Update environment.yml (#93) * Update environment.yml * Update environment.yml * tested both yml conda env and docker; updated conda yml to have docker sdk; added NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * notebook integration tests complete (#106) * added README documentation per bug bash feedback * HRNet notebook works with tests now * removed debug material from the notebook * corrected duplicate build names * conda init fix * changed setup deps * fixed F3 notebook - merge conflict and pytorch bug * main and notebook builds have functional setup now * Mat/test (#105) * added README documentation per bug bash feedback * Modifies scripts to run for only a few iterations when in debug/test mode * Updates training scripts and build * Making names unique * Fixes conda issue * HRNet notebook works with tests now * removed debug material from the notebook * corrected duplicate build names * conda init fix * Adds docstrings to training script * Testing something out * testing * test * test * test * test * test * test * test * test * test * test * test * adds seresnet * Modifies to work outside of git env * test * test * Fixes typo in DATASET * reducing steps * test * test * fixes the
argument * Altering batch size to fit k80 * reducing batch size further * test * test * test * test * fixes distributed * test * test * adds missing import * Adds further tests * test * updates * test * Fixes section script * test * testing everything once through * Final run for badge * changed setup deps, fixed F3 notebook * Adds missing tests (#111) * added missing tests * Adding fixes for test * reinstating all tests * Maxkaz/issues (#110) * added README documentation per bug bash feedback * added missing tests * closing out multiple post bug bash issues with single PR * Addressed comments * minor change * Adds Readme information to experiments (#112) * Adds readmes to experiments * Updates instructions based on feedback * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73)
* 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch mode, this line can be removed (and tested) from test.py: `h, w = img.shape[-2], img.shape[-1]  # height and width` * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
Realized there was one bug in the code, and the rest of the functions did not work with the different versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for Penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black; for style checking, flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks will still be enforced prior to the commit, to ensure consistency among contributors.
Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that? - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or
working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74)
* added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2)
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
  The main change is with the initialization of the sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders. This will affect your code if you access these attributes, e.g. if you have something like this in your experiments:
  ```
  train_set = TrainPatchLoader(…)
  patches = train_set.patches[train_set.split]
  ```
  or
  ```
  train_set = TrainSectionLoader(…)
  sections = train_set.sections[train_set.split]
  ```
* training testing for sections works
* minor changes
* reverting changes on dutchf3/local/default.py file
* added config file
* Updates the repo with preliminary results for 2D segmentation
* Merged PR 248: Experiment: section-based Alaudah training/testing
  This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.
* Merged PR 253: Waldeland based voxel loaders and TextureNet model
  Related work items: #16357
* Merged PR 290: A demo notebook on local train/eval on F3 data set
  Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.
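A hypothetical before/after sketch of the PR 236 loader change described above (the class name mirrors the repo's `TrainPatchLoader`, but the constructor and the `splits` table here are invented purely for illustration):

```python
# Invented split table, for illustration only.
splits = {"train": ["inline_0", "inline_1"], "val": ["inline_2"], "test": ["inline_3"]}

class TrainPatchLoader:
    """Sketch: after PR 236 a loader keeps only its own split's patches."""
    def __init__(self, split="train"):
        self.split = split
        # before: self.patches = splits   (every split, keyed by name)
        # after:  only the ids belonging to this loader's split
        self.patches = splits[split]

train_set = TrainPatchLoader("train")
# Code that previously read train_set.patches[train_set.split]
# now reads train_set.patches directly.
print(train_set.patches)
```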
  Related work items: #17432
* Merged PR 312: moved dutchf3_section to experiments/interpretation
  Related work items: #17683
* Merged PR 309: minor change to README to reflect the changes in prepare_data script
  Related work items: #17681
* Merged PR 315: Removing voxel exp
  Related work items: #17702
* sync with new experiment structure
* added a logging handler for array metrics
* first draft of metrics based on the ignite confusion matrix
* metrics now based on ignite.metrics
* modified patch train.py with new metrics
* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
  Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries we have listed in the conda yaml file. Also updated the download script.
  Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation
  A very small PR - just a few more lines of documentation in the notebook, to improve clarity.
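The metrics work listed above (confusion-matrix-based metrics with an `ignore_index`) can be sketched in plain numpy; the function names below are illustrative, and the repo's real implementations are built on ignite.metrics rather than this standalone code:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes, ignore_index=None):
    """Accumulate a confusion matrix, skipping pixels labelled ignore_index."""
    y_true, y_pred = np.ravel(y_true), np.ravel(y_pred)
    mask = np.ones_like(y_true, dtype=bool) if ignore_index is None else y_true != ignore_index
    idx = num_classes * y_true[mask] + y_pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def class_iou(cm):
    """Per-class IoU from a confusion matrix: diag / (row + col - diag)."""
    tp = np.diag(cm)
    return tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)

# Pixels labelled 255 (a common ignore value) do not contribute.
cm = confusion_matrix([0, 0, 1, 1, 255], [0, 1, 1, 1, 0], num_classes=2, ignore_index=255)
print(class_iou(cm))
```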
  Related work items: #17432
* Merged PR 368: Adds penobscot
  Adds for penobscot:
  - Dataset reader
  - Training script
  - Testing script
  - Section depth augmentation
  - Patch depth augmentation
  - Inline visualisation for Tensorboard
  Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators
  Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml
  removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb
  All other changes are due to trivial reruns
  Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking
  Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added:
  - .pre-commit-config.yaml - defines git hooks to be installed
  - .flake8 - settings for flake8 linter
  - pyproject.toml - settings for black formatter
  The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
  Some questions to start the discussion:
  - Do you want to change any of the default settings in the dotenv files - like the line lengths, or the error messages we exclude or include?
  - Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
  - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?
  Thanks!
  Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite
  Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph
  Changes:
  1) Updated demo notebook with the 3D visualization
  2) Formatting changes due to new black/flake8 git hook
  Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics
  This PR is dependent on the tests created in the previous branch !333; that's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on these tests, or the structure. As agreed, I'm using pytest.
  Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files
  Related work items: #18350
* Merged PR 586: Purging unused files and experiments
  Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments
  A few changes:
  - Instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite
  Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10)
* Update README with introduction to DeepSeismic
  Add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11)
* Updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Add jupytext
* Adds demo notebook for HRNet (#13)
* Adding TF 2.0 to allow for tensorboard vis in notebooks
* Modifies hrnet config for notebook
* Add HRNet notebook for demo
* Updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17)
* Update it to include sections for imaging
* Update README.md
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* tested both yml conda env and docker; updated conda yml to have docker sdk
* tested both yml conda env and docker; updated conda yml to have docker sdk; added
* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. 
Related work items: #16955 * merged tests into this branch * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * moved prepare data under scripts * removed untested model configs * removed untested model configs * fixed weird bug in penobscot data loader * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in Penobscot data loader - fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it's working if train.py ran without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * Removed redundant extract_metric_from * formatting changes in metrics * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified penobscot experiment to use new local metrics * modified 
section experimen to pass device to metrics * modified section experimen to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * added ignite-based metrics to dutch voxel experiment * removed apex metrics * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking dislaimer and new build names. 
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
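The PR 236 loader cleanup above changes how the sections/patches attributes are populated, which is why downstream experiments may break. A minimal sketch of the before/after access pattern (a hypothetical stand-in class for illustration only; the repo's real TrainPatchLoader takes different arguments):

```python
# Hypothetical stand-in illustrating the PR 236 change; not the repo's
# actual TrainPatchLoader. Before the cleanup, a loader kept every split
# and had to be indexed by its own split name; after, it keeps only the
# patches belonging to its split.

ALL_PATCHES = {"train": ["patch0", "patch1"], "val": ["patch2"]}

class OldTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_PATCHES          # all splits, unnecessarily

class NewTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        self.patches = ALL_PATCHES[split]   # only this loader's split

old, new = OldTrainPatchLoader("train"), NewTrainPatchLoader("train")
assert old.patches[old.split] == ["patch0", "patch1"]  # old access pattern
assert new.patches == ["patch0", "patch1"]             # new access pattern
```

Code that indexed `train_set.patches[train_set.split]` therefore needs to read the attribute directly after this change.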
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py
  This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran black formatter on the file, which created all the formatting changes (sorry!)
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings #89
* Add Troubleshooting section for DSVM warnings, plus typo #89
* tested both yml conda env and docker; updated conda yml to have docker sdk
* added "NVIDIA Tesla K80 (or V100 GPU for NCv2 series)" - per Vanja's comment
* Update README.md
* Remove related projects on AI Labs
* Added a reference to Azure machine learning (#115)
  Added a reference to Azure Machine Learning to show how folks can get started with using Azure Machine Learning
* Update README.md
* update fork from upstream (#4)
* fixed merge conflict resolution in LICENSE
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. 
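Several of the metric commits above mention modifying the metrics with ignore_index. As a hedged, pure-Python illustration of the idea (this function name and signature are made up for the sketch; the repo's actual metrics are ignite-based):

```python
def pixel_accuracy(pred, target, ignore_index=255):
    """Fraction of correct predictions, skipping targets equal to ignore_index.

    Illustrative stand-in only, not the repo's implementation: positions whose
    target label equals ignore_index are excluded from both numerator and
    denominator, so unlabeled pixels never affect the score.
    """
    kept = [(p, t) for p, t in zip(pred, target) if t != ignore_index]
    if not kept:
        return 0.0
    return sum(p == t for p, t in kept) / len(kept)

# The 255-labelled position is dropped, leaving 2 correct out of 3 kept:
print(pixel_accuracy([1, 2, 2, 0], [1, 2, 0, 255]))
```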
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
  Realized there was one bug in the code, and the rest of the functions did not work with the versions of the libraries listed in the conda yaml file. Also updated the download script. Related work items: #18264
* modified metrics with ignore_index
* Merged PR 405: minor mods to notebook, more documentation. A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432
* Merged PR 368: Adds penobscot. Adds for penobscot: - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Inline visualisation for Tensorboard. Related work items: #14560, #17697, #17699, #17700
* Merged PR 407: Azure ML SDK Version 1.0.65; running devito in AzureML Estimators. Related work items: #16362
* Merged PR 452: decouple docker image creation from azureml. Removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb. All other changes are due to trivial reruns. Related work items: #18346
* Merged PR 512: Pre-commit hooks for formatting and style checking. Opening this PR to start the discussion - I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking flake8. The following files are added: - .pre-commit-config.yaml - defines the git hooks to be installed - .flake8 - settings for the flake8 linter - pyproject.toml - settings for the black formatter. The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of the formatting/linting settings in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, ensuring consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotfiles, like the line lengths or the error messages we exclude or include? - Do we want a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. - Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333; that's why the PR merges tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and a top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel in the section loader was not necessary and was causing an issue) - removed config files that were not tested or
  working in Penobscot experiments - modified default.py so it works if train.py is run without a config file. Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names. (#9)
* Updating README.md with introduction material (#10): update README with introduction to DeepSeismic; add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11): updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Adds demo notebook for HRNet (#13): adding TF 2.0 to allow for tensorboard vis in notebooks; modifies hrnet config for notebook; adds HRNet notebook for demo; updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17): update it to include sections for imaging
* Update README.md
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7): finished version of numpy data loader; working training script for demo; adds the new metrics; fixes docstrings and adds header; removing extra setup.py
* Log config file now experiment specific (#8)
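Several entries above track moving the segmentation metrics onto a confusion-matrix basis (via ignite.metrics) with an ignore_index. The repo's actual implementation is not reproduced here; the following is a minimal, stdlib-only sketch of the underlying idea, with all function names invented for illustration:

```python
# Illustrative sketch of confusion-matrix-based segmentation metrics with an
# ignore_index, in the spirit of the ignite.metrics refactor described above.
# These names are NOT the repo's API; they exist only to show the mechanics.

def confusion_matrix(y_true, y_pred, num_classes, ignore_index=None):
    """Accumulate a num_classes x num_classes confusion matrix from flat labels."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        if t == ignore_index:
            continue  # masked pixels contribute nothing to any metric
        cm[t][p] += 1
    return cm

def pixel_accuracy(cm):
    """Fraction of non-ignored pixels on the confusion-matrix diagonal."""
    correct = sum(cm[c][c] for c in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total if total else 0.0

def mean_iou(cm):
    """Mean intersection-over-union over classes that appear in the data."""
    ious = []
    for c in range(len(cm)):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(len(cm))) - tp  # predicted c, was not c
        fn = sum(cm[c]) - tp                             # was c, predicted not c
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious) if ious else 0.0
```

For example, `pixel_accuracy(confusion_matrix([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], 3))` gives 0.8, and pixels labelled with the ignore_index (e.g. 255 for unlabelled regions) are excluded before either metric is computed.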
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83
* Add Troubleshooting section for DSVM warnings, plus typo #89
* tested both yml conda env and docker; updated conda yml to have docker sdk
* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment
* Update README.md
* Remove related projects on AI Labs
* Added a reference to Azure Machine Learning (#115): added a reference to Azure Machine Learning to show how folks can get started with using Azure Machine Learning
* Update README.md
* Update AUTHORS.md (#117)
* Update AUTHORS.md (#118)
* pre-release items (#119): added README documentation per bug bash feedback; added missing tests; closed out multiple post-bug-bash issues with a single PR; new badges in README; cleared notebook output; notebook links; fixed bad merge
* forked branch name is misleading.
  (#116)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
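Entries above repeatedly touch on the prepare_data work that enabled splitting the dataset into sections rather than only patches. As a hedged, stdlib-only illustration of what a reproducible section-level train/val split can look like (the function name, ratio, and shuffling strategy here are invented for this sketch and are not taken from the repo's prepare_data script):

```python
import random

def split_sections(num_inlines, val_ratio=0.2, seed=42):
    """Split inline-section indices of a seismic volume into train/val lists.

    Illustrative only: the repo's prepare_data script may split differently
    (e.g. contiguous ranges, or crossline sections as well).
    """
    indices = list(range(num_inlines))
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(indices)
    n_val = int(num_inlines * val_ratio)
    val = sorted(indices[:n_val])
    train = sorted(indices[n_val:])
    return train, val
```

Calling `split_sections(400)` yields disjoint train/val lists of 320 and 80 section indices, and repeating the call with the same seed reproduces the identical split.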
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. 
When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. 
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. 
Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. 
Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks!
Related work items: #18350
* Merged PR 513: 3D training script for Waldeland's model with Ignite. Related work items: #16356
* Merged PR 565: Demo notebook updated with 3D graph. Changes: 1) updated the demo notebook with the 3D visualization; 2) formatting changes due to the new black/flake8 git hook. Related work items: #17432
* Merged PR 341: Tests for cv_lib/metrics. This PR is dependent on the tests created in the previous branch !333; that's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest. Related work items: #16955
* merged tests into this branch
* Merged PR 569: Minor PR: change to pre-commit configuration files. Related work items: #18350
* Merged PR 586: Purging unused files and experiments. Related work items: #20499
* moved prepare data under scripts
* removed untested model configs
* fixed weird bug in penobscot data loader
* penobscot experiments working for hrnet, seresnet, no depth and patch depth
* removed a section loader bug in the penobscot loader
* fixed bugs in my previous 'fix'
* removed redundant _open_mask from subclasses
* Merged PR 601: Fixes to penobscot experiments. A few changes:
  - instructions in README on how to download and process the Penobscot and F3 2D data sets
  - moved prepare_data scripts to the scripts/ directory
  - fixed a weird issue with a class method in the Penobscot data loader
  - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue)
  - removed config files that were not tested or working in Penobscot experiments
  - modified default.py so it works if train.py is run without a config file
  Related work items: #20694
* Merged PR 605: added common metrics to Waldeland model in Ignite. Related work items: #19550
* Removed redundant extract_metric_from
* formatting changes in metrics
* modified penobscot experiment to use new local metrics
* modified section experiment to pass device to metrics
* moved metrics out of dutchf3, modified distributed to work with the new metrics
* fixed other experiments after new metrics
* removed apex metrics from distributed train.py
* added ignite-based metrics to dutch voxel experiment
* removed apex metrics
* modified penobscot test script to use new metrics
* pytorch-ignite pre-release with new metrics until stable available
* removed cell output from the F3 notebook
* deleted .vscode
* modified metric import in test_metrics.py
* separated metrics out as a module
* relative logger file path, modified section experiment
* removed the REPO_PATH from init
* created util logging function, and moved logging file to each experiment
* modified demo experiment
* modified penobscot experiment
* modified dutchf3_voxel experiment
* no logging in voxel2pixel
* modified dutchf3 patch local experiment
* modified patch distributed experiment
* modified interpretation notebook
* minor changes to comments
* DOC: forking disclaimer and new build names.
(#9)
* Updating README.md with introduction material (#10): update README with an introduction to DeepSeismic; add intro material for DeepSeismic
* Adding logo file
* Adding image to readme
* Update README.md
* Updates the 3D visualisation to use itkwidgets (#11): updates notebook to use itkwidgets for interactive visualisation
* Adds jupytext to pre-commit (#12)
* Adds demo notebook for HRNet (#13): adding TF 2.0 to allow for tensorboard vis in notebooks; modifies hrnet config for notebook; add HRNet notebook for demo; updates HRNet notebook and tidies F3
* removed my username references (#15)
* moving 3D models into contrib folder (#16)
* Weetok (#17): update it to include sections for imaging
* Update README.md
* fixed link for F3 download
* MINOR: python version fix to 3.6.7 (#72)
* Adding system requirements in README (#74)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7): finished version of numpy data loader; working training script for demo; adds the new metrics; fixes docstrings and adds header; removing extra setup.py
* Log config file now experiment specific (#8)
* Merging work on salt dataset
* Adds computer vision to dependencies
* Updates dependencies
* Update
* Updates the environment files
* Updates readme and envs
* Initial running version of dutchf3
* INFRA: added structure templates.
* VOXEL: initial rough code push; need to clean up before PRing.
* Working version
* Working version before refactor
* quick minor fixes in README
* 3D SEG: first commit for PR.
* 3D SEG: removed data files to avoid redistribution.
* Updates
* 3D SEG: restyled batch file, moving onto others.
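The metric commits above moved scoring onto a confusion matrix accumulated via ignite, with an `ignore_index` to mask unlabelled pixels. As a rough illustration of what such a metric computes, here is a minimal numpy sketch; the function names and shapes are mine, not the repo's cv_lib/ignite code:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes, ignore_index=None):
    """Accumulate an n_classes x n_classes confusion matrix.

    Pixels labelled `ignore_index` are dropped before counting,
    which is the masking behaviour the ignore_index commits refer to.
    """
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    if ignore_index is not None:
        keep = y_true != ignore_index
        y_true, y_pred = y_true[keep], y_pred[keep]
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)  # rows = truth, cols = prediction
    return cm

def mean_iou(cm):
    """Mean intersection-over-union derived from a confusion matrix."""
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return (tp / np.maximum(union, 1)).mean()  # guard against empty classes

# Toy labels: two classes, one pixel masked with 255.
cm = confusion_matrix([0, 0, 1, 1, 255], [0, 1, 1, 1, 0],
                      n_classes=2, ignore_index=255)
score = mean_iou(cm)
```

Class accuracy, frequency-weighted IoU, and the other scores mentioned in these commits are all different reductions of the same matrix, which is why centralising on a confusion-matrix metric simplified the experiments.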
* Working HRNet
* 3D SEG: finished going through Waldeland code
* Updates test scripts and makes it take processing arguments
* minor update
* Fixing imports
* Refactoring the experiments
* Removing .vscode
* Updates gitignore
* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script
* minor wording fix
* enabled splitting dataset into sections, rather than only patches
* merged duplicate ifelse blocks
* refactored prepare_data.py
* added scripts for section train test
* section train/test works for single channel input
* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py. This PR includes the following changes:
  - added README instructions for running f3dutch experiments
  - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues; there are no changes to the patch-based splitting logic
  - ran the black formatter on the file, which created all the formatting changes (sorry!)
* Merged PR 204: Adds loaders to deepseismic from cv_lib
* train and test script for section-based training/testing
* Merged PR 209: changes to section loaders in data.py. Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
  - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders
  - SectionLoader now swaps the H and W dims
  - when loading test data in patch scripts, this line can be removed (and tested) in test.py: h, w = img.shape[-2], img.shape[-1]  # height and width
* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.
* Merged PR 211: Fixes issues left over from changes to data.py
* removing experiments from deep_seismic, following the new struct
* Merged PR 220: Adds Horovod and fixes. Adds a Horovod training script, updates dependencies in the Horovod docker file, and removes hard-coding of a path in data.py.
* section train/test scripts
* Add cv_lib to repo and updates instructions
* Removes data.py and updates readme
* Updates requirements
* Merged PR 222: Moves cv_lib into repo and updates setup instructions
* renamed train/test scripts
* train test works on alaudah section experiments, a few minor bugs left
* cleaning up loaders
* Merged PR 236: Cleaned up dutchf3 data loaders. @<Mathew Salvaris>, @<Ilia Karmanov>, @<Max Kaznady>, please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments:

```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or

```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
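The dutchf3 loader cleanup (PR 236) changes what those `sections`/`patches` attributes hold: after the change, each loader keeps only the ids belonging to its own split rather than a dict covering every split. A toy mock of the before/after access pattern, assuming that reading of the PR text (the real `TrainPatchLoader` in the repo is far more involved):

```python
# Hypothetical illustration of the PR 236 loader cleanup; class and
# attribute names mimic the commit message, not the repo's actual code.
ALL_SPLITS = {
    "train": ["patch_0", "patch_1"],
    "val": ["patch_2"],
}

class OldTrainPatchLoader:
    """Pre-PR-236: every loader carried all splits."""
    def __init__(self, split):
        self.split = split
        self.patches = ALL_SPLITS           # dict of every split
        # callers had to index: loader.patches[loader.split]

class NewTrainPatchLoader:
    """Post-PR-236: a loader keeps only its own split's ids."""
    def __init__(self, split):
        self.split = split
        self.patches = ALL_SPLITS[split]    # this split only

old = OldTrainPatchLoader("train")
new = NewTrainPatchLoader("train")
assert old.patches[old.split] == new.patches  # same data, simpler access
```

Under this reading, experiment code doing `train_set.patches[train_set.split]` keeps working only by accident or not at all, which is why the PR asks downstream users to check their experiments.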
* added system requirements to readme
* sdk 1.0.76; tested conda env vs docker image; extended readme
* removed reference to imaging
* minor md formatting
* clarify which DSVM we want to use: Ubuntu GPU-enabled VM, preferably NC12 (Issue #83)
* Add Troubleshooting section for DSVM warnings (#89)
* Add Troubleshooting section for DSVM warnings, plus typo (#89)
* tested both the yml conda env and docker; updated the conda yml to have the docker sdk
* NVIDIA Tesla K80 (or V100 GPU for NCv2 series), per Vanja's comment
* Update README.md
* BugBash2 Issue #83 and #89: clarify which DSVM we want to use: Ubuntu GPU-enabled VM, preferably NC12 (#88) (#2)
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing
* merge upstream into my fork (#1)
* MINOR: addressing broken F3 download link (#73)
* Update main_build.yml for Azure Pipelines
* BUILD: added build status badges (#6)
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)
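Several commits above enable splitting the dataset into sections as well as patches. To make that distinction concrete, a hedged numpy sketch with toy shapes; the real prepare_data.py script's interface, split bookkeeping, and volume geometry are different:

```python
import numpy as np

def split_into_sections(volume):
    """Yield full 2D inline sections from an (inline, xline, depth) volume."""
    for i in range(volume.shape[0]):
        yield volume[i]                      # one (xline, depth) section

def split_into_patches(volume, patch, stride):
    """Yield square 2D patches cut from each inline section."""
    for section in split_into_sections(volume):
        h, w = section.shape
        for r in range(0, h - patch + 1, stride):
            for c in range(0, w - patch + 1, stride):
                yield section[r:r + patch, c:c + patch]

vol = np.zeros((3, 8, 8), dtype=np.float32)            # tiny toy volume
sections = list(split_into_sections(vol))              # 3 sections, each 8x8
patches = list(split_into_patches(vol, patch=4, stride=4))
```

Section-based training feeds whole slices to the network (as in Alaudah's experiments), while patch-based training samples many small windows per slice; the prepare_data refactor made both splits available from the same volume.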
* Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependecies * Update * Updates the environemnt files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others. * Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * minor wording fix * enabled splitting dataset into sections, rather than only patches * enabled splitting dataset into sections, rather than only patches * merged duplicate ifelse blocks * merged duplicate ifelse blocks * refactored prepare_data.py * refactored prepare_data.py * added scripts for section train test * added scripts for section train test * section train/test works for single channel input * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. 
- ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section based training/testing * train and test script for section based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders. - SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * section train/test scripts * Add cv_lib to repo and updates instructions * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Removes data.py and updates readme * Updates requirements * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments. 
The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders. This will affect your code if you access these attributes. E.g. if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * training testing for sections works * minor changes * minor changes * reverting changes on dutchf3/local/default.py file * reverting changes on dutchf3/local/default.py file * added config file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. 
Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * sync with new experiment structure * added a logging handler for array metrics * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * metrics now based on ignite.metrics * modified patch train.py with new metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo. Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - Just a few more lines of documentation in the notebook, to improve clarity. 
Related work items: #17432 * Merged PR 368: Adds penobscot Adds for penobscot - Dataset reader - Training script - Testing script - Section depth augmentation - Patch depth augmentation - Iinline visualisation for Tensorboard Related work items: #14560, #17697, #17699, #17700 * Merged PR 407: Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Azure ML SDK Version: 1.0.65; running devito in AzureML Estimators Related work items: #16362 * Merged PR 452: decouple docker image creation from azureml removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb All other changes are due to trivial reruns Related work items: #18346 * Merged PR 512: Pre-commit hooks for formatting and style checking Opening this PR to start the discussion - I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added: - .pre-commit-config.yaml - defines git hooks to be installed - .flake8 - settings for flake8 linter - pyproject.toml - settings for black formatter The last two files define the formatting and linting style we want to enforce on the repo. All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors. Some questions to start the discussion: - Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that. - Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file. 
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this? Thanks! Related work items: #18350 * Merged PR 513: 3D training script for Waldeland's model with Ignite Related work items: #16356 * Merged PR 565: Demo notebook updated with 3D graph Changes: 1) Updated demo notebook with the 3D visualization 2) Formatting changes due to the new black/flake8 git hook Related work items: #17432 * Merged PR 341: Tests for cv_lib/metrics This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches); I can change this once vapaunic/metrics is merged. I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top-level /tests for integration testing. Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.
Related work items: #16955 * merged tests into this branch * Merged PR 569: Minor PR: change to pre-commit configuration files Related work items: #18350 * Merged PR 586: Purging unused files and experiments Related work items: #20499 * moved prepare data under scripts * removed untested model configs * fixed weird bug in penobscot data loader * penobscot experiments working for hrnet, seresnet, no depth and patch depth * removed a section loader bug in the penobscot loader * fixed bugs in my previous 'fix' * removed redundant _open_mask from subclasses * Merged PR 601: Fixes to penobscot experiments A few changes: - Instructions in README on how to download and process the Penobscot and F3 2D data sets - moved prepare_data scripts to the scripts/ directory - fixed a weird issue with a class method in the Penobscot data loader - fixed a bug in the section loader (_add_extra_channel was not necessary and was causing an issue) - removed config files that were not tested or working in Penobscot experiments - modified default.py so it works if train.py is run without a config file Related work items: #20694 * Merged PR 605: added common metrics to Waldeland model in Ignite Related work items: #19550 * Removed redundant extract_metric_from * formatting changes in metrics * modified penobscot experiment to use new local metrics * modified
section experiment to pass device to metrics * moved metrics out of dutchf3, modified distributed to work with the new metrics * fixed other experiments after new metrics * removed apex metrics from distributed train.py * added ignite-based metrics to dutch voxel experiment * removed apex metrics * modified penobscot test script to use new metrics * pytorch-ignite pre-release with new metrics until stable available * removed cell output from the F3 notebook * deleted .vscode * modified metric import in test_metrics.py * separated metrics out as a module * relative logger file path, modified section experiment * removed the REPO_PATH from init * created util logging function, and moved logging file to each experiment * modified demo experiment * modified penobscot experiment * modified dutchf3_voxel experiment * no logging in voxel2pixel * modified dutchf3 patch local experiment * modified patch distributed experiment * modified interpretation notebook * minor changes to comments * DOC: forking disclaimer and new build names.
(#9) * Updating README.md with introduction material (#10) * Update README with introduction to DeepSeismic Add intro material for DeepSeismic * Adding logo file * Adding image to readme * Update README.md * Updates the 3D visualisation to use itkwidgets (#11) * Updates notebook to use itkwidgets for interactive visualisation * Adds jupytext to pre-commit (#12) * Add jupytext * Adds demo notebook for HRNet (#13) * Adding TF 2.0 to allow for tensorboard vis in notebooks * Modifies hrnet config for notebook * Add HRNet notebook for demo * Updates HRNet notebook and tidies F3 * removed my username references (#15) * moving 3D models into contrib folder (#16) * Weetok (#17) * Update it to include sections for imaging * Update README.md * fixed link for F3 download * MINOR: python version fix to 3.6.7 (#72) * Adding system requirements in README (#74) * Update main_build.yml for Azure Pipelines * BUILD: added build status badges (#6) * Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7) * Finished version of numpy data loader * Working training script for demo * Adds the new metrics * Fixes docstrings and adds header * Removing extra setup.py * Log config file now experiment specific (#8) * Merging work on salt dataset * Adds computer vision to dependencies * Updates dependencies * Update * Updates the environment files * Updates readme and envs * Initial running version of dutchf3 * INFRA: added structure templates. * VOXEL: initial rough code push - need to clean up before PRing. * Working version * Working version before refactor * quick minor fixes in README * 3D SEG: first commit for PR. * 3D SEG: removed data files to avoid redistribution. * Updates * 3D SEG: restyled batch file, moving onto others.
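The numpy dataset loader and the section-versus-patch splitting these log entries refer to can be sketched roughly as follows. This is a hypothetical illustration only: the function names (`split_into_sections`, `split_into_patches`) and the (inline, crossline, depth) axis convention are assumptions, not the repo's actual deepseismic_interpretation API.

```python
# Hypothetical sketch: slicing a 3D seismic volume stored as a numpy array
# into full 2D sections, and a section into square training patches.
import numpy as np

def split_into_sections(volume):
    """Yield full 2D inline sections from a 3D (inline, xline, depth) volume."""
    for i in range(volume.shape[0]):
        yield volume[i]  # each section has shape (xline, depth)

def split_into_patches(section, patch_size, stride):
    """Yield square patches from a 2D section, scanning with a fixed stride."""
    h, w = section.shape
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            yield section[r:r + patch_size, c:c + patch_size]

volume = np.random.rand(4, 64, 64)  # toy stand-in for a real seismic cube
sections = list(split_into_sections(volume))
patches = list(split_into_patches(sections[0], patch_size=32, stride=32))
print(len(sections), len(patches), patches[0].shape)  # 4 sections, 4 patches of 32x32
```

A real loader would additionally normalise amplitudes and pair each section or patch with its label mask; the sketch only shows the geometry of the two split modes.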
* Working HRNet * 3D SEG: finished going through Waldeland code * Updates test scripts and makes it take processing arguments * minor update * Fixing imports * Refactoring the experiments * Removing .vscode * Updates gitignore * added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script * minor wording fix * enabled splitting dataset into sections, rather than only patches * merged duplicate if/else blocks * refactored prepare_data.py * added scripts for section train/test * section train/test works for single channel input * Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py This PR includes the following changes: - added README instructions for running f3dutch experiments - prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic. - ran black formatter on the file, which created all the formatting changes (sorry!) * Merged PR 204: Adds loaders to deepseismic from cv_lib * train and test script for section-based training/testing * Merged PR 209: changes to section loaders in data.py Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts: - get_train_loader() in train.py should be changed to get_patch_loader(); I created separate functions to load section and patch loaders. - SectionLoader now swaps H and W dims.
When loading test data in patch, this line can be removed (and tested) from test.py: h, w = img.shape[-2], img.shape[-1] # height and width * Merged PR 210: BENCHMARKS: added placeholder for benchmarks. * Merged PR 211: Fixes issues left over from changes to data.py * removing experiments from deep_seismic, following the new struct * Merged PR 220: Adds Horovod and fixes Add Horovod training script Updates dependencies in Horovod docker file Removes hard coding of path in data.py * section train/test scripts * Add cv_lib to repo and updates instructions * Removes data.py and updates readme * Updates requirements * Merged PR 222: Moves cv_lib into repo and updates setup instructions * renamed train/test scripts * train test works on alaudah section experiments, a few minor bugs left * cleaning up loaders * Merged PR 236: Cleaned up dutchf3 data loaders @<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check whether this PR will affect your experiments. The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader; similarly for test loaders. This will affect your code if you access these attributes. E.g.
if you have something like this in your experiments: ``` train_set = TrainPatchLoader(…) patches = train_set.patches[train_set.split] ``` or ``` train_set = TrainSectionLoader(…) sections = train_set.sections[train_set.split] ``` * training testing for sections works * minor changes * reverting changes on dutchf3/local/default.py file * added config file * Updates the repo with preliminary results for 2D segmentation * Merged PR 248: Experiment: section-based Alaudah training/testing This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment. * Merged PR 253: Waldeland based voxel loaders and TextureNet model Related work items: #16357 * Merged PR 290: A demo notebook on local train/eval on F3 data set Notebook and associated files + minor change in a patch_deconvnet_skip.py model file. Related work items: #17432 * Merged PR 312: moved dutchf3_section to experiments/interpretation Related work items: #17683 * Merged PR 309: minor change to README to reflect the changes in prepare_data script Related work items: #17681 * Merged PR 315: Removing voxel exp Related work items: #17702 * sync with new experiment structure * added a logging handler for array metrics * first draft of metrics based on the ignite confusion matrix * metrics now based on ignite.metrics * modified patch train.py with new metrics * Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.
Realized there was one bug in the code and the rest of the functions did not work with the different versions of the libraries we have listed in the conda yaml file. Also updated the download script. Related work items: #18264 * modified metrics with ignore_index * Merged PR 405: minor mods to notebook, more documentation A very small PR - just a few more lines of documentation in the notebook, to improve clarity. Related work items: #17432 * added system requirements to readme * sdk 1.0.76; tested conda env vs docker image; extended readme * removed reference to imaging * minor md formatting * clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83 * Add Troubleshooting section for DSVM warnings #89 * Add Troubleshooting section for DSVM warnings, plus typo #89 * tested both yml conda env and docker; updated conda yml to have docker sdk; added * NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment * Update README.md * BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 (#88) (#3) * azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing * merge upstream into my fork (#1) * MINOR: addressing broken F3 download link (#73) * Update main_build.yml for Azure Pipelines * BUILD: added build stat… * Minor fix: broken links in README (#120) * fully-run notebooks links and fixed contrib voxel
models (#123) * added README documentation per bug bash feedback * added missing tests * - added notebook links - made sure original voxel2pixel code runs * update ignite port of texturenet * resolved merge conflict * formatting change * Adds reproduction instructions to readme (#122)
* Updates notebook to use itkwidgets for interactive visualisation * Further updates * Fixes merge conflicts * removing files * Adding reproduction experiment instructions to readme * checking in
ablation study from ilkarman (#124) tests pass but final results aren't communicated to github. No way to trigger another commit other than to do a dummy commit
2019-12-17 15:14:43 +03:00
"commitHash": "12102683a1ae78f8fbc953823c35a43b151194b3"
}
},
"license": "MIT"
},
{
"component": {
"type": "git",
"git": {
"repositoryUrl": "https://github.com/waldeland/CNN-for-ASI",
"commitHash": "6f985cccecf9a811565d0b7cd919412569a22b7b"
}
},
"license": "MIT"
},
{
"component": {
"type": "git",
"git": {
"repositoryUrl": "https://github.com/opesci/devito",
"commitHash": "f6129286d9c0b3a8bfe07e724ac5b00dc762efee"
}
},
"license": "MIT"
},
{
"component": {
"type": "git",
"git": {
"repositoryUrl": "https://github.com/pytorch/ignite",
"commitHash": "38a4f37de759e33bc08441bde99bcb50f3d81f55"
}
},
"license": "BSD-3-Clause"
},
{
"component": {
"type": "git",
"git": {
"repositoryUrl": "https://github.com/HRNet/HRNet-Semantic-Segmentation",
"commitHash": "06142dc1c7026e256a7561c3e875b06622b5670f"
}
},
"license": "MIT"
},
{
"component": {
"type": "git",
"git": {
"repositoryUrl": "https://github.com/dask/dask",
"commitHash": "54019e9c05134585c9c40e4195206aa78e2ea61a"
}
},
"license": "IPL-1.0"
}
],
"Version": 1
}
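For reference, every registration in this manifest follows the same shape: a `component` object wrapping a `git` descriptor (repository URL plus pinned commit hash) and a declared `license`. A minimal sketch that parses one such entry with Python's standard `json` module; the top-level `"Registrations"` key is an assumption, since the fragment above begins mid-file, and the entry shown simply copies the devito registration from this manifest:

```python
import json

# One cgmanifest-style registration, mirroring the entries above.
# The "Registrations"/"Version" wrapper is assumed; only the entry
# bodies and "Version": 1 are visible in this file fragment.
manifest = """
{
  "Registrations": [
    {
      "component": {
        "type": "git",
        "git": {
          "repositoryUrl": "https://github.com/opesci/devito",
          "commitHash": "f6129286d9c0b3a8bfe07e724ac5b00dc762efee"
        }
      },
      "license": "MIT"
    }
  ],
  "Version": 1
}
"""

data = json.loads(manifest)
for entry in data["Registrations"]:
    git = entry["component"]["git"]
    # Each component is identified by repo URL + pinned commit hash.
    print(git["repositoryUrl"], git["commitHash"][:7], entry["license"])
```

A quick check like this is handy before committing manifest edits, since a stray comma makes the whole file unparseable for the component-governance tooling.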