Add LossAnalysisCallback to plot loss heatmaps + scatter plots and save
loss ranks and their statistics.
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Relax version constraints from ==1.43.0 to >=1.42.0. Also relax the
constraint for protobuf to <4.0
This breaks the cleanup code of 4 tests, which will be fixed in a follow-up PR
Merge #612 from main:
- Move all Amulet-related code to a separate module
- Add bug fix for `RANK` environment to `hi-ml` runner
- Improve documentation and add standalone example script
* add global similarity comparison between text and image inputs
* rename the file to address pytest issue
* update method naming -- PR comment
* add support for multi prompt similarities
* update tests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update function naming
* Improve method docstring
* fix the issue with normalising embeddings twice
* add initial version of zero-shot classification
* finalise the zero-shot classification test
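The commits above describe comparing global similarity between text and image embeddings and using it for zero-shot classification, including a fix for normalising embeddings twice. A minimal sketch of that idea, using only plain Python (the function names and the flat-list embedding format are illustrative, not the actual hi-ml-multimodal API):

```python
import math


def l2_normalize(vec):
    """Scale a vector to unit L2 norm. Apply exactly once per embedding:
    re-normalizing is mathematically idempotent but wasted work."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]


def zero_shot_classify(image_embedding, prompt_embeddings):
    """Return the index of the text prompt whose embedding has the highest
    cosine similarity with the image embedding."""
    image_embedding = l2_normalize(image_embedding)
    similarities = []
    for prompt in prompt_embeddings:
        prompt = l2_normalize(prompt)
        # Dot product of two unit vectors == cosine similarity.
        similarities.append(sum(a * b for a, b in zip(image_embedding, prompt)))
    return max(range(len(similarities)), key=similarities.__getitem__)
```

With multi-prompt support, each class can contribute several prompt embeddings and the per-class similarities can be averaged before taking the argmax.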
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Fernando Pérez-García <fperezgarcia@microsoft.com>
* Add command-line arguments to parametrise the child runs' argument identifier and the primary metric used to select best_epoch.
* Rename all crossval functions to hyperdrive ones to make them generic
For small validation sets and poorly performing models, we sometimes end up with slides falsely assigned to the TP/FN cases.
We need to make sure that top_heaps contain only true cases and bottom heaps only false ones.
Check the predicted vs. true label before pushing into slides_heaps, so that we only push true predictions into top_slides_heaps and false ones into bottom_slides_heaps
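The guard described above can be sketched as follows (the function, parameter names, and the fixed-size min-heap layout are illustrative, not the actual hi-ml-histopathology implementation):

```python
import heapq


def update_slides_heaps(top_heap, bottom_heap, slide_id, score,
                        pred_label, true_label, k=10):
    """Push a slide into top_heap only when the prediction is correct,
    and into bottom_heap only when it is wrong, keeping at most the k
    highest-scoring entries per heap (heaps hold (score, slide_id))."""
    if pred_label == true_label:
        heapq.heappush(top_heap, (score, slide_id))
        if len(top_heap) > k:
            heapq.heappop(top_heap)  # drop the lowest-scoring true case
    else:
        heapq.heappush(bottom_heap, (score, slide_id))
        if len(bottom_heap) > k:
            heapq.heappop(bottom_heap)  # drop the lowest-scoring false case
```

Checking `pred_label == true_label` before the push is what prevents a falsely confident slide from ever entering the top heap, regardless of its score.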
* ENH: Enable logging to AzureML when running training outside AzureML (#580)
* FIX: Update env var settings for multi node (#588)
* ENH: Add a way of quickly starting runs with different seeds (#597)
* ENH: Submit jobs to Singularity via Amulet (#596)
In this PR:
- transformer_dropout parameter is added to TransformerPooling and TransformerPoolingBenchmark pooling layers.
- Tests are updated with the transformer_dropout parameter.
In this PR:
- The average precision metric is added for the multi-class (n_classes > 1) case.
- AUROC is modified so that num_classes=None in the binary case, as prescribed in the PyTorch documentation: https://torchmetrics.readthedocs.io/en/stable/classification/auroc.html.
- The num_classes parameter is no longer explicitly given in Specificity, in line with the other binary metrics.
- The hardcoded threshold=0.5 is removed from binary metrics, since it is the default value.
- The confusion matrix is normalized over true values.
- The metrics are reorganized for readability.
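"Normalized over true values" means each row of the confusion matrix (one true class) is divided by its row sum, so every row of the result sums to 1 and reads as per-class recall. A minimal plain-Python sketch of that convention (torchmetrics exposes the same behaviour via its `normalize="true"` option):

```python
def normalize_confusion_matrix(cm):
    """Normalize a confusion matrix over true values: divide each row
    (one true class) by its sum, so each row of the result sums to 1.
    Rows with no samples are left as all zeros to avoid division by zero."""
    normalized = []
    for row in cm:
        total = sum(row)
        normalized.append([x / total if total else 0.0 for x in row])
    return normalized
```

For example, a binary matrix `[[2, 2], [0, 4]]` normalizes to `[[0.5, 0.5], [0.0, 1.0]]`: the model recovers half of class 0 and all of class 1.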
Index the datasets' dataframe by tile_id or slide_id only when necessary (i.e. when the current index is not already set to the tile_id or slide_id column)
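The conditional re-indexing above can be sketched with pandas as follows (the helper name is illustrative, not the actual hi-ml dataset API):

```python
import pandas as pd


def index_by(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Set `column` as the dataframe index only if it is not already the
    index, avoiding a redundant (and potentially expensive) set_index."""
    if df.index.name == column:
        return df  # already indexed as requested; return unchanged
    return df.set_index(column)
```

Skipping the `set_index` call when the index is already correct also preserves object identity, so downstream code can rely on the same dataframe instance.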
* update check