* update pre-commit

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Apply suggestions from code review

Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Jirka Borovec 2021-04-28 19:51:15 +02:00 committed by GitHub
Parent cceced613f
Commit 6e33ab07da
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
21 changed files with 76 additions and 76 deletions


@@ -48,4 +48,3 @@ comment:
require_changes: false
behavior: default # update if exists else create new
# branches: *

10 .github/CONTRIBUTING.md vendored

@@ -1,7 +1,7 @@
# Contributing
Welcome to the Torchmetrics community! We're building the largest collection of native PyTorch metrics, with the
goal of reducing boilerplate and increasing reproducibility.
goal of reducing boilerplate and increasing reproducibility.
## Contribution Types
@@ -24,7 +24,7 @@ We are always looking for help implementing new features or fixing bugs.
3. Submit a PR!
_**Note**, even if you do not find the solution, sending a PR with a test covering the issue is a valid contribution and we can
_**Note**, even if you do not find the solution, sending a PR with a test covering the issue is a valid contribution and we can
help you or finish it with you :]_
### New Features:
@@ -42,7 +42,7 @@ help you or finish it with you :]_
### Test cases:
Want to keep Torchmetrics healthy? Love seeing those green tests? So do we! How do we keep it that way?
Want to keep Torchmetrics healthy? Love seeing those green tests? So do we! How do we keep it that way?
We write tests! We value test contributions even more than new features. One of the core values of torchmetrics
is that our users can trust our metric implementations. We can only guarantee this if our metrics are well tested.
@@ -59,9 +59,9 @@ To build the documentation locally, simply execute the following commands from p
### Original code
All added or edited code shall be the original work of the particular contributor.
If you use some third-party implementation, all such blocks/functions/modules shall be properly referenced and, if
If you use some third-party implementation, all such blocks/functions/modules shall be properly referenced and, if
possible, also agreed to by the code's author. For example - `This code is inspired from http://...`.
If you are adding new dependencies, make sure that they are compatible with the actual Torchmetrics license
If you are adding new dependencies, make sure that they are compatible with the actual Torchmetrics license
(i.e. dependencies should be _at least_ as permissive as the Torchmetrics license).
### Coding Style

2 .github/ISSUE_TEMPLATE/bug_report.md vendored

@@ -25,7 +25,7 @@ Steps to reproduce the behavior:
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior

2 .github/ISSUE_TEMPLATE/documentation.md vendored

@@ -12,7 +12,7 @@ assignees: ''
For typos and doc fixes, please go ahead and:
1. Create an issue.
2. Fix the typo.
2. Fix the typo.
3. Submit a PR.
Thanks!

8 .github/PULL_REQUEST_TEMPLATE.md vendored

@@ -2,14 +2,14 @@
- [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [ ] Did you read the [contributor guideline](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Fixes # (issue).
## PR review
Anyone in the community is free to review the PR once the tests have passed.
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
## Did you have fun?

1 .github/workflows/docs-check.yml vendored

@@ -97,4 +97,3 @@ jobs:
name: docs-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}
path: docs/build/html/
if: success()

2 .github/workflows/docs-deploy.yml vendored

@@ -56,4 +56,4 @@ jobs:
FOLDER: docs/build/html # The folder the action should deploy.
CLEAN: true # Automatically remove deleted files from the deploy branch
TARGET_FOLDER: docs # If you'd like to push the contents of the deployment folder into a specific directory
if: success()
if: success()


@@ -15,36 +15,39 @@
default_language_version:
python: python3.8
ci:
autofix_prs: true
autoupdate_commit_msg: '[pre-commit.ci] pre-commit suggestions'
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
rev: v3.4.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: local
- repo: https://github.com/PyCQA/isort
rev: 5.8.0
hooks:
- id: isort
name: imports
entry: isort
args: [--settings-path, ./pyproject.toml]
language: system
types: [python]
language: python
args: [--settings-path, "./pyproject.toml"]
- repo: https://github.com/pre-commit/mirrors-yapf
rev: v0.29.0
hooks:
- id: yapf
name: formatting
entry: yapf
args: [ --parallel ]
language: system
types: [python]
args: [--parallel]
language: python
- repo: https://github.com/PyCQA/flake8
rev: 3.9.1
hooks:
- id: flake8
name: PEP8
entry: flake8
language: system
types: [python]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.790
hooks:
- id: mypy
language: python


@@ -27,4 +27,3 @@ docs: clean
env:
pip install -r requirements.txt
pip install -r requirements/test.txt


@@ -96,7 +96,7 @@ For the single GPU/CPU case:
``` python
import torch
# import our library
import torchmetrics
import torchmetrics
# initialize metric
metric = torchmetrics.Accuracy()
@@ -109,14 +109,14 @@ for i in range(n_batches):
# metric on current batch
acc = metric(preds, target)
print(f"Accuracy on batch {i}: {acc}")
print(f"Accuracy on batch {i}: {acc}")
# metric on all batches using custom accumulation
acc = metric.compute()
print(f"Accuracy on all data: {acc}")
```
Module metric usage remains the same when using multiple GPUs or multiple nodes.
Module metric usage remains the same when using multiple GPUs or multiple nodes.
<details>
<summary>Example using DDP</summary>
@@ -165,7 +165,7 @@ def metric_ddp(rank, world_size):
# metric on current batch
acc = metric(preds, target)
if rank == 0: # print only for rank 0
print(f"Accuracy on batch {i}: {acc}")
print(f"Accuracy on batch {i}: {acc}")
# metric on all batches and all accelerators using custom accumulation
# accuracy is same across both accelerators
@@ -174,7 +174,7 @@ def metric_ddp(rank, world_size):
# Resetting internal state such that the metric is ready for new data
metric.reset()
# cleanup
dist.destroy_process_group()
@@ -197,7 +197,7 @@ from torchmetrics import Metric
class MyAccuracy(Metric):
def __init__(self, dist_sync_on_step=False):
# call `self.add_state` for every internal state that is needed for the metric's computations
# dist_reduce_fx indicates the function that should be used to reduce
# dist_reduce_fx indicates the function that should be used to reduce
# state from multiple processes
super().__init__(dist_sync_on_step=dist_sync_on_step)
@@ -240,7 +240,7 @@ acc = torchmetrics.functional.accuracy(preds, target)
* [AveragePrecision](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#averageprecision)
* [AUC](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#auc)
* [AUROC](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#auroc)
* [F1](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#f1)
* [F1](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#f1)
* [Hamming Distance](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#hamming-distance)
* [ROC](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#roc)
* [ExplainedVariance](https://torchmetrics.readthedocs.io/en/latest/references/modules.html#explainedvariance)
@@ -252,19 +252,19 @@ acc = torchmetrics.functional.accuracy(preds, target)
And many more!
## Contribute!
The Lightning + TorchMetrics team is hard at work adding even more metrics.
The Lightning + TorchMetrics team is hard at work adding even more metrics.
But we're looking for incredible contributors like you to submit new metrics
and improve existing ones!
Join our [Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
Join our [Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
to get help becoming a contributor!
## Community
For help or questions, join our huge community on [Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)!
## Citations
We're excited to continue the strong legacy of open source software and have been inspired over the years by
Caffe, Theano, Keras, PyTorch, torchbearer, ignite, sklearn and fast.ai. When/if a paper is written about this,
We're excited to continue the strong legacy of open source software and have been inspired over the years by
Caffe, Theano, Keras, PyTorch, torchbearer, ignite, sklearn and fast.ai. When/if a paper is written about this,
we'll be happy to cite these frameworks and the corresponding authors.
## License


@@ -1,2 +1,2 @@
make clean
make html --debug --jobs $(nproc)
make html --debug --jobs $(nproc)


@@ -16,4 +16,4 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


@@ -45,8 +45,8 @@ Module metrics
:options: +ELLIPSIS, +NORMALIZE_WHITESPACE
Accuracy on batch ...
Module metric usage remains the same when using multiple GPUs or multiple nodes.
Module metric usage remains the same when using multiple GPUs or multiple nodes.
Functional metrics
@@ -62,7 +62,7 @@ Functional metrics
target = torch.randint(5, (10,))
acc = torchmetrics.functional.accuracy(preds, target)
Implementing a metric
~~~~~~~~~~~~~~~~~~~~~
@@ -72,7 +72,7 @@ Implementing a metric
class MyAccuracy(Metric):
def __init__(self, dist_sync_on_step=False):
# call `self.add_state` for every internal state that is needed for the metric's computations
# dist_reduce_fx indicates the function that should be used to reduce
# dist_reduce_fx indicates the function that should be used to reduce
# state from multiple processes
super().__init__(dist_sync_on_step=dist_sync_on_step)
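
The snippet above shows only the start of the custom metric. A minimal, self-contained sketch of how such a metric could be completed (the correct/total formulation and the `argmax` handling are assumptions for illustration, not the file's exact continuation):

```python
import torch
import torchmetrics

class MyAccuracy(torchmetrics.Metric):
    def __init__(self, dist_sync_on_step=False):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        # one state per statistic; dist_reduce_fx="sum" sums each state across processes
        self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor):
        # reduce class probabilities to predicted labels if needed
        if preds.ndim > target.ndim:
            preds = preds.argmax(dim=-1)
        self.correct += (preds == target).sum()
        self.total += target.numel()

    def compute(self):
        return self.correct.float() / self.total
```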


@@ -100,16 +100,16 @@ and tests gets formatted in the following way:
3. ``new_metric(...)``: essentially wraps the ``_update`` and ``_compute`` private functions into one public function that
makes up the functional interface for the metric.
.. note::
.. note::
The `functional accuracy <https://github.com/PyTorchLightning/metrics/blob/master/torchmetrics/functional/classification/accuracy.py>`_
metric is a great example of this division of logic.
metric is a great example of this division of logic.
3. In a corresponding file placed in ``torchmetrics/"domain"/"new_metric".py`` create the module interface:
1. Create a new module metric by subclassing ``torchmetrics.Metric``.
2. In the ``__init__`` of the module call ``self.add_state`` for as many metric states as are needed for the metric to
properly accumulate metric statistics.
3. The module interface should essentially call the private ``_new_metric_update(...)`` in its `update` method and similarly the
3. The module interface should essentially call the private ``_new_metric_update(...)`` in its `update` method and similarly the
``_new_metric_compute(...)`` function in its ``compute``. No logic should really be implemented in the module interface.
We do this to not have duplicate code to maintain.
@@ -130,14 +130,14 @@ and tests gets formatted in the following way:
respectively tests the module interface and the functional interface.
4. The testclass should be parameterized (using ``@pytest.mark.parametrize``) by the different test inputs defined initially.
Additionally, the ``test_"new_metric"_class`` method should also be parameterized with an ``ddp`` parameter such that it gets
tested in a distributed setting. If your metric has additional parameters, then make sure to also parameterize these
tested in a distributed setting. If your metric has additional parameters, then make sure to also parameterize these
such that different combinations of inputs and parameters get tested.
5. (optional) If your metric raises any exception, please add tests that showcase this.
.. note::
The `test file for accuracy <https://github.com/PyTorchLightning/metrics/blob/master/tests/classification/test_accuracy.py>`_ metric
shows how to implement such tests.
shows how to implement such tests.
If you can only figure out part of the steps, do not be afraid to send a PR. We would much rather receive working
metrics that are not formatted exactly like our codebase than receive none at all. Formatting can always be applied.
metrics that are not formatted exactly like our codebase than receive none at all. Formatting can always be applied.
We will gladly guide and/or help implement the remaining :]
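
To make the functional split described above concrete, here is a small sketch; the names ``_new_metric_update``, ``_new_metric_compute`` and ``new_metric`` are placeholders for a real metric, and the accuracy-style statistics are only an assumed example:

```python
import torch
from torch import Tensor
from typing import Tuple

def _new_metric_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, Tensor]:
    # compute the sufficient statistics for one batch (here: correct count and total count)
    correct = (preds == target).sum()
    total = torch.tensor(target.numel())
    return correct, total

def _new_metric_compute(correct: Tensor, total: Tensor) -> Tensor:
    # turn the accumulated statistics into the final metric value
    return correct.float() / total

def new_metric(preds: Tensor, target: Tensor) -> Tensor:
    # public functional interface: wraps the two private helpers
    correct, total = _new_metric_update(preds, target)
    return _new_metric_compute(correct, total)
```

The module interface would then call ``_new_metric_update`` inside its ``update`` method and ``_new_metric_compute`` inside ``compute``, so no metric logic is duplicated.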


@@ -95,5 +95,5 @@ If ``on_epoch`` is True, the logger automatically logs the end of epoch metric v
#update and log
self.metric(outputs['preds'], outputs['target'])
self.log('metric', self.metric)
For more details see `Lightning Docs <https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#logging-from-a-lightningmodule>`_
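
A minimal sketch of this logging pattern inside a ``LightningModule`` (the toy model, the choice of ``Accuracy``, and the log key are assumptions):

```python
import torch
import torchmetrics
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(32, 5)      # assumed toy model
        self.metric = torchmetrics.Accuracy()

    def training_step(self, batch, batch_idx):
        x, target = batch
        logits = self.model(x)
        loss = torch.nn.functional.cross_entropy(logits, target)
        # update and log; passing the metric object lets Lightning handle compute/reset per epoch
        self.metric(logits.softmax(dim=-1), target)
        self.log('metric', self.metric, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```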


@@ -113,8 +113,8 @@ Metrics in Dataparallel (DP) mode
=================================
When using metrics in `Dataparallel (DP) <https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel>`_
mode, one should be aware that DP will both create and clean up replicas of Metric objects during a single forward pass.
This has the consequence that the metric state of the replicas will by default be destroyed before we can sync
mode, one should be aware that DP will both create and clean up replicas of Metric objects during a single forward pass.
This has the consequence that the metric state of the replicas will by default be destroyed before we can sync
them. It is therefore recommended, when using metrics in DP mode, to initialize them with ``dist_sync_on_step=True``
such that metric states are synchronized between the main process and the replicas before they are destroyed.
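
In code, the recommendation above amounts to a single constructor argument (``Accuracy`` is used here only as a stand-in for any metric):

```python
import torchmetrics

# sync replica states to the main process at every step so DP clean-up does not drop them
metric = torchmetrics.Accuracy(dist_sync_on_step=True)
```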
@@ -122,14 +122,14 @@ Metrics in Distributed Data Parallel (DDP) mode
===============================================
When using metrics in `Distributed Data Parallel (DDP) <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`_
mode, one should be aware that DDP will add additional samples to your dataset if the size of your dataset is
not equally divisible by ``batch_size * num_processors``. The added samples will always be replicates of datapoints
already in your dataset. This is done to ensure an equal load for all processes. However, this has the consequence
mode, one should be aware that DDP will add additional samples to your dataset if the size of your dataset is
not equally divisible by ``batch_size * num_processors``. The added samples will always be replicates of datapoints
already in your dataset. This is done to ensure an equal load for all processes. However, this has the consequence
that the calculated metric value will be slightly biased towards those replicated samples, leading to a wrong result.
During training and/or validation this may not be important; however, it is highly recommended when evaluating
the test dataset to only run on a single gpu or use a `join <https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.join>`_
context in conjunction with DDP to prevent this behaviour.
context in conjunction with DDP to prevent this behaviour.
****************************
Metrics and 16-bit precision
@@ -143,7 +143,7 @@ the following limitations:
where support for operations such as addition, subtraction, multiplication etc. was added.
* Some metrics do not work at all in half precision on CPU. We have explicitly stated this in their docstring,
but they are also listed below:
- :ref:`references/modules:PSNR` and :ref:`references/functional:psnr [func]`
- :ref:`references/modules:SSIM` and :ref:`references/functional:ssim [func]`
@@ -153,7 +153,7 @@ Metric Arithmetics
Metrics support most of python built-in operators for arithmetic, logic and bitwise operations.
For example for a metric that should return the sum of two different metrics, implementing a new metric is an
For example for a metric that should return the sum of two different metrics, implementing a new metric is an
overhead that is not necessary. It can now be done with:
.. code-block:: python
@@ -163,10 +163,10 @@ overhead that is not necessary. It can now be done with:
new_metric = first_metric + second_metric
``new_metric.update(*args, **kwargs)`` now calls update of ``first_metric`` and ``second_metric``. It forwards
all positional arguments but forwards only the keyword arguments that are available in the respective metric's update
declaration. Similarly ``new_metric.compute()`` now calls compute of ``first_metric`` and ``second_metric`` and
adds the results up. It is important to note that all implemented operations always return a new metric object. This means
``new_metric.update(*args, **kwargs)`` now calls update of ``first_metric`` and ``second_metric``. It forwards
all positional arguments but forwards only the keyword arguments that are available in the respective metric's update
declaration. Similarly ``new_metric.compute()`` now calls compute of ``first_metric`` and ``second_metric`` and
adds the results up. It is important to note that all implemented operations always return a new metric object. This means
that the line ``first_metric == second_metric`` will not return a bool indicating if ``first_metric`` and ``second_metric``
are the same metric, but will return a new metric that checks whether ``first_metric.compute() == second_metric.compute()``.
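
As a small illustration of the composition behaviour described above (the choice of ``Accuracy`` and ``Precision`` is an assumption):

```python
import torch
import torchmetrics

first_metric = torchmetrics.Accuracy()
second_metric = torchmetrics.Precision()

# arithmetic returns a new (compositional) metric object; the operands are left untouched
new_metric = first_metric + second_metric

preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))

new_metric.update(preds, target)   # forwards the update to both underlying metrics
print(new_metric.compute())        # first_metric.compute() + second_metric.compute()
```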


@@ -32,10 +32,10 @@ Using TorchMetrics
Functional metrics
~~~~~~~~~~~~~~~~~~
Similar to `torch.nn <https://pytorch.org/docs/stable/nn>`_, most metrics have both a class-based and a functional version.
The functional versions implement the basic operations required for computing each metric.
They are simple python functions that as input take `torch.tensors <https://pytorch.org/docs/stable/tensors.html>`_
and return the corresponding metric as a ``torch.tensor``.
Similar to `torch.nn <https://pytorch.org/docs/stable/nn>`_, most metrics have both a class-based and a functional version.
The functional versions implement the basic operations required for computing each metric.
They are simple python functions that as input take `torch.tensors <https://pytorch.org/docs/stable/tensors.html>`_
and return the corresponding metric as a ``torch.tensor``.
The code-snippet below shows a simple example for calculating the accuracy using the functional interface:
.. testcode::
@@ -102,4 +102,4 @@ Implementing your own metric is as easy as subclassing an :class:`torch.nn.Modul
2. Implement ``update`` method, where all logic that is necessary for updating metric states goes
3. Implement ``compute`` method, where the final metric computation happens
For practical examples and more info about implementing a metric, please see this :ref:`page <implement>`.
For practical examples and more info about implementing a metric, please see this :ref:`page <implement>`.
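
A short, self-contained sketch of the functional interface described above (random data; accuracy is just one example):

```python
import torch
import torchmetrics

# simulate probabilities over 5 classes for 10 samples, plus integer targets
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))

# functional metrics are plain functions: tensors in, a tensor out
acc = torchmetrics.functional.accuracy(preds, target)
print(acc)
```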


@@ -414,7 +414,7 @@ RetrievalNormalizedDCG
Wrappers
********
Modular wrapper metrics are not metrics in themselves, but instead take a metric and alter the internal logic
Modular wrapper metrics are not metrics in themselves, but instead take a metric and alter the internal logic
of the base metric.
.. autoclass:: torchmetrics.BootStrapper


@@ -1,3 +1,3 @@
numpy
torch>=1.3.1
packaging
packaging


@@ -5,4 +5,4 @@
-r test.txt
# add the integration dependencies
-r integrate.txt
-r integrate.txt


@@ -13,4 +13,4 @@ sphinx-togglebutton>=0.2
sphinx-copybutton>=0.3
# integrations
pytorch-lightning>=1.1
pytorch-lightning>=1.1