Docs: update & larger Logo [wip] (#30)

* readme

* pruning

* pruning

* ci

* fix

* h1

* logos

* 3
This commit is contained in:
Jirka Borovec 2021-03-09 16:29:24 +01:00 committed by GitHub
Parent d718892a68
Commit 73330e460f
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
20 changed files with 58 additions and 152 deletions

.github/workflows/ci_test-full.yml (vendored)

@@ -17,6 +17,9 @@ jobs:
os: [ubuntu-20.04, macOS-10.15, windows-2019]
python-version: [3.6, 3.8, 3.9]
requires: ['minimal', 'latest']
exclude:
  - python-version: 3.9
    requires: 'minimal'
# Timeout: https://stackoverflow.com/a/59076067/4521646
timeout-minutes: 15

@@ -12,7 +12,7 @@
<a href="#installation">Installation</a>
<a href="https://torchmetrics.readthedocs.io/en/stable/">Docs</a>
<a href="#build-in-metrics">Build-in metrics</a>
<a href="#implementing-your-own-metric">Implementing your own metric</a>
<a href="#implementing-your-own-metric">Own metric</a>
<a href="#community">Community</a>
<a href="#license">License</a>
</p>

Binary data
docs/source/_static/images/icon.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.8 KiB

Binary data
docs/source/_static/images/logo.png

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3.1 KiB

After

Width:  |  Height:  |  Size: 11 KiB

@@ -19,11 +19,10 @@ More reading
pages/overview
pages/implement
pages/classification
pages/lightning

.. toctree::
   :maxdepth: 2
   :maxdepth: 3
   :name: metrics
   :caption: Metrics' references

@@ -1,104 +0,0 @@
.. role:: hidden
    :class: hidden-section

**********************
Classification Metrics
**********************
Input types
-----------
For the purposes of classification metrics, inputs (predictions and targets) are split
into these categories (``N`` stands for the batch size and ``C`` for number of classes):
.. csv-table:: \*dtype ``binary`` means integers that are either 0 or 1
    :header: "Type", "preds shape", "preds dtype", "target shape", "target dtype"
    :widths: 20, 10, 10, 10, 10

    "Binary", "(N,)", "``float``", "(N,)", "``binary``\*"
    "Multi-class", "(N,)", "``int``", "(N,)", "``int``"
    "Multi-class with probabilities", "(N, C)", "``float``", "(N,)", "``int``"
    "Multi-label", "(N, ...)", "``float``", "(N, ...)", "``binary``\*"
    "Multi-dimensional multi-class", "(N, ...)", "``int``", "(N, ...)", "``int``"
    "Multi-dimensional multi-class with probabilities", "(N, C, ...)", "``float``", "(N, ...)", "``int``"
.. note::
    All dimensions of size 1 (except ``N``) are "squeezed out" at the beginning, so
    that, for example, a tensor of shape ``(N, 1)`` is treated as ``(N, )``.

When predictions or targets are integers, it is assumed that class labels start at 0, i.e.
the possible class labels are 0, 1, 2, 3, etc. Below are some examples of different input types.
.. testcode::

    import torch

    # Binary inputs
    binary_preds = torch.tensor([0.6, 0.1, 0.9])
    binary_target = torch.tensor([1, 0, 1])

    # Multi-class inputs
    mc_preds = torch.tensor([0, 2, 1])
    mc_target = torch.tensor([0, 1, 2])

    # Multi-class inputs with probabilities
    mc_preds_probs = torch.tensor([[0.8, 0.2, 0], [0.1, 0.2, 0.7], [0.3, 0.6, 0.1]])
    mc_target_probs = torch.tensor([0, 1, 2])

    # Multi-label inputs
    ml_preds = torch.tensor([[0.2, 0.8, 0.9], [0.5, 0.6, 0.1], [0.3, 0.1, 0.1]])
    ml_target = torch.tensor([[0, 1, 1], [1, 0, 0], [0, 0, 0]])
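The multi-dimensional multi-class rows of the table have no counterpart in the examples above; a short sketch (tensor shapes chosen here purely for illustration):

```python
import torch

# Multi-dimensional multi-class inputs: preds shape (N, ...), int labels
mdmc_preds = torch.tensor([[0, 2], [1, 1]])    # N=2, one extra dimension of size 2
mdmc_target = torch.tensor([[0, 1], [1, 0]])   # same shape as preds

# With probabilities: preds shape (N, C, ...), float dtype; target stays (N, ...)
mdmc_preds_probs = torch.rand(2, 3, 2)         # N=2, C=3 classes, extra dim of size 2
mdmc_target_probs = torch.tensor([[0, 1], [2, 0]])
```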
Using the is_multiclass parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In some cases, you might have inputs which appear to be (multi-dimensional) multi-class
but are actually binary/multi-label - for example, if both predictions and targets are
integer (binary) tensors. Or it could be the other way around, you want to treat
binary/multi-label inputs as 2-class (multi-dimensional) multi-class inputs.
For these cases, the metrics where this distinction would make a difference expose the
``is_multiclass`` argument. Let's see how it is used, taking the
:class:`~torchmetrics.StatScores` metric as an example.
First, let's consider the case with label predictions with 2 classes, which we want to
treat as binary.
.. testcode::

    import torch
    from torchmetrics.functional import stat_scores

    # These inputs are supposed to be binary, but appear as multi-class
    preds = torch.tensor([0, 1, 0])
    target = torch.tensor([1, 1, 0])
As you can see below, by default the inputs are treated
as multi-class. We can set ``is_multiclass=False`` to treat the inputs as binary -
which is the same as converting the predictions to float beforehand.
.. doctest::

    >>> stat_scores(preds, target, reduce='macro', num_classes=2)
    tensor([[1, 1, 1, 0, 1],
            [1, 0, 1, 1, 2]])
    >>> stat_scores(preds, target, reduce='macro', num_classes=1, is_multiclass=False)
    tensor([[1, 0, 1, 1, 2]])
    >>> stat_scores(preds.float(), target, reduce='macro', num_classes=1)
    tensor([[1, 0, 1, 1, 2]])
Next, consider the opposite example: inputs are binary (as predictions are probabilities),
but we would like to treat them as 2-class multi-class, to obtain the metric for both classes.
.. testcode::

    preds = torch.tensor([0.2, 0.7, 0.3])
    target = torch.tensor([1, 1, 0])
In this case we can set ``is_multiclass=True``, to treat the inputs as multi-class.
.. doctest::

    >>> stat_scores(preds, target, reduce='macro', num_classes=1)
    tensor([[1, 0, 1, 1, 2]])
    >>> stat_scores(preds, target, reduce='macro', num_classes=2, is_multiclass=True)
    tensor([[1, 1, 1, 0, 1],
            [1, 0, 1, 1, 2]])
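What ``is_multiclass=True`` effectively does for binary probabilities can be sketched by hand (an illustration of the conversion only, not the library's internal code):

```python
import torch

preds = torch.tensor([0.2, 0.7, 0.3])   # P(class 1) for each sample

# Treat binary probabilities as 2-class:
# column 0 holds P(class 0), column 1 holds P(class 1)
preds_2class = torch.stack([1 - preds, preds], dim=1)
labels = preds_2class.argmax(dim=1)     # predicted class per sample
```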

@@ -11,10 +11,10 @@ We make sure that all our metrics are rigorously tested against other popular im
Build-in metrics
****************
Similar to `torch.nn` most metrics comes both as class based version and simple functional version.
Similar to `torch.nn`, most metrics come both as a Module-based version and a simple functional version.
- The class based metrics offers the most functionality, by supporting both accumulation over multiple
batches and automatic syncrenization between multiple devices.
- The Module-based metrics offer the most functionality, by supporting both accumulation over multiple
  batches and automatic synchronization between multiple devices.
.. testcode::

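The accumulation a Module-based metric performs across batches can be sketched with a hand-rolled accuracy class (``ManualAccuracy`` is a hypothetical illustration of the update/compute/reset pattern, not the torchmetrics implementation):

```python
import torch

class ManualAccuracy:
    """Minimal sketch of the state a Module-based metric keeps between batches."""

    def __init__(self):
        self.reset()

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Accumulate sufficient statistics instead of per-batch results
        self.correct += (preds == target).sum()
        self.total += target.numel()

    def compute(self) -> torch.Tensor:
        return self.correct.float() / self.total

    def reset(self) -> None:
        self.correct = torch.tensor(0)
        self.total = torch.tensor(0)

metric = ManualAccuracy()
for preds, target in [
    (torch.tensor([0, 1, 1]), torch.tensor([0, 1, 0])),  # 2 of 3 correct
    (torch.tensor([1, 1]), torch.tensor([1, 1])),        # 2 of 2 correct
]:
    metric.update(preds, target)

acc = metric.compute()  # 4 / 5 over both batches
metric.reset()          # clear state, e.g. at the end of an epoch
```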
@@ -16,7 +16,7 @@ TorchMetrics was originally created as part of `PyTorch Lightning <https://github
While TorchMetrics was built to be used with native PyTorch, using TorchMetrics with Lightning offers additional benefits:
* Class metrics are automatically placed on the correct device when properly defined inside a LightningModule. This means that your data will always be placed on the same device as your metrics.
* Module metrics are automatically placed on the correct device when properly defined inside a LightningModule. This means that your data will always be placed on the same device as your metrics.
* Native support for logging metrics in Lightning using `self.log <https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#logging-from-a-lightningmodule>`_ inside your LightningModule.
* The ``.reset()`` method of the metric will automatically be called at the end of an epoch.

@@ -136,7 +136,7 @@ MetricCollection
****************
In many cases it is beneficial to evaluate the model output by multiple metrics.
In this case the `MetricCollection` class may come in handy. It accepts a sequence
In this case the ``MetricCollection`` class may come in handy. It accepts a sequence
of metrics and wraps these into a single callable metric class, with the same
interface as any other metric.
@@ -196,17 +196,17 @@ inside your LightningModule
:noindex:
***************************
Class vs Functional Metrics
***************************
****************************
Module vs Functional Metrics
****************************
The functional metrics follow a simple paradigm: input in, output out.
This means they don't provide any advanced mechanisms for syncing across DDP nodes or aggregation over batches.
They simply compute the metric value based on the given inputs.
Also, the integration within other parts of PyTorch Lightning will never be as tight as with the class-based interface.
Also, the integration within other parts of PyTorch Lightning will never be as tight as with the Module-based interface.
If all you need is to compute the metric values, the functional metrics are the way to go.
However, if you are looking for the best integration and user experience, please consider also using the class interface.
However, if you are looking for the best integration and user experience, please consider also using the Module interface.
**********************
Classification Metrics

@@ -1,6 +1,10 @@
.. role:: hidden
    :class: hidden-section
##################
Functional metrics
##################
**********************
Classification Metrics
**********************

@@ -1,3 +1,7 @@
#################
nn.Module metrics
#################
**********************
Classification Metrics
**********************

@@ -38,7 +38,7 @@ class Accuracy(Metric):
changed to subset accuracy (which requires all labels or sub-samples in the sample to
be correctly predicted) by setting ``subset_accuracy=True``.
Accepts all input types listed in :ref:`pages/classification:input types`.
Accepts all input types listed in :ref:`pages/overview:input types`.
Args:
threshold:
@@ -127,7 +127,7 @@ class Accuracy(Metric):
def update(self, preds: torch.Tensor, target: torch.Tensor):
"""
Update state with predictions and targets. See :ref:`pages/classification:input types` for more information
Update state with predictions and targets. See :ref:`pages/overview:input types` for more information
on input types.
Args:

@@ -250,7 +250,7 @@ def _check_classification_inputs(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
@@ -376,7 +376,7 @@ def _input_format_classification(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
Returns:

@@ -35,7 +35,7 @@ class HammingDistance(Metric):
treats each possible label separately - meaning that, for example, multi-class data is
treated as if it were multi-label.
Accepts all input types listed in :ref:`pages/classification:input types`.
Accepts all input types listed in :ref:`pages/overview:input types`.
Args:
threshold:
@@ -88,7 +88,7 @@ class HammingDistance(Metric):
def update(self, preds: torch.Tensor, target: torch.Tensor):
"""
Update state with predictions and targets. See :ref:`pages/classification:input types` for more information
Update state with predictions and targets. See :ref:`pages/overview:input types` for more information
on input types.
Args:

@@ -31,7 +31,7 @@ class Precision(StatScores):
The reduction method (how the precision scores are aggregated) is controlled by the
``average`` parameter, and additionally by the ``mdmc_average`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
num_classes:
@@ -67,11 +67,11 @@ class Precision(StatScores):
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then averaged over samples.
The computation for each sample is done by treating the flattened extra axes ``...``
(see :ref:`pages/classification:input types`) as the ``N`` dimension within the sample,
(see :ref:`pages/overview:input types`) as the ``N`` dimension within the sample,
and computing the metric for the sample based on that.
- ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
(see :ref:`pages/classification:input types`)
(see :ref:`pages/overview:input types`)
are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
@@ -90,7 +90,7 @@ class Precision(StatScores):
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
compute_on_step:
@@ -181,7 +181,7 @@ class Recall(StatScores):
The reduction method (how the recall scores are aggregated) is controlled by the
``average`` parameter, and additionally by the ``mdmc_average`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
num_classes:
@@ -217,11 +217,11 @@ class Recall(StatScores):
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then averaged over samples.
The computation for each sample is done by treating the flattened extra axes ``...``
(see :ref:`pages/classification:input types`) as the ``N`` dimension within the sample,
(see :ref:`pages/overview:input types`) as the ``N`` dimension within the sample,
and computing the metric for the sample based on that.
- ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
(see :ref:`pages/classification:input types`)
(see :ref:`pages/overview:input types`)
are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
@@ -241,7 +241,7 @@ class Recall(StatScores):
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
compute_on_step:

@@ -30,7 +30,7 @@ class StatScores(Metric):
``reduce`` parameter, and additionally by the ``mdmc_reduce`` parameter in the
multi-dimensional multi-class case.
Accepts all inputs listed in :ref:`pages/classification:input types`.
Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
threshold:
@@ -73,7 +73,7 @@ class StatScores(Metric):
one of the following:
- ``None`` [default]: Should be left unchanged if your data is not multi-dimensional
multi-class (see :ref:`pages/classification:input types` for the definition of input types).
multi-class (see :ref:`pages/overview:input types` for the definition of input types).
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then the outputs are concatenated together. In each
@@ -88,7 +88,7 @@ class StatScores(Metric):
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
compute_on_step:
@@ -177,7 +177,7 @@ class StatScores(Metric):
def update(self, preds: torch.Tensor, target: torch.Tensor):
"""
Update state with predictions and targets. See :ref:`pages/classification:input types` for more information
Update state with predictions and targets. See :ref:`pages/overview:input types` for more information
on input types.
Args:

@@ -73,7 +73,7 @@ def accuracy(
changed to subset accuracy (which requires all labels or sub-samples in the sample to
be correctly predicted) by setting ``subset_accuracy=True``.
Accepts all input types listed in :ref:`pages/classification:input types`.
Accepts all input types listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model (probabilities, or labels)

@@ -51,7 +51,7 @@ def hamming_distance(preds: torch.Tensor, target: torch.Tensor, threshold: float
treats each possible label separately - meaning that, for example, multi-class data is
treated as if it were multi-label.
Accepts all input types listed in :ref:`pages/classification:input types`.
Accepts all input types listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model

@@ -58,7 +58,7 @@ def precision(
The reduction method (how the precision scores are aggregated) is controlled by the
``average`` parameter, and additionally by the ``mdmc_average`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model (probabilities or labels)
@@ -92,11 +92,11 @@ def precision(
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then averaged over samples.
The computation for each sample is done by treating the flattened extra axes ``...``
(see :ref:`pages/classification:input types`) as the ``N`` dimension within the sample,
(see :ref:`pages/overview:input types`) as the ``N`` dimension within the sample,
and computing the metric for the sample based on that.
- ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
(see :ref:`pages/classification:input types`)
(see :ref:`pages/overview:input types`)
are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
@@ -121,7 +121,7 @@ def precision(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
Return:
@@ -211,7 +211,7 @@ def recall(
The reduction method (how the recall scores are aggregated) is controlled by the
``average`` parameter, and additionally by the ``mdmc_average`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model (probabilities, or labels)
@@ -242,11 +242,11 @@ def recall(
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then averaged over samples.
The computation for each sample is done by treating the flattened extra axes ``...``
(see :ref:`pages/classification:input types`) as the ``N`` dimension within the sample,
(see :ref:`pages/overview:input types`) as the ``N`` dimension within the sample,
and computing the metric for the sample based on that.
- ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
(see :ref:`pages/classification:input types`)
(see :ref:`pages/overview:input types`)
are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
@@ -271,7 +271,7 @@ def recall(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
Return:
@@ -347,7 +347,7 @@ def precision_recall(
The reduction method (how the recall scores are aggregated) is controlled by the
``average`` parameter, and additionally by the ``mdmc_average`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model (probabilities, or labels)
@@ -378,11 +378,11 @@ def precision_recall(
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then averaged over samples.
The computation for each sample is done by treating the flattened extra axes ``...``
(see :ref:`pages/classification:input types`) as the ``N`` dimension within the sample,
(see :ref:`pages/overview:input types`) as the ``N`` dimension within the sample,
and computing the metric for the sample based on that.
- ``'global'``: In this case the ``N`` and ``...`` dimensions of the inputs
(see :ref:`pages/classification:input types`)
(see :ref:`pages/overview:input types`)
are flattened into a new ``N_X`` sample axis, i.e. the inputs are treated as if they
were ``(N_X, C)``. From here on the ``average`` parameter applies as usual.
@@ -407,7 +407,7 @@ def precision_recall(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
Return:

@@ -153,7 +153,7 @@ def stat_scores(
The reduction method (how the statistics are aggregated) is controlled by the
``reduce`` parameter, and additionally by the ``mdmc_reduce`` parameter in the
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/classification:input types`.
multi-dimensional multi-class case. Accepts all inputs listed in :ref:`pages/overview:input types`.
Args:
preds: Predictions from model (probabilities or labels)
@@ -198,7 +198,7 @@ def stat_scores(
one of the following:
- ``None`` [default]: Should be left unchanged if your data is not multi-dimensional
multi-class (see :ref:`pages/classification:input types` for the definition of input types).
multi-class (see :ref:`pages/overview:input types` for the definition of input types).
- ``'samplewise'``: In this case, the statistics are computed separately for each
sample on the ``N`` axis, and then the outputs are concatenated together. In each
@@ -213,7 +213,7 @@ def stat_scores(
is_multiclass:
Used only in certain special cases, where you want to treat inputs as a different type
than what they appear to be. See the parameter's
:ref:`documentation section <pages/classification:using the is_multiclass parameter>`
:ref:`documentation section <pages/overview:using the is_multiclass parameter>`
for a more detailed explanation and examples.
Return: