# Style Normalization and Restitution for Domain Generalization and Adaptation
## Introduction
For many practical computer vision applications, learned models typically perform well on the datasets used for training but suffer significant performance degradation when deployed in new environments, where there are usually style differences between the training images and the testing images. An effective domain generalizable model is expected to learn feature representations that are both generalizable and discriminative.
In this work, we design a novel Style Normalization and Restitution (SNR) module to simultaneously ensure high generalization and discrimination capability of the networks.
<p align="center">
<img src="imgs/pipeline.png" alt="pipeline" width="800">
</p>
Figure 1: Overall flowchart. (a) Our generalizable feature learning network with the proposed Style Normalization and Restitution (SNR) module plugged in after some convolutional blocks; here we use ResNet-50 as the backbone for illustration. (b) The proposed SNR module. Instance Normalization (IN) is used to eliminate some style discrepancies, followed by identity-relevant feature restitution (marked by red solid arrows). Note that the branch with the dashed green line is only used to enforce the loss constraint and is discarded at inference. (c) The dual causality loss constraint encourages the disentanglement of the residual feature $R$ into a task-relevant part ($R^+$) and a task-irrelevant part ($R^-$), which respectively decrease and increase the entropy when added to the style-normalized feature $\widetilde{F}$.
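The SNR module described in Figure 1(b)-(c) can be sketched in PyTorch. This is a minimal illustration only, assuming an SE-style channel gate for the residual disentanglement and a softplus-based formulation of the dual causality loss on prediction entropies; the class and function names here are hypothetical and are not this repo's API.

```python
# Illustrative SNR-style block (assumption: SE-style channel gating splits the
# residual; names are hypothetical, not the repo's actual API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SNRBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Instance Normalization removes instance-specific style statistics.
        self.instance_norm = nn.InstanceNorm2d(channels, affine=True)
        hidden = max(channels // reduction, 1)
        # Channel gate that splits the residual into task-relevant (R+)
        # and task-irrelevant (R-) parts.
        self.gate = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        f_norm = self.instance_norm(x)   # style-normalized feature F~
        residual = x - f_norm            # R: information discarded by IN
        # Per-channel gate from globally pooled residual, broadcast spatially.
        w = self.gate(residual.mean(dim=(2, 3)))[..., None, None]
        r_plus = w * residual            # task-relevant part R+
        r_minus = (1.0 - w) * residual   # task-irrelevant part R-
        return f_norm + r_plus, f_norm, r_minus  # restituted feature + aux outputs


def dual_causality_loss(p_plus, p_norm, p_minus):
    """Encourage F~ + R+ to lower, and F~ + R- to raise, prediction entropy.

    Inputs are softmax probability maps over classes (dim=1).
    """
    def entropy(p):
        return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

    return (F.softplus(entropy(p_plus) - entropy(p_norm))
            + F.softplus(entropy(p_norm) - entropy(p_minus)))
```

The gate is the design choice to hedge on: any mechanism that produces per-channel weights in $[0, 1]$ realizes the disentanglement, and the loss only requires the three predictions obtained from $\widetilde{F}+R^+$, $\widetilde{F}$, and $\widetilde{F}+R^-$.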
The SNR module is generic and can be applied to different computer vision tasks, including classification, semantic segmentation, and object detection, to enhance both domain generalization and unsupervised domain adaptation performance.
Please refer to the sub-folders of this repo for each task and to our paper [**Style Normalization and Restitution for Domain Generalization and Adaptation**](https://arxiv.org/pdf/2101.00588.pdf) for more details.
### Reference
If you use this code or its derivatives, please cite our paper: