Merge branch 'master' of github.com:microsoft/pytorch-luxor-lab

This commit is contained in:
Sergii Dymchenko 2019-11-21 08:04:03 -08:00
Parents 5826e86f34 5bdc97fcd4
Commit 33730f1320
2 changed files: 2 additions and 31 deletions

View file

@@ -1,33 +1,4 @@
-Give a short description for your sample here. What does it do and why is it important?
+# "Solving problems with Deep Learning: an in-depth example using PyTorch and its ecosystem" tutorial/lab
-## Contents
-Outline the file contents of the repository. It helps users navigate the codebase, build configuration and any related assets.
-| File/folder       | Description                                |
-|-------------------|--------------------------------------------|
-| `src`             | Sample source code.                        |
-| `.gitignore`      | Define what to ignore at commit time.      |
-| `CHANGELOG.md`    | List of changes to the sample.             |
-| `CONTRIBUTING.md` | Guidelines for contributing to the sample. |
-| `README.md`       | This README file.                          |
-| `LICENSE`         | The license for the sample.                |
-## Prerequisites
-Outline the required components and tools that a user might need to have on their machine in order to run the sample. This can be anything from frameworks, SDKs, OS versions or IDE releases.
-## Setup
-Explain how to prepare the sample once the user clones or downloads the repository. The section should outline every step necessary to install dependencies and set up any settings (for example, API keys and output folders).
-## Runnning the sample
-Outline step-by-step instructions to execute the sample and see its output. Include steps for executing the sample from the IDE, starting specific services in the Azure portal or anything related to the overall launch of the code.
-## Key concepts
-Provide users with more context on the tools and services used in the sample. Explain some of the code that is being used and how services interact with each other.
 ## Contributing

View file

@@ -27,7 +27,7 @@ def train(net, data_loader, parameters, device):
         torch.nn.utils.clip_grad_norm_(net.parameters(), parameters["grad_norm"])
         optimizer.step()
-    return(epoch_loss)
+    return epoch_loss
 def evaluate(net, data_loader, device):
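The hunk above sits inside a training loop that clips gradients by global norm (`torch.nn.utils.clip_grad_norm_`) before each optimizer step; the commit itself only changes the non-idiomatic `return(epoch_loss)` to `return epoch_loss`. As a torch-free sketch of what that clipping call does (the function name and example values here are illustrative, not code from this repository):

```python
import math

def clip_grad_norm(grads, max_norm):
    """Rescale a flat list of gradient values in place so their global L2
    norm does not exceed max_norm -- the same idea as PyTorch's
    torch.nn.utils.clip_grad_norm_, which operates on parameter tensors."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads[:] = [g * scale for g in grads]
    # Like PyTorch, report the norm as it was *before* clipping.
    return total_norm

grads = [3.0, 4.0]                     # global L2 norm is 5.0
pre = clip_grad_norm(grads, max_norm=1.0)
print(pre, grads)                      # 5.0, gradients rescaled to ~[0.6, 0.8]
```

Clipping by global norm preserves the direction of the gradient vector and only shrinks its magnitude, which is why it is applied once over all parameters rather than per tensor.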