Commit 2fd95632d9

@@ -1,45 +0,0 @@
-name: Azure Static Web Apps CI/CD
-
-on:
-  push:
-    branches:
-      - main
-  pull_request:
-    types: [opened, synchronize, reopened, closed]
-    branches:
-      - main
-
-jobs:
-  build_and_deploy_job:
-    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
-    runs-on: ubuntu-latest
-    name: Build and Deploy Job
-    steps:
-      - uses: actions/checkout@v2
-        with:
-          submodules: true
-      - name: Build And Deploy
-        id: builddeploy
-        uses: Azure/static-web-apps-deploy@v1
-        with:
-          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_GROUND_0CC93280F }}
-          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
-          action: "upload"
-          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
-          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
-          app_location: "/etc/quiz-app" # App source code path
-          api_location: "api" # Api source code path - optional
-          output_location: "dist" # Built app content directory - optional
-          ###### End of Repository/Build Configurations ######
-
-  close_pull_request_job:
-    if: github.event_name == 'pull_request' && github.event.action == 'closed'
-    runs-on: ubuntu-latest
-    name: Close Pull Request Job
-    steps:
-      - name: Close Pull Request
-        id: closepullrequest
-        uses: Azure/static-web-apps-deploy@v1
-        with:
-          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_GROUND_0CC93280F }}
-          action: "close"

@@ -150,7 +150,7 @@ By ensuring that the content aligns with projects, the process is made more enga
> Find our [Code of Conduct](etc/CODE_OF_CONDUCT.md), [Contributing](etc/CONTRIBUTING.md), and [Translation](etc/TRANSLATIONS.md) guidelines. Find our [Support Documentation here](etc/SUPPORT.md) and [security information here](etc/SECURITY.md). We welcome your constructive feedback!

-> **A note about quizzes**: All quizzes are contained [in this app](https://black-ground-0cc93280f.1.azurestaticapps.net/), for 50 total quizzes of three questions each. They are linked from within the lessons but the quiz app can be run locally; follow the instruction in the `etc/quiz-app` folder.
+> **A note about quizzes**: All quizzes are contained [in this app](https://victorious-sand-043ca7603.1.azurestaticapps.net/), for 50 total quizzes of three questions each. They are linked from within the lessons but the quiz app can be run locally; follow the instruction in the `etc/quiz-app` folder.

## Offline access
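
This commit swaps the quiz host in many files at once, so a quick repository-wide check that nothing still points at the old instance is a natural follow-up. A minimal sketch, not part of the commit; the old hostname is taken from the diff above, the `*.md` filter and paths are assumptions:

```python
# Minimal sketch: list Markdown files that still reference the old Static Web App host.
# The hostname comes from the diff above; scanning only *.md files is an assumption.
import pathlib

OLD_HOST = "black-ground-0cc93280f.1.azurestaticapps.net"

for path in pathlib.Path(".").rglob("*.md"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    if OLD_HOST in text:
        print(f"still references the old host: {path}")
```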

@@ -26,7 +26,7 @@ Similar to Readme's, please translate the assignments as well.
3. Edit the quiz-app's [translations index.js file](https://github.com/microsoft/AI-For-Beginners/blob/main/etc/quiz-app/src/assets/translations/index.js) to add your language.

-4. Finally, edit ALL the quiz links in your translated README.md files to point directly to your translated quiz: https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/1 becomes https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/1?loc=id
+4. Finally, edit ALL the quiz links in your translated README.md files to point directly to your translated quiz: https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/1 becomes https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/1?loc=id

**THANK YOU**
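
Step 4 above touches every quiz link in a translated README, which is tedious by hand. A minimal helper sketch, not part of this commit; the translated file path and the language code are hypothetical:

```python
# Minimal sketch: append ?loc=<language code> to each quiz link in one translated README.
# The quiz host is taken from the diff; the file path and language code are hypothetical.
import re
from pathlib import Path

QUIZ_HOST = "https://victorious-sand-043ca7603.1.azurestaticapps.net"
readme = Path("translations/id/README.md")  # hypothetical translated file
loc = "id"                                  # hypothetical language code

text = readme.read_text(encoding="utf-8")
# Match .../quiz/<number> links that do not already carry a ?loc= parameter.
pattern = re.compile(re.escape(QUIZ_HOST) + r"/quiz/(\d+)(?!\d)(?!\?loc=)")
text = pattern.sub(lambda m: f"{QUIZ_HOST}/quiz/{m.group(1)}?loc={loc}", text)
readme.write_text(text, encoding="utf-8")
```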

@@ -0,0 +1 @@
html{font-family:Avenir,Helvetica,Arial,sans-serif;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;color:#252d4a}nav{background-color:#252d4a;padding:1em;margin-bottom:20px}nav a{color:#fff;text-align:right}ul{list-style-type:none;margin:0;padding:0;overflow:hidden}li{float:left}.title{color:#fff;font-weight:700;font-size:x-large;float:right}.link{display:list-item}.message,h1,h2,h3{text-align:center}.error{color:red}.complete{color:green}.card{width:60%;border:solid #252d4a;border-radius:5px;margin:auto;padding:1em}.btn{min-width:50%;text-align:center;cursor:pointer;margin-bottom:5px;width:50%;font-size:16px;color:#fff;background-color:#252d4a;border-radius:5px;padding:5px;justify-content:flex-start;align-items:center}.ans-btn{justify-content:center;display:flex;margin:4px auto}

Binary file not shown.
After Width: | Height: | Size: 17 KiB

@@ -0,0 +1 @@
<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" href="/favicon.ico"><title>quizzes</title><link href="/css/app.67375a05.css" rel="preload" as="style"><link href="/js/app.dfcb35f3.js" rel="preload" as="script"><link href="/js/chunk-vendors.c1571e8f.js" rel="preload" as="script"><link href="/css/app.67375a05.css" rel="stylesheet"></head><body><noscript><strong>We're sorry but quizzes doesn't work properly without JavaScript enabled. Please enable it to continue.</strong></noscript><div id="app"></div><script src="/js/chunk-vendors.c1571e8f.js"></script><script src="/js/app.dfcb35f3.js"></script></body></html>

File diff suppressed because one or more lines are too long (4 files)

@@ -0,0 +1,8 @@
+{
+  "routes": [
+    {
+      "route": "/*",
+      "serve": "/index.html"
+    }
+  ]
+}
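
The routes file added above is the Static Web Apps fallback rule: every request path is answered with index.html, so client-side routes such as /quiz/101 load the Vue quiz app instead of returning 404. A rough local illustration of that behavior, not part of this commit; the build directory and port are assumptions:

```python
# Rough illustration of the "/*" -> "/index.html" fallback above: any request that
# does not match a real file in the build output is answered with index.html.
import http.server
import os
import socketserver

DIST = "etc/quiz-app/dist"  # assumed build output directory
PORT = 8080                 # assumed local port

class SpaFallbackHandler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=DIST, **kwargs)

    def send_head(self):
        # translate_path maps the URL (query string stripped) to a file under DIST.
        if not os.path.exists(self.translate_path(self.path)):
            self.path = "/index.html"
        return super().send_head()

if __name__ == "__main__":
    with socketserver.TCPServer(("", PORT), SpaFallbackHandler) as httpd:
        print(f"Serving {DIST} with SPA fallback on http://localhost:{PORT}")
        httpd.serve_forever()
```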

File diff not shown because of its large size.

@@ -4,7 +4,7 @@
> Sketchnote by [Tomomi Imura](https://twitter.com/girlie_mac)

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/101)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/101)

**Artificial Intelligence** is an exciting scientific discipline that studies how we can make computers exhibit intelligent behavior, e.g. do those things that human beings are good at doing.

@@ -139,7 +139,7 @@ Over the past few years we have witnessed huge successes with large language mod
Do a tour of the internet to determine where, in your opinion, AI is most effectively used. Is it in a Mapping app, or some speech-to-text service or a video game? Research how the system was built.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/201)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/201)

## Review & Self Study

@@ -4,7 +4,7 @@
> Sketchnote by [Tomomi Imura](https://twitter.com/girlie_mac)

-## [事前クイズ](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/101)
+## [事前クイズ](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/101)

**Artificial Intelligence** is an exciting scientific discipline that studies how we can make computers exhibit intelligent behavior, e.g. do those things that human beings are good at doing.

@@ -139,7 +139,7 @@ Over the past few years we have witnessed huge successes with large language mod
Do a tour of the internet to determine where, in your opinion, AI is most effectively used. Is it in a Mapping app, or some speech-to-text service or a video game? Research how the system was built.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/201)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/201)

## レビュー&セルフスタディ

@@ -6,7 +6,7 @@
The quest for artificial intelligence is based on a search for knowledge, to make sense of the world similar to how humans do. But how can you go about doing this?

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/102)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/102)

In the early days of AI, the top-down approach to creating intelligent systems (discussed in the previous lesson) was popular. The idea was to extract the knowledge from people into some machine-readable form, and then use it to automatically solve problems. This approach was based on two big ideas:

@@ -230,7 +230,7 @@ Nowadays, AI is often considered to be a synonym for *Machine Learning* or *Neur
In the Family Ontology notebook associated to this lesson, there is an opportunity to experiment with other family relations. Try to discover new connections between people in the family tree.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/202)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/202)

## Review & Self Study

@@ -1,6 +1,6 @@
# Introduction to Neural Networks: Perceptron

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/103)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/103)

One of the first attempts to implement something similar to a modern neural network was done by Frank Rosenblatt from Cornell Aeronautical Laboratory in 1957. It was a hardware implementation called "Mark-1", designed to recognize primitive geometric figures, such as triangles, squares and circles.

@@ -76,7 +76,7 @@ In this lesson, you learned about a perceptron, which is a binary classification
If you'd like to try to build your own perceptron, try [this lab on Microsoft Learn](https://docs.microsoft.com/en-us/azure/machine-learning/component-reference/two-class-averaged-perceptron?WT.mc_id=academic-57639-dmitryso) which uses the [Azure ML designer](https://docs.microsoft.com/en-us/azure/machine-learning/concept-designer?WT.mc_id=academic-57639-dmitryso).

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/203)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/203)

## Review & Self Study

@@ -10,7 +10,7 @@ In this section we will extend this model into a more flexible framework, allowi
We will also develop our own modular framework in Python that will allow us to construct different neural network architectures.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/104)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/104)

## Formalization of Machine Learning

@@ -72,7 +72,7 @@ In the accompanying notebook, you will implement your own framework for building
Proceed to the [OwnFramework](OwnFramework.ipynb) notebook and work through it.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/204)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/204)

## Review & Self Study

@@ -52,7 +52,7 @@ In this lesson, you learned about the differences between the various APIs for t
In the accompanying notebooks, you will find 'tasks' at the bottom; work through the notebooks and complete the tasks.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/205)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/205)

## Review & Self Study

@@ -5,7 +5,7 @@ As we have learned already, to be able to train neural networks efficiently we n
* To operate on tensors, eg. to multiply, add, and compute some functions such as sigmoid or softmax
* To compute gradients of all expressions, in order to perform gradient descent optimization

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/105)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/105)

While the `numpy` library can do the first part, we need some mechanism to compute gradients. In [our framework](../04-OwnFramework/OwnFramework.ipynb) that we have developed in the previous section we had to manually program all derivative functions inside the `backward` method, which does backpropagation. Ideally, a framework should give us the opportunity to compute gradients of *any expression* that we can define.

@@ -2,7 +2,7 @@
[Computer Vision](https://wikipedia.org/wiki/Computer_vision) is a discipline whose aim is to allow computers to gain high-level understanding of digital images. This is quite a broad definition, because *understanding* can mean many different things, including finding an object on a picture (**object detection**), understanding what is happening (**event detection**), describing a picture in text, or reconstructing a scene in 3D. There are also special tasks related to human images: age and emotion estimation, face detection and identification, and 3D pose estimation, to name a few.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/106)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/106)

One of the simplest tasks of computer vision is **image classification**.

@@ -96,7 +96,7 @@ Sometimes, relatively complex tasks such as movement detection or fingertip dete
Watch [this video](https://docs.microsoft.com/shows/ai-show/ai-show--2021-opencv-ai-competition--grand-prize-winners--cortic-tigers--episode-32?WT.mc_id=academic-57639-dmitryso) from the AI show to learn about the Cortic Tigers project and how they built a block-based solution to democratize computer vision tasks via a robot. Do some research on other projects like this that help onboard new learners into the field.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/206)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/206)

## Review & Self Study

@@ -50,7 +50,7 @@ In this unit, you have learned the main concept behind computer vision neural ne
In the accompanying notebooks, there are notes at the bottom about how to obtain greater accuracy. Do some experiments to see if you can achieve higher accuracy.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/207)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/207)

## Review & Self Study

@@ -2,7 +2,7 @@
We have seen before that neural networks are quite good at dealing with images, and even one-layer perceptron is able to recognize handwritten digits from MNIST dataset with reasonable accuracy. However, the MNIST dataset is very special, and all digits are centered inside the image, which makes the task simpler.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/107)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/107)

In real life, we want to be able to recognize objects on a picture regardless of their exact location in the image. Computer vision is different from generic classification, because when we are trying to find a certain object in the picture, we are scanning the image looking for some specific **patterns** and their combinations. For example, when looking for a cat, we first may look for horizontal lines, which can form whiskers, and then certain a combination of whiskers can tell us that it is actually a picture of a cat. Relative position and presence of certain patterns is important, and not their exact position on the image.

@@ -2,7 +2,7 @@
Training CNNs can take a lot of time, and a lot of data is required for that task. However, much of the time is spent learning the best low-level filters that a network can use to extract patterns from images. A natural question arises - can we use a neural network trained on one dataset and adapt it to classify different images without requiring a full training process?

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/108)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/108)

This approach is called **transfer learning**, because we transfer some knowledge from one neural network model to another. In transfer learning, we typically start with a pre-trained model, which has been trained on some large image dataset, such as **ImageNet**. Those models can already do a good job extracting different features from generic images, and in many cases just building a classifier on top of those extracted features can yield a good result.

@@ -66,7 +66,7 @@ Using transfer learning, you are able to quickly put together a classifier for a
In the accompanying notebooks, there are notes at the bottom about how transfer knowledge works best with somewhat similar training data (a new type of animal, perhaps). Do some experimentation with completely new types of images to see how well or poorly your transfer knowledge models perform.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/208)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/208)

## Review & Self Study

@@ -2,7 +2,7 @@
When training CNNs, one of the problems is that we need a lot of labeled data. In the case of image classification, we need to separate images into different classes, which is a manual effort.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/109)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/109)

However, we might want to use raw (unlabeled) data for training CNN feature extractors, which is called **self-supervised learning**. Instead of labels, we will use training images as both network input and output. The main idea of **autoencoder** is that we will have an **encoder network** that converts input image into some **latent space** (normally it is just a vector of some smaller size), then the **decoder network**, whose goal would be to reconstruct the original image.

@@ -71,7 +71,7 @@ Learn more about autoencoders in these corresponding notebooks:
* **Lossy** - the reconstructed image is not the same as the original image. The nature of loss is defined by the *loss function* used during training
* Works on **unlabeled data**

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/209)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/209)

## Conclusion

@@ -81,7 +81,7 @@ In this lesson, you learned about the various types of autoencoders available to
In this lesson, you learned about using autoencoders for images. But they can also be used for music! Check out the Magenta project's [MusicVAE](https://magenta.tensorflow.org/music-vae) project, which uses autoencoders to learn to reconstruct music. Do some [experiments](https://colab.research.google.com/github/magenta/magenta-demos/blob/master/colab-notebooks/Multitrack_MusicVAE.ipynb) with this library to see what you can create.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/208)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/208)

## Review & Self Study

@@ -2,7 +2,7 @@
In the previous section, we learned about **generative models**: models that can generate new images similar to the ones in the training dataset. VAE was a good example of a generative model.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/110)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/110)

However, if we try to generate something really meaningful, like a painting at reasonable resolution, with VAE, we will see that training does not converge well. For this use case, we should learn about another architecture specifically targeted at generative models - **Generative Adversarial Networks**, or GANs.

@@ -75,7 +75,7 @@ The way it works is the following:
## ✍️ Example: [Style Transfer](StyleTransfer.ipynb)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/210)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/210)

## Conclusion

@@ -2,7 +2,7 @@
The image classification models we have dealt with so far took an image and produced a categorical result, such as the class 'number' in a MNIST problem. However, in many cases we do not want just to know that a picture portrays objects - we want to be able to determine their precise location. This is exactly the point of **object detection**.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/111)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/111)

![Object Detection](images/Screen_Shot_2016-11-17_at_11.14.54_AM.png)

@@ -163,7 +163,7 @@ Read through these articles and notebooks about YOLO and try them for yourself
* Yolo: [Keras implementation](https://github.com/experiencor/keras-yolo2), [step-by-step notebook](https://github.com/experiencor/basic-yolo-keras/blob/master/Yolo%20Step-by-Step.ipynb)
* Yolo v2: [Keras implementation](https://github.com/experiencor/keras-yolo2), [step-by-step notebook](https://github.com/experiencor/keras-yolo2/blob/master/Yolo%20Step-by-Step.ipynb)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/211)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/211)

## Review & Self Study

@@ -2,7 +2,7 @@
We have previously learned about Object Detection, which allows us to locate objects in the image by predicting their *bounding boxes*. However, for some tasks we do not only need bounding boxes, but also more precise object localization. This task is called **segmentation**.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/112)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/112)

Segmentation can be viewed as **pixel classification**, whereas for **each** pixel of image we must predict its class (*background* being one of the classes). There are two main segmentation algorithms:

@@ -47,7 +47,7 @@ Open the notebooks below to learn more about different semantic segmentation arc
* [Semantic Segmentation Pytorch](SemanticSegmentationPytorch.ipynb)
* [Semantic Segmentation TensorFlow](SemanticSegmentationTF.ipynb)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/212)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/212)

## Conclusion

@@ -1,6 +1,6 @@
# Representing Text as Tensors

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/113)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/113)

## Text Classification

@@ -64,7 +64,7 @@ So far, we have studied techniques that can add frequency weight to different wo
Try some other exercises using bag-of-words and different data models. You might be inspired by this [competition on Kaggle](https://www.kaggle.com/competitions/word2vec-nlp-tutorial/overview/part-1-for-beginners-bag-of-words)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/213)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/213)

## Review & Self Study

@@ -1,6 +1,6 @@
# Embeddings

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/114)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/114)

When training classifiers based on BoW or TF/IDF, we operated on high-dimensional bag-of-words vectors with length `vocab_size`, and we were explicitly converting from low-dimensional positional representation vectors into sparse one-hot representation. This one-hot representation, however, is not memory-efficient. In addition, each word is treated independently from each other, i.e. one-hot encoded vectors do not express any semantic similarity between words.

@@ -56,7 +56,7 @@ In this lesson, you discovered how to build and use embedding layers in TensorFl
Word2Vec has been used for some interesting applications, including generating song lyrics and poetry. Take a look at [this article](https://www.politetype.com/blog/word2vec-color-poems) which walks through how the author used Word2Vec to generate poetry. Watch [this video by Dan Shiffmann](https://www.youtube.com/watch?v=LSS_bos_TPI&ab_channel=TheCodingTrain) as well to discover a different explanation of this technique. Then try to apply these techniques to your own text corpus, perhaps sourced from Kaggle.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/214)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/214)

## Review & Self Study

@@ -2,7 +2,7 @@
Semantic embeddings, such as Word2Vec and GloVe, are in fact a first step towards **language modeling** - creating models that somehow *understand* (or *represent*) the nature of the language.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/115)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/115)

The main idea behind language modeling is training them on unlabeled datasets in an unsupervised manner. This is important because we have huge amounts of unlabeled text available, while the amount of labeled text would always be limited by the amount of effort we can spend on labeling. Most often, we can build language models that can **predict missing words** in the text, because it is easy to mask out a random word in text and use it as a training sample.

@@ -28,7 +28,7 @@ Continue your learning in the following notebooks:
In the previous lesson we have seen that words embeddings work like magic! Now we know that training word embeddings is not a very complex task, and we should be able to train our own word embeddings for domain specific text if needed.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/215)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/215)

## Review & Self Study

@@ -1,6 +1,6 @@
# Recurrent Neural Networks

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/116)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/116)

In previous sections, we have been using rich semantic representations of text and a simple linear classifier on top of the embeddings. What this architecture does is to capture the aggregated meaning of words in a sentence, but it does not take into account the **order** of words, because the aggregation operation on top of embeddings removed this information from the original text. Because these models are unable to model word ordering, they cannot solve more complex or ambiguous tasks such as text generation or question answering.

@@ -75,7 +75,7 @@ Read through some literature about LSTMs and consider their applications:
- [Show, Attend and Tell: Neural Image Caption
Generation with Visual Attention](https://arxiv.org/pdf/1502.03044v2.pdf)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/216)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/216)

## Review & Self Study

@@ -1,6 +1,6 @@
# Generative networks

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/117)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/117)

Recurrent Neural Networks (RNNs) and their gated cell variants such as Long Short Term Memory Cells (LSTMs) and Gated Recurrent Units (GRUs) provided a mechanism for language modeling in that they can learn word ordering and provide predictions for the next word in a sequence. This allows us to use RNNs for **generative tasks**, such as ordinary text generation, machine translation, and even image captioning.

@@ -62,7 +62,7 @@ Take some lessons on Microsoft Learn on this topic
* Text Generation with [PyTorch](https://docs.microsoft.com/learn/modules/intro-natural-language-processing-pytorch/6-generative-networks/?WT.mc_id=academic-15963-dmitryso)/[TensorFlow](https://docs.microsoft.com/learn/modules/intro-natural-language-processing-tensorflow/5-generative-networks/?WT.mc_id=academic-15963-dmitryso)

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/217)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/217)

## Review & Self Study

@@ -1,6 +1,6 @@
# Attention Mechanisms and Transformers

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/118)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/118)

One of the most important problems in the NLP domain is **machine translation**, an essential task that underlies tools such as Google Translate. In this section, we will focus on machine translation, or, more generally, on any *sequence-to-sequence* task (which is also called **sentence transduction**).

@@ -99,7 +99,7 @@ In this lesson you learned about Transformers and Attention Mechanisms, all esse
## 🚀 Challenge

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/218)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/218)

## Review & Self Study

@@ -2,7 +2,7 @@
Up to now, we have mostly been concentrating on one NLP task - classification. However, there are also other NLP tasks that can be accomplished with neural networks. One of those tasks is **[Named Entity Recognition](https://wikipedia.org/wiki/Named-entity_recognition)** (NER), which deals with recognizing specific entities within text, such as places, person names, date-time intervals, chemical formulae and so on.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/119)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/119)

## Example of Using NER

@@ -71,7 +71,7 @@ A NER model is a **token classification model**, which means that it can be used
Complete the assignment linked below to train a named entity recognition model for medical terms, then try it on a different dataset.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/219)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/219)

## Review & Self Study

@@ -2,7 +2,7 @@
In all of our previous tasks, we were training a neural network to perform a certain task using labeled dataset. With large transformer models, such as BERT, we use language modelling in self-supervised fashion to build a language model, which is then specialized for specific downstream task with further domain-specific training. However, it has been demonstrated that large language models can also solve many tasks without ANY domain-specific training. A family of models capable of doing that is called **GPT**: Generative Pre-Trained Transformer.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/120)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/120)

## Text Generation and Perplexity

@@ -53,4 +53,4 @@ Continue your learning in the following notebooks:
New general pre-trained language models do not only model language structure, but also contain vast amount of commonsense knowledge. Thus, they can be effectively used to solve some NLP tasks in zero-shop or few-shot settings.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/220)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/220)


@@ -1,6 +1,6 @@
# Genetic Algorithms

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/121)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/121)

**Genetic Algorithms** (GA) are based on an **evolutionary approach** to AI, in which methods of the evolution of a population is used to obtain an optimal solution for a given problem. They were proposed in 1975 by [John Henry Holland](https://wikipedia.org/wiki/John_Henry_Holland).

@@ -57,7 +57,7 @@ Genetic Algorithms are used to solve many problems, including logistics and sear
"Genetic algorithms are simple to implement, but their behavior is difficult to understand." [source](https://wikipedia.org/wiki/Genetic_algorithm) Do some research to find an implementation of a genetic algorithm such as solving a Sudoku puzzle, and explain how it works as a sketch or flowchart.

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/221)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/221)

## Review & Self Study

@@ -2,7 +2,7 @@
Reinforcement learning (RL) is seen as one of the basic machine learning paradigms, next to supervised learning and unsupervised learning. While in supervised learning we rely on the dataset with known outcomes, RL is based on **learning by doing**. For example, when we first see a computer game, we start playing, even without knowing the rules, and soon we are able to improve our skills just by the process of playing and adjusting our behavior.

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/122)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/122)

To perform RL, we need:

@@ -100,7 +100,7 @@ We have now learned how to train agents to achieve good results just by providin
Explore the applications listed in the 'Other RL Tasks' section and try to implement one!

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/222)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/222)

## Review & Self Study

@@ -2,7 +2,7 @@
One of the possible ways of achieving intelligence is so-called **emergent** (or **synergetic**) approach, which is based on the fact that the combined behavior of many relatively simple agents can result in the overall more complex (or intelligent) behavior of the system as a whole. Theoretically, this is based on the principles of [Collective Intelligence](https://en.wikipedia.org/wiki/Collective_intelligence), [Emergentism](https://en.wikipedia.org/wiki/Global_brain) and [Evolutionary Cybernetics](https://en.wikipedia.org/wiki/Global_brain), which state that higher-level systems gain some sort of added value when being properly combined from lower-level systems (so-called *principle of metasystem transition*).

-## [Pre-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/123)
+## [Pre-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/123)

The direction of **Multi-Agent Systems** has emerged in AI in 1990s as a response to growth of the Internet and distributed systems. On of the classical AI textbooks, [Artificial Intelligence: A Modern Approach](https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach), focuses on the view of classical AI from the point of view of Multi-agent systems.

@@ -143,7 +143,7 @@ They all tend to focus on the simpler behavior of an individual agent, and achie
Take this lesson to the real world and try to conceptualize a multi-agent system that can solve a problem. What, for example, would a multi-agent system need to do to optimize a school bus route? How could it work in a bakery?

-## [Post-lecture quiz](https://black-ground-0cc93280f.1.azurestaticapps.net/quiz/223)
+## [Post-lecture quiz](https://victorious-sand-043ca7603.1.azurestaticapps.net/quiz/223)

## Review & Self Study

@@ -151,7 +151,7 @@
> [行動規範](etc/CODE_OF_CONDUCT.md)、[コントリビューター](etc/CONTRIBUTING.md)、[翻訳のガイドライン](etc/TRANSLATIONS.md)をご覧ください。サポートドキュメントやセキュリティ情報についてはこちらをご覧ください。建設的なご意見をお待ちしています。

-> **クイズについての注意事項**。すべてのクイズは[このアプリ](https://black-ground-0cc93280f.1.azurestaticapps.net/)に含まれており、3問ずつのクイズが合計50問あります。クイズはレッスンからリンクされていますが、クイズアプリはローカルで実行することができます。
+> **クイズについての注意事項**。すべてのクイズは[このアプリ](https://victorious-sand-043ca7603.1.azurestaticapps.net/)に含まれており、3問ずつのクイズが合計50問あります。クイズはレッスンからリンクされていますが、クイズアプリはローカルで実行することができます。

## オフラインでのアクセス