Commit cbbcf6ab38
@@ -112,7 +112,7 @@ Get started with the following resources:

# Getting Started

**Students**, there are a couple of ways to use the curriculum. First of all, you can just read the text and look through the code directly on GitHub. If you want to run the code in any of the notebooks - [read our instructions](./etc/how-to-run.md), and find more advice on how to do it [in this blog post](https://soshnikov.com/education/how-to-execute-notebooks-from-github/).

> **Note**: [Instructions on how to run the code in this curriculum](./etc/how-to-run.md)

@@ -120,7 +120,7 @@ However, if you would like to take the course as a self-study project, we sugges

- Start with a pre-lecture quiz
- Read the intro text for the lecture
- If the lecture has additional notebooks, go through them, reading and executing the code. If both TensorFlow and PyTorch notebooks are provided, you can focus on one of them - choose your favorite framework
- Notebooks often contain challenges that require you to tweak the code a little bit to experiment
- Take the post-lecture quiz
- If there is a lab attached to the module - complete the assignment

@@ -34,7 +34,7 @@ One of the problems when dealing with the term **[Intelligence](https://en.wikip

> [Photo](https://unsplash.com/photos/75715CVEJhI) by [Amber Kipp](https://unsplash.com/@sadmax) from Unsplash

To see the ambiguity of the term *intelligence*, try answering a question: "Is a cat intelligent?". Different people tend to give different answers to this question, as there is no universally accepted test to decide whether the assertion is true. And if you think there is - try running your cat through an IQ test...

✅ Think for a minute about how you define intelligence. Is a crow who can solve a maze and get at some food intelligent? Is a child intelligent?

@@ -86,7 +86,7 @@ Alternately, we can try to model the simplest elements inside our brain – a ne

## A Brief History of AI

Artificial Intelligence started as a field in the middle of the twentieth century. Initially, symbolic reasoning was a prevalent approach, and it led to a number of important successes, such as expert systems – computer programs that were able to act as an expert in some limited problem domain. However, it soon became clear that such an approach does not scale well. Extracting knowledge from an expert, representing it in a computer, and keeping that knowledge base accurate turns out to be a very complex task, and too expensive to be practical in many cases. This led to the so-called [AI Winter](https://en.wikipedia.org/wiki/AI_winter) in the 1970s.

<img alt="Brief History of AI" src="images/history-of-ai.png" width="70%"/>

@@ -97,16 +97,16 @@ As time passed, computing resources became cheaper, and more data has become ava

We can observe how the approaches changed, for example, in creating a chess-playing computer program:

* Early chess programs were based on search – a program explicitly tried to estimate possible moves of an opponent for a given number of next moves, and selected an optimal move based on the optimal position that can be achieved in a few moves. It led to the development of the so-called [alpha-beta pruning](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning) search algorithm.
* Search strategies work well toward the end of the game, where the search space is limited by a small number of possible moves. However, at the beginning of the game, the search space is huge, and the algorithm can be improved by learning from existing matches between human players. Subsequent experiments employed so-called [case-based reasoning](https://en.wikipedia.org/wiki/Case-based_reasoning), where the program looked for cases in the knowledge base very similar to the current position in the game.
* Modern programs that win over human players are based on neural networks and [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning), where the programs learn to play solely by playing a long time against themselves and learning from their own mistakes – much like human beings do when learning to play chess. However, a computer program can play many more games in much less time, and thus can learn much faster.
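
The search idea in the first bullet can be sketched with a minimal alpha-beta routine over a hand-built two-ply game tree. The tree and its leaf scores below are invented for illustration; a real chess program would generate legal moves and evaluate positions instead:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the final decision."""
    if not isinstance(node, list):        # a leaf: static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # the opponent would avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two plies: we pick a branch, then the opponent picks the worst leaf for us.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))              # best guaranteed score: 3
```

Pruning matters because the game tree grows exponentially with depth; cutting hopeless branches early lets the same time budget search deeper.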

✅ Do a little research on other games that have been played by AI.

Similarly, we can see how the approach towards creating “talking programs” (that might pass the Turing test) changed:

* Early programs of this kind, such as [Eliza](https://en.wikipedia.org/wiki/ELIZA), were based on very simple grammatical rules and the re-formulation of the input sentence into a question.
* Modern assistants, such as Cortana, Siri or Google Assistant, are all hybrid systems that use neural networks to convert speech into text and recognize our intent, and then employ some reasoning or explicit algorithms to perform required actions.
* In the future, we may expect a complete neural-based model to handle dialogue by itself. The recent GPT and [Turing-NLG](https://turing.microsoft.com/) family of neural networks show great success in this.
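
The rule-plus-reformulation approach of Eliza-like programs can be sketched in a few lines. The two patterns below are hypothetical examples, far simpler than the original ELIZA script:

```python
import re

# Each rule pairs a pattern to look for with a question template that
# reformulates the matched fragment. These two rules are illustrative only.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please tell me more."          # fallback when no rule matches

print(respond("I feel tired today."))      # Why do you feel tired today?
```

Such programs have no understanding of the conversation; the illusion comes entirely from reflecting the user's own words back as questions.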

<img alt="the Turing test's evolution" src="images/turing-test-evol.png" width="70%"/>

@@ -7,7 +7,7 @@ Reinforcement learning (RL) is seen as one of the basic machine learning paradig

To perform RL, we need:

* An **environment** or **simulator** that sets the rules of the game. We should be able to run the experiments in the simulator and observe the results.
* Some **reward function**, which indicates how successful our experiment was. In the case of learning to play a computer game, the reward would be our final score.

Based on the reward function, we should be able to adjust our behavior and improve our skills, so that next time we play better. The main difference between RL and other types of machine learning is that in RL we typically do not know whether we win or lose until we finish the game. Thus, we cannot say whether a certain move alone is good or not - we only receive a reward at the end of the game.

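
These pieces - an environment with step-by-step actions but an end-of-episode reward - can be shown with a toy simulator. The environment and its numbers below are invented for illustration (this is not the Gym API), and the agent acts randomly, without any learning yet:

```python
import random

class LineWorld:
    """Walk on positions 0..10: start at 5, try to stand on 8 after 5 steps.
    The reward arrives only when the episode ends, as described above."""
    def reset(self):
        self.pos, self.steps = 5, 0
        return self.pos

    def step(self, action):                  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(10, self.pos + action))
        self.steps += 1
        done = self.steps >= 5
        reward = 1 if done and self.pos == 8 else 0
        return self.pos, reward, done

env = LineWorld()
obs, total, done = env.reset(), 0, False
while not done:
    action = random.choice([-1, 1])          # random policy: no learning yet
    obs, reward, done = env.step(action)
    total += reward
print("episode reward:", total)              # 1 only if we happened to end on 8
```

Because the reward only appears at `done`, the agent cannot tell which individual move was good - exactly the credit-assignment problem RL algorithms are designed to solve.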
@@ -21,7 +21,7 @@ A great tool for RL is the [OpenAI Gym](https://gym.openai.com/) - a **simulatio

## CartPole Balancing

You have probably all seen modern balancing devices such as the *Segway* or *Gyroscooters*. They are able to automatically balance by adjusting their wheels in response to a signal from an accelerometer or gyroscope. In this section, we will learn how to solve a similar problem - balancing a pole. It is similar to a situation when a circus performer needs to balance a pole on his hand - but this pole balancing only occurs in 1D.

A simplified version of balancing is known as a **CartPole** problem. In the cartpole world, we have a horizontal slider that can move left or right, and the goal is to balance a vertical pole on top of the slider as it moves.
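
A back-of-the-envelope simulation shows why the pole needs active balancing: an inverted pendulum is unstable, so any small tilt grows on its own. The point-mass model and constants below are simplifying assumptions for illustration, not the actual CartPole physics used by the simulator:

```python
import math

GRAVITY = 9.8    # m/s^2
LENGTH = 1.0     # pole length in metres (assumed)
DT = 0.02        # integration time step in seconds

# Start almost upright (0.01 rad off vertical) and apply no corrective force.
theta, theta_dot = 0.01, 0.0
for _ in range(150):                                   # 3 simulated seconds
    theta_acc = (GRAVITY / LENGTH) * math.sin(theta)   # gravity tips the pole
    theta_dot += theta_acc * DT                        # crude Euler integration
    theta += theta_dot * DT

print("angle after 3 s:", round(theta, 2))             # far from upright
```

The controller's job - and later the RL agent's - is to inject left/right pushes on the cart that keep this runaway growth in check.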