Merge pull request #42 from DaveVoyles/master
typo corrections, formatting
This commit is contained in:
Commit 20af61da4c
@@ -32,12 +32,12 @@ The worker nodes communicate with the master components, configure the networking
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. A Kubernetes object is a "record of intent" – once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you’re telling the Kubernetes system your cluster’s desired state.
The basic Kubernetes objects include:
* **Pod** - the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod encapsulates an application container (or multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.
* **Service** - an abstraction which defines a logical set of Pods and a policy by which to access them.
* **Volume** - an abstraction which allows data to be preserved across container restarts and allows data to be shared between different containers.
* **Namespace** - a way to divide a physical cluster's resources into multiple virtual clusters shared between multiple users.
* **Deployment** - manages pods and ensures a certain number of them are running. This is typically used to deploy pods that should always be up, such as a web server.
* **Job** - A job creates one or more pods and ensures that a specified number of them successfully terminate. In other words, we use Job to run a task that finishes at some point, such as training a model.
### Creating a Kubernetes Object
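You usually describe an object's desired state in a YAML manifest and submit it with `kubectl`. As a minimal illustration (the file and object names here are hypothetical, not from this workshop), a Pod manifest could look like:

```yaml
# pod.yaml - a minimal Pod running a single container (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # any container image works here
```

You would then create the object with `kubectl create -f pod.yaml` (or `kubectl apply -f pod.yaml`), and Kubernetes will constantly work to keep that Pod running.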
@@ -276,4 +276,3 @@ module2-ex1 1 1 3m

Currently our training doesn't do anything interesting. We aren't even saving the model or summaries anywhere, but don't worry: we will dive into this starting in Module 4.
[Module 3: Helm](../3-helm/README.md)
@@ -75,6 +75,19 @@ To use Helm, you need to have the [CLI installed on your machine](https://github

Let's try to deploy an official chart, such as the popular [Wordpress](https://github.com/kubernetes/charts/tree/master/stable/wordpress).
We'll need to initialize Helm first with this command:
```bash
helm init
```
This should return something similar to:
```bash
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/YOURUSER/.helm.
```

Then we can install the chart:

```bash
helm install stable/wordpress
```
@@ -142,7 +142,7 @@ spec:
restartPolicy: OnFailure
```
No need to mount drivers anymore! Note that we are not specifying `TfReplicaType` or `Replicas` as the default values are already what we want.
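For reference, a TFJob that relies on those defaults could be sketched like this (a rough sketch based on the tf-operator alpha API this module uses; field names and API versions may differ in your setup, and the image is a placeholder):

```yaml
# tfjob.yaml - hypothetical minimal TFJob; TfReplicaType and Replicas
# are omitted because the defaults (MASTER, 1) are what we want.
apiVersion: kubeflow.org/v1alpha1
kind: TFJob
metadata:
  name: example-tfjob        # hypothetical name
spec:
  replicaSpecs:
    - template:
        spec:
          containers:
            - name: tensorflow
              image: <your-training-image>   # placeholder, use your own image
          restartPolicy: OnFailure
```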
#### How does this work?
@@ -10,7 +10,7 @@

Just as with distributed training, automated hyperparameter sweeps are barely used in many organizations.
The reasons are similar: it takes a lot of resources, or time, to run more than a couple of trainings for the same model.
* Either you run different hypotheses in parallel, which will likely require a lot of resources and VMs. These VMs need to be managed by someone, the models need to be deployed, logs and checkpoints have to be gathered, etc.
* Or you run everything sequentially on a small number of VMs, which takes a lot of time before being able to compare results.
So in practice most people manually fine-tune their hyperparameters through a few runs and pick a winner.

@@ -23,7 +23,7 @@ In practice, this process is still rudimentary today as the technologies involve

### Why Helm?
As we saw in module [3 - Helm](../3-helm), Helm enables us to package an application in a chart and parametrize its deployment easily.
To do that, Helm allows us to use the Golang templating engine in the chart definitions. This means we can use conditions, loops, variables and [much more](https://docs.helm.sh/chart_template_guide).
This will allow us to create complex deployment flows.
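For instance (a minimal sketch, not taken from this workshop's charts), a template in a chart can combine built-in objects like `.Release` with user-supplied `.Values` and a condition:

```yaml
# templates/config.yaml - hypothetical chart template using Go templating
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  learningRate: {{ .Values.learningRate | quote }}
{{- if .Values.debug }}
  logLevel: "debug"
{{- end }}
```

Here `learningRate` and `debug` are hypothetical values that would come from the chart's `values.yaml` or from `--set` flags at install time.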