William Buchwalter 2018-06-08 16:53:37 -04:00 committed by Jack Francis
Parent 4f6df79776
Commit 714bb486f5
2 changed files with 2 additions and 2 deletions

View file

@@ -1,7 +1,7 @@
 # Microsoft Azure Container Service Engine - Using GPUs with Kubernetes
 If you created a Kubernetes cluster with one or more agent pools whose VM size is `Standard_NC*` or `Standard_NV*`, you can schedule GPU workloads on your cluster.
-The NVIDIA drivers are automatically installed on every GPU agent in your cluster, so you don't need to do that manually, unless you require a specific version of the drivers. Currently, the installed driver is version 390.30.
+The NVIDIA drivers are automatically installed on every GPU agent in your cluster, so you don't need to do that manually, unless you require a specific version of the drivers. Currently, the installed driver is version 396.26.
 To make sure everything is fine, run `kubectl describe node <name-of-a-gpu-node>`. You should see the correct number of GPUs reported (this example shows 2 GPUs for an NC12 VM):
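
Once the node reports its GPUs, the usual way to schedule a GPU workload is to request the GPU resource in the pod spec. A minimal sketch (pod name and container image are illustrative; on clusters older than Kubernetes 1.10 the resource name may be `alpha.kubernetes.io/nvidia-gpu` instead of `nvidia.com/gpu`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                  # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base   # illustrative image
    command: ["nvidia-smi"]       # prints the GPUs visible to the container
    resources:
      limits:
        nvidia.com/gpu: 1         # request one GPU
```

After the pod completes, `kubectl logs gpu-test` should show the `nvidia-smi` table for the allocated GPU.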

View file

@@ -445,7 +445,7 @@ func isCustomVNET(a []*api.AgentPoolProfile) bool {
 func getGPUDriversInstallScript(profile *api.AgentPoolProfile) string {
 	// latest version of the drivers. Later this parameter could be bubbled up so that users can choose specific driver versions.
-	dv := "390.30"
+	dv := "396.26"
 	dest := "/usr/local/nvidia"
 	nvidiaDockerVersion := "2.0.3"
 	dockerVersion := "1.13.1-1"
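
The in-code comment notes that the driver version could later be "bubbled up" so users can pick a specific version. A minimal sketch of that idea, assuming a hypothetical helper (`gpuDriverVersion` is not part of acs-engine; it only illustrates defaulting to the hard-coded version when no override is given):

```go
package main

import "fmt"

// gpuDriverVersion is a hypothetical helper sketching how the hard-coded
// driver version in getGPUDriversInstallScript could be made configurable:
// return the user-requested version if set, else the current default.
func gpuDriverVersion(requested string) string {
	if requested != "" {
		return requested // user pinned a specific driver version
	}
	return "396.26" // default, matching the value in getGPUDriversInstallScript
}

func main() {
	fmt.Println(gpuDriverVersion(""))       // default
	fmt.Println(gpuDriverVersion("390.30")) // pinned older version
}
```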