Merge pull request #785 from enisoc/k8s-v1beta3

Update Kubernetes example for v1beta3 API.
This commit is contained in:
Anthony Yeh 2015-06-12 11:42:34 -07:00
Parents: eb29d20c2d 70a2f57632
Commit f3f221ccd3
15 changed files with 611 additions and 802 deletions


@@ -1,6 +1,11 @@
This page explains how to start a Kubernetes cluster and also run
Vitess on Kubernetes. This example was most recently tested using
the binary release of Kubernetes v0.9.1.
This page explains how to run Vitess on [Kubernetes](http://kubernetes.io).
It also gives the steps to start a Kubernetes cluster with
[Google Container Engine](https://cloud.google.com/container-engine/).
If you already have Kubernetes v0.18+ running in one of the other
[supported platforms](http://kubernetes.io/gettingstarted/),
you can skip the <code>gcloud</code> steps.
The <code>kubectl</code> steps will apply to any Kubernetes cluster.
## Prerequisites
@@ -82,17 +87,23 @@ account with a project in the Google Developers Console.
## Start a Kubernetes cluster
1. Set the <code>KUBECTL</code> environment variable to point to the
<code>gcloud</code> command:
1. Enable or update alpha features in the <code>gcloud</code> tool, and install
the <code>kubectl</code> tool:
``` sh
$ export KUBECTL='gcloud alpha container kubectl'
$ gcloud components update alpha kubectl
# Check if kubectl is on your PATH:
$ which kubectl
### example output:
# ~/google-cloud-sdk/bin/kubectl
```
1. Enable alpha features in the <code>gcloud</code> tool:
If <code>kubectl</code> isn't on your PATH, you can tell our scripts where
to find it by setting the <code>KUBECTL</code> environment variable:
``` sh
$ gcloud components update alpha
$ export KUBECTL=/example/path/to/google-cloud-sdk/bin/kubectl
```
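The example's helper scripts resolve the command the same way. A minimal sketch of that lookup logic, assuming only that <code>KUBECTL</code> may or may not be set (the error message is illustrative):

``` sh
# Default to 'kubectl' on the PATH unless the caller set KUBECTL
# explicitly, mirroring how the example's env.sh resolves the command.
KUBECTL=${KUBECTL:-kubectl}
if ! command -v "$KUBECTL" >/dev/null 2>&1; then
  echo "kubectl not found; set KUBECTL to its full path" >&2
fi
echo "using: $KUBECTL"
```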
1. If you did not complete the [GCE quickstart guide]
@@ -126,17 +137,16 @@ $ gcloud alpha container clusters create example --machine-type n1-standard-1 --
times for the passphrase you created while setting up Google
Compute Engine.
1. The command's output includes the URL for the Kubernetes master server:
1. The command's output includes the IP of the Kubernetes master server:
``` sh
endpoint: 146.148.70.28
masterAuth:
password: YOUR_PASSWORD
user: admin
```
NAME ZONE CLUSTER_API_VERSION MASTER_IP MACHINE_TYPE NODES STATUS
example us-central1-b 0.18.2 1.2.3.4 n1-standard-1, container-vm-v20150505 3 running
```
1. Open the endpoint URL in a browser to get the full effect
of the "Hello World" experience in Kubernetes.
1. Open /static/app/ on the MASTER_IP in a browser over HTTPS
(e.g. <code>https://1.2.3.4/static/app/</code>) to see the Kubernetes
dashboard, where you can monitor nodes, services, pods, etc.
1. If you see an <code>ERR_CERT_AUTHORITY_INVALID</code> error
indicating that the server's security certificate is not
@@ -144,9 +154,23 @@ masterAuth:
**Advanced** link and then the link to proceed to the URL.
1. You should be prompted to enter a username and password to
access the requested page. Enter the <code>masterAuth</code>
username and password from the <code>gcloud</code> command's
output.
access the requested page. Use <code>admin</code> as the username.
The randomly-generated password can be found in the <code>token</code>
field of the kubectl config:
``` sh
$ kubectl config view
### example output:
# apiVersion: v1
# clusters:
# - cluster:
# server: https://1.2.3.4
# ...
# users:
# - name: gke_project_us-central1-b_example
# user:
# token: randompassword
```
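If you only want the password itself, a line of <code>awk</code> can pull the <code>token</code> field out of the config. A sketch, using a heredoc that stands in for the sample output above so the snippet is self-contained; in practice, pipe <code>kubectl config view</code> instead:

``` sh
# Extract the 'token' value from kubectl config output.
# The heredoc below is stand-in data matching the sample above.
sample_config() {
  cat <<'EOF'
apiVersion: v1
users:
- name: gke_project_us-central1-b_example
  user:
    token: randompassword
EOF
}
# Print the second field of the line containing 'token:'.
password=$(sample_config | awk '/token:/ {print $2}')
echo "$password"
```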
## Start a Vitess cluster
@@ -174,10 +198,10 @@ vitess/examples/kubernetes$ ./etcd-up.sh
in the cluster by running:
``` sh
$ $KUBECTL get pods
$ kubectl get pods
```
<br>It may take a while for each Kubernetes minion to download the
<br>It may take a while for each Kubernetes node to download the
Docker images the first time it needs them. While the images
are downloading, the pod status will be Pending.<br><br>
@@ -201,40 +225,46 @@ vitess/examples/kubernetes$ ./etcd-down.sh
vitess/examples/kubernetes$ ./vtctld-up.sh
```
<br>To let you access <code>vtctld</code> from outside Kubernetes,
the <code>vtctld</code> service is created with the
<code>createExternalLoadBalancer</code> option. This is a
[convenient shortcut]
(https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#external-services)
for cloud providers that support external load balancers.
On supported platforms, Kubernetes will then automatically
create an external IP that load balances onto the pods
comprising the service.<br><br>
<br>To let you access vtctld from outside Kubernetes, the
<code>vtctld</code> service is created with the <code>type: NodePort</code>
option. This creates an
[external service](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#external-services)
by exposing a port on each node that forwards to the vtctld service.<br><br>
1. **Access vtctld**
To access the <code>vtctld</code> service from outside
Kubernetes, you need to open port 15000 on the GCE firewall.
Kubernetes, you need to open port 30000 in your platform's firewall.
(If you don't complete this step, the only way to issue commands
to <code>vtctld</code> would be to SSH into a Kubernetes node
and install and run <code>vtctlclient</code> there.)
and install and run <code>vtctlclient</code> there.)<br><br>
On GCE, you can open the port like this:
``` sh
$ gcloud compute firewall-rules create vtctld --allow tcp:15000
$ gcloud compute firewall-rules create vtctld --allow tcp:30000
```
<br>Then, get the address of the load balancer for <code>vtctld</code>:
<br>Then, get the <code>ExternalIP</code> of any Kubernetes node
(not the master):
``` sh
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
vtctld us-central1 104.154.64.12 TCP us-central1/targetPools/vtctld
$ kubectl get -o yaml nodes
### example output:
# - apiVersion: v1beta3
# kind: Node
# ...
# status:
# addresses:
# - address: 2.3.4.5
# type: ExternalIP
# ...
```
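If you want the address without reading through the YAML by hand, you can filter for it with <code>grep</code> and <code>awk</code>. A sketch, using a heredoc that mirrors the example output above; pipe the real <code>kubectl get -o yaml nodes</code> output in practice:

``` sh
# Pull the first ExternalIP address out of node YAML.
# The heredoc below is stand-in data matching the example above.
sample_nodes() {
  cat <<'EOF'
- apiVersion: v1beta3
  kind: Node
  status:
    addresses:
    - address: 2.3.4.5
      type: ExternalIP
EOF
}
# grep -B1 keeps the 'address:' line preceding each 'type: ExternalIP';
# awk prints its value and stops at the first match.
node_ip=$(sample_nodes | grep -B1 'type: ExternalIP' | awk '/address:/ {print $3; exit}')
echo "$node_ip"
```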
<br>You can then access the <code>vtctld</code> web interface
at port 15000 of the IP address returned in the above command.
<br>You can then access the <code>vtctld</code> web interface at port 30000
of the EXTERNAL_IP address returned in the above command.
In this example, the web UI would be at
<code>https://104.154.64.12:15000</code>.
<code>http://2.3.4.5:30000</code>.
1. **Use <code>vtctlclient</code> to call <code>vtctld</code>**
@@ -247,20 +277,25 @@ vtctld us-central1 104.154.64.12 TCP us-central1/targetPools/vtctld
called <code>kvtctl</code> that points to the address from above:
``` sh
$ alias kvtctl='vtctlclient -server 104.154.64.12:15000'
$ alias kvtctl='vtctlclient -server 2.3.4.5:30000'
```
<br>Now, running <code>kvtctl</code> will test your connection to
<br>Now, running <code>kvtctl help</code> will test your connection to
<code>vtctld</code> and also list the <code>vtctlclient</code>
commands that you can use to administer the Vitess cluster.
``` sh
# Test the connection to vtctld and list available commands
$ kvtctl help
No command specified please see the list below:
Tablets:
InitTablet ...
...
### example output:
# Available commands:
#
# Tablets:
# InitTablet ...
# ...
# Get usage for a specific command:
$ kvtctl help InitTablet
```
1. **Start vttablets**
@@ -271,76 +306,95 @@ Tablets:
``` sh
vitess/examples/kubernetes$ ./vttablet-up.sh
### Output from vttablet-up.sh is shown below
### example output:
# Creating test_keyspace.shard-0 pods in cell test...
# Creating pod for tablet test-0000000100...
# vttablet-100
#
# pods/vttablet-100
# Creating pod for tablet test-0000000101...
# vttablet-101
#
# pods/vttablet-101
# Creating pod for tablet test-0000000102...
# vttablet-102
# pods/vttablet-102
```
<br>Wait until you see the tablets listed in the
**DBTopology Tool** summary page for your <code>vtctld</code>
instance. This can take some time if a pod was scheduled on a
minion that needs to download the latest Vitess Docker image.
You can also check the status of the tablets from the command
line using <code>kvtctl</code>.
<br>Wait until you see all 3 tablets listed in the **Topology** summary page
for your <code>vtctld</code> instance
(e.g. <code>http://2.3.4.5:30000/dbtopo</code>).
This can take some time if a pod was scheduled on a node that needs to
download the latest Vitess Docker image. You can also check the status of
the tablets from the command line using <code>kvtctl</code>:
``` sh
$ kvtctl ListAllTablets test
### example output:
# test-0000000100 test_keyspace 0 replica 10.64.2.9:15002 10.64.2.9:3306 []
# test-0000000101 test_keyspace 0 replica 10.64.1.12:15002 10.64.1.12:3306 []
# test-0000000102 test_keyspace 0 replica 10.64.2.10:15002 10.64.2.10:3306 []
```
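The fourth column of that output is the tablet type, so you can tally types with a short <code>awk</code> one-liner. A sketch over a heredoc matching the example output above; pipe <code>kvtctl ListAllTablets test</code> in practice:

``` sh
# Tally tablets by type (column 4) from ListAllTablets-style output.
# The heredoc below is stand-in data matching the example above.
sample_tablets() {
  cat <<'EOF'
test-0000000100 test_keyspace 0 replica 10.64.2.9:15002 10.64.2.9:3306 []
test-0000000101 test_keyspace 0 replica 10.64.1.12:15002 10.64.1.12:3306 []
test-0000000102 test_keyspace 0 replica 10.64.2.10:15002 10.64.2.10:3306 []
EOF
}
counts=$(sample_tablets | awk '{count[$4]++} END {for (t in count) print t, count[t]}')
echo "$counts"
```

Before electing a master, all three tablets should show up as replicas.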
<br>By bringing up tablets in a previously empty keyspace, you
have effectively just created a new shard. To initialize the
keyspace for the new shard, call the
<code>vtctl RebuildKeyspaceGraph</code> command:
<br>By bringing up tablets in a previously empty keyspace, you have
effectively just created a new shard. To initialize the keyspace for the new
shard, call the <code>kvtctl RebuildKeyspaceGraph</code> command:
``` sh
$ kvtctl RebuildKeyspaceGraph test_keyspace
```
<br>After this command completes, go back to the <code>vtctld</code>
UI and click the **DBTopology Tool** link. You should see the
**Note:** Many <code>vtctlclient</code> commands produce no output on
success.<br><br>
After this command completes, go back to the <code>vtctld</code>
UI and click the **Topology** link in the top nav bar. You should see the
three tablets listed. If you click the address of a tablet, you
will see the coordination data stored in <code>etcd</code>.<br><br>
**Note:** Most <code>vtctlclient</code> commands produce no
output on success.<br><br>
**_Status pages for vttablets_**
Each <code>vttablet</code> serves a set of HTML status pages
on its primary port. The <code>vtctld</code> interface provides
a link to the status page for each tablet, but the links are
actually to internal, per-pod IPs that can only be accessed
from within Kubernetes.<br><br>
Each <code>vttablet</code> serves a set of HTML status pages on its primary
port. The <code>vtctld</code> interface provides a **[status]** link for
each tablet, but the links are actually to internal, per-pod IPs that can
only be accessed from within Kubernetes.<br><br>
As such, if you try to connect to one of the **[status]**
links, you will get a 502 HTTP response.<br><br>
As a workaround, you can access tablet status pages through the apiserver
proxy, provided by the Kubernetes master. For example, to see the status
page for the tablet with ID 100 (recall that our Kubernetes master is
on public IP 1.2.3.4), you could navigate to:
As a workaround, you can proxy over an SSH connection to a
Kubernetes minion, or you can launch a proxy as a Kubernetes
service. In the future, we plan to provide proxying via the
Kubernetes API server without a need for additional setup.<br><br>
```
https://1.2.3.4/api/v1beta3/proxy/namespaces/default/pods/vttablet-100:15002/debug/status
```
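The proxy URL follows a fixed pattern, so it can be built from the master IP and the tablet's pod name. A sketch, where <code>MASTER_IP</code> and the tablet name simply follow the example above; substitute your own values:

``` sh
# Build the apiserver proxy URL for a tablet's status page.
# 1.2.3.4 and vttablet-100 are the example values from the text above.
MASTER_IP=1.2.3.4
tablet=vttablet-100
status_url="https://${MASTER_IP}/api/v1beta3/proxy/namespaces/default/pods/${tablet}:15002/debug/status"
echo "$status_url"
```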
In the future, we plan to have vtctld directly link through this proxy from
the **[status]** link.<br><br>
**_Direct connection to mysqld_**
Since the mysqld within the vttablet pod is only meant to be
accessed via vttablet, our default bootstrap settings allow
connections only from localhost.<br><br>
Since the <code>mysqld</code> within the <code>vttablet</code> pod is only
meant to be accessed via vttablet, our default bootstrap settings only allow
connections from localhost.<br><br>
If you want to check or manipulate the underlying mysqld,
you can SSH to the Kubernetes node on which the pod is running.
Then use [docker exec](https://docs.docker.com/reference/commandline/cli/#exec)
to launch a bash shell inside the mysql container, and connect with:
to launch a bash shell inside the mysql container, and connect with the
<code>mysql</code> command-line client:
``` sh
# For example, while inside the mysql container for pod vttablet-100:
$ TERM=ansi mysql -u vt_dba -S /vt/vtdataroot/vt_0000000100/mysql.sock
# For example, to connect to the mysql container within the vttablet-100 pod:
$ kubectl get pods | grep vttablet-100
### example output:
# vttablet-100 [...] 10.64.2.9 k8s-example-3c0115e4-node-x6jc [...]
$ gcloud compute ssh k8s-example-3c0115e4-node-x6jc
k8s-example-3c0115e4-node-x6jc:~$ sudo docker ps | grep vttablet-100
### example output:
# ef40b4ff08fa vitess/lite:latest [...] k8s_mysql.16e2a810_vttablet-100[...]
k8s-example-3c0115e4-node-x6jc:~$ sudo docker exec -ti ef40b4ff08fa bash
# Now you're in a shell inside the mysql container.
# We need to tell the mysql client the username and socket file to use.
vttablet-100:/# TERM=ansi mysql -u vt_dba -S /vt/vtdataroot/vt_0000000100/mysql.sock
```
1. **Elect a master vttablet**
@@ -369,7 +423,7 @@ $ kvtctl InitShardMaster -force test_keyspace/0 test-0000000100
replicating at all, that check would fail and the command
would fail as well.<br><br>
After running this command, go back to the **DBTopology Tool**
After running this command, go back to the **Topology** page
in the <code>vtctld</code> web interface. When you refresh the
page, you should see that one <code>vttablet</code> is the master
and the other two are replicas.<br><br>
@@ -387,17 +441,18 @@ $ kvtctl ListAllTablets test
1. **Create a table**
The <code>vtctlclient</code> tool implements the database schema
The <code>vtctlclient</code> tool can be used to apply the database schema
across all tablets in a keyspace. The following command creates
the table defined in the _create_test_table.sql_ file:
the table defined in the <code>create_test_table.sql</code> file:
``` sh
# Make sure to run this from the examples/kubernetes dir, so it finds the file.
vitess/examples/kubernetes$ kvtctl ApplySchema -sql "$(cat create_test_table.sql)" test_keyspace
```
<br>The SQL to create the table is shown below:
``` sh
```
CREATE TABLE test_table (
id BIGINT AUTO_INCREMENT,
msg VARCHAR(250),
@@ -407,10 +462,22 @@ CREATE TABLE test_table (
<br>You can run this command to confirm that the schema was created
properly on a given tablet, where <code>test-0000000100</code>
is a tablet ID as listed in step 4 or step 7:
is a tablet alias as shown by the <code>ListAllTablets</code> command:
``` sh
kvtctl GetSchema test-0000000100
### example output:
# {
# "DatabaseSchema": "CREATE DATABASE `{{.DatabaseName}}` /*!40100 DEFAULT CHARACTER SET utf8 */",
# "TableDefinitions": [
# {
# "Name": "test_table",
# "Schema": "CREATE TABLE `test_table` (\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\n `msg` varchar(250) DEFAULT NULL,\n PRIMARY KEY (`id`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8",
# "Columns": [
# "id",
# "msg"
# ],
# ...
```
1. **Start <code>vtgate</code>**
@@ -427,32 +494,50 @@ vitess/examples/kubernetes$ ./vtgate-up.sh
## Test your instance with a client app
The GuestBook app in the example is ported from the [Kubernetes GuestBook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook-go). The server-side code has been rewritten in Python to use Vitess as the storage engine. The client-side code (HTML/JavaScript) is essentially unchanged.
The GuestBook app in the example is ported from the
[Kubernetes GuestBook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook-go).
The server-side code has been rewritten in Python to use Vitess as the storage
engine. The client-side code (HTML/JavaScript) is essentially unchanged.
``` sh
vitess/examples/kubernetes$ ./guestbook-up.sh
```
As with the <code>vtctld</code> service, to access the GuestBook
app from outside Kubernetes, you need to open a port (3000) on
your firewall.
app from outside Kubernetes, we need to set the <code>type</code> field in the
service definition to something that generates an external service.
In this case, since this is a user-facing frontend, we use
<code>type: LoadBalancer</code>, which tells Kubernetes to create a public
[load balancer](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#type--loadbalancer)
using the API for whatever platform your Kubernetes cluster is in.
As before, you also need to allow access through your platform's firewall:
``` sh
# Open port 3000 in the firewall
$ gcloud compute firewall-rules create guestbook --allow tcp:3000
# For example, to open port 80 in the GCE firewall:
$ gcloud compute firewall-rules create guestbook --allow tcp:80
```
Then, get the external IP of the load balancer for the GuestBook service:
``` sh
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
guestbook us-central1 146.148.72.125 TCP us-central1/targetPools/guestbook
vtctld us-central1 104.154.64.12 TCP us-central1/targetPools/vtctld
$ kubectl get -o yaml service guestbook
### example output:
# apiVersion: v1beta3
# kind: Service
# ...
# status:
# loadBalancer:
# ingress:
# - ip: 3.4.5.6
```
If the status shows <code>loadBalancer: {}</code>, it may just need more time.
Once the pods are running, the GuestBook app should be accessible
from port 3000 on the external IP.
from the load balancer's external IP. In the example above, it would be at
<code>http://3.4.5.6</code>.
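As with the node address earlier, the ingress IP can be extracted from the service YAML with <code>awk</code>. A sketch over a heredoc mirroring the example output above; pipe the real <code>kubectl get -o yaml service guestbook</code> output in practice:

``` sh
# Extract the load balancer ingress IP from service YAML.
# The heredoc below is stand-in data matching the example above.
sample_service() {
  cat <<'EOF'
apiVersion: v1beta3
kind: Service
status:
  loadBalancer:
    ingress:
    - ip: 3.4.5.6
EOF
}
# Print the value of the first '- ip:' line and stop.
guestbook_ip=$(sample_service | awk '/- ip:/ {print $3; exit}')
echo "http://$guestbook_ip"
```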
You can see Vitess' replication capabilities by opening the app in
multiple browser windows. Each new entry is committed to the master
@@ -467,38 +552,44 @@ provides more detail about how the app server interacts with Vitess.
## Tear down and clean up
The following command tears down the Container Engine cluster. It is
necessary to stop the virtual machines running on the Cloud platform.
Before stopping the Container Engine cluster, you should tear down the Vitess
services. Kubernetes will then take care of cleaning up any entities it created
for those services, like external load balancers.
``` sh
vitess/examples/kubernetes $ ./guestbook-down.sh
vitess/examples/kubernetes $ ./vtgate-down.sh
vitess/examples/kubernetes $ ./vttablet-down.sh
vitess/examples/kubernetes $ ./vtctld-down.sh
vitess/examples/kubernetes $ ./etcd-down.sh
```
Then tear down the Container Engine cluster itself, which will stop the virtual
machines running on Compute Engine:
``` sh
$ gcloud alpha container clusters delete example
```
And these commands clean up other entities created for this example.
They are suggested to prevent conflicts that might occur if you
don't run them and then rerun this example in a different mode.
It's also a good idea to remove the firewall rules you created, unless you plan
to use them again soon:
``` sh
$ gcloud compute forwarding-rules delete k8s-example-default-vtctld
$ gcloud compute forwarding-rules delete k8s-example-default-guestbook
$ gcloud compute firewall-rules delete vtctld
$ gcloud compute firewall-rules delete guestbook
$ gcloud compute target-pools delete k8s-example-default-vtctld
$ gcloud compute target-pools delete k8s-example-default-guestbook
$ gcloud compute firewall-rules delete vtctld guestbook
```
## Troubleshooting
If a pod enters the <code>Running</code> state, but the server
doesn't respond as expected, use the <code>kubectl log</code>
doesn't respond as expected, use the <code>kubectl logs</code>
command to check the pod output:
``` sh
# show logs for container 'vttablet' within pod 'vttablet-100'
$ $KUBECTL log vttablet-100 vttablet
$ kubectl logs vttablet-100 vttablet
# show logs for container 'mysql' within pod 'vttablet-100'
$ $KUBECTL log vttablet-100 mysql
$ kubectl logs vttablet-100 mysql
```
Post the logs somewhere and send a link to the
[Vitess mailing list](https://groups.google.com/forum/#!forum/vitess)
to get more help.


@@ -3,292 +3,6 @@
This directory contains an example configuration for running Vitess on
[Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes/).
These instructions are written for running in
[Google Container Engine](https://cloud.google.com/container-engine/),
but they can be adapted to run on other
[platforms that Kubernetes supports](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides).
See the [Vitess on Kubernetes](http://vitess.io/getting-started/) guide for
instructions on using these files.
## Prerequisites
If you're running Kubernetes manually, instead of through Container Engine,
make sure to use at least
[v0.9.2](https://github.com/GoogleCloudPlatform/kubernetes/releases).
Container Engine will use the latest available release by default.
You'll need [Go 1.3+](http://golang.org/doc/install) in order to build the
`vtctlclient` tool used to issue commands to Vitess:
### Build and install vtctlclient
```
$ go get github.com/youtube/vitess/go/cmd/vtctlclient
```
### Set the path to kubectl
If you're running in Container Engine, set the `KUBECTL` environment variable
to point to the `kubectl` command provided by the Google Cloud SDK (if you've
already added gcloud to your PATH, you likely have kubectl):
```
$ export KUBECTL='kubectl'
```
If you're running Kubernetes manually, set the `KUBECTL` environment variable
to point to the location of `kubectl.sh`. For example:
```
$ export KUBECTL=$HOME/kubernetes/cluster/kubectl.sh
```
### Create a Container Engine cluster
Follow the steps to
[enable the Container Engine API](https://cloud.google.com/container-engine/docs/before-you-begin).
Set the [zone](https://cloud.google.com/compute/docs/zones#available) you want to use:
```
$ gcloud config set compute/zone us-central1-b
```
Then create a cluster:
```
$ gcloud alpha container clusters create example --machine-type n1-standard-1 --num-nodes 3
```
If prompted, install the alpha commands.
Update the configuration with the cluster name:
```
$ gcloud config set container/cluster example
```
## Start an etcd cluster for Vitess
Once you have a running Kubernetes deployment, make sure to set `KUBECTL`
as described above, and then run:
```
vitess/examples/kubernetes$ ./etcd-up.sh
```
This will create two clusters: one for the 'global' cell, and one for the
'test' cell.
You can check the status of the pods with `$KUBECTL get pods`.
Note that it may take a while for each minion to download the Docker images the
first time it needs them, during which time the pod status will be `Pending`.
In general, each `-up.sh` script in this example has a corresponding `-down.sh`
in case you want to stop certain pieces without bringing down the whole cluster.
For example, to tear down the etcd deployment:
```
vitess/examples/kubernetes$ ./etcd-down.sh
```
## Start vtctld
The vtctld server provides a web interface to inspect the state of the system,
and also accepts RPC commands from `vtctlclient` to modify the system.
```
vitess/examples/kubernetes$ ./vtctld-up.sh
```
To let you access vtctld from outside Kubernetes, the vtctld service is created
with the createExternalLoadBalancer option. On supported platforms, Kubernetes
will then automatically create an external IP that load balances onto the pods
comprising the service. Note that you also need to open port 15000 in your
firewall.
```
# open port 15000
$ gcloud compute firewall-rules create vtctld --allow tcp:15000
# get the address of the load balancer for vtctld
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
aa6f47950f5a011e4b8f242010af0fe1 us-central1 12.34.56.78 TCP us-central1/targetPools/aa6f47950f5a011e4b8f242010af0fe1
```
Note that Kubernetes will generate the name of the forwarding-rule and
target-pool based on a hash of source/target IP addresses. If there are
multiple rules (perhaps due to running other services on GKE), use the following
to determine the correct target pool:
```
$ util/get_forwarded_pool.sh example us-central1 15000
aa6f47950f5a011e4b8f242010af0fe1
```
In the example above, you would then access vtctld at
http://12.34.56.78:15000/ once the pod has entered the `Running` state.
## Control vtctld with vtctlclient
If you've opened port 15000 on your firewall, you can run `vtctlclient`
locally to issue commands. Depending on your actual vtctld IP,
the `vtctlclient` command will look different. So from here on, we'll assume
you've made an alias called `kvtctl` with your particular parameters, such as:
```
$ alias kvtctl='vtctlclient -server 12.34.56.78:15000'
# check the connection to vtctld, and list available commands
$ kvtctl
```
## Start vttablets
We launch vttablet in a
[pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md)
along with mysqld. The following script will instantiate `vttablet-pod-template.yaml`
for three replicas.
```
vitess/examples/kubernetes$ ./vttablet-up.sh
```
Wait for the pods to enter Running state (`$KUBECTL get pods`).
Again, this may take a while if a pod was scheduled on a minion that needs to
download the Vitess Docker image. Eventually you should see the tablets show up
in the *DB topology* summary page of vtctld (`http://12.34.56.78:15000/dbtopo`).
By bringing up tablets into a previously empty keyspace, we effectively just
created a new shard. To initialize the keyspace for the new shard, we need to
perform a keyspace rebuild:
```
$ kvtctl RebuildKeyspaceGraph test_keyspace
```
Note that most vtctlclient commands produce no output on success.
### Status pages for vttablets
Each vttablet serves a set of HTML status pages on its primary port.
The vtctld interface provides links on each tablet entry marked *[status]*,
but these links are to internal per-pod IPs that can only be accessed from
within Kubernetes. As a workaround, you can proxy over an SSH connection to
a Kubernetes minion, or launch a proxy as a Kubernetes service.
In the future, we plan to accomplish the proxying via the Kubernetes API
server, without the need for additional setup.
## Elect a master vttablet
The vttablets have all been started as replicas, but there is no master yet.
When we pick a master vttablet, Vitess will also take care of connecting the
other replicas' mysqld instances to start replicating from the master mysqld.
Since this is the first time we're starting up the shards, there is no existing
replication happening, and all tablets are of the same base replica or spare
type. So we use the -force flag on InitShardMaster to allow the transition
of the first tablet from its type to master.
```
$ kvtctl InitShardMaster -force test_keyspace/0 test-0000000100
```
Once this is done, you should see one master and two replicas in vtctld's
web interface. You can also check this on the command line with vtctlclient:
```
$ kvtctl ListAllTablets test
test-0000000100 test_keyspace 0 master 10.244.4.6:15002 10.244.4.6:3306 []
test-0000000101 test_keyspace 0 replica 10.244.1.8:15002 10.244.1.8:3306 []
test-0000000102 test_keyspace 0 replica 10.244.1.9:15002 10.244.1.9:3306 []
```
## Create a table
The `vtctlclient` tool can manage schema across all tablets in a keyspace.
To create the table defined in `create_test_table.sql`:
```
# run this from the example dir so it finds the create_test_table.sql file
vitess/examples/kubernetes$ kvtctl ApplySchema -sql "$(cat create_test_table.sql)" test_keyspace
```
## Start a vtgate pool
Clients send queries to Vitess through vtgate, which routes them to the
correct vttablet(s) behind the scenes. In Kubernetes, we define a vtgate
service that distributes connections to a pool of vtgate pods curated by a
[replication controller](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md).
```
vitess/examples/kubernetes$ ./vtgate-up.sh
```
## Start the sample GuestBook app server
The GuestBook app in this example is ported from the
[Kubernetes GuestBook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook-go).
The server-side code has been rewritten in Python to use Vitess as the storage
engine. The client-side code (HTML/JavaScript) is essentially unchanged.
```
vitess/examples/kubernetes$ ./guestbook-up.sh
# open port 3000 in the firewall
$ gcloud compute firewall-rules create guestbook --allow tcp:3000
# find the external IP of the load balancer for the guestbook service
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
guestbook us-central1 1.2.3.4 TCP us-central1/targetPools/guestbook
vtctld us-central1 12.34.56.78 TCP us-central1/targetPools/vtctld
```
Once the pods are running, the GuestBook should be accessible from port 3000 on
the external IP, for example: http://1.2.3.4:3000/
Try opening multiple browser windows of the app, and adding an entry on one
side. The JavaScript on each page polls the app server once a second, so the
other windows should update automatically. Since the app serves read-only
requests by querying Vitess in 'replica' mode, this confirms that replication
is working.
See the
[GuestBook source](https://github.com/youtube/vitess/tree/master/examples/kubernetes/guestbook)
for more details on how the app server interacts with Vitess.
## Tear down and clean up
Tear down the Container Engine cluster:
```
$ gcloud alpha container clusters delete example
```
Clean up other entities created for this example:
```
$ gcloud compute forwarding-rules delete k8s-example-default-vtctld
$ gcloud compute forwarding-rules delete k8s-example-default-guestbook
$ gcloud compute firewall-rules delete vtctld
$ gcloud compute firewall-rules delete guestbook
$ gcloud compute target-pools delete k8s-example-default-vtctld
$ gcloud compute target-pools delete k8s-example-default-guestbook
```
## Troubleshooting
If a pod enters the `Running` state, but the server doesn't respond as expected,
try checking the pod output with the `kubectl log` command:
```
# show logs for container 'vttablet' within pod 'vttablet-100'
$ $KUBECTL log vttablet-100 vttablet
# show logs for container 'mysql' within pod 'vttablet-100'
$ $KUBECTL log vttablet-100 mysql
```
You can post the logs somewhere and send a link to the
[Vitess mailing list](https://groups.google.com/forum/#!forum/vitess)
to get more help.


@@ -1,6 +1,7 @@
# This is an include file used by the other scripts in this directory.
if [ -z "$KUBECTL" ]; then
echo 'Please set KUBECTL env var to point to kubectl or kubectl.sh'
exit 1
fi
# Most clusters will just be accessed with 'kubectl' on $PATH.
# However, some might require a different command. For example, GKE required
# KUBECTL='gcloud alpha container kubectl' for a while. Now that most of our
# use cases just need KUBECTL=kubectl, we'll make that the default.
KUBECTL=${KUBECTL:-kubectl}


@@ -1,44 +1,44 @@
apiVersion: v1beta1
apiVersion: v1beta3
kind: ReplicationController
id: etcd-{{cell}}
desiredState:
replicas: 3
replicaSelector:
metadata:
name: etcd-{{cell}}
labels:
name: etcd
cell: {{cell}}
podTemplate:
desiredState:
manifest:
version: v1beta1
id: etcd-{{cell}}
containers:
- name: etcd
image: vitess/etcd:v0.4.6-lite
command:
- bash
- "-c"
- >-
ipaddr=$(hostname -i)
spec:
replicas: 3
selector:
name: etcd
cell: {{cell}}
template:
metadata:
labels:
name: etcd
cell: {{cell}}
spec:
containers:
- name: etcd
image: vitess/etcd:v0.4.6-lite
command:
- bash
- "-c"
- >-
ipaddr=$(hostname -i)
global_etcd=$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
global_etcd=$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
cell="{{cell}}" &&
local_etcd_host_var="ETCD_${cell^^}_SERVICE_HOST" &&
local_etcd_port_var="ETCD_${cell^^}_SERVICE_PORT" &&
local_etcd=${!local_etcd_host_var}:${!local_etcd_port_var}
cell="{{cell}}" &&
local_etcd_host_var="ETCD_${cell^^}_SERVICE_HOST" &&
local_etcd_port_var="ETCD_${cell^^}_SERVICE_PORT" &&
local_etcd=${!local_etcd_host_var}:${!local_etcd_port_var}
if [ "{{cell}}" != "global" ]; then
until curl -L http://$global_etcd/v2/keys/vt/cells/{{cell}}
-XPUT -d value=http://$local_etcd; do
echo "[$(date)] waiting for global etcd to register cell '{{cell}}'";
sleep 1;
done;
fi
if [ "{{cell}}" != "global" ]; then
until curl -L http://$global_etcd/v2/keys/vt/cells/{{cell}}
-XPUT -d value=http://$local_etcd; do
echo "[$(date)] waiting for global etcd to register cell '{{cell}}'";
sleep 1;
done;
fi
etcd -name $HOSTNAME -peer-addr $ipaddr:7001 -addr $ipaddr:4001 -discovery {{discovery}}
etcd -name $HOSTNAME -peer-addr $ipaddr:7001 -addr $ipaddr:4001 -discovery {{discovery}}
labels:
name: etcd
cell: {{cell}}
labels:
name: etcd
cell: {{cell}}
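A note on the {{cell}}-style placeholders above: the file is a template, not valid YAML as-is. The example's helper scripts substitute the placeholders before handing the result to kubectl. A minimal, self-contained sketch of that substitution step — the temp file name and cell value here are made up for illustration:

```shell
#!/bin/bash
cell=test

# Stand-in for a *-template.yaml file; only the placeholder line matters here.
cat > /tmp/etcd-demo-template.yaml <<'EOF'
metadata:
  name: etcd-{{cell}}
EOF

# Fill in the placeholder the way the helper scripts do with sed. The real
# scripts would pipe this output on to "$KUBECTL create -f -".
sed -e "s/{{cell}}/${cell}/g" /tmp/etcd-demo-template.yaml
```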


@@ -1,11 +1,14 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Service
-id: etcd-{{cell}}
-port: 4001
-containerPort: 4001
-selector:
-  name: etcd
-  cell: {{cell}}
-labels:
-  name: etcd
-  cell: {{cell}}
+metadata:
+  name: etcd-{{cell}}
+  labels:
+    name: etcd
+    cell: {{cell}}
+spec:
+  ports:
+    - port: 4001
+  selector:
+    name: etcd
+    cell: {{cell}}
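In both API versions the service finds its pods purely by label matching, so the selector here must stay in sync with the labels in the etcd controller's pod template. A throwaway grep-based consistency check — the file paths and hard-coded cell are illustrative, not part of the example's scripts:

```shell
#!/bin/bash
# Minimal stand-ins for the rendered service and controller files.
cat > /tmp/etcd-svc.yaml <<'EOF'
spec:
  selector:
    name: etcd
    cell: test
EOF
cat > /tmp/etcd-rc.yaml <<'EOF'
  template:
    metadata:
      labels:
        name: etcd
        cell: test
EOF

# Pull out the name/cell pairs from each file and compare them.
svc_sel=$(sed -n '/selector:/,$p' /tmp/etcd-svc.yaml | grep -E 'name:|cell:' | tr -d ' ')
pod_lbl=$(sed -n '/labels:/,$p' /tmp/etcd-rc.yaml | grep -E 'name:|cell:' | tr -d ' ')
[ "$svc_sel" = "$pod_lbl" ] && echo "selector matches pod labels"
```

If the two sets drift apart, the service silently ends up with no endpoints, which is much harder to debug after the fact.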


@@ -1,21 +1,20 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: ReplicationController
-id: guestbook
-desiredState:
-  replicas: 3
-  replicaSelector: {name: guestbook}
-  podTemplate:
-    desiredState:
-      manifest:
-        version: v1beta1
-        id: guestbook
-        containers:
-          - name: guestbook
-            image: vitess/guestbook
-            ports:
-              - name: http-server
-                containerPort: 8080
-    labels:
-      name: guestbook
-labels:
-  name: guestbook
+metadata:
+  name: guestbook
+  labels:
+    name: guestbook
+spec:
+  replicas: 3
+  selector: {name: guestbook}
+  template:
+    metadata:
+      labels: {name: guestbook}
+    spec:
+      containers:
+        - name: guestbook
+          image: vitess/guestbook
+          ports:
+            - name: http-server
+              containerPort: 8080


@@ -1,8 +1,11 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Service
-id: guestbook
-port: 3000
-containerPort: http-server
-selector:
-  name: guestbook
-createExternalLoadBalancer: true
+metadata:
+  name: guestbook
+spec:
+  ports:
+    - port: 80
+      targetPort: http-server
+  selector: {name: guestbook}
+  createExternalLoadBalancer: true
+  type: LoadBalancer


@@ -1,29 +1,28 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Pod
-id: vtctld
-desiredState:
-  manifest:
-    version: v1beta1
-    id: vtctld
-    containers:
-      - name: vtctld
-        image: vitess/lite
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - sh
-          - "-c"
-          - >-
-            mkdir -p $VTDATAROOT/tmp &&
-            chown -R vitess /vt &&
-            su -p -c "/vt/bin/vtctld -debug -templates $VTTOP/go/cmd/vtctld/templates -log_dir $VTDATAROOT/tmp -alsologtostderr -port 15000 -topo_implementation etcd -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT" vitess
-    volumes:
-      - name: syslog
-        source: {hostDir: {path: /dev/log}}
-      - name: vtdataroot
-        source: {emptyDir: {}}
-labels:
-  name: vtctld
+metadata:
+  name: vtctld
+  labels:
+    name: vtctld
+spec:
+  containers:
+    - name: vtctld
+      image: vitess/lite
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - sh
+        - "-c"
+        - >-
+          mkdir -p $VTDATAROOT/tmp &&
+          chown -R vitess /vt &&
+          su -p -c "/vt/bin/vtctld -debug -templates $VTTOP/go/cmd/vtctld/templates -log_dir $VTDATAROOT/tmp -alsologtostderr -port 15000 -topo_implementation etcd -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT" vitess
+  volumes:
+    - name: syslog
+      hostPath: {path: /dev/log}
+    - name: vtdataroot
+      emptyDir: {}


@@ -1,10 +1,14 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Service
-id: vtctld
-port: 15000
-containerPort: 15000
-createExternalLoadBalancer: true
-selector:
-  name: vtctld
-labels:
-  name: vtctld
+metadata:
+  name: vtctld
+  labels:
+    name: vtctld
+spec:
+  ports:
+    - port: 15000
+      targetPort: 15000
+      nodePort: 30000
+  selector: {name: vtctld}
+  type: NodePort
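With type: NodePort, the vtctld web UI is exposed on the same fixed port (the nodePort, 30000) of every node rather than behind a cloud load balancer, so reaching it only requires knowing some node's address. The sketch below just assembles the resulting URL; the IP is a documentation placeholder, not an address this page provides:

```shell
#!/bin/bash
node_ip=203.0.113.10   # placeholder; substitute a real node address
vtctld_url="http://${node_ip}:30000"
echo "$vtctld_url"
```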


@@ -1,41 +1,39 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: ReplicationController
-id: vtgate
-desiredState:
-  replicas: {{replicas}}
-  replicaSelector: {name: vtgate}
-  podTemplate:
-    desiredState:
-      manifest:
-        version: v1beta1
-        id: vtgate
-        containers:
-          - name: vtgate
-            image: vitess/lite
-            volumeMounts:
-              - name: syslog
-                mountPath: /dev/log
-              - name: vtdataroot
-                mountPath: /vt/vtdataroot
-            command:
-              - sh
-              - "-c"
-              - >-
-                mkdir -p $VTDATAROOT/tmp &&
-                chown -R vitess /vt &&
-                su -p -c "/vt/bin/vtgate
-                -topo_implementation etcd
-                -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
-                -log_dir $VTDATAROOT/tmp
-                -alsologtostderr
-                -port 15001
-                -cell test" vitess
-        volumes:
-          - name: syslog
-            source: {hostDir: {path: /dev/log}}
-          - name: vtdataroot
-            source: {emptyDir: {}}
-    labels:
-      name: vtgate
-labels:
-  name: vtgate
+metadata:
+  name: vtgate
+  labels: {name: vtgate}
+spec:
+  replicas: {{replicas}}
+  selector: {name: vtgate}
+  template:
+    metadata:
+      labels: {name: vtgate}
+    spec:
+      containers:
+        - name: vtgate
+          image: vitess/lite
+          volumeMounts:
+            - name: syslog
+              mountPath: /dev/log
+            - name: vtdataroot
+              mountPath: /vt/vtdataroot
+          command:
+            - sh
+            - "-c"
+            - >-
+              mkdir -p $VTDATAROOT/tmp &&
+              chown -R vitess /vt &&
+              su -p -c "/vt/bin/vtgate
+              -topo_implementation etcd
+              -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
+              -log_dir $VTDATAROOT/tmp
+              -alsologtostderr
+              -port 15001
+              -cell test" vitess
+      volumes:
+        - name: syslog
+          hostPath: {path: /dev/log}
+        - name: vtdataroot
+          emptyDir: {}


@@ -1,38 +1,36 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Pod
-id: vtgate-{{uid}}
-desiredState:
-  manifest:
-    version: v1beta1
-    id: vt
-    containers:
-      - name: vtgate
-        image: vitess/root
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - sh
-          - "-c"
-          - >-
-            mkdir -p $VTDATAROOT/tmp &&
-            chown -R vitess /vt &&
-            su -p -c "/vt/bin/vtgate
-            -topo_implementation etcd
-            -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
-            -log_dir $VTDATAROOT/tmp
-            -alsologtostderr
-            -port 15001
-            -cell test" vitess
-        env:
-          - name: GOMAXPROCS
-            value: "16"
-    volumes:
-      - name: syslog
-        source: {hostDir: {path: /dev/log}}
-      - name: vtdataroot
-        source: {{vtdataroot_volume}}
-labels:
-  name: vtgate
+metadata:
+  name: vtgate-{{uid}}
+  labels: {name: vtgate}
+spec:
+  containers:
+    - name: vtgate
+      image: vitess/root
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - sh
+        - "-c"
+        - >-
+          mkdir -p $VTDATAROOT/tmp &&
+          chown -R vitess /vt &&
+          su -p -c "/vt/bin/vtgate
+          -topo_implementation etcd
+          -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
+          -log_dir $VTDATAROOT/tmp
+          -alsologtostderr
+          -port 15001
+          -cell test" vitess
+      env:
+        - name: GOMAXPROCS
+          value: "16"
+  volumes:
+    - name: syslog
+      hostPath: {path: /dev/log}
+    - name: vtdataroot
+      {{vtdataroot_volume}}


@@ -1,10 +1,11 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Service
-id: vtgate
-port: 15001
-containerPort: 15001
-selector:
-  name: vtgate
-createExternalLoadBalancer: true
-labels:
-  name: vtgate
+metadata:
+  name: vtgate
+  labels: {name: vtgate}
+spec:
+  ports:
+    - port: 15001
+  selector: {name: vtgate}
+  createExternalLoadBalancer: true
+  type: LoadBalancer


@@ -1,113 +1,112 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Pod
-id: vttablet-{{uid}}
-desiredState:
-  manifest:
-    version: v1beta1
-    id: vttablet-{{uid}}
-    containers:
-      - name: vttablet
-        image: vitess/root
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - bash
-          - "-c"
-          - >-
-            set -e
-
-            mysql_socket="$VTDATAROOT/{{tablet_subdir}}/mysql.sock"
-
-            mkdir -p $VTDATAROOT/tmp
-
-            chown -R vitess /vt
-
-            while [ ! -e $mysql_socket ]; do
-              echo "[$(date)] waiting for $mysql_socket" ;
-              sleep 1 ;
-            done
-
-            su -p -s /bin/bash -c "mysql -u vt_dba -S $mysql_socket
-            -e 'CREATE DATABASE IF NOT EXISTS vt_{{keyspace}}'" vitess
-
-            su -p -s /bin/bash -c "/vt/bin/vttablet
-            -topo_implementation etcd
-            -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
-            -log_dir $VTDATAROOT/tmp
-            -alsologtostderr
-            -port {{port}}
-            -tablet-path {{alias}}
-            -tablet_hostname $(hostname -i)
-            -init_keyspace {{keyspace}}
-            -init_shard {{shard}}
-            -target_tablet_type replica
-            -mysqlctl_socket $VTDATAROOT/mysqlctl.sock
-            -db-config-app-uname vt_app
-            -db-config-app-dbname vt_{{keyspace}}
-            -db-config-app-charset utf8
-            -db-config-dba-uname vt_dba
-            -db-config-dba-dbname vt_{{keyspace}}
-            -db-config-dba-charset utf8
-            -db-config-repl-uname vt_repl
-            -db-config-repl-dbname vt_{{keyspace}}
-            -db-config-repl-charset utf8
-            -db-config-filtered-uname vt_filtered
-            -db-config-filtered-dbname vt_{{keyspace}}
-            -db-config-filtered-charset utf8
-            -queryserver-config-transaction-cap 300
-            -queryserver-config-schema-reload-time 1
-            -queryserver-config-pool-size 100
-            -enable-rowcache
-            -rowcache-bin /usr/bin/memcached
-            -rowcache-socket $VTDATAROOT/{{tablet_subdir}}/memcache.sock" vitess
-        env:
-          - name: GOMAXPROCS
-            value: "16"
-      - name: mysql
-        image: vitess/root
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - sh
-          - "-c"
-          - >-
-            mkdir -p $VTDATAROOT/tmp &&
-            chown -R vitess /vt
-
-            su -p -c "/vt/bin/mysqlctld
-            -log_dir $VTDATAROOT/tmp
-            -alsologtostderr
-            -tablet_uid {{uid}}
-            -socket_file $VTDATAROOT/mysqlctl.sock
-            -db-config-app-uname vt_app
-            -db-config-app-dbname vt_{{keyspace}}
-            -db-config-app-charset utf8
-            -db-config-dba-uname vt_dba
-            -db-config-dba-dbname vt_{{keyspace}}
-            -db-config-dba-charset utf8
-            -db-config-repl-uname vt_repl
-            -db-config-repl-dbname vt_{{keyspace}}
-            -db-config-repl-charset utf8
-            -db-config-filtered-uname vt_filtered
-            -db-config-filtered-dbname vt_{{keyspace}}
-            -db-config-filtered-charset utf8
-            -bootstrap_archive mysql-db-dir_10.0.13-MariaDB.tbz" vitess
-        env:
-          - name: EXTRA_MY_CNF
-            value: /vt/config/mycnf/benchmark.cnf:/vt/config/mycnf/master_mariadb.cnf
-    volumes:
-      - name: syslog
-        source: {hostDir: {path: /dev/log}}
-      - name: vtdataroot
-        source: {{vtdataroot_volume}}
-labels:
-  name: vttablet
-  keyspace: "{{keyspace}}"
-  shard: "{{shard_label}}"
-  tablet: "{{alias}}"
+metadata:
+  name: vttablet-{{uid}}
+  labels:
+    name: vttablet
+    keyspace: "{{keyspace}}"
+    shard: "{{shard_label}}"
+    tablet: "{{alias}}"
+spec:
+  containers:
+    - name: vttablet
+      image: vitess/root
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - bash
+        - "-c"
+        - >-
+          set -e
+
+          mysql_socket="$VTDATAROOT/{{tablet_subdir}}/mysql.sock"
+
+          mkdir -p $VTDATAROOT/tmp
+
+          chown -R vitess /vt
+
+          while [ ! -e $mysql_socket ]; do
+            echo "[$(date)] waiting for $mysql_socket" ;
+            sleep 1 ;
+          done
+
+          su -p -s /bin/bash -c "mysql -u vt_dba -S $mysql_socket
+          -e 'CREATE DATABASE IF NOT EXISTS vt_{{keyspace}}'" vitess
+
+          su -p -s /bin/bash -c "/vt/bin/vttablet
+          -topo_implementation etcd
+          -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
+          -log_dir $VTDATAROOT/tmp
+          -alsologtostderr
+          -port {{port}}
+          -tablet-path {{alias}}
+          -tablet_hostname $(hostname -i)
+          -init_keyspace {{keyspace}}
+          -init_shard {{shard}}
+          -target_tablet_type replica
+          -mysqlctl_socket $VTDATAROOT/mysqlctl.sock
+          -db-config-app-uname vt_app
+          -db-config-app-dbname vt_{{keyspace}}
+          -db-config-app-charset utf8
+          -db-config-dba-uname vt_dba
+          -db-config-dba-dbname vt_{{keyspace}}
+          -db-config-dba-charset utf8
+          -db-config-repl-uname vt_repl
+          -db-config-repl-dbname vt_{{keyspace}}
+          -db-config-repl-charset utf8
+          -db-config-filtered-uname vt_filtered
+          -db-config-filtered-dbname vt_{{keyspace}}
+          -db-config-filtered-charset utf8
+          -queryserver-config-transaction-cap 300
+          -queryserver-config-schema-reload-time 1
+          -queryserver-config-pool-size 100
+          -enable-rowcache
+          -rowcache-bin /usr/bin/memcached
+          -rowcache-socket $VTDATAROOT/{{tablet_subdir}}/memcache.sock" vitess
+      env:
+        - name: GOMAXPROCS
+          value: "16"
+    - name: mysql
+      image: vitess/root
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - sh
+        - "-c"
+        - >-
+          mkdir -p $VTDATAROOT/tmp &&
+          chown -R vitess /vt
+
+          su -p -c "/vt/bin/mysqlctld
+          -log_dir $VTDATAROOT/tmp
+          -alsologtostderr
+          -tablet_uid {{uid}}
+          -socket_file $VTDATAROOT/mysqlctl.sock
+          -db-config-app-uname vt_app
+          -db-config-app-dbname vt_{{keyspace}}
+          -db-config-app-charset utf8
+          -db-config-dba-uname vt_dba
+          -db-config-dba-dbname vt_{{keyspace}}
+          -db-config-dba-charset utf8
+          -db-config-repl-uname vt_repl
+          -db-config-repl-dbname vt_{{keyspace}}
+          -db-config-repl-charset utf8
+          -db-config-filtered-uname vt_filtered
+          -db-config-filtered-dbname vt_{{keyspace}}
+          -db-config-filtered-charset utf8
+          -bootstrap_archive mysql-db-dir_10.0.13-MariaDB.tbz" vitess
+      env:
+        - name: EXTRA_MY_CNF
+          value: /vt/config/mycnf/benchmark.cnf:/vt/config/mycnf/master_mariadb.cnf
+  volumes:
+    - name: syslog
+      hostPath: {path: /dev/log}
+    - name: vtdataroot
+      {{vtdataroot_volume}}


@@ -1,107 +1,106 @@
-apiVersion: v1beta1
+apiVersion: v1beta3
 kind: Pod
-id: vttablet-{{uid}}
-desiredState:
-  manifest:
-    version: v1beta1
-    id: vttablet-{{uid}}
-    containers:
-      - name: vttablet
-        image: vitess/lite
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - bash
-          - "-c"
-          - >-
-            set -e
-
-            mysql_socket="$VTDATAROOT/{{tablet_subdir}}/mysql.sock"
-
-            mkdir -p $VTDATAROOT/tmp
-
-            chown -R vitess /vt
-
-            while [ ! -e $mysql_socket ]; do
-              echo "[$(date)] waiting for $mysql_socket" ;
-              sleep 1 ;
-            done
-
-            su -p -s /bin/bash -c "mysql -u vt_dba -S $mysql_socket
-            -e 'CREATE DATABASE IF NOT EXISTS vt_{{keyspace}}'" vitess
-
-            su -p -s /bin/bash -c "/vt/bin/vttablet
-            -topo_implementation etcd
-            -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
-            -log_dir $VTDATAROOT/tmp
-            -alsologtostderr
-            -port {{port}}
-            -tablet-path {{alias}}
-            -tablet_hostname $(hostname -i)
-            -init_keyspace {{keyspace}}
-            -init_shard {{shard}}
-            -target_tablet_type replica
-            -mysqlctl_socket $VTDATAROOT/mysqlctl.sock
-            -db-config-app-uname vt_app
-            -db-config-app-dbname vt_{{keyspace}}
-            -db-config-app-charset utf8
-            -db-config-dba-uname vt_dba
-            -db-config-dba-dbname vt_{{keyspace}}
-            -db-config-dba-charset utf8
-            -db-config-repl-uname vt_repl
-            -db-config-repl-dbname vt_{{keyspace}}
-            -db-config-repl-charset utf8
-            -db-config-filtered-uname vt_filtered
-            -db-config-filtered-dbname vt_{{keyspace}}
-            -db-config-filtered-charset utf8
-            -enable-rowcache
-            -rowcache-bin /usr/bin/memcached
-            -rowcache-socket $VTDATAROOT/{{tablet_subdir}}/memcache.sock" vitess
-      - name: mysql
-        image: vitess/lite
-        volumeMounts:
-          - name: syslog
-            mountPath: /dev/log
-          - name: vtdataroot
-            mountPath: /vt/vtdataroot
-        command:
-          - sh
-          - "-c"
-          - >-
-            mkdir -p $VTDATAROOT/tmp &&
-            chown -R vitess /vt
-
-            su -p -c "/vt/bin/mysqlctld
-            -log_dir $VTDATAROOT/tmp
-            -alsologtostderr
-            -tablet_uid {{uid}}
-            -socket_file $VTDATAROOT/mysqlctl.sock
-            -db-config-app-uname vt_app
-            -db-config-app-dbname vt_{{keyspace}}
-            -db-config-app-charset utf8
-            -db-config-dba-uname vt_dba
-            -db-config-dba-dbname vt_{{keyspace}}
-            -db-config-dba-charset utf8
-            -db-config-repl-uname vt_repl
-            -db-config-repl-dbname vt_{{keyspace}}
-            -db-config-repl-charset utf8
-            -db-config-filtered-uname vt_filtered
-            -db-config-filtered-dbname vt_{{keyspace}}
-            -db-config-filtered-charset utf8
-            -bootstrap_archive mysql-db-dir_10.0.13-MariaDB.tbz" vitess
-        env:
-          - name: EXTRA_MY_CNF
-            value: /vt/config/mycnf/master_mariadb.cnf
-    volumes:
-      - name: syslog
-        source: {hostDir: {path: /dev/log}}
-      - name: vtdataroot
-        source: {{vtdataroot_volume}}
-labels:
-  name: vttablet
-  keyspace: "{{keyspace}}"
-  shard: "{{shard_label}}"
-  tablet: "{{alias}}"
+metadata:
+  name: vttablet-{{uid}}
+  labels:
+    name: vttablet
+    keyspace: "{{keyspace}}"
+    shard: "{{shard_label}}"
+    tablet: "{{alias}}"
+spec:
+  containers:
+    - name: vttablet
+      image: vitess/lite
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - bash
+        - "-c"
+        - >-
+          set -e
+
+          mysql_socket="$VTDATAROOT/{{tablet_subdir}}/mysql.sock"
+
+          mkdir -p $VTDATAROOT/tmp
+
+          chown -R vitess /vt
+
+          while [ ! -e $mysql_socket ]; do
+            echo "[$(date)] waiting for $mysql_socket" ;
+            sleep 1 ;
+          done
+
+          su -p -s /bin/bash -c "mysql -u vt_dba -S $mysql_socket
+          -e 'CREATE DATABASE IF NOT EXISTS vt_{{keyspace}}'" vitess
+
+          su -p -s /bin/bash -c "/vt/bin/vttablet
+          -topo_implementation etcd
+          -etcd_global_addrs http://$ETCD_GLOBAL_SERVICE_HOST:$ETCD_GLOBAL_SERVICE_PORT
+          -log_dir $VTDATAROOT/tmp
+          -alsologtostderr
+          -port {{port}}
+          -tablet-path {{alias}}
+          -tablet_hostname $(hostname -i)
+          -init_keyspace {{keyspace}}
+          -init_shard {{shard}}
+          -target_tablet_type replica
+          -mysqlctl_socket $VTDATAROOT/mysqlctl.sock
+          -db-config-app-uname vt_app
+          -db-config-app-dbname vt_{{keyspace}}
+          -db-config-app-charset utf8
+          -db-config-dba-uname vt_dba
+          -db-config-dba-dbname vt_{{keyspace}}
+          -db-config-dba-charset utf8
+          -db-config-repl-uname vt_repl
+          -db-config-repl-dbname vt_{{keyspace}}
+          -db-config-repl-charset utf8
+          -db-config-filtered-uname vt_filtered
+          -db-config-filtered-dbname vt_{{keyspace}}
+          -db-config-filtered-charset utf8
+          -enable-rowcache
+          -rowcache-bin /usr/bin/memcached
+          -rowcache-socket $VTDATAROOT/{{tablet_subdir}}/memcache.sock" vitess
+    - name: mysql
+      image: vitess/lite
+      volumeMounts:
+        - name: syslog
+          mountPath: /dev/log
+        - name: vtdataroot
+          mountPath: /vt/vtdataroot
+      command:
+        - sh
+        - "-c"
+        - >-
+          mkdir -p $VTDATAROOT/tmp &&
+          chown -R vitess /vt
+
+          su -p -c "/vt/bin/mysqlctld
+          -log_dir $VTDATAROOT/tmp
+          -alsologtostderr
+          -tablet_uid {{uid}}
+          -socket_file $VTDATAROOT/mysqlctl.sock
+          -db-config-app-uname vt_app
+          -db-config-app-dbname vt_{{keyspace}}
+          -db-config-app-charset utf8
+          -db-config-dba-uname vt_dba
+          -db-config-dba-dbname vt_{{keyspace}}
+          -db-config-dba-charset utf8
+          -db-config-repl-uname vt_repl
+          -db-config-repl-dbname vt_{{keyspace}}
+          -db-config-repl-charset utf8
+          -db-config-filtered-uname vt_filtered
+          -db-config-filtered-dbname vt_{{keyspace}}
+          -db-config-filtered-charset utf8
+          -bootstrap_archive mysql-db-dir_10.0.13-MariaDB.tbz" vitess
+      env:
+        - name: EXTRA_MY_CNF
+          value: /vt/config/mycnf/master_mariadb.cnf
+  volumes:
+    - name: syslog
+      hostPath: {path: /dev/log}
+    - name: vtdataroot
+      {{vtdataroot_volume}}


@@ -18,9 +18,9 @@ FORCE_NODE=${FORCE_NODE:-false}
 VTTABLET_TEMPLATE=${VTTABLET_TEMPLATE:-'vttablet-pod-template.yaml'}
 VTDATAROOT_VOLUME=${VTDATAROOT_VOLUME:-''}
 
-vtdataroot_volume='{emptyDir: {}}'
+vtdataroot_volume='emptyDir: {}'
 if [ -n "$VTDATAROOT_VOLUME" ]; then
-  vtdataroot_volume="{hostDir: {path: ${VTDATAROOT_VOLUME}}}"
+  vtdataroot_volume="hostDir: {path: ${VTDATAROOT_VOLUME}}"
 fi
 
 index=1