Update Getting Started Local guide.

This commit is contained in:
Anthony Yeh 2015-10-29 16:48:07 -07:00
Parent 3f31b25530
Commit d4de115720
3 changed files with 227 additions and 347 deletions

View file

@ -6,62 +6,93 @@ If you run into issues or have questions, please post on our
## Docker Build
To run Vitess in Docker, you can either use our pre-built images on
[Docker Hub](https://hub.docker.com/u/vitess/), or build them yourself.
### Docker Hub Images
* The [vitess/base](https://hub.docker.com/r/vitess/base/) image contains a full
development environment, capable of building Vitess and running integration tests.
* The [vitess/lite](https://hub.docker.com/r/vitess/lite/) image contains only
the compiled Vitess binaries, excluding ZooKeeper. It can run Vitess, but
lacks the environment needed to build Vitess or run tests. It's primarily used
for the [Vitess on Kubernetes](http://vitess.io/getting-started/) guide.
For example, you can directly run `vitess/base`, and Docker will download the
image for you:
``` sh
$ sudo docker run -ti vitess/base bash
vitess@32f187ef9351:/vt/src/github.com/youtube/vitess$ make build
```
Now you can proceed to [start a Vitess cluster](#start-a-vitess-cluster) inside
the Docker container you just started. Note that if you want to access the
servers from outside the container, you'll need to expose the ports as described
in the [Docker user guide](https://docs.docker.com/userguide/).
For local testing, you can also access the servers on the local IP address
created for the container by Docker:
``` sh
$ docker inspect 32f187ef9351 | grep IPAddress
### example output:
# "IPAddress": "172.17.3.1",
```
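If you only need the bare address, the line shown above can be picked apart with standard tools. A minimal sketch, run here against a captured sample line (the address value is illustrative) rather than a live container:

``` sh
# Sample line as printed by `docker inspect` (illustrative value).
sample='        "IPAddress": "172.17.3.1",'
# Strip everything except the quoted address.
ip=$(printf '%s\n' "$sample" | sed -n 's/.*"IPAddress": "\([0-9.]*\)".*/\1/p')
echo "$ip"
```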
### Custom Docker Image
You can also build Vitess Docker images yourself to include your
own patches or configuration data. The
[Dockerfile](https://github.com/youtube/vitess/blob/master/Dockerfile)
in the root of the Vitess tree builds the `vitess/base` image.
The [docker](https://github.com/youtube/vitess/tree/master/docker)
subdirectory contains scripts for building other images, such as `vitess/lite`.
Our `Makefile` also contains rules to build the images. For example:
``` sh
# Create vitess/bootstrap, which prepares everything up to ./bootstrap.sh
vitess$ make docker_bootstrap
# Create vitess/base from vitess/bootstrap by copying in your local working directory.
vitess$ make docker_base
```
## Manual Build
The following sections explain the process for manually building
Vitess without Docker.
### Install Dependencies
We currently test Vitess regularly on Ubuntu 14.04 (Trusty) and Debian 8 (Jessie).
In addition, Vitess requires the software and libraries listed below.
1. [Install Go 1.4+](http://golang.org/doc/install).
2. Install [MariaDB 10.0](https://downloads.mariadb.org/) or
[MySQL 5.6](http://dev.mysql.com/downloads/mysql). You can use any
installation method (src/bin/rpm/deb), but be sure to include the client
development headers (`libmariadbclient-dev` or `libmysqlclient-dev`).
The Vitess development team currently tests against MariaDB 10.0.21
and MySQL 5.6.27.
If you are installing MariaDB, note that you must install version 10.0 or
higher. If you are using `apt-get`, confirm that your repository
offers an option to install that version. You can also download the source
directly from [mariadb.org](https://downloads.mariadb.org/mariadb/).
3. Select a lock service from the options listed below. It is technically
possible to use another lock server, but plugins currently exist only
for ZooKeeper and etcd.
- ZooKeeper 3.3.5 is included by default.
- [Install etcd v2.0+](https://github.com/coreos/etcd/releases).
If you use etcd, remember to include the `etcd` command
on your path.
4. Install the following other tools needed to build and run Vitess:
- make
- automake
@ -82,21 +113,21 @@ Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy). It requires
These can be installed with the following apt-get command:
``` sh
$ sudo apt-get install make automake libtool memcached python-dev python-virtualenv python-mysqldb libssl-dev g++ mercurial git pkg-config bison curl unzip
```
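Before building, it can help to confirm which of these tools are already present. A small sketch (tool names taken from the list above; this only checks the `PATH`, not versions):

``` sh
# Report which build prerequisites are already on the PATH.
for tool in make automake libtool git curl unzip bison pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```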
5. If you decided to use ZooKeeper in step 3, you also need to install a
Java Runtime, such as OpenJDK.
``` sh
$ sudo apt-get install openjdk-7-jre
```
### Build Vitess
1. Navigate to the directory where you want to download the Vitess
source code and clone the Vitess Github repo. After doing so,
navigate to the `src/github.com/youtube/vitess` directory.
``` sh
cd $WORKSPACE
@ -104,7 +135,7 @@ Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy). It requires
cd src/github.com/youtube/vitess
```
1. Set the `MYSQL_FLAVOR` environment variable. Choose the appropriate
value for your database. This value is case-sensitive.
``` sh
@ -113,31 +144,31 @@ Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy). It requires
export MYSQL_FLAVOR=MySQL56
```
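Because the value is case-sensitive, a typo such as `mysql56` will not be recognized. A hedged pre-flight check you could add to your own setup script (the two accepted values are the ones shown in this guide):

``` sh
# Fail fast on a misspelled MYSQL_FLAVOR before attempting a build.
MYSQL_FLAVOR=MySQL56   # or MariaDB
case "$MYSQL_FLAVOR" in
  MariaDB|MySQL56) echo "MYSQL_FLAVOR ok: $MYSQL_FLAVOR" ;;
  *) echo "unrecognized MYSQL_FLAVOR: $MYSQL_FLAVOR" >&2; exit 1 ;;
esac
```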
1. If your selected database is installed in a location other than `/usr/bin`,
set the `VT_MYSQL_ROOT` variable to the root directory of your
MariaDB installation. For example, if MariaDB is installed in
`/usr/local/mysql`, run the following command.
``` sh
export VT_MYSQL_ROOT=/usr/local/mysql
```
Note that the command indicates that the `mysql` executable should
be found at `/usr/local/mysql/bin/mysql`.
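In other words, Vitess resolves the client binaries relative to `$VT_MYSQL_ROOT/bin`. A tiny illustration of that path derivation (the install location is only an example):

``` sh
# Vitess looks for the client under $VT_MYSQL_ROOT/bin.
export VT_MYSQL_ROOT=/usr/local/mysql   # example install location
echo "$VT_MYSQL_ROOT/bin/mysql"
```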
1. Run `mysql_config --version` and confirm that you
are running the correct version of MariaDB or MySQL. The value should
be 10 or higher for MariaDB and 5.6.x for MySQL.
1. Build Vitess using the commands below. Note that the
`bootstrap.sh` script needs to download some dependencies.
If your machine requires a proxy to access the Internet, you will need
to set the usual environment variables (e.g. `http_proxy`,
`https_proxy`, `no_proxy`).
``` sh
./bootstrap.sh
### example output:
# skipping zookeeper build
# go install golang.org/x/tools/cmd/cover ...
# Found MariaDB installation in ...
@ -145,18 +176,15 @@ Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy). It requires
# creating git pre-commit hooks
#
# source dev.env in your shell before building
```
``` sh
# Remaining commands to build Vitess
. ./dev.env
make build
```
If you build Vitess successfully, you can proceed to
[start a Vitess cluster](#start-a-vitess-cluster).
If your build attempt fails, see the following section,
[Test your Vitess Cluster](#test-your-vitess-cluster), for help
troubleshooting the errors.
### Run Tests
**Note:** If you are using etcd, set the following environment variable:
@ -164,8 +192,8 @@ Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy). It requires
export VT_TEST_FLAGS='--topo-server-flavor=etcd'
```
The default targets when running `make` or `make test` contain a full set of
tests intended to help Vitess developers verify code changes. Those tests
simulate a small Vitess cluster by launching many servers on the local
machine. To do so, they require a lot of resources; a minimum of 8GB RAM
and SSD is recommended to run the tests.
@ -179,9 +207,9 @@ make site_test
#### Common Test Issues
Attempts to run the full developer test suite (`make` or `make test`)
on an underpowered machine often results in failure. If you still see
the same failures when running the lighter set of tests (`make site_test`),
please let the development team know in the
[vitess@googlegroups.com](https://groups.google.com/forum/#!forum/vitess)
discussion forum.
@ -219,7 +247,7 @@ installed, which is a requirement if you are using ZooKeeper as the lock server.
Some of the larger tests use up to 4GB of temporary space on disk.
## Start a Vitess cluster
After completing the instructions above to [build Vitess](#build-vitess),
you can use the example scripts in the Github repo to bring up a Vitess
@ -243,19 +271,22 @@ lock service. ZooKeeper is included in the Vitess distribution.
you used to run the build commands, you can skip to the next
step since the environment variables will already be set.
If you're adapting this example to your own deployment, the only environment
variables required before running the scripts are `VTROOT` and `VTDATAROOT`.
Set `VTROOT` to the parent of the Vitess source tree. For example, if you
ran `make build` while in `$HOME/vt/src/github.com/youtube/vitess`,
then you should set:
``` sh
export VTROOT=$HOME/vt
```
Set `VTDATAROOT` to the directory where you want data files and logs to
be stored. For example:
``` sh
export VTDATAROOT=$HOME/vtdataroot
```
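A small sketch of a pre-flight check you might run before the example scripts, verifying both variables point somewhere sensible (the paths here are placeholders; substitute your own locations):

``` sh
# Illustrative paths; substitute your own build and data locations.
export VTROOT=/tmp/vt-example
export VTDATAROOT=/tmp/vt-example-dataroot
# The example scripts live under the source tree inside VTROOT.
echo "scripts: $VTROOT/src/github.com/youtube/vitess/examples/local"
# Make sure the data/log directory exists and is writable.
mkdir -p "$VTDATAROOT" && touch "$VTDATAROOT/.writable" && echo "VTDATAROOT ok"
```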
1. **Start ZooKeeper**
@ -265,7 +296,9 @@ lock service. ZooKeeper is included in the Vitess distribution.
service. The following script creates a small ZooKeeper cluster:
``` sh
$ cd $VTROOT/src/github.com/youtube/vitess/examples/local
vitess/examples/local$ ./zk-up.sh
### example output:
# Starting zk servers...
# Waiting for zk servers to be ready...
```
@ -274,44 +307,47 @@ lock service. ZooKeeper is included in the Vitess distribution.
Vitess process how to connect to ZooKeeper. Then, each process can
find all of the other Vitess processes by coordinating via ZooKeeper.
Each of our scripts automatically sets the `ZK_CLIENT_CONFIG` environment
variable to point to the `zk-client-conf.json` file, which contains the
ZooKeeper server addresses for each cell.
1. **Start vtctld**
The `vtctld` server provides a web interface that
displays all of the coordination information stored in ZooKeeper.
``` sh
vitess/examples/local$ ./vtctld-up.sh
# Starting vtctld
# Access vtctld web UI at http://localhost:15000
# Send commands with: vtctlclient -server localhost:15999 ...
```
Open `http://localhost:15000` to verify that
`vtctld` is running. There won't be any information
there yet, but the menu should come up, which indicates that
`vtctld` is running.
The `vtctld` server also accepts commands from the `vtctlclient` tool,
which is used to administer the cluster. Note that the port for RPCs
(in this case `15999`) is different from the web UI port (`15000`).
These ports can be configured with command-line flags, as demonstrated
in `vtctld-up.sh`.
``` sh
# List available commands
$ $VTROOT/bin/vtctlclient -server localhost:15999 Help
```
1. **Start vttablets**
The `vttablet-up.sh` script brings up three vttablets, and assigns them to
a [keyspace](http://vitess.io/overview/concepts.html#keyspace) and
[shard](http://vitess.io/overview/concepts.html#shard) according to the variables
set at the top of the script file.
``` sh
vitess/examples/local$ ./vttablet-up.sh
# Output from vttablet-up.sh is below
# Starting MySQL for tablet test-0000000100...
# Starting vttablet for test-0000000100...
@ -324,54 +360,64 @@ lock service. ZooKeeper is included in the Vitess distribution.
# Access tablet test-0000000102 at http://localhost:15102/debug/status
```
After this command completes, refresh the `vtctld` web UI, and you should
see a keyspace named `test_keyspace` with a single shard named `0`.
This is what an unsharded keyspace looks like.
If you click on the shard box, you'll see a list of
[tablets](http://vitess.io/overview/concepts.html#tablet) in that shard.
Note that it's normal for the tablets to be unhealthy at this point, since
you haven't initialized them yet.
You can also click the **STATUS** link on each tablet to be taken to its
status page, showing more details on its operation. Every Vitess server has
a status page served at `/debug/status` on its web port.
1. **Initialize the new keyspace**
By launching tablets assigned to a nonexistent keyspace, we've essentially
created a new keyspace. To complete the initialization of the
[local topology data](http://vitess.io/doc/TopologyService/#local-data),
perform a keyspace rebuild:
``` sh
$ $VTROOT/bin/vtctlclient -server localhost:15999 RebuildKeyspaceGraph test_keyspace
```
**Note:** Many `vtctlclient` commands yield no output if
they run successfully.
1. **Initialize MySQL databases**
Next, designate one of the tablets to be the initial master.
Vitess will automatically connect the other slaves' mysqld instances so
that they start replicating from the master's mysqld.
This is also when the default database is created. Since our keyspace is
named `test_keyspace`, the MySQL database will be named `vt_test_keyspace`.
``` sh
$ $VTROOT/bin/vtctlclient -server localhost:15999 InitShardMaster -force test_keyspace/0 test-0000000100
### example output:
# master-elect tablet test-0000000100 is not a master in the shard, proceeding anyway as -force was used
```
**Note:** Since this is the first time the shard has been started,
the tablets are not already doing any replication, and there is no
existing master. The `InitShardMaster` command above uses the `-force` flag
to bypass the usual sanity checks that would apply if this wasn't a
brand new shard.
After running this command, go back to the **Shard Status** page
in the `vtctld` web interface. When you refresh the
page, you should see that one `vttablet` is the master
and the other two are replicas.
You can also see this on the command line:
``` sh
$ $VTROOT/bin/vtctlclient -server localhost:15999 ListAllTablets test
### example output:
# test-0000000100 test_keyspace 0 master localhost:15100 localhost:33100 []
# test-0000000101 test_keyspace 0 replica localhost:15101 localhost:33101 []
# test-0000000102 test_keyspace 0 replica localhost:15102 localhost:33102 []
@ -379,16 +425,18 @@ lock service. ZooKeeper is included in the Vitess distribution.
1. **Create a table**
The `vtctlclient` tool can be used to apply the database schema across all
tablets in a keyspace. The following command creates the table defined in
the `create_test_table.sql` file:
``` sh
# Make sure to run this from the examples/local dir, so it finds the file.
vitess/examples/local$ $VTROOT/bin/vtctlclient -server localhost:15999 ApplySchema -sql "$(cat create_test_table.sql)" test_keyspace
```
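Note the `-sql "$(cat ...)"` idiom above: the file's entire contents are expanded into a single command-line argument before `vtctlclient` runs. A standalone illustration of that expansion, using a throwaway stand-in file rather than the real schema:

``` sh
# Write a stand-in schema file, then expand it the same way the
# ApplySchema invocation does.
echo 'CREATE TABLE example (id BIGINT)' > /tmp/example_schema.sql
sql="$(cat /tmp/example_schema.sql)"
echo "$sql"
```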
The SQL to create the table is shown below:
``` sql
CREATE TABLE test_table (
id BIGINT AUTO_INCREMENT,
msg VARCHAR(250),
@ -396,33 +444,61 @@ lock service. ZooKeeper is included in the Vitess distribution.
) Engine=InnoDB
```
1. **Take a backup**
Now that the initial schema is applied, it's a good time to take the first
[backup](http://vitess.io/user-guide/backup-and-restore.html). This backup
will be used to automatically restore any additional replicas that you run,
before they connect themselves to the master and catch up on replication.
If an existing tablet goes down and comes back up without its data, it will
also automatically restore from the latest backup and then resume replication.
``` sh
$ $VTROOT/bin/vtctlclient -server localhost:15999 Backup test-0000000101
```
After the backup completes, you can list available backups for the shard:
``` sh
$ $VTROOT/bin/vtctlclient -server localhost:15999 ListBackups test_keyspace/0
### example output:
# 2015-10-21.042940.test-0000000104
```
**Note:** In this single-server example setup, backups are stored at
`$VTDATAROOT/backups`. In a multi-server deployment, you would usually mount
an NFS directory there. You can also change the location by setting the
`-file_backup_storage_root` flag on `vtctld` and `vttablet`, as demonstrated
in `vtctld-up.sh` and `vttablet-up.sh`.
1. **Start vtgate**
Vitess uses `vtgate` to route each client query to
the correct `vttablet`. This local example runs a
single `vtgate` instance, though a real deployment
would likely run multiple `vtgate` instances to share
the load.
``` sh
vitess/examples/local$ ./vtgate-up.sh
```
### Run a Client Application
The `client.py` file is a simple sample application
that connects to `vtgate` and executes some queries.
To run it, you need to either:
* Add the Vitess Python packages to your `PYTHONPATH`.
or
* Use the `client.sh` wrapper script, which temporarily
sets up the environment and then runs `client.py`.
``` sh
vitess/examples/local$ ./client.sh
### example output:
# Inserting into master...
# Reading from master...
# (1L, 'V is for speed')
@ -430,29 +506,34 @@ To run it, you need to either:
# (1L, 'V is for speed')
```
### Tear down the cluster
Each `-up.sh` script has a corresponding `-down.sh` script to stop the servers.
``` sh
vitess/examples/local$ ./vtgate-down.sh
vitess/examples/local$ ./vttablet-down.sh
vitess/examples/local$ ./vtctld-down.sh
vitess/examples/local$ ./zk-down.sh
```
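The pairing is mechanical: each `-down.sh` name can be derived from its `-up.sh` counterpart, and teardown runs in the reverse of start order. A sketch of that correspondence:

``` sh
# Derive each -down.sh script name from its -up.sh counterpart,
# in reverse of the startup order.
for up in vtgate-up.sh vttablet-up.sh vtctld-up.sh zk-up.sh; do
  echo "${up%-up.sh}-down.sh"
done
```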
Note that the `-down.sh` scripts will leave behind any data files created.
If you're done with this example data, you can clear out the contents of `VTDATAROOT`:
``` sh
$ cd $VTDATAROOT
/path/to/vtdataroot$ rm -rf *
```
## Troubleshooting
If anything goes wrong, check the logs in your `$VTDATAROOT/tmp` directory
for error messages. There are also some tablet-specific logs, as well as
MySQL logs in the various `$VTDATAROOT/vt_*` directories.
If you need help diagnosing a problem, send a message to our
[mailing list](https://groups.google.com/forum/#!forum/vitess).
In addition to any errors you see at the command-line, it would also help to
upload an archive of your `VTDATAROOT` directory to a file sharing service
and provide a link to it.
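One way to produce such an archive (the paths and the sample log file below are illustrative; point `tar` at your actual data directory):

``` sh
# Bundle the data directory for sharing; adjust the source path as needed.
VTDATAROOT=/tmp/vt-example-dataroot
mkdir -p "$VTDATAROOT/tmp"
echo "sample log line" > "$VTDATAROOT/tmp/example.log"   # stand-in log file
tar -czf /tmp/vtdataroot.tar.gz -C "$VTDATAROOT" .
ls -l /tmp/vtdataroot.tar.gz
```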

View file

@ -483,7 +483,7 @@ $ export KUBECTL=/example/path/to/google-cloud-sdk/bin/kubectl
The SQL to create the table is shown below:
``` sql
CREATE TABLE messages (
page BIGINT(20) UNSIGNED,
time_created_ns BIGINT(20) UNSIGNED,

View file

@ -5,207 +5,6 @@ local machine, which may be useful for experimentation. These scripts can
also serve as a starting point for configuring Vitess into your preferred
deployment strategy or toolset.
See the [Run Vitess Locally](http://vitess.io/getting-started/local-instance.html)
guide for instructions on using these scripts.