This commit is contained in:
Sugu Sougoumarane 2016-05-20 10:00:01 -07:00
Parents 1b1ec26f2f 923ca110a8
Commit 78fcdf8f20
93 changed files: 558 additions and 1986 deletions


@ -62,7 +62,6 @@ the shards randomly.
We can load this VSchema into Vitess like this:
``` sh
vitess/examples/local$ ./lvtctl.sh ApplyVSchema -vschema "$(cat vschema.json)" test_keyspace
```


@ -164,7 +164,6 @@ That command performs the following operations:
<code>spare</code> to ensure that it does not interfere with ongoing
operations.
1. Updates the <code>Shard</code> object to specify the new master.
The <code>TabletExternallyReparented</code> command fails in the following
cases:


@ -55,7 +55,6 @@ the shards randomly.
We can load this VSchema into Vitess like this:
``` sh
vitess/examples/kubernetes$ ./kvtctl.sh ApplyVSchema -vschema "$(cat vschema.json)" test_keyspace
```
@ -240,7 +239,6 @@ unsharded example:
``` sh
vitess/examples/kubernetes$ ./vttablet-down.sh
### example output:
# Deleting pod for tablet test-0000000100...
# pods/vttablet-100
# ...
@ -249,7 +247,7 @@ vitess/examples/kubernetes$ ./vttablet-down.sh
Then we can delete the now-empty shard:
``` sh
vitess/examples/kubernetes$ ./kvtctl.sh DeleteShard -recursive test_keyspace/0
```
You should then see in the vtctld **Topology** page, or in the output of


@ -1,164 +1,232 @@
# Topology Service
This document describes the Topology Service, a key part of the Vitess
architecture. This service is exposed to all Vitess processes, and is used to
store small pieces of configuration data about the Vitess cluster, and provide
cluster-wide locks. It also supports watches, which we will use soon.
Concretely, the Topology Service features are implemented by a
[Lock Server](http://en.wikipedia.org/wiki/Distributed_lock_manager), referred
to as Topology Server in the rest of this document. We use a plug-in
implementation and we support multiple Lock Servers (ZooKeeper, etcd, …) as
backends for the service.
## Requirements and Usage
The Topology Service is used to store information about the Keyspaces, the
Shards, the Tablets, the Replication Graph, and the Serving Graph. We store
small data structures (a few hundred bytes) per object.
The main contract for the Topology Server is to be very highly available and
consistent. It is understood that this comes at the cost of higher latency and
very low throughput.
We never use the Topology Server as an RPC mechanism, nor as a storage system
for logs. We never depend on the Topology Server being responsive and fast to
serve every query.
The Topology Server must also support a Watch interface, to signal when certain
conditions occur on a node. This is used, for instance, to know when a
keyspace's topology changes (during resharding).
### Global vs Local
We differentiate two instances of the Topology Server: the Global instance, and
the per-cell Local instance:
* The Global instance is used to store global data about the topology that
doesn't change very often, for instance information about Keyspaces and
Shards. The data is independent of individual instances and cells, and needs
to survive a cell going down entirely.
* There is one Local instance per cell, which contains cell-specific
information, as well as rolled-up data from the global + local cell to make it
easier for clients to find the data. The Vitess local processes should not use
the Global topology instance, but instead the rolled-up data in the Local
topology server as much as possible.
The Global instance can go down for a while without impacting the local cells
(one exception: a reparent that needs to be processed during the outage might
not work). If a Local instance goes down, it only affects the local tablets in
that cell (which is then usually in bad shape, and should not be used).
Furthermore, the Vitess processes use neither the Global nor the Local Topology
Server to serve individual queries. They only use the Topology Server to get the
topology information at startup and in the background, but never to directly
serve queries.
### Recovery
If a local Topology Server dies and is not recoverable, it can be wiped out. All
the tablets in that cell then need to be restarted so they re-initialize their
topology records (but they won't lose any MySQL data).
If the global Topology Server dies and is not recoverable, this is more of a
problem. All the Keyspace / Shard objects have to be re-created. Then the cells
should recover.
## Global Data
This section describes the data structures stored in the global instance of the
topology server.
### Keyspace
The Keyspace object contains various pieces of information, mostly about
sharding: how this Keyspace is sharded, the name of the sharding key column,
whether this Keyspace is serving data yet, how to split incoming queries, …
An entire Keyspace can be locked. We use this during resharding for instance,
when we change which Shard is serving what inside a Keyspace. That way we
guarantee only one operation changes the keyspace data concurrently.
### Shard
A Shard contains a subset of the data for a Keyspace. The Shard record in the
global topology contains:
* the MySQL Master tablet alias for this shard
* the sharding key range covered by this Shard inside the Keyspace
* the tablet types this Shard is serving (master, replica, batch, …), per cell
if necessary
* during filtered replication, the source shards this shard is replicating
from
* the list of cells that have tablets in this shard
* shard-global tablet controls, such as blacklisted tables that no tablet in
this shard should serve
A Shard can be locked. We use this during operations that affect either the
Shard record, or multiple tablets within a Shard (like reparenting), so multiple
jobs don't concurrently alter the data.
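As a rough illustration, a minimal Go sketch of serializing such an operation
under the shard lock follows; the `shardLocker` interface is a hypothetical,
trimmed-down view of the lock methods on `topo.Server` (the real interface
lives in go/vt/topo/server.go):

``` go
package topoexample

import (
	"context"
	"fmt"
)

// shardLocker is a hypothetical, trimmed-down view of the shard lock
// methods on topo.Server.
type shardLocker interface {
	LockShardForAction(ctx context.Context, keyspace, shard, contents string) (string, error)
	UnlockShardForAction(ctx context.Context, keyspace, shard, lockPath, results string) error
}

// withShardLock runs f while holding the shard lock, so concurrent jobs
// cannot alter the Shard record or its tablets at the same time.
func withShardLock(ctx context.Context, ts shardLocker, keyspace, shard, action string, f func() error) error {
	lockPath, err := ts.LockShardForAction(ctx, keyspace, shard, action)
	if err != nil {
		return err // another job holds the lock, or the topo server is down
	}
	ferr := f()
	// Always release the lock, recording how the action ended.
	if uerr := ts.UnlockShardForAction(ctx, keyspace, shard, lockPath, fmt.Sprintf("result: %v", ferr)); uerr != nil {
		return uerr
	}
	return ferr
}
```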
### VSchema Data
The VSchema data contains sharding and routing information for
the [VTGate V3](http://vitess.io/doc/VTGateV3Features/) API.
## Local Data
This section describes the data structures stored in the local instance (per
cell) of the topology server.
### Tablets
The Tablet record has a lot of information about a single vttablet process
running inside a tablet (along with the MySQL process); a rough sketch of its
shape follows the field list:
* the Tablet Alias (cell+unique id) that uniquely identifies the Tablet
* the Hostname, IP address and port map of the Tablet
* the current Tablet type (master, replica, batch, spare, …)
* which Keyspace / Shard the tablet is part of
* the health map for the Tablet (if in degraded mode)
* the sharding Key Range served by this Tablet
* user-specified tag map (to store per installation data for instance)
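For orientation, the record corresponds roughly to the Go struct below; this is
a hand-written sketch, not the authoritative definition (the `Tablet` message
in proto/topodata.proto):

``` go
package topoexample

// Simplified stand-ins for the referenced types.
type TabletAlias struct {
	Cell string // e.g. "test"
	Uid  uint32 // unique id within the cell
}
type TabletType int32 // master, replica, batch, spare, …
type KeyRange struct{ Start, End []byte }

// Tablet is a hand-written sketch of the fields listed above.
type Tablet struct {
	Alias     TabletAlias       // uniquely identifies the tablet
	Hostname  string
	PortMap   map[string]int32  // named ports, e.g. "vt", "mysql"
	Type      TabletType        // current tablet type
	Keyspace  string
	Shard     string
	KeyRange  *KeyRange         // sharding key range served
	HealthMap map[string]string // populated when in degraded mode
	Tags      map[string]string // user-specified, per-installation data
}
```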
A Tablet record is created before a tablet can be running (either by `vtctl
InitTablet` or by passing the `init_*` parameters to vttablet). A Tablet record
is only updated in one of the following ways:
* The vttablet process itself owns the record while it is running, and can
change it.
* At init time, before the tablet starts
* After shutdown, when the tablet gets deleted.
* If a tablet becomes unresponsive, it may be forced to the spare type, which
makes it unhealthy when it restarts.
### Replication Graph
The Replication Graph allows us to find Tablets in a given Cell / Keyspace /
Shard. It used to contain information about which Tablet is replicating from
which other Tablet, but that was too complicated to maintain. Now it is just a
list of Tablets.
### Serving Graph
The Serving Graph is what the clients use to find the per-cell topology of a
Keyspace. It is a roll-up of global data (Keyspace + Shard). vtgates only open a
small number of these objects and get all they need quickly.
#### SrvKeyspace
It is the local representation of a Keyspace. It contains information on what
shard to use for getting to the data (but not information about each individual
shard):
* the partitions map is keyed by the tablet type (master, replica, batch, …)
and the values are lists of shards to use for serving.
* it also contains the global Keyspace fields, copied for fast access.
It can be rebuilt by running `vtctl RebuildKeyspaceGraph`. It is not
automatically rebuilt when adding new tablets in a cell, as this would cause too
much overhead and is only needed once per cell/keyspace. It may also be changed
during horizontal and vertical splits.
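The same rebuild is available programmatically through the wrangler; a minimal
sketch, mirroring the call this commit uses in go/vt/vtcombo:

``` go
package example

import (
	log "github.com/golang/glog"
	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/logutil"
	"github.com/youtube/vitess/go/vt/topo"
	"github.com/youtube/vitess/go/vt/wrangler"
)

// rebuildKeyspace rebuilds the SrvKeyspace objects for one keyspace in all
// cells (nil cell list).
func rebuildKeyspace(ctx context.Context, ts topo.Server) {
	wr := wrangler.New(logutil.NewConsoleLogger(), ts, nil)
	if err := wr.RebuildKeyspaceGraph(ctx, "test_keyspace", nil); err != nil {
		log.Fatalf("cannot rebuild test_keyspace: %v", err)
	}
}
```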
## Workflows Involving the Topology Server
The Topology Server is involved in many Vitess workflows.
When a Tablet is initialized, we create the Tablet record, and add the Tablet to
the Replication Graph. If it is the master for a Shard, we update the global
Shard record as well.
Administration tools need to find the tablets for a given Keyspace / Shard:
* first we get the list of Cells that have Tablets for the Shard (the global
topology Shard record has these)
* then we use the Replication Graph for that Cell / Keyspace / Shard to find
all the tablets
* then we can read each tablet record.
When a Shard is reparented, we need to update the global Shard record with the
new master alias.
Finding a tablet to serve the data is done in two stages: vtgate maintains a
health check connection to all possible tablets, and they report which keyspace
/ shard / tablet type they serve. vtgate also reads the SrvKeyspace object, to
find out the shard map. With these two pieces of information, vtgate can route
the query to the right vttablet.
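As a rough sketch of the second piece (matching a keyspace ID against the shard
map), assuming simplified `keyRange` and `pickShard` helpers rather than the
real vtgate code:

``` go
package main

import (
	"bytes"
	"fmt"
)

// keyRange is a simplified stand-in for topodata.KeyRange: an empty Start
// means -infinity and an empty End means +infinity.
type keyRange struct {
	shard      string // shard name, e.g. "80-"
	start, end []byte
}

// contains reports whether kid falls in [start, end).
func (kr keyRange) contains(kid []byte) bool {
	return bytes.Compare(kr.start, kid) <= 0 &&
		(len(kr.end) == 0 || bytes.Compare(kid, kr.end) < 0)
}

// pickShard finds the serving shard for a keyspace ID, given the shard
// list read from the SrvKeyspace partitions.
func pickShard(shards []keyRange, kid []byte) (string, error) {
	for _, kr := range shards {
		if kr.contains(kid) {
			return kr.shard, nil
		}
	}
	return "", fmt.Errorf("no shard covers keyspace id %x", kid)
}

func main() {
	shards := []keyRange{
		{"-80", nil, []byte{0x80}},
		{"80-", []byte{0x80}, nil},
	}
	shard, _ := pickShard(shards, []byte{0xd2, 0x04})
	fmt.Println(shard) // prints: 80-
}
```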
During resharding events, we also change the topology a lot. A horizontal split
will change the global Shard records and the local SrvKeyspace records. A
vertical split will change the global Keyspace records and the local
SrvKeyspace records.
## Implementations
The Topology Server interface is defined in our code in go/vt/topo/server.go and
we also have a set of unit tests for it in go/vt/topo/test.
This section describes the two implementations we have and their specific
behavior.
### ZooKeeper
Our ZooKeeper implementation is based on a configuration file that describes
where the global and each local cell ZK instances are. When adding a cell, all
processes that may access that cell should be restarted with the new
configuration file.
The global cell typically has around 5 servers, distributed one in each
cell. The local cells typically have 3 or 5 servers, in different server racks /
sub-networks for higher resiliency. For our integration tests, we use a single
ZK server that serves both global and local cells.
We sometimes store both data and sub-directories in a path (for a keyspace for
instance). We use JSON to encode the data.
For locking, we use an auto-incrementing file name in the `/action` subdirectory
of the object directory. We move the file to `/actionlogs` when the lock is
released. A purge process (typically run from a crontab) clears the old locks.
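A hypothetical sketch of that locking scheme follows; the `zkConn` interface is
a stand-in, not the real ZooKeeper client API:

``` go
package zkexample

import (
	"path"
	"sort"
	"time"
)

// zkConn is a hypothetical stand-in for a ZooKeeper client.
type zkConn interface {
	// CreateSequential creates prefix+<auto-incremented suffix> and
	// returns the full path actually created.
	CreateSequential(prefix, contents string) (string, error)
	Children(dir string) ([]string, error)
	Rename(oldPath, newPath string) error
}

// lockShard creates an auto-incrementing file under /action; the lock is
// held once our file has the lowest sequence number in the directory.
func lockShard(conn zkConn, shardPath, contents string) (string, error) {
	actionPath, err := conn.CreateSequential(shardPath+"/action/lock-", contents)
	if err != nil {
		return "", err
	}
	for {
		children, err := conn.Children(shardPath + "/action")
		if err != nil {
			return "", err
		}
		sort.Strings(children)
		if shardPath+"/action/"+children[0] == actionPath {
			return actionPath, nil // lowest entry: lock acquired
		}
		// Real code would watch for deletions instead of polling.
		time.Sleep(100 * time.Millisecond)
	}
}

// unlockShard releases the lock by moving the file to /actionlogs, which
// the periodic purge process cleans up later.
func unlockShard(conn zkConn, shardPath, actionPath string) error {
	return conn.Rename(actionPath, shardPath+"/actionlogs/"+path.Base(actionPath))
}
```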
Note the paths used to store global and per-cell data do not overlap, so a
single ZK can be used for both global and local ZKs. This is however not
recommended, for reliability reasons.
* Keyspace: `/zk/global/vt/keyspaces/<keyspace>`
* Shard: `/zk/global/vt/keyspaces/<keyspace>/shards/<shard>`
* Tablet: `/zk/<cell>/vt/tablets/<uid>`
* Replication Graph: `/zk/<cell>/vt/replication/<keyspace>/<shard>`
* SrvKeyspace: `/zk/<cell>/vt/ns/<keyspace>`
We provide the `zk` utility for easy access to the topology data in
ZooKeeper. For instance:
```
# NOTE: You need to source zookeeper client config file, like so:
# export ZK_CLIENT_CONFIG=/path/to/zk/client.conf
@ -170,11 +238,15 @@ shards
### Etcd
Our etcd implementation is based on a command-line parameter that gives the
location(s) of the global etcd server. Then we query the path `/vt/cells` and
each file in there is named after a cell, and contains the list of etcd servers
for that cell.
We use the `_Data` filename to store the data, JSON encoded.
For locking, we store a `_Lock` file with various contents in the directory that
contains the object to lock.
We use the following paths:
@ -183,6 +255,4 @@ We use the following paths:
* Tablet: `/vt/tablets/<cell>-<uid>/_Data`
* Replication Graph: `/vt/replication/<keyspace>/<shard>/_Data`
* SrvKeyspace: `/vt/ns/<keyspace>/_Data`


@ -816,7 +816,7 @@ KeyRange describes a range of sharding keys, when range-based sharding is used.
### topodata.ShardReference
ShardReference is used as a pointer from a SrvKeyspace to a Shard
#### Properties
@ -837,7 +837,6 @@ SrvKeyspace is a rollup node for the keyspace itself.
| <code>sharding_column_name</code> <br>string| copied from Keyspace |
| <code>sharding_column_type</code> <br>[KeyspaceIdType](#topodata.keyspaceidtype)| |
| <code>served_from</code> <br>list &lt;[ServedFrom](#srvkeyspace.servedfrom)&gt;| |
#### Messages
@ -848,7 +847,7 @@ SrvKeyspace is a rollup node for the keyspace itself.
| Name |Description |
| :-------- | :--------
| <code>served_type</code> <br>[TabletType](#topodata.tablettype)| The type this partition applies to. |
| <code>shard_references</code> <br>list &lt;[ShardReference](#topodata.shardreference)&gt;| ShardReference is used as a pointer from a SrvKeyspace to a Shard |
##### SrvKeyspace.ServedFrom

doc/VitessReplication.md (new file)

@ -0,0 +1,163 @@
# Vitess, MySQL Replication, and Schema Changes
## Statement vs Row Based Replication
MySQL supports two primary modes of replication in its binary logs: statement or
row based.
**Statement Based Replication**:
* The statements executed on the master are copied almost as-is in the master
logs.
* The slaves replay these statements as is.
* If the statements are expensive (especially an update with a complicated WHERE
clause), they will be expensive on the slaves too.
* For current timestamp and auto-increment values, the master also puts
additional SET statements in the logs to make the statement have the same
effect, so the slaves end up with the same values.
**Row Based Replication**:
* The statements executed on the master result in updated rows. The new full
values for these rows are copied to the master logs.
* The slaves change their records for the rows they receive. The update is by
primary key, and contains the new values for each column, so it's very fast.
* Each row update contains the entire row, not just the columns that were
updated (this is inefficient if only one column out of a large number has
changed, but it's more efficient on the slave to just swap out the row with
the new one).
* The replication stream is harder to read, as it contains mostly binary data
that doesn't easily map to the original statements.
* There is a configurable limit on how many rows can be affected by one
statement, so the master logs are not flooded.
* The format of the logs depends on the master schema: each row has a list of
values, one value for each column. So if the master schema is different from
the slave schema, updates will misbehave (the exception being a slave with
extra columns at the end).
* It is possible to revert to statement based replication for some commands to
avoid these drawbacks (for instance for DELETE statements that affect a large
number of rows).
* Schema changes revert to statement based replication.
* If comments are added to a statement, they are stripped from the
replication stream (as only rows are transmitted). There is a debug flag to
add the original statement to each row update, but it is costly in terms of
binlog size, and very verbose.
For the longest time, MySQL replication has been single-threaded: only one
statement is applied by the slaves at a time. Since the master applies more
statements in parallel, replication can fall behind on the slaves fairly easily
under higher load. Even though the situation has improved (group commit), the
slave replication speed is still a limiting factor for a lot of
applications. Since row based replication achieves higher update rates on the
slaves, it has been the only viable option for most performance sensitive
applications.
Schema changes however are not easy to achieve with row based
replication. Adding columns can be done offline, but removing or changing
columns cannot easily be done (there are multiple ways to achieve this, but they
all have limitations or performance implications, and are not that easy to
set up).
Vitess helps by using statement based replication (therefore allowing complex
schema changes), while at the same time simplifying the replication stream (so
slaves can be fast), by rewriting Update statements.
Then, with statement based replication, it becomes easier to perform offline
advanced schema changes, or large data updates. Vitess's solution is called
pivot.
We plan to also support row based replication in the future, and adapt our tools
to provide the same features when possible.
## Rewriting Update Statements
Vitess rewrites UPDATE SQL statements to always know what rows will be
affected. For instance, this statement:
```
UPDATE <table> SET <set values> WHERE <clause>
```
Will be rewritten into:
```
SELECT <primary key columns> FROM <table> WHERE <clause> FOR UPDATE
UPDATE <table> SET <set values> WHERE <primary key columns> IN <result from previous SELECT> /* primary key values: … */
```
With this rewrite in effect, we know exactly which rows are affected, by primary
key, and we also document them as a SQL comment.
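A hypothetical concrete instance (table, columns, and values are invented for
illustration), written here as Go string constants:

``` go
package example

// Invented example of the rewrite described above.
const (
	// What the application sends:
	original = "UPDATE users SET credits = credits - 10 WHERE last_login < '2016-01-01'"

	// Step 1: lock the affected rows and collect their primary keys.
	lockSelect = "SELECT id FROM users WHERE last_login < '2016-01-01' FOR UPDATE"

	// Step 2: update strictly by primary key, documenting the keys in a comment.
	rewritten = "UPDATE users SET credits = credits - 10 WHERE id IN (101, 205, 942) /* primary key values: 101, 205, 942 */"
)
```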
The replication stream then doesn't contain the expensive WHERE clauses, but
only the UPDATE statements by primary key. In a sense, it is combining the best
of row based and statement based replication: the slaves only do primary key
based updates, but the replication stream is very friendly for schema changes.
Also, Vitess adds comments to the rewritten statements that identify the primary
key affected by that statement. This allows us to produce an Update Stream (see
section below).
## Vitess Pivot
In a Vitess shard, we have a master and multiple slaves (replicas, batch,
…). Slaves are brought up by restoring a recent backup, and catching up on
replication. It is possible to efficiently and safely re-parent the master
to a slave on demand. We use statement based replication. The combination
of all these features makes our pivot workflow work very well.
The pivot operation works as follows (a sketch follows the list):
* Pick a slave, take it out of service. It is not used by clients any more.
* Stop replication on the slave.
* Apply whatever schema or large data change is needed, on the slave.
* Take a backup of that slave.
* On all the other slaves, one at a time, take them out of service, restore the
backup, catch up on replication, put them back into service.
* When all slaves are done, reparent to a slave that has applied the change.
* The old master can then be restored from a backup again, and put back into
service.
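The steps above map to an orchestration loop roughly like the following
hypothetical Go sketch; every helper in `tabletOps` is a stand-in, not a real
Vitess API:

``` go
package example

// tabletOps bundles hypothetical stand-ins for the real (vtctl-driven)
// operations used by the pivot.
type tabletOps struct {
	takeOutOfService   func(tablet string)
	putBackIntoService func(tablet string)
	stopReplication    func(tablet string)
	catchUp            func(tablet string)
	applyChange        func(tablet string) error
	takeBackup         func(tablet string) (backupID string)
	restore            func(tablet, backupID string)
	reparentTo         func(tablet string)
}

// pivot applies a backward-compatible change on one slave, propagates it
// to the others via backup/restore, then reparents away from the old master.
func pivot(ops tabletOps, master string, slaves []string) error {
	canary := slaves[0]
	ops.takeOutOfService(canary)
	ops.stopReplication(canary)
	if err := ops.applyChange(canary); err != nil {
		return err
	}
	backupID := ops.takeBackup(canary)
	for _, s := range slaves[1:] {
		ops.takeOutOfService(s)
		ops.restore(s, backupID)
		ops.catchUp(s)
		ops.putBackIntoService(s)
	}
	ops.reparentTo(canary)
	// The old master rejoins from the same backup, as a regular slave.
	ops.restore(master, backupID)
	ops.putBackIntoService(master)
	return nil
}
```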
With this process, the only guarantee we need is for the change (schema or data)
to be backward compatible: the clients won't know if they talk to a server
that has applied the change yet or not. This is usually fairly easy to deal
with:
* When adding a column, clients cannot use it until the pivot is done.
* When removing a column, all clients must stop referring to it before the
pivot begins.
* A column rename is still tricky: the best way to do it is to add a new column
with the new name in one pivot, then change the client to populate both (and
possibly backfill the values), then change the client again to use the new
column only, then use another pivot to remove the original column.
* A whole bunch of operations are really easy to perform though: index changes,
optimize table, …
Note the real change is only applied to one instance. We then rely on the backup
/ restore process to propagate the change. This is a big improvement over
letting the changes flow through the replication stream, where they are applied
to all hosts, not just one. It is also an improvement over the industry
practice of online schema change, which also must run on all hosts.
Since Vitess's backup / restore and reparent processes
are very reliable (they need to be reliable on their own, independently of this
process!), this does not add much more complexity to a running system.
However, the pivot operations are fairly involved, and may take a long time, so
they need to be resilient and automated. We are in the process of streamlining
them, with the goal of making them completely automated.
## Update Stream
Since the replication stream also contains comments of which primary key is
affected by a change, it is possible to look at the replication stream and know
exactly what objects have changed. This Vitess feature is called Update Stream.
By subscribing to the Update Stream for a given shard, one can know what values
change. This stream can be used to create a stream of data changes (export to
Apache Kafka for instance), or even invalidate an application layer cache.
Note: the Update Stream only reliably contains the primary key values of the
rows that have changed, not the actual values for all columns. To get these
values, it is necessary to re-query the database.
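A hypothetical consumer sketch (the event shape and subscription channel are
stand-ins, not the actual Vitess client API):

``` go
package example

import "database/sql"

// updateEvent is a hypothetical shape for one Update Stream event: only
// the primary key of the changed row is reliably available.
type updateEvent struct {
	Table string
	PK    []interface{}
}

// consume re-queries each changed row to get its current column values,
// then hands it to application code (publish to Kafka, invalidate a
// cache entry, …).
func consume(events <-chan updateEvent, db *sql.DB, handle func(*sql.Row)) {
	for ev := range events {
		// The event only says *which* row changed; fetch the rest.
		handle(db.QueryRow("SELECT * FROM "+ev.Table+" WHERE id = ?", ev.PK[0]))
	}
}
```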
We have plans to make this Update Stream feature more consistent, very
resilient, fast, and transparent to sharding.


@ -118,7 +118,7 @@ Creates the specified keyspace.
#### Example
<pre class="command-example">CreateKeyspace [-sharding_column_name=name] [-sharding_column_type=type] [-served_from=tablettype1:ks1,tablettype2,ks2,...] [-split_shard_count=N] [-force] &lt;keyspace name&gt;</pre>
<pre class="command-example">CreateKeyspace [-sharding_column_name=name] [-sharding_column_type=type] [-served_from=tablettype1:ks1,tablettype2,ks2,...] [-force] &lt;keyspace name&gt;</pre>
#### Flags
@ -128,7 +128,6 @@ Creates the specified keyspace.
| served_from | string | Specifies a comma-separated list of dbtype:keyspace pairs used to serve traffic |
| sharding_column_name | string | Specifies the column to use for sharding operations |
| sharding_column_type | string | Specifies the type of the column to use for sharding operations |
#### Arguments
@ -292,18 +291,17 @@ Migrates a serving type from the source shard to the shards that it replicates t
### RebuildKeyspaceGraph
Rebuilds the serving data for the keyspace. This command may trigger an update to all connected clients.
#### Example
<pre class="command-example">RebuildKeyspaceGraph [-cells=a,b] [-rebuild_srv_shards] &lt;keyspace&gt; ...</pre>
<pre class="command-example">RebuildKeyspaceGraph [-cells=a,b] &lt;keyspace&gt; ...</pre>
#### Flags
| Name | Type | Definition |
| :-------- | :--------- | :--------- |
| cells | string | Specifies a comma-separated list of cells to update |
#### Arguments
@ -389,14 +387,13 @@ Updates the sharding information for a keyspace.
#### Example
<pre class="command-example">SetKeyspaceShardingInfo [-force] [-split_shard_count=N] &lt;keyspace name&gt; [&lt;column name&gt;] [&lt;column type&gt;]</pre>
<pre class="command-example">SetKeyspaceShardingInfo [-force] &lt;keyspace name&gt; [&lt;column name&gt;] [&lt;column type&gt;]</pre>
#### Flags
| Name | Type | Definition |
| :-------- | :--------- | :--------- |
| force | Boolean | Updates fields even if they are already set. Use caution before calling this command. |
#### Arguments
@ -1144,7 +1141,6 @@ Outputs a list of keyspace names.
* [ListBackups](#listbackups)
* [ListShardTablets](#listshardtablets)
* [PlannedReparentShard](#plannedreparentshard)
* [RemoveBackup](#removebackup)
* [RemoveShardCell](#removeshardcell)
* [SetShardServedTypes](#setshardservedtypes)
@ -1228,7 +1224,6 @@ Reparents the shard to the new master. Assumes the old master is dead and not re
#### Errors
* action <code>&lt;EmergencyReparentShard&gt;</code> requires <code>&lt;keyspace/shard&gt;</code> <code>&lt;tablet alias&gt;</code> This error occurs if the command is not called with exactly 2 arguments.
### GetShard
@ -1272,7 +1267,6 @@ Sets the initial master for a shard. Will make all other tablets in the shard sl
#### Errors
* action <code>&lt;InitShardMaster&gt;</code> requires <code>&lt;keyspace/shard&gt;</code> <code>&lt;tablet alias&gt;</code> This error occurs if the command is not called with exactly 2 arguments.
### ListBackups
@ -1327,31 +1321,6 @@ Reparents the shard to the new master. Both old and new master need to be up and
#### Errors
* action <code>&lt;PlannedReparentShard&gt;</code> requires <code>&lt;keyspace/shard&gt;</code> <code>&lt;tablet alias&gt;</code> This error occurs if the command is not called with exactly 2 arguments.
### RemoveBackup
@ -1724,14 +1693,13 @@ Deletes tablet(s) from the topology.
#### Example
<pre class="command-example">DeleteTablet [-allow_master] [-skip_rebuild] &lt;tablet alias&gt; ...</pre>
<pre class="command-example">DeleteTablet [-allow_master] &lt;tablet alias&gt; ...</pre>
#### Flags
| Name | Type | Definition |
| :-------- | :--------- | :--------- |
| allow_master | Boolean | Allows for the master tablet of a shard to be deleted. Use with caution. |
#### Arguments
@ -1754,7 +1722,6 @@ Demotes a master tablet.
#### Errors
* action <code>&lt;DemoteMaster&gt;</code> requires <code>&lt;tablet alias&gt;</code> This error occurs if the command is not called with exactly one argument.
### ExecuteFetchAsDba
@ -1935,7 +1902,6 @@ Reparent a tablet to the current master in the shard. This only works if the cur
#### Errors
* action <code>&lt;ReparentTablet&gt;</code> requires <code>&lt;tablet alias&gt;</code> This error occurs if the command is not called with exactly one argument.
### RunHealthCheck


@ -361,6 +361,7 @@ def main(root_directory):
current_command = ''
current_function = ''
is_func_init = False
is_flag_section = False
# treat func init() same as var commands
# treat addCommand("Group Name"... same as command {... in vtctl.go group
@ -373,8 +374,15 @@ def main(root_directory):
if line.strip() == '' or line.strip().startswith('//'):
continue
if is_func_init and not is_flag_section and re.search(r'^if .+ {', line.strip()):
is_flag_section = True
elif is_func_init and not is_flag_section and line.strip() == 'servenv.OnRun(func() {':
pass
elif is_func_init and is_flag_section and line.strip() == 'return':
pass
elif is_func_init and is_flag_section and line.strip() == '}':
is_flag_section = False
elif is_func_init and (line.strip() == '}' or line.strip() == '})'):
is_func_init = False
elif get_commands:
# This line precedes a command group's name, e.g. "Tablets" or "Shards."


@ -120,7 +120,7 @@ if [ $num_shards -gt 0 ]
then
echo Calling CreateKeyspace and SetKeyspaceShardingInfo
$kvtctl CreateKeyspace -force $KEYSPACE
$kvtctl SetKeyspaceShardingInfo -force $KEYSPACE keyspace_id uint64
fi
echo 'Running vttablet-up.sh' && CELLS=$CELLS ./vttablet-up.sh
@ -152,14 +152,8 @@ while [ $counter -lt $MAX_VTTABLET_TOPO_WAIT_RETRIES ]; do
fi
done
echo -n Setting Keyspace Sharding Info...
$kvtctl SetKeyspaceShardingInfo -force $KEYSPACE keyspace_id uint64
echo Done
echo -n Rebuilding Keyspace Graph...
$kvtctl RebuildKeyspaceGraph $KEYSPACE


@ -33,13 +33,6 @@ for shard in `seq 1 $num_shards`; do
uid=$[$uid_base + $uid_index + $cell_index]
printf -v alias '%s-%010d' $cell $uid
if [ -n "$VTCTLD_ADDR" ]; then
set +e
echo "Removing tablet $alias from Vitess topology..."
vtctlclient -server $VTCTLD_ADDR DeleteTablet -allow_master -skip_rebuild $alias
set -e
fi
echo "Deleting pod for tablet $alias..."
$KUBECTL delete pod vttablet-$uid --namespace=$VITESS_NAME
done


@ -168,7 +168,7 @@ func initTabletMap(ts topo.Server, topology string, mysqld mysqlctl.MysqlDaemon,
// sharding queries.
wr := wrangler.New(logutil.NewConsoleLogger(), ts, nil)
for keyspace := range keyspaceMap {
if err := wr.RebuildKeyspaceGraph(ctx, keyspace, nil); err != nil {
log.Fatalf("cannot rebuild %v: %v", keyspace, err)
}
}


@ -239,8 +239,8 @@ google.setOnLoadCallback(function() {
<tr>
<td>{{github_com_youtube_vitess_vtctld_srv_cell $ts.Cell}}</td>
<td>{{github_com_youtube_vitess_vtctld_srv_keyspace $ts.Cell $ts.Target.Keyspace}}</td>
<td>{{$ts.Target.Shard}}</td>
<td>{{$ts.Target.TabletType}}</td>
<td>{{$ts.StatusAsHTML}}</td>
</tr>
{{end}}


@ -44,7 +44,6 @@ func testGetSrvKeyspace(t *testing.T, conn *vtgateconn.VTGateConn) {
Keyspace: "other_keyspace",
},
},
}
got, err := conn.GetSrvKeyspace(context.Background(), "big")
if err != nil {


@ -288,10 +288,3 @@ func (c *errorClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*top
}
return c.fallbackClient.GetSrvKeyspace(ctx, keyspace)
}


@ -106,10 +106,6 @@ func (c fallbackClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*t
return c.fallback.GetSrvKeyspace(ctx, keyspace)
}
func (c fallbackClient) HandlePanic(err *error) {
c.fallback.HandlePanic(err)
}


@ -71,7 +71,6 @@ func (c *successClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*t
Keyspace: "other_keyspace",
},
},
}, nil
}
if keyspace == "small" {


@ -110,10 +110,6 @@ func (c *terminalClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*
return nil, errTerminal
}
func (c *terminalClient) HandlePanic(err *error) {
if x := recover(); x != nil {
log.Errorf("Uncaught panic:\n%v\n%s", x, tb.Stack(4))


@ -40,8 +40,8 @@ var (
<tr border="">
<td width="25%" border="">
Alias: {{github_com_youtube_vitess_vtctld_tablet .Tablet.AliasString}}<br>
Keyspace: {{github_com_youtube_vitess_vtctld_keyspace .Tablet.Keyspace}} Shard: {{github_com_youtube_vitess_vtctld_shard .Tablet.Keyspace .Tablet.Shard}} Tablet Type: {{.Tablet.Type}}<br>
SrvKeyspace: {{github_com_youtube_vitess_vtctld_srv_keyspace .Tablet.Alias.Cell .Tablet.Keyspace}}<br>
Replication graph: {{github_com_youtube_vitess_vtctld_replication .Tablet.Alias.Cell .Tablet.Keyspace .Tablet.Shard}}<br>
{{if .BlacklistedTables}}
BlacklistedTables: {{range .BlacklistedTables}}{{.}} {{end}}<br>


@ -27,7 +27,7 @@ type Histogram struct {
// NewHistogram creates a histogram with auto-generated labels
// based on the cutoffs. The buckets are categorized using the
// following criterion: cutoff[i-1] < value <= cutoff[i]. Anything
// higher than the highest cutoff is labeled as "inf".
func NewHistogram(name string, cutoffs []int64) *Histogram {
labels := make([]string, len(cutoffs)+1)
@ -62,7 +62,7 @@ func NewGenericHistogram(name string, cutoffs []int64, labels []string, countLab
// Add adds a new measurement to the Histogram.
func (h *Histogram) Add(value int64) {
for i := range h.labels {
if i == len(h.labels)-1 || value <= h.cutoffs[i] {
h.buckets[i].Add(1)
h.total.Add(value)
break


@ -15,7 +15,7 @@ func TestHistogram(t *testing.T) {
for i := 0; i < 10; i++ {
h.Add(int64(i))
}
want := `{"1": 1, "5": 5, "inf": 10, "Count": 10, "Total": 45}`
want := `{"1": 2, "5": 6, "inf": 10, "Count": 10, "Total": 45}`
if h.String() != want {
t.Errorf("got %v, want %v", h.String(), want)
}
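// The expected values follow from the bucketing criterion
// cutoff[i-1] < value <= cutoff[i]: with cutoffs {1, 5} and values 0..9,
//   bucket "1"   (value <= 1):     0, 1        -> count 2
//   bucket "5"   (1 < value <= 5): 2, 3, 4, 5  -> count 4
//   bucket "inf" (value > 5):      6, 7, 8, 9  -> count 4
// String() reports cumulative counts, hence {"1": 2, "5": 6, "inf": 10}.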
@ -23,9 +23,9 @@ func TestHistogram(t *testing.T) {
counts["Count"] = h.Count()
counts["Total"] = h.Total()
for k, want := range map[string]int64{
"1": 1,
"1": 2,
"5": 4,
"inf": 5,
"inf": 4,
"Count": 10,
"Total": 45,
} {


@ -16,9 +16,9 @@ func TestTimings(t *testing.T) {
tm.Add("tag1", 500*time.Microsecond)
tm.Add("tag1", 1*time.Millisecond)
tm.Add("tag2", 1*time.Millisecond)
want := `{"TotalCount":3,"TotalTime":2500000,"Histograms":{"tag1":{"500000":0,"1000000":1,"5000000":2,"10000000":2,"50000000":2,"100000000":2,"500000000":2,"1000000000":2,"5000000000":2,"10000000000":2,"inf":2,"Count":2,"Time":1500000},"tag2":{"500000":0,"1000000":0,"5000000":1,"10000000":1,"50000000":1,"100000000":1,"500000000":1,"1000000000":1,"5000000000":1,"10000000000":1,"inf":1,"Count":1,"Time":1000000}}}`
if tm.String() != want {
t.Errorf("want %s, got %s", want, tm.String())
want := `{"TotalCount":3,"TotalTime":2500000,"Histograms":{"tag1":{"500000":1,"1000000":2,"5000000":2,"10000000":2,"50000000":2,"100000000":2,"500000000":2,"1000000000":2,"5000000000":2,"10000000000":2,"inf":2,"Count":2,"Time":1500000},"tag2":{"500000":0,"1000000":1,"5000000":1,"10000000":1,"50000000":1,"100000000":1,"500000000":1,"1000000000":1,"5000000000":1,"10000000000":1,"inf":1,"Count":1,"Time":1000000}}}`
if got := tm.String(); got != want {
t.Errorf("got %s, want %s", got, want)
}
}
@ -28,9 +28,9 @@ func TestMultiTimings(t *testing.T) {
mtm.Add([]string{"tag1a", "tag1b"}, 500*time.Microsecond)
mtm.Add([]string{"tag1a", "tag1b"}, 1*time.Millisecond)
mtm.Add([]string{"tag2a", "tag2b"}, 1*time.Millisecond)
want := `{"TotalCount":3,"TotalTime":2500000,"Histograms":{"tag1a.tag1b":{"500000":0,"1000000":1,"5000000":2,"10000000":2,"50000000":2,"100000000":2,"500000000":2,"1000000000":2,"5000000000":2,"10000000000":2,"inf":2,"Count":2,"Time":1500000},"tag2a.tag2b":{"500000":0,"1000000":0,"5000000":1,"10000000":1,"50000000":1,"100000000":1,"500000000":1,"1000000000":1,"5000000000":1,"10000000000":1,"inf":1,"Count":1,"Time":1000000}}}`
if mtm.String() != want {
t.Errorf("want %s, got %s", want, mtm.String())
want := `{"TotalCount":3,"TotalTime":2500000,"Histograms":{"tag1a.tag1b":{"500000":1,"1000000":2,"5000000":2,"10000000":2,"50000000":2,"100000000":2,"500000000":2,"1000000000":2,"5000000000":2,"10000000000":2,"inf":2,"Count":2,"Time":1500000},"tag2a.tag2b":{"500000":0,"1000000":1,"5000000":1,"10000000":1,"50000000":1,"100000000":1,"500000000":1,"1000000000":1,"5000000000":1,"10000000000":1,"inf":1,"Count":1,"Time":1000000}}}`
if got := mtm.String(); got != want {
t.Errorf("got %s, want %s", got, want)
}
}
@ -43,11 +43,12 @@ func TestTimingsHook(t *testing.T) {
gotv = v.(*Timings)
})
v := NewTimings("timings2")
if gotname != "timings2" {
t.Errorf("want timings2, got %s", gotname)
name := "timings2"
v := NewTimings(name)
if gotname != name {
t.Errorf("got %q, want %q", gotname, name)
}
if gotv != v {
t.Errorf("want %#v, got %#v", v, gotv)
t.Errorf("got %#v, want %#v", gotv, v)
}
}


@ -7,7 +7,6 @@ package etcdtopo
import (
"flag"
"path"
"strings"
"github.com/youtube/vitess/go/flagutil"
@ -32,8 +31,6 @@ const (
tabletFilename = dataFilename
shardReplicationFilename = dataFilename
srvKeyspaceFilename = dataFilename
vschemaFilename = "_VSchema"
)
@ -101,19 +98,3 @@ func srvKeyspaceDirPath(keyspace string) string {
func srvKeyspaceFilePath(keyspace string) string {
return path.Join(srvKeyspaceDirPath(keyspace), srvKeyspaceFilename)
}


@ -231,23 +231,6 @@ func waitForLock(ctx context.Context, client Client, lockPath string, waitIndex
}
}
// LockKeyspaceForAction implements topo.Server.
func (s *Server) LockKeyspaceForAction(ctx context.Context, keyspace, contents string) (string, error) {
return lock(ctx, s.getGlobal(), keyspaceDirPath(keyspace), contents,


@ -142,17 +142,6 @@ func TestShardLock(t *testing.T) {
test.CheckShardLock(ctx, t, ts)
}
func TestVSchema(t *testing.T) {
ctx := context.Background()
if testing.Short() {


@ -21,55 +21,6 @@ import (
// test and main programs can change it.
var WatchSleepDuration = 30 * time.Second
// UpdateSrvKeyspace implements topo.Server.
func (s *Server) UpdateSrvKeyspace(ctx context.Context, cellName, keyspace string, srvKeyspace *topodatapb.SrvKeyspace) error {
cell, err := s.getCell(cellName)


@ -15,7 +15,6 @@ It has these top-level messages:
Shard
Keyspace
ShardReplication
ShardReference
SrvKeyspace
*/
@ -340,9 +339,6 @@ type Keyspace struct {
// type of the column used for sharding
// UNSET if the keyspace is not sharded
ShardingColumnType KeyspaceIdType `protobuf:"varint,2,opt,name=sharding_column_type,json=shardingColumnType,enum=topodata.KeyspaceIdType" json:"sharding_column_type,omitempty"`
// ServedFrom will redirect the appropriate traffic to
// another keyspace.
ServedFroms []*Keyspace_ServedFrom `protobuf:"bytes,4,rep,name=served_froms,json=servedFroms" json:"served_froms,omitempty"`
@ -413,28 +409,7 @@ func (m *ShardReplication_Node) GetTabletAlias() *TabletAlias {
return nil
}
// ShardReference is used as a pointer from a SrvKeyspace to a Shard
type ShardReference struct {
// Copied from Shard.
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
@ -444,7 +419,7 @@ type ShardReference struct {
func (m *ShardReference) Reset() { *m = ShardReference{} }
func (m *ShardReference) String() string { return proto.CompactTextString(m) }
func (*ShardReference) ProtoMessage() {}
func (*ShardReference) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (m *ShardReference) GetKeyRange() *KeyRange {
if m != nil {
@ -461,13 +436,12 @@ type SrvKeyspace struct {
ShardingColumnName string `protobuf:"bytes,2,opt,name=sharding_column_name,json=shardingColumnName" json:"sharding_column_name,omitempty"`
ShardingColumnType KeyspaceIdType `protobuf:"varint,3,opt,name=sharding_column_type,json=shardingColumnType,enum=topodata.KeyspaceIdType" json:"sharding_column_type,omitempty"`
ServedFrom []*SrvKeyspace_ServedFrom `protobuf:"bytes,4,rep,name=served_from,json=servedFrom" json:"served_from,omitempty"`
}
func (m *SrvKeyspace) Reset() { *m = SrvKeyspace{} }
func (m *SrvKeyspace) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace) ProtoMessage() {}
func (*SrvKeyspace) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (m *SrvKeyspace) GetPartitions() []*SrvKeyspace_KeyspacePartition {
if m != nil {
@ -494,7 +468,7 @@ func (m *SrvKeyspace_KeyspacePartition) Reset() { *m = SrvKeyspace_Keysp
func (m *SrvKeyspace_KeyspacePartition) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace_KeyspacePartition) ProtoMessage() {}
func (*SrvKeyspace_KeyspacePartition) Descriptor() ([]byte, []int) {
return fileDescriptor0, []int{7, 0}
}
func (m *SrvKeyspace_KeyspacePartition) GetShardReferences() []*ShardReference {
@ -516,7 +490,7 @@ type SrvKeyspace_ServedFrom struct {
func (m *SrvKeyspace_ServedFrom) Reset() { *m = SrvKeyspace_ServedFrom{} }
func (m *SrvKeyspace_ServedFrom) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace_ServedFrom) ProtoMessage() {}
func (*SrvKeyspace_ServedFrom) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7, 1} }
func init() {
proto.RegisterType((*KeyRange)(nil), "topodata.KeyRange")
@ -530,7 +504,6 @@ func init() {
proto.RegisterType((*Keyspace_ServedFrom)(nil), "topodata.Keyspace.ServedFrom")
proto.RegisterType((*ShardReplication)(nil), "topodata.ShardReplication")
proto.RegisterType((*ShardReplication_Node)(nil), "topodata.ShardReplication.Node")
proto.RegisterType((*SrvShard)(nil), "topodata.SrvShard")
proto.RegisterType((*ShardReference)(nil), "topodata.ShardReference")
proto.RegisterType((*SrvKeyspace)(nil), "topodata.SrvKeyspace")
proto.RegisterType((*SrvKeyspace_KeyspacePartition)(nil), "topodata.SrvKeyspace.KeyspacePartition")
@ -540,74 +513,71 @@ func init() {
}
var fileDescriptor0 = []byte{
// 1097 bytes of a gzipped FileDescriptorProto
// 1051 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x56, 0xdd, 0x6e, 0xe2, 0x46,
0x14, 0xae, 0xcd, 0x4f, 0xe0, 0x40, 0x58, 0x67, 0x9a, 0xad, 0x2c, 0x57, 0xd5, 0x46, 0xdc, 0x74,
0x95, 0xaa, 0xb4, 0xca, 0xf6, 0x27, 0x5a, 0xa9, 0xd2, 0x12, 0xca, 0xb6, 0xd9, 0x24, 0x84, 0x0e,
0x8e, 0xb6, 0xb9, 0xb2, 0x0c, 0xcc, 0x66, 0xad, 0x05, 0xdb, 0xf5, 0x0c, 0x48, 0x3c, 0xc3, 0x5e,
0xf4, 0xbe, 0x0f, 0xd2, 0xdb, 0x3e, 0x51, 0x1f, 0xa1, 0x52, 0x67, 0xce, 0xd8, 0x60, 0xc8, 0x4f,
0xb3, 0x55, 0xae, 0x98, 0x33, 0xe7, 0x67, 0xce, 0xf9, 0xe6, 0xfb, 0xc6, 0x40, 0x43, 0x44, 0x71,
0x34, 0xf6, 0x85, 0xdf, 0x8a, 0x93, 0x48, 0x44, 0xa4, 0x92, 0xd9, 0xcd, 0x03, 0xa8, 0x9c, 0xb0,
0x05, 0xf5, 0xc3, 0x2b, 0x46, 0x76, 0xa1, 0xc4, 0x85, 0x9f, 0x08, 0xdb, 0xd8, 0x33, 0x9e, 0xd6,
0xa9, 0x36, 0x88, 0x05, 0x05, 0x16, 0x8e, 0x6d, 0x13, 0xf7, 0xd4, 0xb2, 0xf9, 0x0c, 0x6a, 0xae,
0x3f, 0x9c, 0x30, 0xd1, 0x9e, 0x04, 0x3e, 0x27, 0x04, 0x8a, 0x23, 0x36, 0x99, 0x60, 0x56, 0x95,
0xe2, 0x5a, 0x25, 0xcd, 0x02, 0x9d, 0xb4, 0x4d, 0xd5, 0xb2, 0xf9, 0x4f, 0x01, 0xca, 0x3a, 0x8b,
0x7c, 0x01, 0x25, 0x5f, 0x65, 0x62, 0x46, 0xed, 0xe0, 0x71, 0x6b, 0xd9, 0x5d, 0xae, 0x2c, 0xd5,
0x31, 0xc4, 0x81, 0xca, 0xdb, 0x88, 0x8b, 0xd0, 0x9f, 0x32, 0x2c, 0x57, 0xa5, 0x4b, 0x9b, 0x34,
0xc0, 0x0c, 0x62, 0xbb, 0x80, 0xbb, 0x72, 0x45, 0x0e, 0xa1, 0x12, 0x47, 0x89, 0xf0, 0xa6, 0x7e,
0x6c, 0x17, 0xf7, 0x0a, 0xb2, 0xf6, 0x67, 0x9b, 0xb5, 0x5b, 0x7d, 0x19, 0x70, 0xe6, 0xc7, 0xdd,
0x50, 0x24, 0x0b, 0xba, 0x15, 0x6b, 0x4b, 0x9d, 0xf2, 0x8e, 0x2d, 0x78, 0xec, 0x8f, 0x98, 0x5d,
0xd2, 0xa7, 0x64, 0x36, 0xc2, 0xf2, 0xd6, 0x4f, 0xc6, 0x76, 0x19, 0x1d, 0xda, 0x20, 0x5f, 0x41,
0x55, 0x46, 0x78, 0x89, 0x42, 0xce, 0xde, 0xc2, 0x41, 0xc8, 0xea, 0xb0, 0x0c, 0x53, 0x2c, 0xa3,
0xd1, 0x7d, 0x0a, 0x45, 0xb1, 0x88, 0x99, 0x5d, 0x91, 0xb1, 0x8d, 0x83, 0xdd, 0xcd, 0xc6, 0x5c,
0xe9, 0xa3, 0x18, 0x21, 0x23, 0xad, 0xf1, 0xd0, 0x53, 0x13, 0x7a, 0xd1, 0x9c, 0x25, 0x49, 0x30,
0x66, 0x76, 0x15, 0xcf, 0x6e, 0x8c, 0x87, 0x3d, 0xb9, 0x7d, 0x9e, 0xee, 0x92, 0x96, 0xac, 0xe9,
0x5f, 0x71, 0x1b, 0x70, 0x58, 0xe7, 0xda, 0xb0, 0xae, 0x74, 0xea, 0x49, 0x31, 0xce, 0x79, 0x0e,
0xf5, 0xfc, 0xfc, 0xea, 0x9a, 0x64, 0x7f, 0xe9, 0xcd, 0xa9, 0xa5, 0x1a, 0x76, 0xee, 0x4f, 0x66,
0x1a, 0xeb, 0x12, 0xd5, 0xc6, 0x73, 0xf3, 0xd0, 0x70, 0xbe, 0x87, 0xea, 0xb2, 0xdc, 0x7f, 0x25,
0x56, 0x73, 0x89, 0xaf, 0x8a, 0x95, 0x9a, 0x55, 0x6f, 0xbe, 0x2f, 0x43, 0x69, 0x80, 0xc8, 0x1d,
0x42, 0x7d, 0xea, 0x73, 0xc1, 0x12, 0xef, 0x1e, 0x2c, 0xa8, 0xe9, 0x50, 0xcd, 0xb4, 0x35, 0xcc,
0xcd, 0x7b, 0x60, 0xfe, 0x03, 0xd4, 0x39, 0x4b, 0xe6, 0x6c, 0xec, 0x29, 0x60, 0xb9, 0xa4, 0xca,
0x06, 0x4e, 0xd8, 0x51, 0x6b, 0x80, 0x31, 0x78, 0x03, 0x35, 0xbe, 0x5c, 0x73, 0xf2, 0x02, 0xb6,
0x79, 0x34, 0x4b, 0x46, 0xcc, 0xc3, 0x3b, 0xe7, 0x29, 0xa9, 0x3e, 0xbd, 0x96, 0x8f, 0x41, 0xb8,
0xa6, 0x75, 0xbe, 0x32, 0xb8, 0x42, 0x45, 0xe9, 0x81, 0x4b, 0x52, 0x15, 0x14, 0x2a, 0x68, 0x90,
0x97, 0xf0, 0x48, 0xe0, 0x8c, 0xde, 0x28, 0x92, 0x70, 0x46, 0xd2, 0x5f, 0xde, 0xa4, 0xab, 0xae,
0xac, 0xa1, 0xe8, 0xe8, 0x28, 0xda, 0x10, 0x79, 0x93, 0x3b, 0x97, 0x00, 0xab, 0xd6, 0xc9, 0xb7,
0x50, 0x4b, 0xab, 0x22, 0xcf, 0x8c, 0x3b, 0x78, 0x06, 0x62, 0xb9, 0x5e, 0xb5, 0x68, 0xe6, 0x5a,
0x74, 0xfe, 0x30, 0xa0, 0x96, 0x1b, 0x2b, 0x13, 0xb4, 0xb1, 0x14, 0xf4, 0x9a, 0x64, 0xcc, 0xdb,
0x24, 0x53, 0xb8, 0x55, 0x32, 0xc5, 0x7b, 0x5c, 0xdf, 0x27, 0x50, 0xc6, 0x46, 0x33, 0xf8, 0x52,
0xcb, 0xf9, 0xcb, 0x80, 0xed, 0x35, 0x64, 0x1e, 0x74, 0x76, 0x72, 0x00, 0x8f, 0xc7, 0x01, 0x57,
0x51, 0xde, 0x6f, 0x33, 0x96, 0x2c, 0x3c, 0xc5, 0x89, 0x40, 0x8e, 0xa9, 0xa6, 0xa9, 0xd0, 0x8f,
0x53, 0xe7, 0x2f, 0xca, 0x37, 0xd0, 0x2e, 0xf2, 0x25, 0x90, 0xe1, 0xc4, 0x1f, 0xbd, 0x9b, 0x04,
0x92, 0xae, 0x92, 0x6e, 0xba, 0xed, 0x22, 0x96, 0xdd, 0xc9, 0x79, 0xb0, 0x11, 0xde, 0xfc, 0xdb,
0xc4, 0x77, 0x57, 0xa3, 0xf5, 0x35, 0xec, 0x22, 0x40, 0x41, 0x78, 0x25, 0x09, 0x31, 0x99, 0x4d,
0x43, 0x14, 0x7f, 0xaa, 0x2e, 0x92, 0xf9, 0x3a, 0xe8, 0x52, 0xfa, 0x27, 0xaf, 0xae, 0x67, 0xe0,
0xdc, 0x26, 0xce, 0x6d, 0xaf, 0x81, 0x8a, 0x67, 0x1c, 0x6b, 0x76, 0x6f, 0xd4, 0x42, 0x0c, 0xf6,
0x61, 0x87, 0xc7, 0x93, 0x40, 0x68, 0x8e, 0xcb, 0x72, 0xb3, 0x50, 0xe0, 0xa4, 0x25, 0xfa, 0x08,
0x1d, 0x48, 0x80, 0x8e, 0xda, 0x96, 0x82, 0xc8, 0xf4, 0xf4, 0x26, 0x89, 0xa6, 0xfc, 0xfa, 0x23,
0x9b, 0x9d, 0x97, 0x4a, 0xea, 0xa5, 0x8c, 0xca, 0x24, 0xa5, 0xd6, 0xdc, 0x99, 0x65, 0x94, 0x55,
0xe6, 0xc3, 0x5e, 0x5b, 0x9e, 0x90, 0x85, 0x75, 0x42, 0x36, 0xdf, 0x1b, 0x60, 0x69, 0x7d, 0x32,
0x39, 0xd2, 0xc8, 0x17, 0x41, 0x14, 0xca, 0xd3, 0x4b, 0x61, 0x34, 0x66, 0xea, 0x05, 0x52, 0x63,
0x3c, 0xd9, 0x10, 0x5f, 0x2e, 0xb4, 0xd5, 0x93, 0x71, 0x54, 0x47, 0x3b, 0x2f, 0xa0, 0xa8, 0x4c,
0xf5, 0x8e, 0xa5, 0xcd, 0xdf, 0xe7, 0x1d, 0x13, 0x2b, 0xa3, 0x19, 0x43, 0x65, 0x90, 0xcc, 0xb5,
0xb0, 0xe4, 0xd7, 0x33, 0x77, 0xd9, 0xb8, 0xfe, 0xf0, 0x77, 0xee, 0x09, 0xa4, 0xef, 0xa4, 0x87,
0x5f, 0x62, 0x3d, 0x3d, 0xe8, 0xad, 0x8e, 0xdc, 0x69, 0x5e, 0x40, 0x23, 0x9d, 0xe9, 0x0d, 0x4b,
0x58, 0x28, 0x49, 0xf7, 0x10, 0xe7, 0x36, 0xff, 0x2c, 0xca, 0x57, 0x22, 0x99, 0x2f, 0x99, 0xfc,
0x13, 0x40, 0x2c, 0xff, 0x33, 0x04, 0x0a, 0xb3, 0x0c, 0xd6, 0xcf, 0x73, 0xb0, 0xae, 0x42, 0x97,
0x4c, 0xe9, 0x67, 0xf1, 0x34, 0x97, 0x7a, 0xab, 0x24, 0xcc, 0x0f, 0x96, 0x44, 0xe1, 0x7f, 0x48,
0xa2, 0x0d, 0xb5, 0x1c, 0xcd, 0x53, 0x96, 0xef, 0xdd, 0x3c, 0x47, 0x8e, 0xe8, 0xb0, 0x22, 0xfa,
0xcd, 0xaa, 0x2a, 0xdd, 0xa8, 0x2a, 0xe7, 0x77, 0x03, 0x76, 0xae, 0xc1, 0xa1, 0xb4, 0x91, 0xfb,
0x76, 0xdd, 0xad, 0x8d, 0xd5, 0x47, 0x8b, 0x74, 0xc0, 0xd2, 0x47, 0x26, 0xd9, 0x55, 0x6b, 0x99,
0xd4, 0xf2, 0x18, 0xac, 0x73, 0x41, 0x76, 0xb4, 0x66, 0x73, 0xc7, 0x7b, 0x08, 0x95, 0xde, 0xf1,
0x81, 0xd8, 0x3f, 0x80, 0xc6, 0xfa, 0x3d, 0x90, 0x2a, 0x94, 0x2e, 0x7a, 0x83, 0xae, 0x6b, 0x7d,
0x44, 0x00, 0xca, 0x17, 0xc7, 0x3d, 0xf7, 0xbb, 0x6f, 0x2c, 0x43, 0x6d, 0x1f, 0x5d, 0xba, 0xdd,
0x81, 0x65, 0xee, 0x4b, 0x98, 0x60, 0x75, 0x14, 0xa9, 0xc1, 0xd6, 0x45, 0xef, 0xa4, 0x77, 0xfe,
0xba, 0xa7, 0x53, 0xce, 0xda, 0x03, 0xb7, 0x4b, 0x65, 0x8a, 0x74, 0xd0, 0x6e, 0xff, 0xf4, 0xb8,
0xd3, 0xb6, 0x4c, 0xe5, 0xa0, 0x3f, 0x9e, 0xf7, 0x4e, 0x2f, 0xad, 0x02, 0xd6, 0x6a, 0xbb, 0x9d,
0x9f, 0xf5, 0x72, 0xd0, 0x6f, 0xd3, 0xae, 0x55, 0x94, 0x5f, 0xb6, 0x7a, 0xf7, 0xd7, 0x7e, 0x97,
0x1e, 0x9f, 0x75, 0x7b, 0x6e, 0xfb, 0xd4, 0x2a, 0xa9, 0x9c, 0xa3, 0x76, 0xe7, 0xe4, 0xa2, 0x6f,
0x95, 0x75, 0xb1, 0x81, 0x7b, 0x2e, 0x43, 0xb7, 0x94, 0xe3, 0xf5, 0x39, 0x3d, 0x91, 0xa7, 0x54,
0x1c, 0xd3, 0x32, 0x8e, 0x1c, 0xb0, 0x47, 0xd1, 0xb4, 0xb5, 0x88, 0x66, 0x62, 0x36, 0x64, 0xad,
0x79, 0x20, 0x18, 0xe7, 0xfa, 0x2f, 0xf6, 0xb0, 0x8c, 0x3f, 0xcf, 0xfe, 0x0d, 0x00, 0x00, 0xff,
0xff, 0x9d, 0x05, 0xe2, 0x27, 0x7b, 0x0b, 0x00, 0x00,
0x14, 0xae, 0x7f, 0x20, 0x70, 0xcc, 0xb2, 0xde, 0x69, 0xb6, 0xb2, 0x5c, 0x55, 0x8d, 0xb8, 0xe9,
0x6a, 0xab, 0xd2, 0x2a, 0xdb, 0x9f, 0x68, 0xa5, 0x4a, 0x21, 0x94, 0x6d, 0xb3, 0x49, 0x08, 0x1d,
0x8c, 0xb6, 0xb9, 0xb2, 0x0c, 0xcc, 0x66, 0xad, 0x05, 0xec, 0x7a, 0x0c, 0x12, 0xcf, 0xb0, 0x17,
0xed, 0x75, 0x5f, 0xa6, 0x97, 0x7d, 0xaa, 0x4a, 0x9d, 0x39, 0x63, 0x83, 0x81, 0x26, 0xcd, 0x56,
0xb9, 0xca, 0x1c, 0x9f, 0x9f, 0x39, 0xdf, 0x77, 0xbe, 0x33, 0x04, 0xea, 0x69, 0x14, 0x47, 0xe3,
0x20, 0x0d, 0x9a, 0x71, 0x12, 0xa5, 0x11, 0xa9, 0xe4, 0x76, 0xe3, 0x10, 0x2a, 0x67, 0x6c, 0x49,
0x83, 0xd9, 0x35, 0x23, 0xfb, 0x50, 0xe2, 0x69, 0x90, 0xa4, 0x8e, 0x76, 0xa0, 0x3d, 0xa9, 0x51,
0x65, 0x10, 0x1b, 0x0c, 0x36, 0x1b, 0x3b, 0x3a, 0x7e, 0x93, 0xc7, 0xc6, 0x33, 0xb0, 0xbc, 0x60,
0x38, 0x61, 0x69, 0x6b, 0x12, 0x06, 0x9c, 0x10, 0x30, 0x47, 0x6c, 0x32, 0xc1, 0xac, 0x2a, 0xc5,
0xb3, 0x4c, 0x9a, 0x87, 0x2a, 0xe9, 0x01, 0x95, 0xc7, 0xc6, 0xdf, 0x06, 0x94, 0x55, 0x16, 0xf9,
0x1c, 0x4a, 0x81, 0xcc, 0xc4, 0x0c, 0xeb, 0xf0, 0x71, 0x73, 0xd5, 0x5d, 0xa1, 0x2c, 0x55, 0x31,
0xc4, 0x85, 0xca, 0x9b, 0x88, 0xa7, 0xb3, 0x60, 0xca, 0xb0, 0x5c, 0x95, 0xae, 0x6c, 0x52, 0x07,
0x3d, 0x8c, 0x1d, 0x03, 0xbf, 0x8a, 0x13, 0x39, 0x82, 0x4a, 0x1c, 0x25, 0xa9, 0x3f, 0x0d, 0x62,
0xc7, 0x3c, 0x30, 0x44, 0xed, 0x4f, 0xb6, 0x6b, 0x37, 0x7b, 0x22, 0xe0, 0x22, 0x88, 0x3b, 0xb3,
0x34, 0x59, 0xd2, 0xbd, 0x58, 0x59, 0xf2, 0x96, 0xb7, 0x6c, 0xc9, 0xe3, 0x60, 0xc4, 0x9c, 0x92,
0xba, 0x25, 0xb7, 0x91, 0x96, 0x37, 0x41, 0x32, 0x76, 0xca, 0xe8, 0x50, 0x06, 0xf9, 0x12, 0xaa,
0x22, 0xc2, 0x4f, 0x24, 0x73, 0xce, 0x1e, 0x02, 0x21, 0xeb, 0xcb, 0x72, 0x4e, 0xb1, 0x8c, 0x62,
0xf7, 0x09, 0x98, 0xe9, 0x32, 0x66, 0x4e, 0x45, 0xc4, 0xd6, 0x0f, 0xf7, 0xb7, 0x1b, 0xf3, 0x84,
0x8f, 0x62, 0x84, 0x88, 0xb4, 0xc7, 0x43, 0x5f, 0x22, 0xf4, 0xa3, 0x05, 0x4b, 0x92, 0x70, 0xcc,
0x9c, 0x2a, 0xde, 0x5d, 0x1f, 0x0f, 0xbb, 0xe2, 0xf3, 0x65, 0xf6, 0x95, 0x34, 0x45, 0xcd, 0xe0,
0x9a, 0x3b, 0x80, 0x60, 0xdd, 0x1d, 0xb0, 0x9e, 0x70, 0x2a, 0xa4, 0x18, 0xe7, 0x3e, 0x87, 0x5a,
0x11, 0xbf, 0x1c, 0x93, 0xe8, 0x2f, 0x9b, 0x9c, 0x3c, 0x4a, 0xb0, 0x8b, 0x60, 0x32, 0x57, 0x5c,
0x97, 0xa8, 0x32, 0x9e, 0xeb, 0x47, 0x9a, 0xfb, 0x1d, 0x54, 0x57, 0xe5, 0xfe, 0x2b, 0xb1, 0x5a,
0x48, 0x7c, 0x69, 0x56, 0x2c, 0xbb, 0xd6, 0x78, 0x57, 0x86, 0x52, 0x1f, 0x99, 0x3b, 0x82, 0xda,
0x34, 0xe0, 0x29, 0x4b, 0xfc, 0x3b, 0xa8, 0xc0, 0x52, 0xa1, 0x4a, 0x69, 0x1b, 0x9c, 0xeb, 0x77,
0xe0, 0xfc, 0x7b, 0xa8, 0x71, 0x96, 0x2c, 0xd8, 0xd8, 0x97, 0xc4, 0x72, 0x21, 0x95, 0x2d, 0x9e,
0xb0, 0xa3, 0x66, 0x1f, 0x63, 0x70, 0x02, 0x16, 0x5f, 0x9d, 0x39, 0x39, 0x86, 0x07, 0x3c, 0x9a,
0x27, 0x23, 0xe6, 0xe3, 0xcc, 0x79, 0x26, 0xaa, 0x8f, 0x77, 0xf2, 0x31, 0x08, 0xcf, 0xb4, 0xc6,
0xd7, 0x06, 0x97, 0xac, 0xc8, 0x7d, 0xe0, 0x42, 0x54, 0x86, 0x64, 0x05, 0x0d, 0xf2, 0x02, 0x1e,
0xa6, 0x88, 0xd1, 0x1f, 0x45, 0x82, 0xce, 0x48, 0xf8, 0xcb, 0xdb, 0x72, 0x55, 0x95, 0x15, 0x15,
0x6d, 0x15, 0x45, 0xeb, 0x69, 0xd1, 0xe4, 0xee, 0x15, 0xc0, 0xba, 0x75, 0xf2, 0x0d, 0x58, 0x59,
0x55, 0xd4, 0x99, 0x76, 0x8b, 0xce, 0x20, 0x5d, 0x9d, 0xd7, 0x2d, 0xea, 0x85, 0x16, 0xdd, 0x3f,
0x34, 0xb0, 0x0a, 0xb0, 0xf2, 0x85, 0xd6, 0x56, 0x0b, 0xbd, 0xb1, 0x32, 0xfa, 0x4d, 0x2b, 0x63,
0xdc, 0xb8, 0x32, 0xe6, 0x1d, 0xc6, 0xf7, 0x11, 0x94, 0xb1, 0xd1, 0x9c, 0xbe, 0xcc, 0x72, 0xff,
0xd4, 0xe0, 0xc1, 0x06, 0x33, 0xf7, 0x8a, 0x9d, 0x1c, 0xc2, 0xe3, 0x71, 0xc8, 0x65, 0x94, 0xff,
0xeb, 0x9c, 0x25, 0x4b, 0x5f, 0x6a, 0x22, 0x14, 0x30, 0x25, 0x9a, 0x0a, 0xfd, 0x30, 0x73, 0xfe,
0x2c, 0x7d, 0x7d, 0xe5, 0x22, 0x5f, 0x00, 0x19, 0x4e, 0x82, 0xd1, 0xdb, 0x49, 0x28, 0xe4, 0x2a,
0xe4, 0xa6, 0xda, 0x36, 0xb1, 0xec, 0xa3, 0x82, 0x07, 0x1b, 0xe1, 0x8d, 0xbf, 0x74, 0x7c, 0x77,
0x15, 0x5b, 0x5f, 0xc1, 0x3e, 0x12, 0x14, 0xce, 0xae, 0x85, 0x20, 0x26, 0xf3, 0xe9, 0x0c, 0x97,
0x3f, 0xdb, 0x2e, 0x92, 0xfb, 0xda, 0xe8, 0x92, 0xfb, 0x4f, 0x5e, 0xee, 0x66, 0x20, 0x6e, 0x1d,
0x71, 0x3b, 0x1b, 0xa4, 0xe2, 0x1d, 0xa7, 0x4a, 0xdd, 0x5b, 0xb5, 0x90, 0x83, 0xe3, 0xd5, 0x8e,
0xbc, 0x4e, 0xa2, 0x29, 0xdf, 0x7d, 0x38, 0xf3, 0x1a, 0xd9, 0x9a, 0xbc, 0x10, 0x51, 0xf9, 0x9a,
0xc8, 0x33, 0x77, 0xe7, 0xb9, 0x0c, 0xa5, 0x79, 0xbf, 0xa3, 0x28, 0x8a, 0xcc, 0xd8, 0x14, 0x99,
0x78, 0x57, 0x0c, 0xdb, 0x6c, 0xbc, 0xd3, 0xc0, 0x56, 0x9b, 0xc7, 0xe2, 0x49, 0x38, 0x0a, 0xd2,
0x30, 0x9a, 0x89, 0x1e, 0x4a, 0xb3, 0x68, 0xcc, 0xe4, 0xdb, 0x22, 0xc1, 0x7c, 0xba, 0xb5, 0x56,
0x85, 0xd0, 0x66, 0x57, 0xc4, 0x51, 0x15, 0xed, 0x1e, 0x83, 0x29, 0x4d, 0xf9, 0x42, 0x65, 0x10,
0xee, 0xf2, 0x42, 0xa5, 0x6b, 0xa3, 0x31, 0x80, 0x7a, 0x76, 0xc3, 0x6b, 0x96, 0xb0, 0x99, 0x18,
0xae, 0xf8, 0x75, 0x2c, 0x0c, 0x13, 0xcf, 0xef, 0xfd, 0x8e, 0x35, 0x7e, 0x37, 0xc5, 0x36, 0x26,
0x8b, 0x95, 0x62, 0x7e, 0x04, 0x88, 0xc5, 0x6f, 0x73, 0x28, 0x11, 0xe4, 0x20, 0x3f, 0x2b, 0x80,
0x5c, 0x87, 0xae, 0xa6, 0xd7, 0xcb, 0xe3, 0x69, 0x21, 0xf5, 0x46, 0xe9, 0xe9, 0xef, 0x2d, 0x3d,
0xe3, 0x7f, 0x48, 0xaf, 0x05, 0x56, 0x41, 0x7a, 0x99, 0xf2, 0x0e, 0xfe, 0x1d, 0x47, 0x41, 0x7c,
0xb0, 0x16, 0x9f, 0xfb, 0x9b, 0x06, 0x8f, 0x76, 0x20, 0x4a, 0x0d, 0x16, 0xde, 0xfd, 0xdb, 0x35,
0xb8, 0x7e, 0xf0, 0x49, 0x1b, 0x6c, 0xec, 0xd2, 0x4f, 0xf2, 0xf1, 0x29, 0x39, 0x5a, 0x45, 0x5c,
0x9b, 0xf3, 0xa5, 0x0f, 0xf9, 0x86, 0xcd, 0x5d, 0xff, 0x3e, 0xb6, 0xe1, 0x96, 0xc7, 0x55, 0xe8,
0xbe, 0x64, 0x97, 0x9f, 0x1e, 0x42, 0x7d, 0x93, 0x61, 0x52, 0x85, 0xd2, 0xa0, 0xdb, 0xef, 0x78,
0xf6, 0x07, 0x04, 0xa0, 0x3c, 0x38, 0xed, 0x7a, 0xdf, 0x7e, 0x6d, 0x6b, 0xf2, 0xf3, 0xc9, 0x95,
0xd7, 0xe9, 0xdb, 0xfa, 0x53, 0x41, 0x16, 0xac, 0x2f, 0x24, 0x16, 0xec, 0x0d, 0xba, 0x67, 0xdd,
0xcb, 0x57, 0x5d, 0x95, 0x72, 0xd1, 0xea, 0x7b, 0x1d, 0x2a, 0x52, 0x84, 0x83, 0x76, 0x7a, 0xe7,
0xa7, 0xed, 0x96, 0xad, 0x4b, 0x07, 0xfd, 0xe1, 0xb2, 0x7b, 0x7e, 0x65, 0x1b, 0x58, 0xab, 0xe5,
0xb5, 0x7f, 0x52, 0xc7, 0x7e, 0xaf, 0x45, 0x3b, 0xb6, 0x29, 0x7e, 0x1b, 0x6a, 0x9d, 0x5f, 0x7a,
0x1d, 0x7a, 0x7a, 0xd1, 0xe9, 0x7a, 0xad, 0x73, 0xbb, 0x24, 0x73, 0x4e, 0x5a, 0xed, 0xb3, 0x41,
0xcf, 0x2e, 0xab, 0x62, 0x7d, 0xef, 0x52, 0x84, 0xee, 0x49, 0xc7, 0xab, 0x4b, 0x7a, 0x26, 0x6e,
0xa9, 0xb8, 0xba, 0xad, 0x9d, 0xb8, 0xe0, 0x8c, 0xa2, 0x69, 0x73, 0x19, 0xcd, 0xd3, 0xf9, 0x90,
0x35, 0x17, 0x61, 0xca, 0x38, 0x57, 0xff, 0xa4, 0x0e, 0xcb, 0xf8, 0xe7, 0xd9, 0x3f, 0x01, 0x00,
0x00, 0xff, 0xff, 0xd5, 0x8c, 0x39, 0x13, 0xbd, 0x0a, 0x00, 0x00,
}


@ -48,13 +48,6 @@ func Run(port int) {
Close()
}
// FireRunHooks fires the hooks registered by OnRun.
// Use this in a non-server to run the hooks registered
// by servenv.OnRun().
func FireRunHooks() {
onRunHooks.Fire()
}
// Close runs any registered exit hooks in parallel.
func Close() {
onCloseHooks.Fire()


@ -14,7 +14,6 @@
// a vitess distribution, register them using onInit and onClose. A
// clean way of achieving that is adding to this package a file with
// an init() function that registers the hooks.
package servenv
import (
@ -26,17 +25,20 @@ import (
"syscall"
"time"
// register the HTTP handlers for profiling
_ "net/http/pprof"
log "github.com/golang/glog"
"github.com/youtube/vitess/go/event"
"github.com/youtube/vitess/go/netutil"
"github.com/youtube/vitess/go/stats"
// register the proper init and shutdown hooks for logging
_ "github.com/youtube/vitess/go/vt/logutil"
)
var (
// The flags used when calling RegisterDefaultFlags.
// Port is part of the flags used when calling RegisterDefaultFlags.
Port *int
// Flags to alter the behavior of the library.
@ -53,10 +55,11 @@ var (
onRunHooks event.Hooks
inited bool
// filled in when calling Run
// ListeningURL is filled in when calling Run, contains the server URL.
ListeningURL url.URL
)
// Init is the first phase of the server startup.
func Init() {
mu.Lock()
defer mu.Unlock()
@ -72,10 +75,6 @@ func Init() {
}
runtime.MemProfileRate = *memProfileRate
gomaxprocs := os.Getenv("GOMAXPROCS")
if gomaxprocs == "" {
gomaxprocs = "1"
}
// We used to set this limit directly, but you pretty much have to
// use a root account to allow increasing a limit reliably. Dropping
@ -164,6 +163,13 @@ func OnRun(f func()) {
onRunHooks.Add(f)
}
// FireRunHooks fires the hooks registered by OnRun.
// Use this in a non-server to run the hooks registered
// by servenv.OnRun().
func FireRunHooks() {
onRunHooks.Fire()
}
// RegisterDefaultFlags registers the default flags for
// listening to a given port for standard connections.
// If calling this, then call RunDefault()
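
For reference, a minimal sketch of a non-server binary driving this pair. Only `servenv.OnRun` and `servenv.FireRunHooks` come from the hunk above; everything else is illustrative:

``` go
package main

import (
	"log"

	"github.com/youtube/vitess/go/vt/servenv"
)

func main() {
	// Hooks registered via OnRun normally fire inside servenv.Run.
	servenv.OnRun(func() {
		log.Println("run hook fired")
	})

	// A non-server binary has no Run loop, so it fires the
	// registered hooks explicitly.
	servenv.FireRunHooks()
}
```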


@ -14,9 +14,6 @@ import (
"strings"
"github.com/youtube/vitess/go/vt/servenv"
"github.com/youtube/vitess/go/vt/topo"
topodatapb "github.com/youtube/vitess/go/vt/proto/topodata"
)
var (
@ -87,33 +84,6 @@ func VtctldSrvKeyspace(cell, keyspace string) template.HTML {
})
}
// VtctldSrvShard returns the shard name, possibly linked to the
// SrvShard page in vtctld.
func VtctldSrvShard(cell, keyspace, shard string) template.HTML {
return MakeVtctldRedirect(shard, map[string]string{
"type": "srv_shard",
"cell": cell,
"keyspace": keyspace,
"shard": shard,
})
}
// VtctldSrvType returns the tablet type, possibly linked to the
// EndPoints page in vtctld.
func VtctldSrvType(cell, keyspace, shard string, tabletType topodatapb.TabletType) template.HTML {
strTabletType := strings.ToLower(tabletType.String())
if !topo.IsInServingGraph(tabletType) {
return template.HTML(strTabletType)
}
return MakeVtctldRedirect(strTabletType, map[string]string{
"type": "srv_type",
"cell": cell,
"keyspace": keyspace,
"shard": shard,
"tablet_type": strTabletType,
})
}
// VtctldReplication returns 'cell/keyspace/shard', possibly linked to the
// ShardReplication page in vtctld.
func VtctldReplication(cell, keyspace, shard string) template.HTML {
@ -141,8 +111,6 @@ func init() {
"github_com_youtube_vitess_vtctld_shard": VtctldShard,
"github_com_youtube_vitess_vtctld_srv_cell": VtctldSrvCell,
"github_com_youtube_vitess_vtctld_srv_keyspace": VtctldSrvKeyspace,
"github_com_youtube_vitess_vtctld_srv_shard": VtctldSrvShard,
"github_com_youtube_vitess_vtctld_srv_type": VtctldSrvType,
"github_com_youtube_vitess_vtctld_replication": VtctldReplication,
"github_com_youtube_vitess_vtctld_tablet": VtctldTablet,
})


@ -175,9 +175,6 @@ const (
// been reparented
ShardActionExternallyReparented = "ShardExternallyReparented"
// ShardActionRebuild recomputes derived shard-wise data
ShardActionRebuild = "RebuildShard"
// ShardActionCheck takes a generic read lock for inexpensive
// shard-wide actions.
ShardActionCheck = "CheckShard"
@ -216,14 +213,6 @@ const (
// KeyspaceActionCreateShard protects shard creation within the keyspace
KeyspaceActionCreateShard = "KeyspaceCreateShard"
//
// SrvShard actions - very local locking, for consistency.
// These are just descriptive and used for locking / logging.
//
// SrvShardActionRebuild locks the SrvShard for rebuild
SrvShardActionRebuild = "RebuildSrvShard"
// all the valid states for an action
// ActionStateQueued is for an action that is going to be executed


@ -76,13 +76,6 @@ func ShardExternallyReparented(tabletAlias *topodatapb.TabletAlias) *ActionNode
}).SetGuid()
}
// RebuildShard returns an ActionNode
func RebuildShard() *ActionNode {
return (&ActionNode{
Action: ShardActionRebuild,
}).SetGuid()
}
// CheckShard returns an ActionNode
func CheckShard() *ActionNode {
return (&ActionNode{
@ -168,12 +161,3 @@ func KeyspaceCreateShard() *ActionNode {
Action: KeyspaceActionCreateShard,
}).SetGuid()
}
// methods to build the serving shard action nodes
// RebuildSrvShard returns an ActionNode
func RebuildSrvShard() *ActionNode {
return (&ActionNode{
Action: SrvShardActionRebuild,
}).SetGuid()
}


@ -158,72 +158,3 @@ func (n *ActionNode) UnlockShard(ctx context.Context, ts topo.Server, keyspace,
}
return err
}
// LockSrvShard will lock the serving shard in the topology server.
// UnlockSrvShard should be called if this returns no error.
func (n *ActionNode) LockSrvShard(ctx context.Context, ts topo.Server, cell, keyspace, shard string) (lockPath string, err error) {
log.Infof("Locking serving shard %v/%v/%v for action %v", cell, keyspace, shard, n.Action)
ctx, cancel := context.WithTimeout(ctx, *LockTimeout)
defer cancel()
span := trace.NewSpanFromContext(ctx)
span.StartClient("TopoServer.LockSrvShardForAction")
span.Annotate("action", n.Action)
span.Annotate("keyspace", keyspace)
span.Annotate("shard", shard)
span.Annotate("cell", cell)
defer span.Finish()
j, err := n.ToJSON()
if err != nil {
return "", err
}
return ts.LockSrvShardForAction(ctx, cell, keyspace, shard, j)
}
// UnlockSrvShard unlocks a previously locked serving shard.
func (n *ActionNode) UnlockSrvShard(ctx context.Context, ts topo.Server, cell, keyspace, shard string, lockPath string, actionError error) error {
// Detach from the parent timeout, but copy the trace span.
// We need to still release the lock even if the parent context timed out.
ctx = trace.CopySpan(context.TODO(), ctx)
ctx, cancel := context.WithTimeout(ctx, DefaultLockTimeout)
defer cancel()
span := trace.NewSpanFromContext(ctx)
span.StartClient("TopoServer.UnlockSrvShardForAction")
span.Annotate("action", n.Action)
span.Annotate("keyspace", keyspace)
span.Annotate("shard", shard)
span.Annotate("cell", cell)
defer span.Finish()
// first update the actionNode
if actionError != nil {
log.Infof("Unlocking serving shard %v/%v/%v for action %v with error %v", cell, keyspace, shard, n.Action, actionError)
n.Error = actionError.Error()
n.State = ActionStateFailed
} else {
log.Infof("Unlocking serving shard %v/%v/%v for successful action %v", cell, keyspace, shard, n.Action)
n.Error = ""
n.State = ActionStateDone
}
j, err := n.ToJSON()
if err != nil {
if actionError != nil {
// this will be masked
log.Warningf("node.ToJSON failed: %v", err)
return actionError
}
return err
}
err = ts.UnlockSrvShardForAction(ctx, cell, keyspace, shard, lockPath, j)
if actionError != nil {
if err != nil {
// this will be masked
log.Warningf("UnlockSrvShardForAction failed: %v", err)
}
return actionError
}
return err
}


@ -31,7 +31,7 @@ func (agent *ActionAgent) maybeRebuildKeyspace(cell, keyspace string) {
return
}
if err := topotools.RebuildKeyspace(agent.batchCtx, logutil.NewConsoleLogger(), agent.TopoServer, keyspace, []string{cell}, false); err != nil {
if err := topotools.RebuildKeyspace(agent.batchCtx, logutil.NewConsoleLogger(), agent.TopoServer, keyspace, []string{cell}); err != nil {
log.Warningf("RebuildKeyspace(%v,%v) failed: %v, may need to run 'vtctl RebuildKeyspaceGraph %v'", cell, keyspace, err, keyspace)
}
}


@ -19,7 +19,6 @@ import (
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/topo/topoproto"
"github.com/youtube/vitess/go/vt/topotools"
"github.com/youtube/vitess/go/vt/topotools/events"
"golang.org/x/net/context"
@ -134,7 +133,7 @@ func (agent *ActionAgent) finalizeTabletExternallyReparented(ctx context.Context
var errs concurrency.AllErrorRecorder
oldMasterAlias := si.MasterAlias
// Update the tablet records and serving graph for the old and new master concurrently.
// Update the tablet records concurrently.
event.DispatchUpdate(ev, "updating old and new master tablet records")
log.Infof("finalizeTabletExternallyReparented: updating tablet records")
wg.Add(1)
@ -193,8 +192,11 @@ func (agent *ActionAgent) finalizeTabletExternallyReparented(ctx context.Context
// didn't get modified between the time when we read it and the time when we
// write it back. Now we use an update loop pattern to do that instead.
event.DispatchUpdate(ev, "updating global shard record")
log.Infof("finalizeTabletExternallyReparented: updating global shard record")
si, err = agent.TopoServer.UpdateShardFields(ctx, tablet.Keyspace, tablet.Shard, func(shard *topodatapb.Shard) error {
log.Infof("finalizeTabletExternallyReparented: updating global shard record if needed")
_, err = agent.TopoServer.UpdateShardFields(ctx, tablet.Keyspace, tablet.Shard, func(shard *topodatapb.Shard) error {
if topoproto.TabletAliasEqual(shard.MasterAlias, tablet.Alias) {
return topo.ErrNoUpdateNeeded
}
shard.MasterAlias = tablet.Alias
return nil
})
@ -202,17 +204,6 @@ func (agent *ActionAgent) finalizeTabletExternallyReparented(ctx context.Context
return err
}
// We already took care of updating the serving graph for the old and new masters.
// All that's left now is in case of a cross-cell reparent, we need to update the
// master cell setting in the SrvShard records of all cells.
if oldMasterAlias == nil || oldMasterAlias.Cell != tablet.Alias.Cell {
event.DispatchUpdate(ev, "rebuilding shard serving graph")
log.Infof("finalizeTabletExternallyReparented: updating SrvShard in all cells for cross-cell reparent")
if err := topotools.UpdateAllSrvShards(ctx, agent.TopoServer, si); err != nil {
return err
}
}
event.DispatchUpdate(ev, "finished")
return nil
}
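
The hunk above switches finalization to an update-loop pattern: the callback inspects the freshly read record and either mutates it or returns topo.ErrNoUpdateNeeded to suppress the write, so there is no read-then-write race. A sketch of the same pattern in isolation; setMaster and its parameters are illustrative, while UpdateShardFields, TabletAliasEqual, and ErrNoUpdateNeeded are the calls used in the diff:

``` go
package example

import (
	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/topo"
	"github.com/youtube/vitess/go/vt/topo/topoproto"

	topodatapb "github.com/youtube/vitess/go/vt/proto/topodata"
)

// setMaster points the shard record at newMaster. If the record is
// already correct, topo.ErrNoUpdateNeeded skips the write entirely.
func setMaster(ctx context.Context, ts topo.Server, keyspace, shard string, newMaster *topodatapb.TabletAlias) error {
	_, err := ts.UpdateShardFields(ctx, keyspace, shard, func(s *topodatapb.Shard) error {
		if topoproto.TabletAliasEqual(s.MasterAlias, newMaster) {
			return topo.ErrNoUpdateNeeded
		}
		s.MasterAlias = newMaster
		return nil
	})
	return err
}
```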


@ -45,7 +45,6 @@ type Tee struct {
keyspaceLockPaths map[string]string
shardLockPaths map[string]string
srvShardLockPaths map[string]string
}
// when reading a version from 'readFrom', we also read another version
@ -76,7 +75,6 @@ func NewTee(primary, secondary topo.Impl, reverseLockOrder bool) *Tee {
tabletVersionMapping: make(map[topodatapb.TabletAlias]versionMapping),
keyspaceLockPaths: make(map[string]string),
shardLockPaths: make(map[string]string),
srvShardLockPaths: make(map[string]string),
}
}
@ -511,87 +509,6 @@ func (tee *Tee) DeleteKeyspaceReplication(ctx context.Context, cell, keyspace st
// Serving Graph management, per cell.
//
// LockSrvShardForAction is part of the topo.Server interface
func (tee *Tee) LockSrvShardForAction(ctx context.Context, cell, keyspace, shard, contents string) (string, error) {
// lock lockFirst
pLockPath, err := tee.lockFirst.LockSrvShardForAction(ctx, cell, keyspace, shard, contents)
if err != nil {
return "", err
}
// lock lockSecond
sLockPath, err := tee.lockSecond.LockSrvShardForAction(ctx, cell, keyspace, shard, contents)
if err != nil {
if err := tee.lockFirst.UnlockSrvShardForAction(ctx, cell, keyspace, shard, pLockPath, "{}"); err != nil {
log.Warningf("Failed to unlock lockFirst shard after failed lockSecond lock for %v/%v/%v", cell, keyspace, shard)
}
return "", err
}
// remember both locks, keyed by lockFirst lock path
tee.mu.Lock()
tee.srvShardLockPaths[pLockPath] = sLockPath
tee.mu.Unlock()
return pLockPath, nil
}
// UnlockSrvShardForAction is part of the topo.Server interface
func (tee *Tee) UnlockSrvShardForAction(ctx context.Context, cell, keyspace, shard, lockPath, results string) error {
// get from map
tee.mu.Lock() // not using defer for unlock, to minimize lock time
sLockPath, ok := tee.srvShardLockPaths[lockPath]
if !ok {
tee.mu.Unlock()
return fmt.Errorf("no lockPath %v in srvShardLockPaths", lockPath)
}
delete(tee.srvShardLockPaths, lockPath)
tee.mu.Unlock()
// unlock lockSecond, then lockFirst
serr := tee.lockSecond.UnlockSrvShardForAction(ctx, cell, keyspace, shard, sLockPath, results)
perr := tee.lockFirst.UnlockSrvShardForAction(ctx, cell, keyspace, shard, lockPath, results)
if serr != nil {
if perr != nil {
log.Warningf("Secondary UnlockSrvShardForAction(%v/%v/%v, %v) failed: %v", cell, keyspace, shard, sLockPath, serr)
}
return serr
}
return perr
}
// UpdateSrvShard is part of the topo.Server interface
func (tee *Tee) UpdateSrvShard(ctx context.Context, cell, keyspace, shard string, srvShard *topodatapb.SrvShard) error {
if err := tee.primary.UpdateSrvShard(ctx, cell, keyspace, shard, srvShard); err != nil {
return err
}
if err := tee.secondary.UpdateSrvShard(ctx, cell, keyspace, shard, srvShard); err != nil {
// not critical enough to fail
log.Warningf("secondary.UpdateSrvShard(%v, %v, %v) failed: %v", cell, keyspace, shard, err)
}
return nil
}
// GetSrvShard is part of the topo.Server interface
func (tee *Tee) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
return tee.readFrom.GetSrvShard(ctx, cell, keyspace, shard)
}
// DeleteSrvShard is part of the topo.Server interface
func (tee *Tee) DeleteSrvShard(ctx context.Context, cell, keyspace, shard string) error {
err := tee.primary.DeleteSrvShard(ctx, cell, keyspace, shard)
if err != nil && err != topo.ErrNoNode {
return err
}
if err := tee.secondary.DeleteSrvShard(ctx, cell, keyspace, shard); err != nil {
// not critical enough to fail
log.Warningf("secondary.DeleteSrvShard(%v, %v, %v) failed: %v", cell, keyspace, shard, err)
}
return err
}
// UpdateSrvKeyspace is part of the topo.Server interface
func (tee *Tee) UpdateSrvKeyspace(ctx context.Context, cell, keyspace string, srvKeyspace *topodatapb.SrvKeyspace) error {
if err := tee.primary.UpdateSrvKeyspace(ctx, cell, keyspace, srvKeyspace); err != nil {


@ -99,13 +99,3 @@ func TestShardLock(t *testing.T) {
ts := newFakeTeeServer(t)
test.CheckShardLock(ctx, t, ts)
}
func TestSrvShardLock(t *testing.T) {
ctx := context.Background()
if testing.Short() {
t.Skip("skipping wait-based test in short mode.")
}
ts := newFakeTeeServer(t)
test.CheckSrvShardLock(ctx, t, ts)
}


@ -196,17 +196,6 @@ type Impl interface {
// Serving Graph management, per cell.
//
// LockSrvShardForAction locks the serving shard in order to
// perform the action described by contents. It will wait for
// the lock until at most ctx.Done(). The wait can be interrupted
// by cancelling the context. It returns the lock path.
//
// Can return ErrTimeout or ErrInterrupted.
LockSrvShardForAction(ctx context.Context, cell, keyspace, shard, contents string) (string, error)
// UnlockSrvShardForAction unlocks a serving shard.
UnlockSrvShardForAction(ctx context.Context, cell, keyspace, shard, lockPath, results string) error
// WatchSrvKeyspace returns a channel that receives notifications
// every time the SrvKeyspace for the given keyspace / cell changes.
// It should receive a notification with the initial value fairly
@ -224,18 +213,6 @@ type Impl interface {
// notification, but the content hasn't changed).
WatchSrvKeyspace(ctx context.Context, cell, keyspace string) (notifications <-chan *topodatapb.SrvKeyspace, err error)
// UpdateSrvShard updates the serving records for a cell,
// keyspace, shard.
UpdateSrvShard(ctx context.Context, cell, keyspace, shard string, srvShard *topodatapb.SrvShard) error
// GetSrvShard reads a SrvShard record.
// Can return ErrNoNode.
GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error)
// DeleteSrvShard deletes a SrvShard record.
// Can return ErrNoNode.
DeleteSrvShard(ctx context.Context, cell, keyspace, shard string) error
// UpdateSrvKeyspace updates the serving records for a cell, keyspace.
UpdateSrvKeyspace(ctx context.Context, cell, keyspace string, srvKeyspace *topodatapb.SrvKeyspace) error
@ -313,7 +290,6 @@ type Server struct {
type SrvTopoServer interface {
GetSrvKeyspaceNames(ctx context.Context, cell string) ([]string, error)
GetSrvKeyspace(ctx context.Context, cell, keyspace string) (*topodatapb.SrvKeyspace, error)
GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error)
WatchVSchema(ctx context.Context, keyspace string) (notifications <-chan *vschemapb.Keyspace, err error)
}
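
The WatchSrvKeyspace comment above promises a notification with the initial value, then one message per change (with occasional spurious wake-ups). A hedged sketch of a consumer under exactly that contract; the cell and keyspace names are placeholders:

``` go
package example

import (
	"log"

	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/topo"
)

// watchServingState logs every SrvKeyspace notification until the
// channel closes or the context is cancelled.
func watchServingState(ctx context.Context, impl topo.Impl) error {
	notifications, err := impl.WatchSrvKeyspace(ctx, "test_cell", "test_keyspace")
	if err != nil {
		return err
	}
	// The first message carries the initial value; later ones arrive
	// on changes, and the content may occasionally be unchanged.
	for sk := range notifications {
		log.Printf("SrvKeyspace update: %v", sk)
	}
	return nil
}
```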


@ -140,36 +140,11 @@ func (ft FakeTopo) DeleteKeyspaceReplication(ctx context.Context, cell, keyspace
return errNotImplemented
}
// LockSrvShardForAction implements topo.Server.
func (ft FakeTopo) LockSrvShardForAction(ctx context.Context, cell, keyspace, shard, contents string) (string, error) {
return "", errNotImplemented
}
// UnlockSrvShardForAction implements topo.Server.
func (ft FakeTopo) UnlockSrvShardForAction(ctx context.Context, cell, keyspace, shard, lockPath, results string) error {
return errNotImplemented
}
// WatchSrvKeyspace implements topo.Server.WatchSrvKeyspace
func (ft FakeTopo) WatchSrvKeyspace(ctx context.Context, cell, keyspace string) (<-chan *topodatapb.SrvKeyspace, error) {
return nil, errNotImplemented
}
// UpdateSrvShard implements topo.Server.
func (ft FakeTopo) UpdateSrvShard(ctx context.Context, cell, keyspace, shard string, srvShard *topodatapb.SrvShard) error {
return errNotImplemented
}
// GetSrvShard implements topo.Server.
func (ft FakeTopo) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
return nil, errNotImplemented
}
// DeleteSrvShard implements topo.Server.
func (ft FakeTopo) DeleteSrvShard(ctx context.Context, cell, keyspace, shard string) error {
return errNotImplemented
}
// UpdateSrvKeyspace implements topo.Server.
func (ft FakeTopo) UpdateSrvKeyspace(ctx context.Context, cell, keyspace string, srvKeyspace *topodatapb.SrvKeyspace) error {
return errNotImplemented


@ -63,7 +63,6 @@ func CheckKeyspace(ctx context.Context, t *testing.T, ts topo.Impl) {
Keyspace: "test_keyspace3",
},
},
SplitShardCount: 64,
}
if err := ts.CreateKeyspace(ctx, "test_keyspace2", k); err != nil {
t.Errorf("CreateKeyspace: %v", err)


@ -211,101 +211,3 @@ func checkShardLockUnblocks(ctx context.Context, t *testing.T, ts topo.Impl) {
t.Fatalf("unlocking timed out")
}
}
// CheckSrvShardLock tests we can take a SrvShard lock
func CheckSrvShardLock(ctx context.Context, t *testing.T, ts topo.Impl) {
checkSrvShardLockGeneral(ctx, t, ts)
checkSrvShardLockUnblocks(ctx, t, ts)
}
func checkSrvShardLockGeneral(ctx context.Context, t *testing.T, ts topo.Impl) {
cell := getLocalCell(ctx, t, ts)
// make sure we can create the lock even if no directory exists
lockPath, err := ts.LockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", "fake-content")
if err != nil {
t.Fatalf("LockSrvShardForAction: %v", err)
}
if err := ts.UnlockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", lockPath, "fake-results"); err != nil {
t.Fatalf("UnlockSrvShardForAction: %v", err)
}
// now take the lock again after the root exists
lockPath, err = ts.LockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", "fake-content")
if err != nil {
t.Fatalf("LockSrvShardForAction: %v", err)
}
// test we can't take the lock again
fastCtx, cancel := context.WithTimeout(ctx, timeUntilLockIsTaken)
if _, err := ts.LockSrvShardForAction(fastCtx, cell, "test_keyspace", "10-20", "unused-fake-content"); err != topo.ErrTimeout {
t.Fatalf("LockSrvShardForAction(again): %v", err)
}
cancel()
// test we can interrupt taking the lock
interruptCtx, cancel := context.WithCancel(ctx)
go func() {
time.Sleep(timeUntilLockIsTaken)
cancel()
}()
if _, err := ts.LockSrvShardForAction(interruptCtx, cell, "test_keyspace", "10-20", "unused-fake-content"); err != topo.ErrInterrupted {
t.Fatalf("LockSrvShardForAction(interrupted): %v", err)
}
// unlock now
if err := ts.UnlockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", lockPath, "fake-results"); err != nil {
t.Fatalf("UnlockSrvShardForAction(): %v", err)
}
// test we can't unlock again
if err := ts.UnlockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", lockPath, "fake-results"); err == nil {
t.Error("UnlockSrvShardForAction(again) worked")
}
}
// checkSrvShardLockUnblocks makes sure that a routine waiting on a lock
// is unblocked when another routine frees the lock
func checkSrvShardLockUnblocks(ctx context.Context, t *testing.T, ts topo.Impl) {
cell := getLocalCell(ctx, t, ts)
unblock := make(chan struct{})
finished := make(chan struct{})
// as soon as we're unblocked, we try to lock the shard
go func() {
<-unblock
lockPath, err := ts.LockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", "fake-content")
if err != nil {
t.Fatalf("LockSrvShardForAction(test, test_keyspace, 10-20) failed: %v", err)
}
if err = ts.UnlockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", lockPath, "fake-results"); err != nil {
t.Fatalf("UnlockSrvShardForAction(test, test_keyspace, 10-20): %v", err)
}
close(finished)
}()
// lock the shard
lockPath2, err := ts.LockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", "fake-content")
if err != nil {
t.Fatalf("LockSrvShardForAction(test, test_keyspace, 10-20) failed: %v", err)
}
// unblock the go routine so it starts waiting
close(unblock)
// sleep for a while so we're sure the go routine is blocking
time.Sleep(timeUntilLockIsTaken)
if err = ts.UnlockSrvShardForAction(ctx, cell, "test_keyspace", "10-20", lockPath2, "fake-results"); err != nil {
t.Fatalf("UnlockSrvShardForAction(test, test_keyspace, 10-20): %v", err)
}
timeout := time.After(10 * time.Second)
select {
case <-finished:
case <-timeout:
t.Fatalf("unlocking timed out")
}
}


@ -19,25 +19,6 @@ import (
func CheckServingGraph(ctx context.Context, t *testing.T, ts topo.Impl) {
cell := getLocalCell(ctx, t, ts)
// test cell/keyspace/shard entries (SrvShard)
srvShard := &topodatapb.SrvShard{
Name: "-10",
KeyRange: newKeyRange("-10"),
MasterCell: "test",
}
if err := ts.UpdateSrvShard(ctx, cell, "test_keyspace", "-10", srvShard); err != nil {
t.Fatalf("UpdateSrvShard(1): %v", err)
}
if _, err := ts.GetSrvShard(ctx, cell, "test_keyspace", "666"); err != topo.ErrNoNode {
t.Errorf("GetSrvShard(invalid): %v", err)
}
if s, err := ts.GetSrvShard(ctx, cell, "test_keyspace", "-10"); err != nil ||
s.Name != "-10" ||
!key.KeyRangeEqual(s.KeyRange, newKeyRange("-10")) ||
s.MasterCell != "test" {
t.Errorf("GetSrvShard(valid): %v", err)
}
// test cell/keyspace entries (SrvKeyspace)
srvKeyspace := topodatapb.SrvKeyspace{
Partitions: []*topodatapb.SrvKeyspace_KeyspacePartition{
@ -105,7 +86,7 @@ func CheckServingGraph(ctx context.Context, t *testing.T, ts topo.Impl) {
// Delete the SrvKeyspace.
if err := ts.DeleteSrvKeyspace(ctx, cell, "unknown_keyspace_so_far"); err != nil {
t.Fatalf("DeleteSrvShard: %v", err)
t.Fatalf("DeleteSrvKeyspace: %v", err)
}
if _, err := ts.GetSrvKeyspace(ctx, cell, "unknown_keyspace_so_far"); err != topo.ErrNoNode {
t.Errorf("GetSrvKeyspace(deleted) got %v, want ErrNoNode", err)
@ -188,7 +169,7 @@ func CheckWatchSrvKeyspace(ctx context.Context, t *testing.T, ts topo.Impl) {
}
// re-create the value, a bit different, should get a notification
srvKeyspace.SplitShardCount = 2
srvKeyspace.ShardingColumnName = "test_column2"
if err := ts.UpdateSrvKeyspace(ctx, cell, keyspace, srvKeyspace); err != nil {
t.Fatalf("UpdateSrvKeyspace failed: %v", err)
}


@ -8,9 +8,7 @@ import (
"bytes"
"encoding/hex"
"fmt"
"sync"
"github.com/youtube/vitess/go/vt/concurrency"
"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/tabletmanager/actionnode"
"github.com/youtube/vitess/go/vt/topo"
@ -21,14 +19,14 @@ import (
)
// RebuildKeyspace rebuilds the serving graph data while locking out other changes.
func RebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, keyspace string, cells []string, rebuildSrvShards bool) error {
func RebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, keyspace string, cells []string) error {
node := actionnode.RebuildKeyspace()
lockPath, err := node.LockKeyspace(ctx, ts, keyspace)
if err != nil {
return err
}
err = rebuildKeyspace(ctx, log, ts, keyspace, cells, rebuildSrvShards)
err = rebuildKeyspace(ctx, log, ts, keyspace, cells)
return node.UnlockKeyspace(ctx, ts, keyspace, lockPath, err)
}
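
With the rebuildSrvShards parameter gone, RebuildKeyspace takes only the cell list, and the keyspace lock is taken and released inside the call. A sketch of a direct call under the new signature; rebuildOneCell is an illustrative wrapper around the calls shown in this diff:

``` go
package example

import (
	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/logutil"
	"github.com/youtube/vitess/go/vt/topo"
	"github.com/youtube/vitess/go/vt/topotools"
)

// rebuildOneCell regenerates the SrvKeyspace data for keyspace in a
// single cell, matching the new five-argument signature.
func rebuildOneCell(ctx context.Context, ts topo.Server, keyspace, cell string) error {
	return topotools.RebuildKeyspace(ctx, logutil.NewConsoleLogger(), ts, keyspace, []string{cell})
}
```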
@ -45,7 +43,6 @@ func findCellsForRebuild(ki *topo.KeyspaceInfo, shardMap map[string]*topo.ShardI
ShardingColumnName: ki.ShardingColumnName,
ShardingColumnType: ki.ShardingColumnType,
ServedFrom: ki.ComputeCellServedFrom(cell),
SplitShardCount: ki.SplitShardCount,
}
}
}
@ -58,7 +55,7 @@ func findCellsForRebuild(ki *topo.KeyspaceInfo, shardMap map[string]*topo.ShardI
//
// Take data from the global keyspace and rebuild the local serving
// copies in each cell.
func rebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, keyspace string, cells []string, rebuildSrvShards bool) error {
func rebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, keyspace string, cells []string) error {
log.Infof("rebuildKeyspace %v", keyspace)
ki, err := ts.GetKeyspace(ctx, keyspace)
@ -66,41 +63,9 @@ func rebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, ke
return err
}
var shardCache map[string]*topo.ShardInfo
if rebuildSrvShards {
shards, err := ts.GetShardNames(ctx, keyspace)
if err != nil {
return nil
}
// Rebuild all shards in parallel, save the shards
shardCache = make(map[string]*topo.ShardInfo)
wg := sync.WaitGroup{}
mu := sync.Mutex{}
rec := concurrency.FirstErrorRecorder{}
for _, shard := range shards {
wg.Add(1)
go func(shard string) {
if shardInfo, err := RebuildShard(ctx, log, ts, keyspace, shard, cells); err != nil {
rec.RecordError(fmt.Errorf("RebuildShard failed: %v/%v %v", keyspace, shard, err))
} else {
mu.Lock()
shardCache[shard] = shardInfo
mu.Unlock()
}
wg.Done()
}(shard)
}
wg.Wait()
if rec.HasErrors() {
return rec.Error()
}
} else {
shardCache, err = ts.FindAllShardsInKeyspace(ctx, keyspace)
if err != nil {
return err
}
shards, err := ts.FindAllShardsInKeyspace(ctx, keyspace)
if err != nil {
return err
}
// Build the list of cells to work on: we get the union
@ -110,7 +75,7 @@ func rebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, ke
// key: cell
// value: topo.SrvKeyspace object being built
srvKeyspaceMap := make(map[string]*topodatapb.SrvKeyspace)
findCellsForRebuild(ki, shardCache, cells, srvKeyspaceMap)
findCellsForRebuild(ki, shards, cells, srvKeyspaceMap)
// Then we add the cells from the keyspaces we might be 'ServedFrom'.
for _, ksf := range ki.ServedFroms {
@ -122,13 +87,13 @@ func rebuildKeyspace(ctx context.Context, log logutil.Logger, ts topo.Server, ke
}
// for each entry in the srvKeyspaceMap map, we do the following:
// - read the SrvShard structures for each shard / cell
// - get the Shard structures for each shard / cell
// - if not present, build an empty one from global Shard
// - compute the union of the db types (replica, master, ...)
// - sort the shards in the list by range
// - check the ranges are compatible (no hole, covers everything)
for cell, srvKeyspace := range srvKeyspaceMap {
for _, si := range shardCache {
for _, si := range shards {
servedTypes := si.GetServedTypesPerCell(cell)
// for each type this shard is supposed to serve,


@ -1,94 +0,0 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package topotools
import (
"sync"
"github.com/youtube/vitess/go/trace"
"github.com/youtube/vitess/go/vt/concurrency"
"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/topo"
"golang.org/x/net/context"
topodatapb "github.com/youtube/vitess/go/vt/proto/topodata"
)
// RebuildShard updates the SrvShard objects and underlying serving graph.
//
// Re-read from TopologyServer to make sure we are using the side
// effects of all actions.
//
// This function will start each cell over from the beginning on ErrBadVersion,
// so it doesn't need a lock on the shard.
func RebuildShard(ctx context.Context, log logutil.Logger, ts topo.Server, keyspace, shard string, cells []string) (*topo.ShardInfo, error) {
log.Infof("RebuildShard %v/%v", keyspace, shard)
span := trace.NewSpanFromContext(ctx)
span.StartLocal("topotools.RebuildShard")
defer span.Finish()
ctx = trace.NewContext(ctx, span)
// read the existing shard info. It has to exist.
shardInfo, err := ts.GetShard(ctx, keyspace, shard)
if err != nil {
return nil, err
}
// rebuild all cells in parallel
wg := sync.WaitGroup{}
rec := concurrency.AllErrorRecorder{}
for _, cell := range shardInfo.Cells {
// skip this cell if we shouldn't rebuild it
if !topo.InCellList(cell, cells) {
continue
}
wg.Add(1)
go func(cell string) {
defer wg.Done()
rec.RecordError(rebuildCellSrvShard(ctx, log, ts, shardInfo, cell))
}(cell)
}
wg.Wait()
return shardInfo, rec.Error()
}
// rebuildCellSrvShard computes and writes the serving graph data to a
// single cell
func rebuildCellSrvShard(ctx context.Context, log logutil.Logger, ts topo.Server, si *topo.ShardInfo, cell string) (err error) {
log.Infof("rebuildCellSrvShard %v/%v in cell %v", si.Keyspace(), si.ShardName(), cell)
return UpdateSrvShard(ctx, ts, cell, si)
}
// UpdateSrvShard creates the SrvShard object based on the global ShardInfo,
// and writes it to the given cell.
func UpdateSrvShard(ctx context.Context, ts topo.Server, cell string, si *topo.ShardInfo) error {
srvShard := &topodatapb.SrvShard{
Name: si.ShardName(),
KeyRange: si.KeyRange,
}
if si.MasterAlias != nil {
srvShard.MasterCell = si.MasterAlias.Cell
}
return ts.UpdateSrvShard(ctx, cell, si.Keyspace(), si.ShardName(), srvShard)
}
// UpdateAllSrvShards calls UpdateSrvShard for all cells concurrently.
func UpdateAllSrvShards(ctx context.Context, ts topo.Server, si *topo.ShardInfo) error {
wg := sync.WaitGroup{}
errs := concurrency.AllErrorRecorder{}
for _, cell := range si.Cells {
wg.Add(1)
go func(cell string) {
errs.RecordError(UpdateSrvShard(ctx, ts, cell, si))
wg.Done()
}(cell)
}
wg.Wait()
return errs.Error()
}


@ -1,83 +0,0 @@
// Copyright 2014, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package topotools_test
import (
"fmt"
"testing"
"golang.org/x/net/context"
"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/zktopo/zktestserver"
. "github.com/youtube/vitess/go/vt/topotools"
topodatapb "github.com/youtube/vitess/go/vt/proto/topodata"
)
const (
testShard = "0"
testKeyspace = "test_keyspace"
)
func addTablet(ctx context.Context, t *testing.T, ts topo.Server, uid int, cell string, tabletType topodatapb.TabletType) *topo.TabletInfo {
tablet := &topodatapb.Tablet{
Alias: &topodatapb.TabletAlias{Cell: cell, Uid: uint32(uid)},
Hostname: fmt.Sprintf("%vbsr%v", cell, uid),
Ip: fmt.Sprintf("212.244.218.%v", uid),
PortMap: map[string]int32{
"vt": 3333 + 10*int32(uid),
"mysql": 3334 + 10*int32(uid),
},
Keyspace: testKeyspace,
Type: tabletType,
Shard: testShard,
}
if err := ts.CreateTablet(ctx, tablet); err != nil {
t.Fatalf("CreateTablet: %v", err)
}
ti, err := ts.GetTablet(ctx, tablet.Alias)
if err != nil {
t.Fatalf("GetTablet: %v", err)
}
return ti
}
func TestRebuildShard(t *testing.T) {
ctx := context.Background()
cells := []string{"test_cell"}
logger := logutil.NewMemoryLogger()
// Set up topology.
ts := zktestserver.New(t, cells)
si, err := GetOrCreateShard(ctx, ts, testKeyspace, testShard)
if err != nil {
t.Fatalf("GetOrCreateShard: %v", err)
}
si.Cells = append(si.Cells, cells[0])
si.MasterAlias = &topodatapb.TabletAlias{Cell: cells[0], Uid: 1}
if err := ts.UpdateShard(ctx, si); err != nil {
t.Fatalf("UpdateShard: %v", err)
}
addTablet(ctx, t, ts, 1, cells[0], topodatapb.TabletType_MASTER)
addTablet(ctx, t, ts, 2, cells[0], topodatapb.TabletType_REPLICA)
// Do a rebuild.
if _, err := RebuildShard(ctx, logger, ts, testKeyspace, testShard, cells); err != nil {
t.Fatalf("RebuildShard: %v", err)
}
srvShard, err := ts.GetSrvShard(ctx, cells[0], testKeyspace, testShard)
if err != nil {
t.Fatalf("GetSrvShard: %v", err)
}
if srvShard.MasterCell != cells[0] {
t.Errorf("Invalid cell name, got %v expected %v", srvShard.MasterCell, cells[0])
}
}


@ -52,7 +52,6 @@ func RestartSlavesExternal(ts topo.Server, log logutil.Logger, slaveTabletMap, m
// the old master can be annoying if left
// around in the replication graph, so if we
// can't restart it, we just make it spare.
// We don't rebuild the Shard just yet though.
log.Warningf("Old master %v is not restarting in time, forcing it to spare: %v", ti.Alias, err)
ti.Type = topodatapb.TabletType_SPARE


@ -12,7 +12,7 @@ level. In particular, it cannot depend on:
topotools is used by wrangler, so it ends up in all tools using
wrangler (vtctl, vtctld, ...). It is also included by vttablet, so it contains:
- most of the logic to rebuild a shard serving graph (healthcheck module)
- most of the logic to create a shard / keyspace (tablet's init code)
- some of the logic to perform a TabletExternallyReparented (RPC call
to master vttablet to let it know it's the master).


@ -237,11 +237,6 @@ func (f *fakeVTGateService) GetSrvKeyspace(ctx context.Context, keyspace string)
return &topodatapb.SrvKeyspace{}, nil
}
// GetSrvShard is part of the VTGateService interface
func (f *fakeVTGateService) GetSrvShard(ctx context.Context, keyspace, shard string) (*topodatapb.SrvShard, error) {
return &topodatapb.SrvShard{}, nil
}
// HandlePanic is part of the VTGateService interface
func (f *fakeVTGateService) HandlePanic(err *error) {
if x := recover(); x != nil {


@ -144,7 +144,7 @@ var commands = []commandGroup{
"[-hostname <hostname>] [-ip-addr <ip addr>] [-mysql-port <mysql port>] [-vt-port <vt port>] [-grpc-port <grpc port>] <tablet alias> ",
"Updates the IP address and port numbers of a tablet."},
{"DeleteTablet", commandDeleteTablet,
"[-allow_master] [-skip_rebuild] <tablet alias> ...",
"[-allow_master] <tablet alias> ...",
"Deletes tablet(s) from the topology."},
{"SetReadOnly", commandSetReadOnly,
"<tablet alias>",
@ -197,9 +197,6 @@ var commands = []commandGroup{
{"GetShard", commandGetShard,
"<keyspace/shard>",
"Outputs a JSON structure that contains information about the Shard."},
{"RebuildShardGraph", commandRebuildShardGraph,
"[-cells=a,b] <keyspace/shard> ... ",
"Rebuilds the replication graph and shard serving data in ZooKeeper or etcd. This may trigger an update to all connected clients."},
{"TabletExternallyReparented", commandTabletExternallyReparented,
"<tablet alias>",
"Changes metadata in the topology server to acknowledge a shard master change performed by an external tool. See the Reparenting guide for more information:" +
@ -248,7 +245,7 @@ var commands = []commandGroup{
{
"Keyspaces", []command{
{"CreateKeyspace", commandCreateKeyspace,
"[-sharding_column_name=name] [-sharding_column_type=type] [-served_from=tablettype1:ks1,tablettype2,ks2,...] [-split_shard_count=N] [-force] <keyspace name>",
"[-sharding_column_name=name] [-sharding_column_type=type] [-served_from=tablettype1:ks1,tablettype2,ks2,...] [-force] <keyspace name>",
"Creates the specified keyspace."},
{"DeleteKeyspace", commandDeleteKeyspace,
"[-recursive] <keyspace>",
@ -263,14 +260,14 @@ var commands = []commandGroup{
"",
"Outputs a sorted list of all keyspaces."},
{"SetKeyspaceShardingInfo", commandSetKeyspaceShardingInfo,
"[-force] [-split_shard_count=N] <keyspace name> [<column name>] [<column type>]",
"[-force] <keyspace name> [<column name>] [<column type>]",
"Updates the sharding information for a keyspace."},
{"SetKeyspaceServedFrom", commandSetKeyspaceServedFrom,
"[-source=<source keyspace name>] [-remove] [-cells=c1,c2,...] <keyspace name> <tablet type>",
"Changes the ServedFromMap manually. This command is intended for emergency fixes. This field is automatically set when you call the *MigrateServedFrom* command. This command does not rebuild the serving graph."},
{"RebuildKeyspaceGraph", commandRebuildKeyspaceGraph,
"[-cells=a,b] [-rebuild_srv_shards] <keyspace> ...",
"Rebuilds the serving data for the keyspace and, optionally, all shards in the specified keyspace. This command may trigger an update to all connected clients."},
"[-cells=a,b] <keyspace> ...",
"Rebuilds the serving data for the keyspace. This command may trigger an update to all connected clients."},
{"ValidateKeyspace", commandValidateKeyspace,
"[-ping-tablets] <keyspace name>",
"Validates that all nodes reachable from the specified keyspace are consistent."},
@ -705,7 +702,6 @@ func commandUpdateTabletAddrs(ctx context.Context, wr *wrangler.Wrangler, subFla
func commandDeleteTablet(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
allowMaster := subFlags.Bool("allow_master", false, "Allows for the master tablet of a shard to be deleted. Use with caution.")
skipRebuild := subFlags.Bool("skip_rebuild", false, "Skips rebuilding the shard serving graph after deleting the tablet")
if err := subFlags.Parse(args); err != nil {
return err
}
@ -718,7 +714,7 @@ func commandDeleteTablet(ctx context.Context, wr *wrangler.Wrangler, subFlags *f
return err
}
for _, tabletAlias := range tabletAliases {
if err := wr.DeleteTablet(ctx, tabletAlias, *allowMaster, *skipRebuild); err != nil {
if err := wr.DeleteTablet(ctx, tabletAlias, *allowMaster); err != nil {
return err
}
}
@ -1092,32 +1088,6 @@ func commandGetShard(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.
return printJSON(wr.Logger(), shardInfo)
}
func commandRebuildShardGraph(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
cells := subFlags.String("cells", "", "Specifies a comma-separated list of cells to update")
if err := subFlags.Parse(args); err != nil {
return err
}
if subFlags.NArg() == 0 {
return fmt.Errorf("The <keyspace/shard> argument must be used to identify at least one keyspace and shard when calling the RebuildShardGraph command.")
}
var cellArray []string
if *cells != "" {
cellArray = strings.Split(*cells, ",")
}
keyspaceShards, err := shardParamsToKeyspaceShards(ctx, wr, subFlags.Args())
if err != nil {
return err
}
for _, ks := range keyspaceShards {
if _, err := wr.RebuildShardGraph(ctx, ks.Keyspace, ks.Shard, cellArray); err != nil {
return err
}
}
return nil
}
func commandTabletExternallyReparented(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
if err := subFlags.Parse(args); err != nil {
return err
@ -1495,7 +1465,6 @@ func commandDeleteShard(ctx context.Context, wr *wrangler.Wrangler, subFlags *fl
func commandCreateKeyspace(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
shardingColumnName := subFlags.String("sharding_column_name", "", "Specifies the column to use for sharding operations")
shardingColumnType := subFlags.String("sharding_column_type", "", "Specifies the type of the column to use for sharding operations")
splitShardCount := subFlags.Int("split_shard_count", 0, "Specifies the number of shards to use for data splits")
force := subFlags.Bool("force", false, "Proceeds even if the keyspace already exists")
var servedFrom flagutil.StringMapValue
subFlags.Var(&servedFrom, "served_from", "Specifies a comma-separated list of dbtype:keyspace pairs used to serve traffic")
@ -1514,7 +1483,6 @@ func commandCreateKeyspace(ctx context.Context, wr *wrangler.Wrangler, subFlags
ki := &topodatapb.Keyspace{
ShardingColumnName: *shardingColumnName,
ShardingColumnType: kit,
SplitShardCount: int32(*splitShardCount),
}
if len(servedFrom) > 0 {
for name, value := range servedFrom {
@ -1588,7 +1556,6 @@ func commandGetKeyspaces(ctx context.Context, wr *wrangler.Wrangler, subFlags *f
func commandSetKeyspaceShardingInfo(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
force := subFlags.Bool("force", false, "Updates fields even if they are already set. Use caution before calling this command.")
splitShardCount := subFlags.Int("split_shard_count", 0, "Specifies the number of shards to use for data splits")
if err := subFlags.Parse(args); err != nil {
return err
}
@ -1616,7 +1583,7 @@ func commandSetKeyspaceShardingInfo(ctx context.Context, wr *wrangler.Wrangler,
return fmt.Errorf("Both <column name> and <column type> must be set, or both must be unset.")
}
return wr.SetKeyspaceShardingInfo(ctx, keyspace, columnName, kit, int32(*splitShardCount), *force)
return wr.SetKeyspaceShardingInfo(ctx, keyspace, columnName, kit, *force)
}
func commandSetKeyspaceServedFrom(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
@ -1644,7 +1611,6 @@ func commandSetKeyspaceServedFrom(ctx context.Context, wr *wrangler.Wrangler, su
func commandRebuildKeyspaceGraph(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
cells := subFlags.String("cells", "", "Specifies a comma-separated list of cells to update")
rebuildSrvShards := subFlags.Bool("rebuild_srv_shards", false, "Indicates whether all SrvShard objects should also be rebuilt. The default value is <code>false</code>.")
if err := subFlags.Parse(args); err != nil {
return err
}
@ -1662,7 +1628,7 @@ func commandRebuildKeyspaceGraph(ctx context.Context, wr *wrangler.Wrangler, sub
return err
}
for _, keyspace := range keyspaces {
if err := wr.RebuildKeyspaceGraph(ctx, keyspace, cellArray, *rebuildSrvShards); err != nil {
if err := wr.RebuildKeyspaceGraph(ctx, keyspace, cellArray); err != nil {
return err
}
}
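
With skip_rebuild removed from DeleteTablet and rebuild_srv_shards removed from RebuildKeyspaceGraph, the wrangler calls shrink to the forms used above. A sketch of the updated call sites; retireTablet is illustrative, and the nil cell list mirrors an unset -cells flag:

``` go
package example

import (
	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/wrangler"

	topodatapb "github.com/youtube/vitess/go/vt/proto/topodata"
)

// retireTablet deletes one tablet, then refreshes the keyspace
// serving data with the trimmed-down signatures from this change.
func retireTablet(ctx context.Context, wr *wrangler.Wrangler, alias *topodatapb.TabletAlias, keyspace string) error {
	// allowMaster=false: refuse to delete the shard master.
	if err := wr.DeleteTablet(ctx, alias, false); err != nil {
		return err
	}
	// nil cells is what an unset -cells flag produces in vtctl.
	return wr.RebuildKeyspaceGraph(ctx, keyspace, nil)
}
```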


@ -10,8 +10,6 @@ import (
"golang.org/x/net/context"
"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/topotools"
"github.com/youtube/vitess/go/vt/wrangler"
"github.com/youtube/vitess/go/vt/zktopo/zktestserver"
@ -61,7 +59,6 @@ func TestAPI(t *testing.T) {
KeyRange: &topodatapb.KeyRange{Start: nil, End: []byte{0x80}},
PortMap: map[string]int32{"vt": 200},
})
topotools.RebuildShard(ctx, logutil.NewConsoleLogger(), ts, "ks1", "-80", cells)
// Populate fake actions.
actionRepo.RegisterKeyspaceAction("TestKeyspaceAction",


@ -65,17 +65,6 @@ func handleExplorerRedirect(ctx context.Context, ts topo.Server, r *http.Request
return "", errors.New("keyspace and cell are required for this redirect")
}
return appPrefix + "#/keyspaces/", nil
case "srv_shard":
if keyspace == "" || shard == "" || cell == "" {
return "", errors.New("keyspace, shard, and cell are required for this redirect")
}
return appPrefix + fmt.Sprintf("#/shard/%s/%s", keyspace, shard), nil
case "srv_type":
tabletType := r.FormValue("tablet_type")
if keyspace == "" || shard == "" || cell == "" || tabletType == "" {
return "", errors.New("keyspace, shard, cell, and tablet_type are required for this redirect")
}
return appPrefix + fmt.Sprintf("#/shard/%s/%s", keyspace, shard), nil
case "tablet":
alias := r.FormValue("alias")
if alias == "" {

View file

@@ -25,13 +25,11 @@ func TestHandleExplorerRedirect(t *testing.T) {
}
table := map[string]string{
"/explorers/redirect?type=keyspace&keyspace=test_keyspace": "/app/#/keyspaces/",
"/explorers/redirect?type=shard&keyspace=test_keyspace&shard=-80": "/app/#/shard/test_keyspace/-80",
"/explorers/redirect?type=srv_keyspace&keyspace=test_keyspace&cell=cell1": "/app/#/keyspaces/",
"/explorers/redirect?type=srv_shard&keyspace=test_keyspace&shard=-80&cell=cell1": "/app/#/shard/test_keyspace/-80",
"/explorers/redirect?type=srv_type&keyspace=test_keyspace&shard=-80&cell=cell1&tablet_type=replica": "/app/#/shard/test_keyspace/-80",
"/explorers/redirect?type=tablet&alias=cell1-123": "/app/#/shard/test_keyspace/123-456",
"/explorers/redirect?type=replication&keyspace=test_keyspace&shard=-80&cell=cell1": "/app/#/shard/test_keyspace/-80",
"/explorers/redirect?type=keyspace&keyspace=test_keyspace": "/app/#/keyspaces/",
"/explorers/redirect?type=shard&keyspace=test_keyspace&shard=-80": "/app/#/shard/test_keyspace/-80",
"/explorers/redirect?type=srv_keyspace&keyspace=test_keyspace&cell=cell1": "/app/#/keyspaces/",
"/explorers/redirect?type=tablet&alias=cell1-123": "/app/#/shard/test_keyspace/123-456",
"/explorers/redirect?type=replication&keyspace=test_keyspace&shard=-80&cell=cell1": "/app/#/shard/test_keyspace/-80",
}
for input, want := range table {

View file

@@ -90,7 +90,7 @@ func InitVtctld(ts topo.Server) {
actionRepo.RegisterTabletAction("DeleteTablet", acl.ADMIN,
func(ctx context.Context, wr *wrangler.Wrangler, tabletAlias *topodatapb.TabletAlias, r *http.Request) (string, error) {
return "", wr.DeleteTablet(ctx, tabletAlias, false, false)
return "", wr.DeleteTablet(ctx, tabletAlias, false)
})
actionRepo.RegisterTabletAction("ReloadSchema", acl.ADMIN,

View file

@@ -48,7 +48,6 @@ type ResilientSrvTopoServer struct {
mutex sync.RWMutex
srvKeyspaceNamesCache map[string]*srvKeyspaceNamesEntry
srvKeyspaceCache map[string]*srvKeyspaceEntry
srvShardCache map[string]*srvShardEntry
}
type srvKeyspaceNamesEntry struct {
@@ -107,21 +106,6 @@ func (ske *srvKeyspaceEntry) setValueLocked(ctx context.Context, value *topodata
}
}
type srvShardEntry struct {
// immutable values
cell string
keyspace string
shard string
// the mutex protects any access to this structure (read or write)
mutex sync.Mutex
insertionTime time.Time
value *topodatapb.SrvShard
lastError error
lastErrorCtx context.Context
}
// NewResilientSrvTopoServer creates a new ResilientSrvTopoServer
// based on the provided topo.Server.
func NewResilientSrvTopoServer(base topo.Server, counterPrefix string) *ResilientSrvTopoServer {
@@ -132,7 +116,6 @@ func NewResilientSrvTopoServer(base topo.Server, counterPrefix string) *Resilien
srvKeyspaceNamesCache: make(map[string]*srvKeyspaceNamesEntry),
srvKeyspaceCache: make(map[string]*srvKeyspaceEntry),
srvShardCache: make(map[string]*srvShardEntry),
}
}
@@ -288,59 +271,6 @@ func (server *ResilientSrvTopoServer) GetSrvKeyspace(ctx context.Context, cell,
return entry.value, entry.lastError
}
// GetSrvShard returns SrvShard object for the given cell, keyspace, and shard.
func (server *ResilientSrvTopoServer) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
server.counts.Add(queryCategory, 1)
// find the entry in the cache, add it if not there
key := cell + "." + keyspace + "." + shard
server.mutex.Lock()
entry, ok := server.srvShardCache[key]
if !ok {
entry = &srvShardEntry{
cell: cell,
keyspace: keyspace,
shard: shard,
}
server.srvShardCache[key] = entry
}
server.mutex.Unlock()
// Lock the entry, and do everything holding the lock. This
// means two concurrent requests will only issue one
// underlying query.
entry.mutex.Lock()
defer entry.mutex.Unlock()
// If the entry is fresh enough, return it
if time.Now().Sub(entry.insertionTime) < server.cacheTTL {
return entry.value, entry.lastError
}
// not in cache or too old, get the real value
newCtx, cancel := context.WithTimeout(context.Background(), *srvTopoTimeout)
defer cancel()
result, err := server.topoServer.GetSrvShard(newCtx, cell, keyspace, shard)
if err != nil {
if entry.insertionTime.IsZero() {
server.counts.Add(errorCategory, 1)
log.Errorf("GetSrvShard(%v, %v, %v, %v) failed: %v (no cached value, caching and returning error)", newCtx, cell, keyspace, shard, err)
} else {
server.counts.Add(cachedCategory, 1)
log.Warningf("GetSrvShard(%v, %v, %v, %v) failed: %v (returning cached value: %v %v)", newCtx, cell, keyspace, shard, err, entry.value, entry.lastError)
return entry.value, entry.lastError
}
}
// save the value we got and the current time in the cache
entry.insertionTime = time.Now()
entry.value = result
entry.lastError = err
entry.lastErrorCtx = newCtx
return result, err
}
// The next few structures and methods are used to get a displayable
// version of the cache in a status page
@@ -429,54 +359,10 @@ func (skcsl SrvKeyspaceCacheStatusList) Swap(i, j int) {
skcsl[i], skcsl[j] = skcsl[j], skcsl[i]
}
// SrvShardCacheStatus is the current value for a SrvShard object
type SrvShardCacheStatus struct {
Cell string
Keyspace string
Shard string
Value *topodatapb.SrvShard
LastError error
LastErrorCtx context.Context
}
// StatusAsHTML returns an HTML version of our status.
// It works best if there is data in the cache.
func (st *SrvShardCacheStatus) StatusAsHTML() template.HTML {
if st.Value == nil {
return template.HTML("No Data")
}
result := "<b>Name:</b>&nbsp;" + st.Value.Name + "<br>"
result += "<b>KeyRange:</b>&nbsp;" + st.Value.KeyRange.String() + "<br>"
result += "<b>MasterCell:</b>&nbsp;" + st.Value.MasterCell + "<br>"
return template.HTML(result)
}
// SrvShardCacheStatusList is used for sorting
type SrvShardCacheStatusList []*SrvShardCacheStatus
// Len is part of sort.Interface
func (sscsl SrvShardCacheStatusList) Len() int {
return len(sscsl)
}
// Less is part of sort.Interface
func (sscsl SrvShardCacheStatusList) Less(i, j int) bool {
return sscsl[i].Cell+"."+sscsl[i].Keyspace <
sscsl[j].Cell+"."+sscsl[j].Keyspace
}
// Swap is part of sort.Interface
func (sscsl SrvShardCacheStatusList) Swap(i, j int) {
sscsl[i], sscsl[j] = sscsl[j], sscsl[i]
}
// ResilientSrvTopoServerCacheStatus has the full status of the cache
type ResilientSrvTopoServerCacheStatus struct {
SrvKeyspaceNames SrvKeyspaceNamesCacheStatusList
SrvKeyspaces SrvKeyspaceCacheStatusList
SrvShards SrvShardCacheStatusList
}
// CacheStatus returns a displayable version of the cache
@@ -507,25 +393,11 @@ func (server *ResilientSrvTopoServer) CacheStatus() *ResilientSrvTopoServerCache
entry.mutex.RUnlock()
}
for _, entry := range server.srvShardCache {
entry.mutex.Lock()
result.SrvShards = append(result.SrvShards, &SrvShardCacheStatus{
Cell: entry.cell,
Keyspace: entry.keyspace,
Shard: entry.shard,
Value: entry.value,
LastError: entry.lastError,
LastErrorCtx: entry.lastErrorCtx,
})
entry.mutex.Unlock()
}
server.mutex.Unlock()
// do the sorting without the mutex
sort.Sort(result.SrvKeyspaceNames)
sort.Sort(result.SrvKeyspaces)
sort.Sort(result.SrvShards)
return result
}
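
The hunks above delete the SrvShard arm of this cache, but the surviving GetSrvKeyspace path keeps the same design: one entry per key, a per-entry mutex so that concurrent requests trigger a single underlying query, a TTL freshness check, and the cached value (or cached error) served when the topology server fails. Below is a minimal, self-contained sketch of that pattern; `cache`, `entry`, and `fetch` are illustrative names, not Vitess APIs.

``` go
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	mu            sync.Mutex
	insertionTime time.Time // zero until the first fetch completes
	value         string
	lastError     error
}

type cache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]*entry
	fetch   func(key string) (string, error) // the backing topo query
}

func (c *cache) get(key string) (string, error) {
	// Find or create the per-key entry under the cache-wide lock.
	c.mu.Lock()
	e, ok := c.entries[key]
	if !ok {
		e = &entry{}
		c.entries[key] = e
	}
	c.mu.Unlock()

	// Hold the entry lock across the fetch: two concurrent requests
	// for the same key issue only one underlying query.
	e.mu.Lock()
	defer e.mu.Unlock()

	if time.Since(e.insertionTime) < c.ttl {
		return e.value, e.lastError
	}
	v, err := c.fetch(key)
	if err != nil && !e.insertionTime.IsZero() {
		// The backing store failed but we have an older value: keep
		// serving it instead of propagating the transient error.
		return e.value, e.lastError
	}
	e.insertionTime = time.Now()
	e.value, e.lastError = v, err
	return v, err
}

func main() {
	c := &cache{
		ttl:     30 * time.Second,
		entries: make(map[string]*entry),
		fetch: func(key string) (string, error) {
			return "srv-keyspace-for-" + key, nil
		},
	}
	fmt.Println(c.get("cell1.test_keyspace"))
}
```

Note that a first-ever failure is cached as well, which is why the deleted tests below reset `cacheTTL` to zero to force a re-fetch.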

View file

@@ -49,16 +49,6 @@ func (ft *fakeTopo) WatchSrvKeyspace(ctx context.Context, cell, keyspace string)
return nil, fmt.Errorf("Unknown keyspace")
}
func (ft *fakeTopo) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
ft.callCount++
if keyspace != ft.keyspace {
return nil, fmt.Errorf("Unknown keyspace")
}
return &topodatapb.SrvShard{
Name: shard,
}, nil
}
// TestGetSrvKeyspace will test we properly return updated SrvKeyspace.
func TestGetSrvKeyspace(t *testing.T) {
ft := &fakeTopo{keyspace: "test_ks"}
@@ -127,33 +117,6 @@ func TestGetSrvKeyspace(t *testing.T) {
}
}
// TestCacheWithErrors will test we properly return cached errors.
func TestCacheWithErrors(t *testing.T) {
ft := &fakeTopo{keyspace: "test_ks"}
rsts := NewResilientSrvTopoServer(topo.Server{Impl: ft}, "TestCacheWithErrors")
// ask for the known keyspace, that populates the cache
_, err := rsts.GetSrvShard(context.Background(), "", "test_ks", "shard_0")
if err != nil {
t.Fatalf("GetSrvShard got unexpected error: %v", err)
}
// now make the topo server fail, and ask again: we should get the cached
// value without even querying the underlying server
ft.keyspace = "another_test_ks"
_, err = rsts.GetSrvShard(context.Background(), "", "test_ks", "shard_0")
if err != nil {
t.Fatalf("GetSrvShard got unexpected error: %v", err)
}
// now reduce TTL to nothing, so we won't use cache, and ask again
rsts.cacheTTL = 0
_, err = rsts.GetSrvShard(context.Background(), "", "test_ks", "shard_0")
if err != nil {
t.Fatalf("GetSrvShard got unexpected error: %v", err)
}
}
// TestSrvKeyspaceCacheWithErrors will test we properly return cached errors for GetSrvKeyspace.
func TestSrvKeyspaceCacheWithErrors(t *testing.T) {
ft := &fakeTopo{keyspace: "test_ks"}
@@ -174,40 +137,6 @@ func TestSrvKeyspaceCacheWithErrors(t *testing.T) {
}
}
// TestCachedErrors will test we properly return cached errors.
func TestCachedErrors(t *testing.T) {
ft := &fakeTopo{keyspace: "test_ks"}
rsts := NewResilientSrvTopoServer(topo.Server{Impl: ft}, "TestCachedErrors")
// ask for an unknown keyspace, should get an error
_, err := rsts.GetSrvShard(context.Background(), "", "unknown_ks", "shard_0")
if err == nil {
t.Fatalf("First GetSrvShard didn't return an error")
}
if ft.callCount != 1 {
t.Fatalf("GetSrvShard didn't get called 1 but %v times", ft.callCount)
}
// ask again, should get an error and use cache
_, err = rsts.GetSrvShard(context.Background(), "", "unknown_ks", "shard_0")
if err == nil {
t.Fatalf("Second GetSrvShard didn't return an error")
}
if ft.callCount != 1 {
t.Fatalf("GetSrvShard was called again: %v times", ft.callCount)
}
// ask again after expired cache, should get an error
rsts.cacheTTL = 0
_, err = rsts.GetSrvShard(context.Background(), "", "unknown_ks", "shard_0")
if err == nil {
t.Fatalf("Third GetSrvShard didn't return an error")
}
if ft.callCount != 2 {
t.Fatalf("GetSrvShard was not called again: %v times", ft.callCount)
}
}
// TestSrvKeyspaceCachedErrors will test we properly return cached errors for SrvKeyspace.
func TestSrvKeyspaceCachedErrors(t *testing.T) {
ft := &fakeTopo{keyspace: "test_ks"}

View file

@@ -253,11 +253,6 @@ func (sct *sandboxTopo) WatchVSchema(ctx context.Context, keyspace string) (noti
return result, nil
}
// GetSrvShard is part of SrvTopoServer.
func (sct *sandboxTopo) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
return nil, fmt.Errorf("Unsupported")
}
func sandboxDialer(ctx context.Context, tablet *topodatapb.Tablet, keyspace, shard string, tabletType topodatapb.TabletType, timeout time.Duration) (tabletconn.TabletConn, error) {
sand := getSandbox(keyspace)
sand.sandmu.Lock()

View file

@@ -748,11 +748,6 @@ func (vtg *VTGate) GetSrvKeyspace(ctx context.Context, keyspace string) (*topoda
return vtg.resolver.toposerv.GetSrvKeyspace(ctx, vtg.resolver.cell, keyspace)
}
// GetSrvShard is part of the vtgate service API.
func (vtg *VTGate) GetSrvShard(ctx context.Context, keyspace, shard string) (*topodatapb.SrvShard, error) {
return vtg.resolver.toposerv.GetSrvShard(ctx, vtg.resolver.cell, keyspace, shard)
}
// GetGatewayCacheStatus returns a displayable version of the Gateway cache.
func (vtg *VTGate) GetGatewayCacheStatus() GatewayTabletCacheStatusList {
return vtg.resolver.GetGatewayCacheStatus()

View file

@@ -688,11 +688,6 @@ func (f *fakeVTGateService) GetSrvKeyspace(ctx context.Context, keyspace string)
return getSrvKeyspaceResult, nil
}
// GetSrvShard is part of the VTGateService interface
func (f *fakeVTGateService) GetSrvShard(ctx context.Context, keyspace, shard string) (*topodatapb.SrvShard, error) {
panic(fmt.Errorf("GetSrvShard is not tested here"))
}
// CreateFakeServer returns the fake server for the tests
func CreateFakeServer(t *testing.T) vtgateservice.VTGateService {
return &fakeVTGateService{
@@ -2210,5 +2205,4 @@ var getSrvKeyspaceResult = &topodatapb.SrvKeyspace{
Keyspace: "other_keyspace",
},
},
SplitShardCount: 128,
}

View file

@@ -57,10 +57,6 @@ type VTGateService interface {
// Topology support
GetSrvKeyspace(ctx context.Context, keyspace string) (*topodatapb.SrvKeyspace, error)
// GetSrvShard is not part of the public API, but might be used
// by some implementations.
GetSrvShard(ctx context.Context, keyspace, shard string) (*topodatapb.SrvShard, error)
// HandlePanic should be called with defer at the beginning of each
// RPC implementation method, before calling any of the previous methods
HandlePanic(err *error)

View file

@@ -214,17 +214,6 @@ func (_mr *_MockVTGateServiceRecorder) GetSrvKeyspace(arg0, arg1 interface{}) *g
return _mr.mock.ctrl.RecordCall(_mr.mock, "GetSrvKeyspace", arg0, arg1)
}
func (_m *MockVTGateService) GetSrvShard(ctx context.Context, keyspace string, shard string) (*topodata.SrvShard, error) {
ret := _m.ctrl.Call(_m, "GetSrvShard", ctx, keyspace, shard)
ret0, _ := ret[0].(*topodata.SrvShard)
ret1, _ := ret[1].(error)
return ret0, ret1
}
func (_mr *_MockVTGateServiceRecorder) GetSrvShard(arg0, arg1, arg2 interface{}) *gomock.Call {
return _mr.mock.ctrl.RecordCall(_mr.mock, "GetSrvShard", arg0, arg1, arg2)
}
func (_m *MockVTGateService) HandlePanic(err *error) {
_m.ctrl.Call(_m, "HandlePanic", err)
}

View file

@@ -135,10 +135,10 @@ func (tc *splitCloneTestCase) setUp(v3 bool) {
if err := tc.ts.CreateShard(ctx, "ks", "80-"); err != nil {
tc.t.Fatalf("CreateShard(\"-80\") failed: %v", err)
}
if err := tc.wi.wr.SetKeyspaceShardingInfo(ctx, "ks", "keyspace_id", topodatapb.KeyspaceIdType_UINT64, 4, false); err != nil {
if err := tc.wi.wr.SetKeyspaceShardingInfo(ctx, "ks", "keyspace_id", topodatapb.KeyspaceIdType_UINT64, false); err != nil {
tc.t.Fatalf("SetKeyspaceShardingInfo failed: %v", err)
}
if err := tc.wi.wr.RebuildKeyspaceGraph(ctx, "ks", nil, true); err != nil {
if err := tc.wi.wr.RebuildKeyspaceGraph(ctx, "ks", nil); err != nil {
tc.t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}

View file

@@ -222,10 +222,10 @@ func testSplitDiff(t *testing.T, v3 bool) {
t.Fatalf("CreateShard(\"-80\") failed: %v", err)
}
wi.wr.SetSourceShards(ctx, "ks", "-40", []*topodatapb.TabletAlias{sourceRdonly1.Tablet.Alias}, nil)
if err := wi.wr.SetKeyspaceShardingInfo(ctx, "ks", "keyspace_id", topodatapb.KeyspaceIdType_UINT64, 4, false); err != nil {
if err := wi.wr.SetKeyspaceShardingInfo(ctx, "ks", "keyspace_id", topodatapb.KeyspaceIdType_UINT64, false); err != nil {
t.Fatalf("SetKeyspaceShardingInfo failed: %v", err)
}
if err := wi.wr.RebuildKeyspaceGraph(ctx, "ks", nil, true); err != nil {
if err := wi.wr.RebuildKeyspaceGraph(ctx, "ks", nil); err != nil {
t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}

View file

@@ -122,8 +122,6 @@ func FindWorkerTablet(ctx context.Context, wr *wrangler.Wrangler, cleaner *wrang
}
// Record a clean-up action to take the tablet back to rdonly.
// We will alter this one later on and let the tablet go back to
// 'spare' if we have stopped replication for too long on it.
wrangler.RecordChangeSlaveTypeAction(cleaner, tabletAlias, topodatapb.TabletType_WORKER, topodatapb.TabletType_RDONLY)
return tabletAlias, nil
}
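
The clean-up action recorded above follows a record-and-replay shape: while the worker holds the tablet as WORKER, it queues an action that later restores the tablet to RDONLY. A rough sketch of that shape, assuming nothing about the real wrangler.Cleaner API (the `cleaner` type below is purely illustrative):

``` go
package main

import "fmt"

// cleaner accumulates undo actions while work proceeds.
type cleaner struct {
	actions []func() error
}

func (c *cleaner) record(a func() error) {
	c.actions = append(c.actions, a)
}

// cleanUp replays recorded actions in reverse order, like deferred calls.
func (c *cleaner) cleanUp() {
	for i := len(c.actions) - 1; i >= 0; i-- {
		if err := c.actions[i](); err != nil {
			fmt.Println("cleanup action failed:", err)
		}
	}
}

func main() {
	var c cleaner
	tabletType := "RDONLY"

	// Take the tablet for a long-running job...
	tabletType = "WORKER"
	// ...and immediately record how to put it back.
	c.record(func() error {
		tabletType = "RDONLY"
		return nil
	})
	defer c.cleanUp()

	fmt.Println("running job on tablet of type", tabletType)
}
```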

View file

@@ -94,8 +94,8 @@ func newFakeTMCTopo(ts topo.Server) tmclient.TabletManagerClient {
}
// ChangeType is part of the tmclient.TabletManagerClient interface.
func (client *fakeTMCTopo) ChangeType(ctx context.Context, tablet *topodatapb.Tablet, dbType topodatapb.TabletType) error {
_, err := client.server.UpdateTabletFields(ctx, tablet.Alias, func(t *topodatapb.Tablet) error {
func (f *fakeTMCTopo) ChangeType(ctx context.Context, tablet *topodatapb.Tablet, dbType topodatapb.TabletType) error {
_, err := f.server.UpdateTabletFields(ctx, tablet.Alias, func(t *topodatapb.Tablet) error {
t.Type = dbType
return nil
})

View file

@@ -148,10 +148,10 @@ func TestVerticalSplitClone(t *testing.T) {
}
// add the topo and schema data we'll need
if err := wi.wr.RebuildKeyspaceGraph(ctx, "source_ks", nil, true); err != nil {
if err := wi.wr.RebuildKeyspaceGraph(ctx, "source_ks", nil); err != nil {
t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}
if err := wi.wr.RebuildKeyspaceGraph(ctx, "destination_ks", nil, true); err != nil {
if err := wi.wr.RebuildKeyspaceGraph(ctx, "destination_ks", nil); err != nil {
t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}

View file

@@ -126,10 +126,10 @@ func TestVerticalSplitDiff(t *testing.T) {
wi.wr.SetSourceShards(ctx, "destination_ks", "0", []*topodatapb.TabletAlias{sourceRdonly1.Tablet.Alias}, []string{"moving.*", "view1"})
// add the topo and schema data we'll need
if err := wi.wr.RebuildKeyspaceGraph(ctx, "source_ks", nil, true); err != nil {
if err := wi.wr.RebuildKeyspaceGraph(ctx, "source_ks", nil); err != nil {
t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}
if err := wi.wr.RebuildKeyspaceGraph(ctx, "destination_ks", nil, true); err != nil {
if err := wi.wr.RebuildKeyspaceGraph(ctx, "destination_ks", nil); err != nil {
t.Fatalf("RebuildKeyspaceGraph failed: %v", err)
}

View file

@@ -127,10 +127,8 @@ func RecordChangeSlaveTypeAction(cleaner *Cleaner, tabletAlias *topodatapb.Table
if err != nil {
return err
}
if from != topodatapb.TabletType_UNKNOWN {
if ti.Type != from {
return fmt.Errorf("tablet %v is not of the right type (got %v expected %v), not changing it to %v", topoproto.TabletAliasString(tabletAlias), ti.Type, from, to)
}
if ti.Type != from {
return fmt.Errorf("tablet %v is not of the right type (got %v expected %v), not changing it to %v", topoproto.TabletAliasString(tabletAlias), ti.Type, from, to)
}
if !topo.IsTrivialTypeChange(ti.Type, to) {
return fmt.Errorf("tablet %v type change %v -> %v is not an allowed transition for ChangeSlaveType", topoproto.TabletAliasString(tabletAlias), ti.Type, to)

View file

@@ -36,19 +36,19 @@ func (wr *Wrangler) unlockKeyspace(ctx context.Context, keyspace string, actionN
// SetKeyspaceShardingInfo locks a keyspace and sets its ShardingColumnName
// and ShardingColumnType
func (wr *Wrangler) SetKeyspaceShardingInfo(ctx context.Context, keyspace, shardingColumnName string, shardingColumnType topodatapb.KeyspaceIdType, splitShardCount int32, force bool) error {
func (wr *Wrangler) SetKeyspaceShardingInfo(ctx context.Context, keyspace, shardingColumnName string, shardingColumnType topodatapb.KeyspaceIdType, force bool) error {
actionNode := actionnode.SetKeyspaceShardingInfo()
lockPath, err := wr.lockKeyspace(ctx, keyspace, actionNode)
if err != nil {
return err
}
err = wr.setKeyspaceShardingInfo(ctx, keyspace, shardingColumnName, shardingColumnType, splitShardCount, force)
err = wr.setKeyspaceShardingInfo(ctx, keyspace, shardingColumnName, shardingColumnType, force)
return wr.unlockKeyspace(ctx, keyspace, actionNode, lockPath, err)
}
func (wr *Wrangler) setKeyspaceShardingInfo(ctx context.Context, keyspace, shardingColumnName string, shardingColumnType topodatapb.KeyspaceIdType, splitShardCount int32, force bool) error {
func (wr *Wrangler) setKeyspaceShardingInfo(ctx context.Context, keyspace, shardingColumnName string, shardingColumnType topodatapb.KeyspaceIdType, force bool) error {
ki, err := wr.ts.GetKeyspace(ctx, keyspace)
if err != nil {
return err
@@ -72,7 +72,6 @@ func (wr *Wrangler) setKeyspaceShardingInfo(ctx context.Context, keyspace, shard
ki.ShardingColumnName = shardingColumnName
ki.ShardingColumnType = shardingColumnType
ki.SplitShardCount = splitShardCount
return wr.ts.UpdateKeyspace(ctx, ki)
}
@@ -169,7 +168,7 @@ func (wr *Wrangler) MigrateServedTypes(ctx context.Context, keyspace, shard stri
// rebuild the keyspace serving graph if there was no error
if !rec.HasErrors() {
rec.RecordError(wr.RebuildKeyspaceGraph(ctx, keyspace, cells, false))
rec.RecordError(wr.RebuildKeyspaceGraph(ctx, keyspace, cells))
}
// Send a refresh to the tablets we just disabled, iff:
@@ -619,7 +618,7 @@ func (wr *Wrangler) MigrateServedFrom(ctx context.Context, keyspace, shard strin
// rebuild the keyspace serving graph if there was no error
if rec.Error() == nil {
rec.RecordError(wr.RebuildKeyspaceGraph(ctx, keyspace, cells, false))
rec.RecordError(wr.RebuildKeyspaceGraph(ctx, keyspace, cells))
}
return rec.Error()
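
SetKeyspaceShardingInfo above is one instance of a recurring wrangler shape: take a keyspace-wide lock in the topology service, perform the mutation, then unlock while preserving the mutation's error. A hedged sketch of that flow; `lockKeyspace`, `unlockKeyspace`, and `updateKeyspaceRecord` are stand-ins, not the actual actionnode or wrangler APIs.

``` go
package main

import (
	"errors"
	"fmt"
)

var locks = map[string]bool{} // toy in-process stand-in for topo locks

func lockKeyspace(keyspace string) (string, error) {
	if locks[keyspace] {
		return "", errors.New("keyspace is locked by another action")
	}
	locks[keyspace] = true
	return "/keyspaces/" + keyspace + "/action", nil
}

// unlockKeyspace always releases the lock and returns the action's
// error, so callers see the mutation failure rather than unlock noise.
func unlockKeyspace(keyspace, lockPath string, actionErr error) error {
	delete(locks, keyspace)
	return actionErr
}

func updateKeyspaceRecord(keyspace, column string) error {
	fmt.Printf("setting sharding column of %s to %s\n", keyspace, column)
	return nil
}

func setShardingInfo(keyspace, column string) error {
	lockPath, err := lockKeyspace(keyspace)
	if err != nil {
		return err
	}
	// Hold the lock across the whole read-modify-write.
	err = updateKeyspaceRecord(keyspace, column)
	return unlockKeyspace(keyspace, lockPath, err)
}

func main() {
	if err := setShardingInfo("test_keyspace", "keyspace_id"); err != nil {
		fmt.Println(err)
	}
}
```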

View file

@@ -13,15 +13,9 @@ import (
"golang.org/x/net/context"
)
// RebuildShardGraph rebuilds the serving and replication rollup data while locking
// out other changes.
func (wr *Wrangler) RebuildShardGraph(ctx context.Context, keyspace, shard string, cells []string) (*topo.ShardInfo, error) {
return topotools.RebuildShard(ctx, wr.logger, wr.ts, keyspace, shard, cells)
}
// RebuildKeyspaceGraph rebuilds the serving graph data while locking out other changes.
func (wr *Wrangler) RebuildKeyspaceGraph(ctx context.Context, keyspace string, cells []string, rebuildSrvShards bool) error {
return topotools.RebuildKeyspace(ctx, wr.logger, wr.ts, keyspace, cells, rebuildSrvShards)
func (wr *Wrangler) RebuildKeyspaceGraph(ctx context.Context, keyspace string, cells []string) error {
return topotools.RebuildKeyspace(ctx, wr.logger, wr.ts, keyspace, cells)
}
func strInList(sl []string, s string) bool {
@@ -101,7 +95,7 @@ func (wr *Wrangler) RebuildReplicationGraph(ctx context.Context, cells []string,
wg.Add(1)
go func(keyspace string) {
defer wg.Done()
if err := wr.RebuildKeyspaceGraph(ctx, keyspace, nil, true); err != nil {
if err := wr.RebuildKeyspaceGraph(ctx, keyspace, nil); err != nil {
mu.Lock()
hasErr = true
mu.Unlock()
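
RebuildReplicationGraph above fans the per-keyspace rebuilds out to goroutines and collapses their failures into a single mutex-guarded flag. A compact, self-contained sketch of that fan-out shape, with `rebuild` standing in for the real per-keyspace call:

``` go
package main

import (
	"fmt"
	"sync"
)

func rebuild(keyspace string) error {
	fmt.Println("rebuilding keyspace graph for", keyspace)
	return nil
}

func rebuildAll(keyspaces []string) error {
	var (
		wg     sync.WaitGroup
		mu     sync.Mutex
		hasErr bool
	)
	for _, keyspace := range keyspaces {
		wg.Add(1)
		go func(keyspace string) { // pass the loop variable explicitly
			defer wg.Done()
			if err := rebuild(keyspace); err != nil {
				mu.Lock()
				hasErr = true
				mu.Unlock()
			}
		}(keyspace)
	}
	wg.Wait()
	if hasErr {
		return fmt.Errorf("rebuild failed for at least one keyspace")
	}
	return nil
}

func main() {
	if err := rebuildAll([]string{"test_keyspace", "other_keyspace"}); err != nil {
		fmt.Println(err)
	}
}
```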

View file

@@ -320,11 +320,7 @@ func (wr *Wrangler) initShardMasterLocked(ctx context.Context, ev *events.Repare
return fmt.Errorf("failed to create database: %v", err)
}
// Then we rebuild the entire serving graph for the shard,
// to account for all changes.
event.DispatchUpdate(ev, "rebuilding shard graph")
_, err = wr.RebuildShardGraph(ctx, keyspace, shard, nil)
return err
return nil
}
// PlannedReparentShard will make the provided tablet the master for the shard,
@@ -453,12 +449,7 @@ func (wr *Wrangler) plannedReparentShardLocked(ctx context.Context, ev *events.R
return err
}
// Then we rebuild the entire serving graph for the shard,
// to account for all changes.
wr.logger.Infof("rebuilding shard graph")
event.DispatchUpdate(ev, "rebuilding shard serving graph")
_, err = wr.RebuildShardGraph(ctx, keyspace, shard, nil)
return err
return nil
}
// EmergencyReparentShard will make the provided tablet the master for
@@ -651,10 +642,5 @@ func (wr *Wrangler) emergencyReparentShardLocked(ctx context.Context, ev *events
return err
}
// Then we rebuild the entire serving graph for the shard,
// to account for all changes.
wr.logger.Infof("rebuilding shard graph")
event.DispatchUpdate(ev, "rebuilding shard serving graph")
_, err = wr.RebuildShardGraph(ctx, keyspace, shard, nil)
return err
return nil
}

View file

@@ -182,10 +182,6 @@ func (wr *Wrangler) DeleteShard(ctx context.Context, keyspace, shard string, rec
if err := wr.ts.DeleteShardReplication(ctx, cell, keyspace, shard); err != nil && err != topo.ErrNoNode {
wr.Logger().Warningf("Cannot delete ShardReplication in cell %v for %v/%v: %v", cell, keyspace, shard, err)
}
if err := wr.ts.DeleteSrvShard(ctx, cell, keyspace, shard); err != nil && err != topo.ErrNoNode {
wr.Logger().Warningf("Cannot delete SrvShard in cell %v for %v/%v: %v", cell, keyspace, shard, err)
}
}
return wr.ts.DeleteShard(ctx, keyspace, shard)
@@ -250,13 +246,6 @@ func (wr *Wrangler) removeShardCell(ctx context.Context, keyspace, shard, cell s
return fmt.Errorf("error deleting ShardReplication object in cell %v: %v", cell, err)
}
// Rebuild the shard serving graph to reflect the tablets we deleted.
// This must be done before removing the cell from the global shard record,
// since this cell will be skipped by all future rebuilds.
if _, err := wr.RebuildShardGraph(ctx, keyspace, shard, []string{cell}); err != nil {
return fmt.Errorf("can't rebuild serving graph for shard %v/%v in cell %v: %v", keyspace, shard, cell, err)
}
// we keep going
case topo.ErrNoNode:
// no ShardReplication object, we keep going

View file

@@ -94,14 +94,12 @@ func (wr *Wrangler) InitTablet(ctx context.Context, tablet *topodatapb.Tablet, a
// DeleteTablet removes a tablet from a shard.
// - if allowMaster is set, we can Delete a master tablet (and clear
// its record from the Shard record if it was the master).
// - if skipRebuild is set, we do not rebuild the serving graph.
func (wr *Wrangler) DeleteTablet(ctx context.Context, tabletAlias *topodatapb.TabletAlias, allowMaster, skipRebuild bool) error {
func (wr *Wrangler) DeleteTablet(ctx context.Context, tabletAlias *topodatapb.TabletAlias, allowMaster bool) error {
// load the tablet, see if we'll need to rebuild
ti, err := wr.ts.GetTablet(ctx, tabletAlias)
if err != nil {
return err
}
rebuildRequired := ti.IsInServingGraph()
wasMaster := ti.Type == topodatapb.TabletType_MASTER
if wasMaster && !allowMaster {
return fmt.Errorf("cannot delete tablet %v as it is a master, use allow_master flag", topoproto.TabletAliasString(tabletAlias))
@@ -139,17 +137,7 @@ func (wr *Wrangler) DeleteTablet(ctx context.Context, tabletAlias *topodatapb.Ta
}
}
// and rebuild the original shard if needed
if !rebuildRequired {
wr.Logger().Infof("Rebuild not required")
return nil
}
if skipRebuild {
wr.Logger().Warningf("Rebuild required, but skipping it")
return nil
}
_, err = wr.RebuildShardGraph(ctx, ti.Keyspace, ti.Shard, []string{ti.Alias.Cell})
return err
return nil
}
// ChangeSlaveType changes the type of tablet and recomputes all
@@ -168,43 +156,8 @@ func (wr *Wrangler) ChangeSlaveType(ctx context.Context, tabletAlias *topodatapb
return fmt.Errorf("tablet %v type change %v -> %v is not an allowed transition for ChangeSlaveType", tabletAlias, ti.Type, tabletType)
}
// ask the tablet to make the change
if err := wr.tmc.ChangeType(ctx, ti.Tablet, tabletType); err != nil {
return err
}
// if the tablet was or is serving, rebuild the serving graph
if ti.IsInServingGraph() || topo.IsInServingGraph(tabletType) {
if _, err := wr.RebuildShardGraph(ctx, ti.Tablet.Keyspace, ti.Tablet.Shard, []string{ti.Tablet.Alias.Cell}); err != nil {
return err
}
}
return nil
}
// same as ChangeType, but assume we already have the shard lock,
// and do not have the option to force anything.
func (wr *Wrangler) changeTypeInternal(ctx context.Context, tabletAlias *topodatapb.TabletAlias, dbType topodatapb.TabletType) error {
ti, err := wr.ts.GetTablet(ctx, tabletAlias)
if err != nil {
return err
}
rebuildRequired := ti.IsInServingGraph()
// change the type
if err := wr.tmc.ChangeType(ctx, ti.Tablet, dbType); err != nil {
return err
}
// rebuild if necessary
if rebuildRequired {
_, err = wr.RebuildShardGraph(ctx, ti.Keyspace, ti.Shard, []string{ti.Alias.Cell})
if err != nil {
return err
}
}
return nil
// and ask the tablet to make the change
return wr.tmc.ChangeType(ctx, ti.Tablet, tabletType)
}
// ExecuteFetchAsDba executes a query remotely using the DBA pool

View file

@@ -131,27 +131,3 @@ func (zkts *Server) LockShardForAction(ctx context.Context, keyspace, shard, con
func (zkts *Server) UnlockShardForAction(ctx context.Context, keyspace, shard, lockPath, results string) error {
return zkts.unlockForAction(lockPath, results)
}
// LockSrvShardForAction is part of topo.Server interface
func (zkts *Server) LockSrvShardForAction(ctx context.Context, cell, keyspace, shard, contents string) (string, error) {
// Action paths end in a trailing slash so that when we create
// sequential nodes, they are created as children, not siblings.
actionDir := path.Join(zkPathForVtShard(cell, keyspace, shard), "action")
// if we can't create the lock file because the directory doesn't exist,
// create it
p, err := zkts.lockForAction(ctx, actionDir+"/", contents)
if err != nil && zookeeper.IsError(err, zookeeper.ZNONODE) {
_, err = zk.CreateRecursive(zkts.zconn, actionDir, "", 0, zookeeper.WorldACL(zookeeper.PERM_ALL))
if err != nil && !zookeeper.IsError(err, zookeeper.ZNODEEXISTS) {
return "", err
}
p, err = zkts.lockForAction(ctx, actionDir+"/", contents)
}
return p, err
}
// UnlockSrvShardForAction is part of topo.Server interface
func (zkts *Server) UnlockSrvShardForAction(ctx context.Context, cell, keyspace, shard, lockPath, results string) error {
return zkts.unlockForAction(lockPath, results)
}

View file

@@ -9,7 +9,6 @@ import (
"fmt"
"path"
"sort"
"strings"
"time"
log "github.com/golang/glog"
@@ -39,68 +38,6 @@ func zkPathForVtKeyspace(cell, keyspace string) string {
return path.Join(zkPathForCell(cell), keyspace)
}
func zkPathForVtShard(cell, keyspace, shard string) string {
return path.Join(zkPathForVtKeyspace(cell, keyspace), shard)
}
func zkPathForVtName(cell, keyspace, shard string, tabletType topodatapb.TabletType) string {
return path.Join(zkPathForVtShard(cell, keyspace, shard), strings.ToLower(tabletType.String()))
}
// UpdateSrvShard is part of the topo.Server interface
func (zkts *Server) UpdateSrvShard(ctx context.Context, cell, keyspace, shard string, srvShard *topodatapb.SrvShard) error {
path := zkPathForVtShard(cell, keyspace, shard)
data, err := json.MarshalIndent(srvShard, "", " ")
if err != nil {
return err
}
// Update or create unconditionally.
if _, err = zk.CreateRecursive(zkts.zconn, path, string(data), 0, zookeeper.WorldACL(zookeeper.PERM_ALL)); err != nil {
if zookeeper.IsError(err, zookeeper.ZNODEEXISTS) {
// Node already exists - just stomp away. Multiple writers shouldn't be here.
// We use RetryChange here because it won't update the node unnecessarily.
f := func(oldValue string, oldStat zk.Stat) (string, error) {
return string(data), nil
}
err = zkts.zconn.RetryChange(path, 0, zookeeper.WorldACL(zookeeper.PERM_ALL), f)
}
}
return err
}
// GetSrvShard is part of the topo.Server interface
func (zkts *Server) GetSrvShard(ctx context.Context, cell, keyspace, shard string) (*topodatapb.SrvShard, error) {
path := zkPathForVtShard(cell, keyspace, shard)
data, _, err := zkts.zconn.Get(path)
if err != nil {
if zookeeper.IsError(err, zookeeper.ZNONODE) {
err = topo.ErrNoNode
}
return nil, err
}
srvShard := &topodatapb.SrvShard{}
if len(data) > 0 {
if err := json.Unmarshal([]byte(data), srvShard); err != nil {
return nil, fmt.Errorf("SrvShard unmarshal failed: %v %v", data, err)
}
}
return srvShard, nil
}
// DeleteSrvShard is part of the topo.Server interface
func (zkts *Server) DeleteSrvShard(ctx context.Context, cell, keyspace, shard string) error {
path := zkPathForVtShard(cell, keyspace, shard)
err := zkts.zconn.Delete(path, -1)
if err != nil {
if zookeeper.IsError(err, zookeeper.ZNONODE) {
err = topo.ErrNoNode
}
return err
}
return nil
}
// UpdateSrvKeyspace is part of the topo.Server interface
func (zkts *Server) UpdateSrvKeyspace(ctx context.Context, cell, keyspace string, srvKeyspace *topodatapb.SrvKeyspace) error {
path := zkPathForVtKeyspace(cell, keyspace)
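
The deleted UpdateSrvShard shows the update-or-create shape that the remaining UpdateSrvKeyspace also uses: marshal the record to JSON, try to create the node, and fall back to overwriting in place if it already exists. A self-contained sketch against an in-memory stand-in for the ZooKeeper connection; `store`, `memStore`, and `errNodeExists` are illustrative, not zk or Vitess APIs.

``` go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

var errNodeExists = errors.New("node already exists")

// store is a stand-in for the subset of the zk connection used here.
type store interface {
	Create(path, data string) error
	Set(path, data string) error
}

type memStore map[string]string

func (m memStore) Create(path, data string) error {
	if _, ok := m[path]; ok {
		return errNodeExists
	}
	m[path] = data
	return nil
}

func (m memStore) Set(path, data string) error {
	m[path] = data
	return nil
}

// updateOrCreate writes the record unconditionally: create first, and
// stomp the existing node if the create loses the race.
func updateOrCreate(s store, path string, record interface{}) error {
	data, err := json.MarshalIndent(record, "", "  ")
	if err != nil {
		return err
	}
	err = s.Create(path, string(data))
	if errors.Is(err, errNodeExists) {
		return s.Set(path, string(data))
	}
	return err
}

func main() {
	s := memStore{}
	rec := map[string]string{"sharding_column_name": "keyspace_id"}
	if err := updateOrCreate(s, "/zk/test_cell/vt/ns/test_keyspace", rec); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(s["/zk/test_cell/vt/ns/test_keyspace"])
}
```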

View file

@@ -17,8 +17,6 @@ import (
type TestServer struct {
topo.Impl
localCells []string
HookLockSrvShardForAction func()
}
// newTestServer returns a new TestServer (with the required paths created)
@@ -46,12 +44,3 @@ func New(t *testing.T, cells []string) topo.Server {
func (s *TestServer) GetKnownCells(ctx context.Context) ([]string, error) {
return s.localCells, nil
}
// LockSrvShardForAction should override the function defined by the underlying
// topo.Server.
func (s *TestServer) LockSrvShardForAction(ctx context.Context, cell, keyspace, shard, contents string) (string, error) {
if s.HookLockSrvShardForAction != nil {
s.HookLockSrvShardForAction()
}
return s.Impl.LockSrvShardForAction(ctx, cell, keyspace, shard, contents)
}

View file

@@ -1,33 +0,0 @@
// Copyright 2014, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package zktestserver
import (
"testing"
"github.com/youtube/vitess/go/vt/topo"
"golang.org/x/net/context"
)
// TestHookLockSrvShardForAction makes sure that changes to the upstream
// topo.Server interface don't break our hook. For example, if someone changes
// the function name in the interface and all its call sites, but doesn't change
// the name of our override to match.
func TestHookLockSrvShardForAction(t *testing.T) {
cells := []string{"test_cell"}
ts := New(t, cells)
triggered := false
ts.Impl.(*TestServer).HookLockSrvShardForAction = func() {
triggered = true
}
ctx := context.Background()
topo.Server(ts).LockSrvShardForAction(ctx, cells[0], "keyspace", "shard", "contents")
if !triggered {
t.Errorf("HookLockSrvShardForAction wasn't triggered")
}
}
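
The deleted test above guarded a subtle embedding pattern: TestServer embeds the underlying topo.Impl and overrides one method to fire a hook, and the test fails if an upstream rename silently turns the override into dead code. A generic sketch of the same idea, with made-up `locker` names:

``` go
package main

import "fmt"

type locker interface {
	Lock(path string) string
}

type realLocker struct{}

func (realLocker) Lock(path string) string { return "locked:" + path }

// testLocker embeds the real implementation and shadows Lock with a
// hooked version. If the interface method were renamed upstream, this
// override would stop shadowing anything and the hook would never fire,
// which is exactly what a test like the one above detects.
type testLocker struct {
	locker
	hook func()
}

func (t testLocker) Lock(path string) string {
	if t.hook != nil {
		t.hook()
	}
	return t.locker.Lock(path)
}

func main() {
	triggered := false
	var l locker = testLocker{
		locker: realLocker{},
		hook:   func() { triggered = true },
	}
	l.Lock("test_keyspace/0")
	fmt.Println("hook triggered:", triggered)
}
```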

View file

@@ -74,17 +74,6 @@ func TestShardLock(t *testing.T) {
test.CheckShardLock(ctx, t, ts)
}
func TestSrvShardLock(t *testing.T) {
ctx := context.Background()
if testing.Short() {
t.Skip("skipping wait-based test in short mode.")
}
ts := newTestServer(t, []string{"test"})
defer ts.Close()
test.CheckSrvShardLock(ctx, t, ts)
}
func TestVSchema(t *testing.T) {
ctx := context.Background()
ts := newTestServer(t, []string{"test"})

View file

@@ -442,7 +442,6 @@ public abstract class RpcClientTest {
.setTabletType(TabletType.MASTER)
.setKeyspace("other_keyspace")
.build())
.setSplitShardCount(128)
.build();
SrvKeyspace actual = conn.getSrvKeyspace(ctx, "big");
Assert.assertEquals(expected, actual);

View file

@@ -12,9 +12,6 @@ namespace Vitess\Proto\Topodata {
/** @var int - \Vitess\Proto\Topodata\KeyspaceIdType */
public $sharding_column_type = null;
/** @var int */
public $split_shard_count = null;
/** @var \Vitess\Proto\Topodata\Keyspace\ServedFrom[] */
public $served_froms = array();
@@ -43,14 +40,6 @@ namespace Vitess\Proto\Topodata {
$f->reference = '\Vitess\Proto\Topodata\KeyspaceIdType';
$descriptor->addField($f);
// OPTIONAL INT32 split_shard_count = 3
$f = new \DrSlump\Protobuf\Field();
$f->number = 3;
$f->name = "split_shard_count";
$f->type = \DrSlump\Protobuf::TYPE_INT32;
$f->rule = \DrSlump\Protobuf::RULE_OPTIONAL;
$descriptor->addField($f);
// REPEATED MESSAGE served_froms = 4
$f = new \DrSlump\Protobuf\Field();
$f->number = 4;
@@ -141,43 +130,6 @@ namespace Vitess\Proto\Topodata {
return $this->_set(2, $value);
}
/**
* Check if <split_shard_count> has a value
*
* @return boolean
*/
public function hasSplitShardCount(){
return $this->_has(3);
}
/**
* Clear <split_shard_count> value
*
* @return \Vitess\Proto\Topodata\Keyspace
*/
public function clearSplitShardCount(){
return $this->_clear(3);
}
/**
* Get <split_shard_count> value
*
* @return int
*/
public function getSplitShardCount(){
return $this->_get(3);
}
/**
* Set <split_shard_count> value
*
* @param int $value
* @return \Vitess\Proto\Topodata\Keyspace
*/
public function setSplitShardCount( $value){
return $this->_set(3, $value);
}
/**
* Check if <served_froms> has a value
*

View file

@@ -18,9 +18,6 @@ namespace Vitess\Proto\Topodata {
/** @var \Vitess\Proto\Topodata\SrvKeyspace\ServedFrom[] */
public $served_from = array();
/** @var int */
public $split_shard_count = null;
/** @var \Closure[] */
protected static $__extensions = array();
@@ -64,14 +61,6 @@ namespace Vitess\Proto\Topodata {
$f->reference = '\Vitess\Proto\Topodata\SrvKeyspace\ServedFrom';
$descriptor->addField($f);
// OPTIONAL INT32 split_shard_count = 5
$f = new \DrSlump\Protobuf\Field();
$f->number = 5;
$f->name = "split_shard_count";
$f->type = \DrSlump\Protobuf::TYPE_INT32;
$f->rule = \DrSlump\Protobuf::RULE_OPTIONAL;
$descriptor->addField($f);
foreach (self::$__extensions as $cb) {
$descriptor->addField($cb(), true);
}
@@ -266,43 +255,6 @@ namespace Vitess\Proto\Topodata {
public function addServedFrom(\Vitess\Proto\Topodata\SrvKeyspace\ServedFrom $value){
return $this->_add(4, $value);
}
/**
* Check if <split_shard_count> has a value
*
* @return boolean
*/
public function hasSplitShardCount(){
return $this->_has(5);
}
/**
* Clear <split_shard_count> value
*
* @return \Vitess\Proto\Topodata\SrvKeyspace
*/
public function clearSplitShardCount(){
return $this->_clear(5);
}
/**
* Get <split_shard_count> value
*
* @return int
*/
public function getSplitShardCount(){
return $this->_get(5);
}
/**
* Set <split_shard_count> value
*
* @param int $value
* @return \Vitess\Proto\Topodata\SrvKeyspace
*/
public function setSplitShardCount( $value){
return $this->_set(5, $value);
}
}
}

View file

@@ -1,170 +0,0 @@
<?php
// DO NOT EDIT! Generated by Protobuf-PHP protoc plugin 1.0
// Source: topodata.proto
namespace Vitess\Proto\Topodata {
class SrvShard extends \DrSlump\Protobuf\Message {
/** @var string */
public $name = null;
/** @var \Vitess\Proto\Topodata\KeyRange */
public $key_range = null;
/** @var string */
public $master_cell = null;
/** @var \Closure[] */
protected static $__extensions = array();
public static function descriptor()
{
$descriptor = new \DrSlump\Protobuf\Descriptor(__CLASS__, 'topodata.SrvShard');
// OPTIONAL STRING name = 1
$f = new \DrSlump\Protobuf\Field();
$f->number = 1;
$f->name = "name";
$f->type = \DrSlump\Protobuf::TYPE_STRING;
$f->rule = \DrSlump\Protobuf::RULE_OPTIONAL;
$descriptor->addField($f);
// OPTIONAL MESSAGE key_range = 2
$f = new \DrSlump\Protobuf\Field();
$f->number = 2;
$f->name = "key_range";
$f->type = \DrSlump\Protobuf::TYPE_MESSAGE;
$f->rule = \DrSlump\Protobuf::RULE_OPTIONAL;
$f->reference = '\Vitess\Proto\Topodata\KeyRange';
$descriptor->addField($f);
// OPTIONAL STRING master_cell = 3
$f = new \DrSlump\Protobuf\Field();
$f->number = 3;
$f->name = "master_cell";
$f->type = \DrSlump\Protobuf::TYPE_STRING;
$f->rule = \DrSlump\Protobuf::RULE_OPTIONAL;
$descriptor->addField($f);
foreach (self::$__extensions as $cb) {
$descriptor->addField($cb(), true);
}
return $descriptor;
}
/**
* Check if <name> has a value
*
* @return boolean
*/
public function hasName(){
return $this->_has(1);
}
/**
* Clear <name> value
*
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function clearName(){
return $this->_clear(1);
}
/**
* Get <name> value
*
* @return string
*/
public function getName(){
return $this->_get(1);
}
/**
* Set <name> value
*
* @param string $value
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function setName( $value){
return $this->_set(1, $value);
}
/**
* Check if <key_range> has a value
*
* @return boolean
*/
public function hasKeyRange(){
return $this->_has(2);
}
/**
* Clear <key_range> value
*
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function clearKeyRange(){
return $this->_clear(2);
}
/**
* Get <key_range> value
*
* @return \Vitess\Proto\Topodata\KeyRange
*/
public function getKeyRange(){
return $this->_get(2);
}
/**
* Set <key_range> value
*
* @param \Vitess\Proto\Topodata\KeyRange $value
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function setKeyRange(\Vitess\Proto\Topodata\KeyRange $value){
return $this->_set(2, $value);
}
/**
* Check if <master_cell> has a value
*
* @return boolean
*/
public function hasMasterCell(){
return $this->_has(3);
}
/**
* Clear <master_cell> value
*
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function clearMasterCell(){
return $this->_clear(3);
}
/**
* Get <master_cell> value
*
* @return string
*/
public function getMasterCell(){
return $this->_get(3);
}
/**
* Set <master_cell> value
*
* @param string $value
* @return \Vitess\Proto\Topodata\SrvShard
*/
public function setMasterCell( $value){
return $this->_set(3, $value);
}
}
}

View file

@@ -418,7 +418,6 @@ class VTGateConnTest extends \PHPUnit_Framework_TestCase
$served_from->setTabletType(Proto\Topodata\TabletType::MASTER);
$served_from->setKeyspace('other_keyspace');
$expected->addServedFrom($served_from);
$expected->setSplitShardCount(128);
$actual = $conn->getSrvKeyspace($ctx, "big");
$this->assertEquals($expected, $actual);

View file

@@ -198,9 +198,8 @@ message Keyspace {
// UNSET if the keyspace is not sharded
KeyspaceIdType sharding_column_type = 2;
// SplitShardCount stores the number of jobs to run to be sure to
// always have at most one job per shard (used during resharding).
int32 split_shard_count = 3;
// OBSOLETE int32 split_shard_count = 3;
reserved 3;
// ServedFrom indicates a relationship between a TabletType and the
// keyspace name that's serving it.
@@ -220,8 +219,6 @@ message Keyspace {
repeated ServedFrom served_froms = 4;
}
// Replication graph information
// ShardReplication describes the MySQL replication relationships
// within a cell.
message ShardReplication {
@@ -236,19 +233,7 @@ message ShardReplication {
repeated Node nodes = 1;
}
// Serving graph information
// SrvShard is a rollup node for the shard itself.
message SrvShard {
// Copied from Shard.
string name = 1;
KeyRange key_range = 2;
// The cell that master tablet resides in.
string master_cell = 3;
}
// ShardReference is used as a pointer from a SrvKeyspace to a SrvShard
// ShardReference is used as a pointer from a SrvKeyspace to a Shard
message ShardReference {
// Copied from Shard.
string name = 1;
@@ -282,5 +267,6 @@ message SrvKeyspace {
string sharding_column_name = 2;
KeyspaceIdType sharding_column_type = 3;
repeated ServedFrom served_from = 4;
int32 split_shard_count = 5;
// OBSOLETE int32 split_shard_count = 5;
reserved 5;
}

View file

@@ -554,9 +554,6 @@ class Proto3Connection(object):
}
result['Partitions'] = pmap
if sk.split_shard_count:
result['SplitShardCount'] = sk.split_shard_count
return result
def keyspace_from_response(self, name, response):

View file

@@ -20,7 +20,7 @@ DESCRIPTOR = _descriptor.FileDescriptor(
name='topodata.proto',
package='topodata',
syntax='proto3',
serialized_pb=_b('\n\x0etopodata.proto\x12\x08topodata\"&\n\x08KeyRange\x12\r\n\x05start\x18\x01 \x01(\x0c\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x0c\"(\n\x0bTabletAlias\x12\x0c\n\x04\x63\x65ll\x18\x01 \x01(\t\x12\x0b\n\x03uid\x18\x02 \x01(\r\"\x90\x03\n\x06Tablet\x12$\n\x05\x61lias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\x12\x10\n\x08hostname\x18\x02 \x01(\t\x12\n\n\x02ip\x18\x03 \x01(\t\x12/\n\x08port_map\x18\x04 \x03(\x0b\x32\x1d.topodata.Tablet.PortMapEntry\x12\x10\n\x08keyspace\x18\x05 \x01(\t\x12\r\n\x05shard\x18\x06 \x01(\t\x12%\n\tkey_range\x18\x07 \x01(\x0b\x32\x12.topodata.KeyRange\x12\"\n\x04type\x18\x08 \x01(\x0e\x32\x14.topodata.TabletType\x12\x18\n\x10\x64\x62_name_override\x18\t \x01(\t\x12(\n\x04tags\x18\n \x03(\x0b\x32\x1a.topodata.Tablet.TagsEntry\x1a.\n\x0cPortMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a+\n\tTagsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01J\x04\x08\x0b\x10\x0c\"\xcb\x04\n\x05Shard\x12+\n\x0cmaster_alias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\x12%\n\tkey_range\x18\x02 \x01(\x0b\x32\x12.topodata.KeyRange\x12\x30\n\x0cserved_types\x18\x03 \x03(\x0b\x32\x1a.topodata.Shard.ServedType\x12\x32\n\rsource_shards\x18\x04 \x03(\x0b\x32\x1b.topodata.Shard.SourceShard\x12\r\n\x05\x63\x65lls\x18\x05 \x03(\t\x12\x36\n\x0ftablet_controls\x18\x06 \x03(\x0b\x32\x1d.topodata.Shard.TabletControl\x1a\x46\n\nServedType\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x1ar\n\x0bSourceShard\x12\x0b\n\x03uid\x18\x01 \x01(\r\x12\x10\n\x08keyspace\x18\x02 \x01(\t\x12\r\n\x05shard\x18\x03 \x01(\t\x12%\n\tkey_range\x18\x04 \x01(\x0b\x32\x12.topodata.KeyRange\x12\x0e\n\x06tables\x18\x05 \x03(\t\x1a\x84\x01\n\rTabletControl\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x12\x1d\n\x15\x64isable_query_service\x18\x03 \x01(\x08\x12\x1a\n\x12\x62lacklisted_tables\x18\x04 \x03(\t\"\x8a\x02\n\x08Keyspace\x12\x1c\n\x14sharding_column_name\x18\x01 \x01(\t\x12\x36\n\x14sharding_column_type\x18\x02 \x01(\x0e\x32\x18.topodata.KeyspaceIdType\x12\x19\n\x11split_shard_count\x18\x03 \x01(\x05\x12\x33\n\x0cserved_froms\x18\x04 \x03(\x0b\x32\x1d.topodata.Keyspace.ServedFrom\x1aX\n\nServedFrom\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x12\x10\n\x08keyspace\x18\x03 \x01(\t\"w\n\x10ShardReplication\x12.\n\x05nodes\x18\x01 \x03(\x0b\x32\x1f.topodata.ShardReplication.Node\x1a\x33\n\x04Node\x12+\n\x0ctablet_alias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\"T\n\x08SrvShard\x12\x0c\n\x04name\x18\x01 \x01(\t\x12%\n\tkey_range\x18\x02 \x01(\x0b\x32\x12.topodata.KeyRange\x12\x13\n\x0bmaster_cell\x18\x03 \x01(\t\"E\n\x0eShardReference\x12\x0c\n\x04name\x18\x01 \x01(\t\x12%\n\tkey_range\x18\x02 \x01(\x0b\x32\x12.topodata.KeyRange\"\xb1\x03\n\x0bSrvKeyspace\x12;\n\npartitions\x18\x01 \x03(\x0b\x32\'.topodata.SrvKeyspace.KeyspacePartition\x12\x1c\n\x14sharding_column_name\x18\x02 \x01(\t\x12\x36\n\x14sharding_column_type\x18\x03 \x01(\x0e\x32\x18.topodata.KeyspaceIdType\x12\x35\n\x0bserved_from\x18\x04 \x03(\x0b\x32 .topodata.SrvKeyspace.ServedFrom\x12\x19\n\x11split_shard_count\x18\x05 \x01(\x05\x1ar\n\x11KeyspacePartition\x12)\n\x0bserved_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\x32\n\x10shard_references\x18\x02 \x03(\x0b\x32\x18.topodata.ShardReference\x1aI\n\nServedFrom\x12)\n\x0btablet_type\x18\x01 
\x01(\x0e\x32\x14.topodata.TabletType\x12\x10\n\x08keyspace\x18\x02 \x01(\t*2\n\x0eKeyspaceIdType\x12\t\n\x05UNSET\x10\x00\x12\n\n\x06UINT64\x10\x01\x12\t\n\x05\x42YTES\x10\x02*\x8f\x01\n\nTabletType\x12\x0b\n\x07UNKNOWN\x10\x00\x12\n\n\x06MASTER\x10\x01\x12\x0b\n\x07REPLICA\x10\x02\x12\n\n\x06RDONLY\x10\x03\x12\t\n\x05\x42\x41TCH\x10\x03\x12\t\n\x05SPARE\x10\x04\x12\x10\n\x0c\x45XPERIMENTAL\x10\x05\x12\n\n\x06\x42\x41\x43KUP\x10\x06\x12\x0b\n\x07RESTORE\x10\x07\x12\n\n\x06WORKER\x10\x08\x1a\x02\x10\x01\x42\x1a\n\x18\x63om.youtube.vitess.protob\x06proto3')
serialized_pb=_b('\n\x0etopodata.proto\x12\x08topodata\"&\n\x08KeyRange\x12\r\n\x05start\x18\x01 \x01(\x0c\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x0c\"(\n\x0bTabletAlias\x12\x0c\n\x04\x63\x65ll\x18\x01 \x01(\t\x12\x0b\n\x03uid\x18\x02 \x01(\r\"\x90\x03\n\x06Tablet\x12$\n\x05\x61lias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\x12\x10\n\x08hostname\x18\x02 \x01(\t\x12\n\n\x02ip\x18\x03 \x01(\t\x12/\n\x08port_map\x18\x04 \x03(\x0b\x32\x1d.topodata.Tablet.PortMapEntry\x12\x10\n\x08keyspace\x18\x05 \x01(\t\x12\r\n\x05shard\x18\x06 \x01(\t\x12%\n\tkey_range\x18\x07 \x01(\x0b\x32\x12.topodata.KeyRange\x12\"\n\x04type\x18\x08 \x01(\x0e\x32\x14.topodata.TabletType\x12\x18\n\x10\x64\x62_name_override\x18\t \x01(\t\x12(\n\x04tags\x18\n \x03(\x0b\x32\x1a.topodata.Tablet.TagsEntry\x1a.\n\x0cPortMapEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01\x1a+\n\tTagsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01J\x04\x08\x0b\x10\x0c\"\xcb\x04\n\x05Shard\x12+\n\x0cmaster_alias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\x12%\n\tkey_range\x18\x02 \x01(\x0b\x32\x12.topodata.KeyRange\x12\x30\n\x0cserved_types\x18\x03 \x03(\x0b\x32\x1a.topodata.Shard.ServedType\x12\x32\n\rsource_shards\x18\x04 \x03(\x0b\x32\x1b.topodata.Shard.SourceShard\x12\r\n\x05\x63\x65lls\x18\x05 \x03(\t\x12\x36\n\x0ftablet_controls\x18\x06 \x03(\x0b\x32\x1d.topodata.Shard.TabletControl\x1a\x46\n\nServedType\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x1ar\n\x0bSourceShard\x12\x0b\n\x03uid\x18\x01 \x01(\r\x12\x10\n\x08keyspace\x18\x02 \x01(\t\x12\r\n\x05shard\x18\x03 \x01(\t\x12%\n\tkey_range\x18\x04 \x01(\x0b\x32\x12.topodata.KeyRange\x12\x0e\n\x06tables\x18\x05 \x03(\t\x1a\x84\x01\n\rTabletControl\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x12\x1d\n\x15\x64isable_query_service\x18\x03 \x01(\x08\x12\x1a\n\x12\x62lacklisted_tables\x18\x04 \x03(\t\"\xf5\x01\n\x08Keyspace\x12\x1c\n\x14sharding_column_name\x18\x01 \x01(\t\x12\x36\n\x14sharding_column_type\x18\x02 \x01(\x0e\x32\x18.topodata.KeyspaceIdType\x12\x33\n\x0cserved_froms\x18\x04 \x03(\x0b\x32\x1d.topodata.Keyspace.ServedFrom\x1aX\n\nServedFrom\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\r\n\x05\x63\x65lls\x18\x02 \x03(\t\x12\x10\n\x08keyspace\x18\x03 \x01(\tJ\x04\x08\x03\x10\x04\"w\n\x10ShardReplication\x12.\n\x05nodes\x18\x01 \x03(\x0b\x32\x1f.topodata.ShardReplication.Node\x1a\x33\n\x04Node\x12+\n\x0ctablet_alias\x18\x01 \x01(\x0b\x32\x15.topodata.TabletAlias\"E\n\x0eShardReference\x12\x0c\n\x04name\x18\x01 \x01(\t\x12%\n\tkey_range\x18\x02 \x01(\x0b\x32\x12.topodata.KeyRange\"\x9c\x03\n\x0bSrvKeyspace\x12;\n\npartitions\x18\x01 \x03(\x0b\x32\'.topodata.SrvKeyspace.KeyspacePartition\x12\x1c\n\x14sharding_column_name\x18\x02 \x01(\t\x12\x36\n\x14sharding_column_type\x18\x03 \x01(\x0e\x32\x18.topodata.KeyspaceIdType\x12\x35\n\x0bserved_from\x18\x04 \x03(\x0b\x32 .topodata.SrvKeyspace.ServedFrom\x1ar\n\x11KeyspacePartition\x12)\n\x0bserved_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\x32\n\x10shard_references\x18\x02 \x03(\x0b\x32\x18.topodata.ShardReference\x1aI\n\nServedFrom\x12)\n\x0btablet_type\x18\x01 \x01(\x0e\x32\x14.topodata.TabletType\x12\x10\n\x08keyspace\x18\x02 
\x01(\tJ\x04\x08\x05\x10\x06*2\n\x0eKeyspaceIdType\x12\t\n\x05UNSET\x10\x00\x12\n\n\x06UINT64\x10\x01\x12\t\n\x05\x42YTES\x10\x02*\x8f\x01\n\nTabletType\x12\x0b\n\x07UNKNOWN\x10\x00\x12\n\n\x06MASTER\x10\x01\x12\x0b\n\x07REPLICA\x10\x02\x12\n\n\x06RDONLY\x10\x03\x12\t\n\x05\x42\x41TCH\x10\x03\x12\t\n\x05SPARE\x10\x04\x12\x10\n\x0c\x45XPERIMENTAL\x10\x05\x12\n\n\x06\x42\x41\x43KUP\x10\x06\x12\x0b\n\x07RESTORE\x10\x07\x12\n\n\x06WORKER\x10\x08\x1a\x02\x10\x01\x42\x1a\n\x18\x63om.youtube.vitess.protob\x06proto3')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
@@ -45,8 +45,8 @@ _KEYSPACEIDTYPE = _descriptor.EnumDescriptor(
],
containing_type=None,
options=None,
serialized_start=2086,
serialized_end=2136,
serialized_start=1958,
serialized_end=2008,
)
_sym_db.RegisterEnumDescriptor(_KEYSPACEIDTYPE)
@@ -100,8 +100,8 @@ _TABLETTYPE = _descriptor.EnumDescriptor(
],
containing_type=None,
options=_descriptor._ParseOptions(descriptor_pb2.EnumOptions(), _b('\020\001')),
serialized_start=2139,
serialized_end=2282,
serialized_start=2011,
serialized_end=2154,
)
_sym_db.RegisterEnumDescriptor(_TABLETTYPE)
@@ -618,8 +618,8 @@ _KEYSPACE_SERVEDFROM = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=1282,
serialized_end=1370,
serialized_start=1255,
serialized_end=1343,
)
_KEYSPACE = _descriptor.Descriptor(
@@ -644,14 +644,7 @@ _KEYSPACE = _descriptor.Descriptor(
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='split_shard_count', full_name='topodata.Keyspace.split_shard_count', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='served_froms', full_name='topodata.Keyspace.served_froms', index=3,
name='served_froms', full_name='topodata.Keyspace.served_froms', index=2,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
@@ -670,7 +663,7 @@ _KEYSPACE = _descriptor.Descriptor(
oneofs=[
],
serialized_start=1104,
serialized_end=1370,
serialized_end=1349,
)
@@ -700,8 +693,8 @@ _SHARDREPLICATION_NODE = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=1440,
serialized_end=1491,
serialized_start=1419,
serialized_end=1470,
)
_SHARDREPLICATION = _descriptor.Descriptor(
@@ -730,53 +723,8 @@ _SHARDREPLICATION = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=1372,
serialized_end=1491,
)
_SRVSHARD = _descriptor.Descriptor(
name='SrvShard',
full_name='topodata.SrvShard',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='topodata.SrvShard.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='key_range', full_name='topodata.SrvShard.key_range', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='master_cell', full_name='topodata.SrvShard.master_cell', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1493,
serialized_end=1577,
serialized_start=1351,
serialized_end=1470,
)
@@ -813,8 +761,8 @@ _SHARDREFERENCE = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=1579,
serialized_end=1648,
serialized_start=1472,
serialized_end=1541,
)
@@ -851,8 +799,8 @@ _SRVKEYSPACE_KEYSPACEPARTITION = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=1895,
serialized_end=2009,
serialized_start=1761,
serialized_end=1875,
)
_SRVKEYSPACE_SERVEDFROM = _descriptor.Descriptor(
@@ -888,8 +836,8 @@ _SRVKEYSPACE_SERVEDFROM = _descriptor.Descriptor(
extension_ranges=[],
oneofs=[
],
serialized_start=2011,
serialized_end=2084,
serialized_start=1877,
serialized_end=1950,
)
_SRVKEYSPACE = _descriptor.Descriptor(
@@ -927,13 +875,6 @@ _SRVKEYSPACE = _descriptor.Descriptor(
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='split_shard_count', full_name='topodata.SrvKeyspace.split_shard_count', index=4,
number=5, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
@@ -946,8 +887,8 @@
extension_ranges=[],
oneofs=[
],
serialized_start=1651,
serialized_end=2084,
serialized_start=1544,
serialized_end=1956,
)
_TABLET_PORTMAPENTRY.containing_type = _TABLET
@@ -975,7 +916,6 @@ _KEYSPACE.fields_by_name['served_froms'].message_type = _KEYSPACE_SERVEDFROM
_SHARDREPLICATION_NODE.fields_by_name['tablet_alias'].message_type = _TABLETALIAS
_SHARDREPLICATION_NODE.containing_type = _SHARDREPLICATION
_SHARDREPLICATION.fields_by_name['nodes'].message_type = _SHARDREPLICATION_NODE
_SRVSHARD.fields_by_name['key_range'].message_type = _KEYRANGE
_SHARDREFERENCE.fields_by_name['key_range'].message_type = _KEYRANGE
_SRVKEYSPACE_KEYSPACEPARTITION.fields_by_name['served_type'].enum_type = _TABLETTYPE
_SRVKEYSPACE_KEYSPACEPARTITION.fields_by_name['shard_references'].message_type = _SHARDREFERENCE
@@ -991,7 +931,6 @@ DESCRIPTOR.message_types_by_name['Tablet'] = _TABLET
DESCRIPTOR.message_types_by_name['Shard'] = _SHARD
DESCRIPTOR.message_types_by_name['Keyspace'] = _KEYSPACE
DESCRIPTOR.message_types_by_name['ShardReplication'] = _SHARDREPLICATION
DESCRIPTOR.message_types_by_name['SrvShard'] = _SRVSHARD
DESCRIPTOR.message_types_by_name['ShardReference'] = _SHARDREFERENCE
DESCRIPTOR.message_types_by_name['SrvKeyspace'] = _SRVKEYSPACE
DESCRIPTOR.enum_types_by_name['KeyspaceIdType'] = _KEYSPACEIDTYPE
@@ -1095,13 +1034,6 @@ ShardReplication = _reflection.GeneratedProtocolMessageType('ShardReplication',
_sym_db.RegisterMessage(ShardReplication)
_sym_db.RegisterMessage(ShardReplication.Node)
SrvShard = _reflection.GeneratedProtocolMessageType('SrvShard', (_message.Message,), dict(
DESCRIPTOR = _SRVSHARD,
__module__ = 'topodata_pb2'
# @@protoc_insertion_point(class_scope:topodata.SrvShard)
))
_sym_db.RegisterMessage(SrvShard)
ShardReference = _reflection.GeneratedProtocolMessageType('ShardReference', (_message.Message,), dict(
DESCRIPTOR = _SHARDREFERENCE,
__module__ = 'topodata_pb2'

View file

@@ -51,7 +51,6 @@ def setUpModule():
src_replica.init_tablet('replica', 'test_keyspace', '0')
src_rdonly.init_tablet('rdonly', 'test_keyspace', '0')
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/0'])
utils.validate_topology()
for t in [src_master, src_replica, src_rdonly]:

View file

@@ -57,19 +57,17 @@ class K8sEnvironment(base_environment.BaseEnvironment):
'Invalid environment, no keyspaces found')
self.num_shards = []
self.shards = []
for keyspace in self.keyspaces:
keyspace_info = json.loads(self.vtctl_helper.execute_vtctl_command(
['GetKeyspace', keyspace]))
if not keyspace_info:
self.num_shards.append(1)
else:
self.num_shards.append(keyspace_info['split_shard_count'])
shards = json.loads(self.vtctl_helper.execute_vtctl_command(
['FindAllShardsInKeyspace', keyspace]))
self.shards.append(shards)
self.num_shards.append(len(shards))
# This assumes that all keyspaces use the same set of cells
# This assumes that all keyspaces/shards use the same set of cells
self.cells = json.loads(self.vtctl_helper.execute_vtctl_command(
['GetShard', '%s/%s' % (
self.keyspaces[0], utils.get_shard_name(0, self.num_shards[0]))]
['GetShard', '%s/%s' % (self.keyspaces[0], self.shards[0][0])]
))['cells']
self.primary_cells = self.cells
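
The new logic above derives everything from the actual shard list instead of the dropped `split_shard_count` field. A condensed sketch of that flow, using the same harness calls as the surrounding code (`vtctl_helper.execute_vtctl_command` is assumed to return JSON text, as it does here):

``` python
import json

def load_shard_layout(vtctl_helper, keyspaces):
    """Collect per-keyspace shard info; the shard count is just len(shards)."""
    shards, num_shards = [], []
    for keyspace in keyspaces:
        ks_shards = json.loads(vtctl_helper.execute_vtctl_command(
            ['FindAllShardsInKeyspace', keyspace]))
        shards.append(ks_shards)
        num_shards.append(len(ks_shards))
    return shards, num_shards
```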

View file

@ -271,7 +271,6 @@ class TestKeyspace(unittest.TestCase):
# Create the serving/replication entries and check that they exist,
# so we can later check they're deleted.
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_delete_keyspace'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/0'])
utils.run_vtctl(
['GetShardReplication', 'test_nj', 'test_delete_keyspace/0'])
utils.run_vtctl(['GetSrvKeyspace', 'test_nj', 'test_delete_keyspace'])
@ -307,8 +306,6 @@ class TestKeyspace(unittest.TestCase):
# Create the serving/replication entries and check that they exist,
# so we can later check they're deleted.
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_delete_keyspace'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/0'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/1'])
utils.run_vtctl(
['GetShardReplication', 'test_nj', 'test_delete_keyspace/0'])
utils.run_vtctl(
@ -320,7 +317,6 @@ class TestKeyspace(unittest.TestCase):
utils.run_vtctl(
['RemoveShardCell', '-recursive', 'test_delete_keyspace/0', 'test_nj'])
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_delete_keyspace'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/0'])
utils.run_vtctl(['GetKeyspace', 'test_delete_keyspace'])
utils.run_vtctl(['GetShard', 'test_delete_keyspace/0'])
@ -341,7 +337,6 @@ class TestKeyspace(unittest.TestCase):
['InitTablet', '-port=1234', '-keyspace=test_delete_keyspace',
'-shard=0', 'test_nj-0000000100', 'replica'])
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_delete_keyspace'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/0'])
utils.run_vtctl(
['GetShardReplication', 'test_nj', 'test_delete_keyspace/0'])
@ -350,8 +345,6 @@ class TestKeyspace(unittest.TestCase):
['RemoveKeyspaceCell', '-recursive', 'test_delete_keyspace',
'test_nj'])
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_delete_keyspace'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/0'])
utils.run_vtctl(['RebuildShardGraph', 'test_delete_keyspace/1'])
utils.run_vtctl(
['GetShardReplication', 'test_ca', 'test_delete_keyspace/0'])

View file

@ -245,7 +245,6 @@ index by_msg (msg)
utils.run_vtctl(['CreateKeyspace',
'--sharding_column_name', 'custom_sharding_key',
'--sharding_column_type', keyspace_id_type,
'--split_shard_count', '4',
'test_keyspace'])
shard_0_master.init_tablet('master', 'test_keyspace', '-40')
@ -261,7 +260,7 @@ index by_msg (msg)
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_keyspace'], auto_log=True)
ks = utils.run_vtctl_json(['GetSrvKeyspace', 'test_nj', 'test_keyspace'])
self.assertEqual(ks['split_shard_count'], 4)
self.assertEqual(ks['sharding_column_name'], 'custom_sharding_key')
# create databases so vttablet can start behaving normally
for t in [shard_0_master, shard_0_replica, shard_0_rdonly,

View file

@ -146,9 +146,6 @@ class TestReparent(unittest.TestCase):
for t in [tablet_62044, tablet_41983, tablet_31981]:
t.wait_for_vttablet_state('NOT_SERVING')
# Recompute the shard layout node - until you do that, it might not be
# valid.
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/0'])
utils.validate_topology()
# Force the slaves to reparent assuming that all the datasets are
@ -221,9 +218,6 @@ class TestReparent(unittest.TestCase):
shard['cells'], ['test_nj', 'test_ny'],
'wrong list of cell in Shard: %s' % str(shard['cells']))
# Recompute the shard layout node - until you do that, it might not be
# valid.
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/' + shard_id])
utils.validate_topology()
# Force the slaves to reparent assuming that all the datasets are
@ -286,9 +280,6 @@ class TestReparent(unittest.TestCase):
self.assertEqual(shard['cells'], ['test_nj', 'test_ny'],
'wrong list of cell in Shard: %s' % str(shard['cells']))
# Recompute the shard layout node - until you do that, it might not be
# valid.
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/' + shard_id])
utils.validate_topology()
# Force the slaves to reparent assuming that all the datasets are
@ -367,9 +358,6 @@ class TestReparent(unittest.TestCase):
for t in [tablet_62044, tablet_41983, tablet_31981]:
t.wait_for_vttablet_state('NOT_SERVING')
# Recompute the shard layout node - until you do that, it might not be
# valid.
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/' + shard_id])
utils.validate_topology()
# Force the slaves to reparent assuming that all the datasets are
@ -510,7 +498,7 @@ class TestReparent(unittest.TestCase):
# make sure the master status page says it's the master
tablet_62044_master_status = tablet_62044.get_status()
self.assertIn('Serving graph: test_keyspace 0 master',
self.assertIn('Keyspace: test_keyspace Shard: 0 Tablet Type: MASTER',
tablet_62044_master_status)
# make sure the master health stream says it's the master too
@ -551,9 +539,6 @@ class TestReparent(unittest.TestCase):
for t in [tablet_62044, tablet_31981, tablet_41983]:
t.wait_for_vttablet_state('NOT_SERVING')
# Recompute the shard layout node - until you do that, it might not be
# valid.
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/' + shard_id])
utils.validate_topology()
# Force the slaves to reparent assuming that all the datasets are identical.
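
Besides dropping the `RebuildShardGraph` steps, this file also tracks a status-page wording change: the old `Serving graph: ...` line is replaced by an explicit keyspace/shard/type line. A check against the new form (a sketch using the same `get_status()` harness call as above):

``` python
status = tablet_62044.get_status()
# The page now reports the tablet's position directly instead of a
# serving-graph line.
assert 'Keyspace: test_keyspace Shard: 0 Tablet Type: MASTER' in status
```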

View file

@ -389,12 +389,10 @@ primary key (name)
utils.run_vtctl(['CreateKeyspace',
'--sharding_column_name', 'bad_column',
'--sharding_column_type', 'bytes',
'--split_shard_count', '2',
'test_keyspace'])
utils.run_vtctl(['SetKeyspaceShardingInfo', 'test_keyspace',
'custom_sharding_key', 'uint64'], expect_fail=True)
utils.run_vtctl(['SetKeyspaceShardingInfo',
'-force', '-split_shard_count', '4',
utils.run_vtctl(['SetKeyspaceShardingInfo', '-force',
'test_keyspace', 'custom_sharding_key', keyspace_id_type])
shard_0_master.init_tablet('master', 'test_keyspace', '-80')
@ -409,7 +407,7 @@ primary key (name)
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_keyspace'], auto_log=True)
ks = utils.run_vtctl_json(['GetSrvKeyspace', 'test_nj', 'test_keyspace'])
self.assertEqual(ks['split_shard_count'], 4)
self.assertEqual(ks['sharding_column_name'], 'custom_sharding_key')
# we set full_mycnf_args to True as a test in the KIT_BYTES case
full_mycnf_args = keyspace_id_type == keyrange_constants.KIT_BYTES
@ -438,6 +436,12 @@ primary key (name)
utils.run_vtctl(['InitShardMaster', 'test_keyspace/80-',
shard_1_master.tablet_alias], auto_log=True)
# check the shards
shards = utils.run_vtctl_json(['FindAllShardsInKeyspace', 'test_keyspace'])
self.assertIn('-80', shards, 'unexpected shards: %s' % str(shards))
self.assertIn('80-', shards, 'unexpected shards: %s' % str(shards))
self.assertEqual(len(shards), 2, 'unexpected shards: %s' % str(shards))
# create the tables
self._create_schema()
self._insert_startup_values()
@ -474,6 +478,12 @@ primary key (name)
utils.run_vtctl(['InitShardMaster', 'test_keyspace/c0-',
shard_3_master.tablet_alias], auto_log=True)
# check the shards
shards = utils.run_vtctl_json(['FindAllShardsInKeyspace', 'test_keyspace'])
for s in ['-80', '80-', '80-c0', 'c0-']:
self.assertIn(s, shards, 'unexpected shards: %s' % str(shards))
self.assertEqual(len(shards), 4, 'unexpected shards: %s' % str(shards))
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_keyspace'],
auto_log=True)
utils.check_srv_keyspace(
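
Both new shard checks in this file follow the same pattern against `FindAllShardsInKeyspace`, whose JSON output is keyed by shard name (which is why `assertIn` and `len` both work on it). If the pattern spreads, it could be factored into a helper along these lines (a sketch only; the helper name is hypothetical):

``` python
def assert_shards(test, keyspace, expected):
    # FindAllShardsInKeyspace returns a JSON object keyed by shard name.
    shards = utils.run_vtctl_json(['FindAllShardsInKeyspace', keyspace])
    for name in expected:
        test.assertIn(name, shards, 'unexpected shards: %s' % str(shards))
    test.assertEqual(len(shards), len(expected),
                     'unexpected shards: %s' % str(shards))
```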

View file

@ -79,8 +79,7 @@ class TestTabletManager(unittest.TestCase):
utils.run_vtctl(['CreateKeyspace', '-force', 'test_keyspace'])
utils.run_vtctl(['createshard', '-force', 'test_keyspace/0'])
tablet_62344.init_tablet('master', 'test_keyspace', '0', parent=False)
utils.run_vtctl(
['RebuildKeyspaceGraph', '-rebuild_srv_shards', 'test_keyspace'])
utils.run_vtctl(['RebuildKeyspaceGraph', 'test_keyspace'])
utils.validate_topology()
# if these statements don't run before the tablet it will wedge
@ -145,7 +144,6 @@ class TestTabletManager(unittest.TestCase):
utils.run_vtctl(['CreateKeyspace', 'test_keyspace'])
tablet_62344.init_tablet('master', 'test_keyspace', '0')
utils.run_vtctl(['RebuildShardGraph', 'test_keyspace/0'])
utils.validate_topology()
tablet_62344.create_db('vt_test_keyspace')
tablet_62344.start_vttablet()

View file

@ -27,6 +27,7 @@
<li style="padding-bottom: 0"><a href="/user-guide/sharding-kubernetes.html">Sharding in Kubernetes (Codelab)</a></li>
</ul>
</li>
<li><a href="/user-guide/vitess-replication.html">Vitess and Replication</a></li>
<li><a href="/user-guide/transport-security-model.html">Transport Security Model</a></li>
</ul>
</li>

View file

@ -0,0 +1,16 @@
---
layout: doc
title: "Vitess and Replication"
description:
modified:
excerpt:
tags: []
image:
feature:
teaser:
thumb:
toc: true
share: false
---
{% include doc/VitessReplication.md %}