Merge pull request #8600 from planetscale/rn-inclusive-naming-2

Inclusive Naming: repo links, comments, and a couple of test files
This commit is contained in:
Deepthi Sigireddi 2021-08-09 14:08:14 -07:00 committed by GitHub
Parents 3a1939cfc7 46c72acfac
Commit 5c7ee04b64
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
189 changed files with 1,062 additions and 938 deletions


@ -59,7 +59,7 @@ New committers can be nominated by any existing committer. Once they have been n
Nominees may decline their appointment as a committer. However, this is unusual, as the project does not expect any specific time or resource commitment from its community members. The intention behind the role of committer is to allow people to contribute to the project more easily, not to tie them in to the project in any formal way.
It is important to recognise that commitership is a privilege, not a right. That privilege must be earned and once earned it can be removed by the PMC for conduct inconsistent with the [Guiding Principles](https://github.com/vitessio/vitess/blob/master/GUIDING_PRINCIPLES.md) or if they drop below a level of commitment and engagement required to be a Committer, as determined by the PMC. The PMC also reserves the right to remove a person for any other reason inconsistent with the goals of Vitess.
It is important to recognise that commitership is a privilege, not a right. That privilege must be earned and once earned it can be removed by the PMC for conduct inconsistent with the [Guiding Principles](https://github.com/vitessio/vitess/blob/main/GUIDING_PRINCIPLES.md) or if they drop below a level of commitment and engagement required to be a Committer, as determined by the PMC. The PMC also reserves the right to remove a person for any other reason inconsistent with the goals of Vitess.
A committer who shows an above-average level of contribution to the project, particularly with respect to its strategic direction and long-term health, may be nominated to become a member of the PMC. This role is described below.
@ -83,7 +83,7 @@ Membership of the PMC is by invitation from the existing PMC members. A nominati
The number of PMC members should be limited to 7. This number is chosen to ensure that sufficient points of view are represented, while preserving the efficiency of the decision making process.
The PMC is responsible for maintaining the [Guiding Principles](https://github.com/vitessio/vitess/blob/master/GUIDING_PRINCIPLES.md) and the code of conduct. It is also responsible for ensuring that those rules and principles are followed.
The PMC is responsible for maintaining the [Guiding Principles](https://github.com/vitessio/vitess/blob/main/GUIDING_PRINCIPLES.md) and the code of conduct. It is also responsible for ensuring that those rules and principles are followed.
## PMC Chair
@ -106,7 +106,7 @@ The Slack channel list is the most appropriate place for a contributor to ask fo
Decisions about the future of the project are made by the PMC. New proposals and ideas can be brought to the PMC's attention through the Slack channel or by filing an issue. If necessary, the PMC will seek input from others to come to the final decision.
The PMC's decision is itself governed by the project's [Guiding Principles](https://github.com/vitessio/vitess/blob/master/GUIDING_PRINCIPLES.md), which shall be used to reach consensus. If a consensus cannot be reached, a simple majority voting process will be used to reach resolution. In case of a tie, the PMC chair has the casting vote.
The PMC's decision is itself governed by the project's [Guiding Principles](https://github.com/vitessio/vitess/blob/main/GUIDING_PRINCIPLES.md), which shall be used to reach consensus. If a consensus cannot be reached, a simple majority voting process will be used to reach resolution. In case of a tie, the PMC chair has the casting vote.
# Credits
The contents of this document are based on http://oss-watch.ac.uk/resources/meritocraticgovernancemodel by Ross Gardler and Gabriel Hanganu.


@ -24,4 +24,4 @@ Vitess is driven by high technical standards, and these must be maintained. It i
* Diversity
* Inclusiveness
* Openness
* Adherence to the [Code of Conduct](https://github.com/vitessio/vitess/blob/master/CODE_OF_CONDUCT.md)
* Adherence to the [Code of Conduct](https://github.com/vitessio/vitess/blob/main/CODE_OF_CONDUCT.md)


@ -22,10 +22,10 @@ since 2011, and has grown to encompass tens of thousands of MySQL nodes.
For more about Vitess, please visit [vitess.io](https://vitess.io).
Vitess has a growing community. You can view the list of adopters
[here](https://github.com/vitessio/vitess/blob/master/ADOPTERS.md).
[here](https://github.com/vitessio/vitess/blob/main/ADOPTERS.md).
## Reporting a Problem, Issue, or Bug
To report a problem, the best way to get attention is to create a GitHub [issue](https://github.com/vitessio/vitess/issues) using the proper severity level based on this [guide](https://github.com/vitessio/vitess/blob/master/SEVERITY.md).
To report a problem, the best way to get attention is to create a GitHub [issue](https://github.com/vitessio/vitess/issues) using the proper severity level based on this [guide](https://github.com/vitessio/vitess/blob/main/SEVERITY.md).
For topics that are better discussed live, please join the [Vitess Slack](https://vitess.io/slack) workspace.
You may post any questions on the #general channel or join some of the special-interest channels.


@ -33,11 +33,11 @@ score >= 4; see below). If the fix relies on another upstream project's disclosu
will adjust the process as well. We will work with the upstream project to fit their timeline and
best protect our users.
#### Policy for master-only vulnerabilities
#### Policy for main-only vulnerabilities
If a security vulnerability affects master, but not a currently supported branch, then the following process will apply:
If a security vulnerability affects main, but not a currently supported branch, then the following process will apply:
* The fix will land in master.
* The fix will land in main.
* A courtesy notice will be posted in #developers on Vitess Slack.
#### Policy for unsupported releases


@ -1,10 +1,10 @@
By default, the [Helm Charts](https://github.com/vitessio/vitess/tree/master/helm)
By default, the [Helm Charts](https://github.com/vitessio/vitess/tree/main/helm)
point to the `vitess/lite` image on [Docker Hub](https://hub.docker.com/u/vitess/).
We created the `lite` image as a stripped down version of our main image `base` such that Kubernetes pods can start faster.
The `lite` image does not change very often and is updated manually by the Vitess team with every release.
In contrast, the `base` image is updated automatically after every push to the GitHub master branch.
For more information on the different images we provide, please read the [`docker/README.md`](https://github.com/vitessio/vitess/tree/master/docker) file.
For more information on the different images we provide, please read the [`docker/README.md`](https://github.com/vitessio/vitess/tree/main/docker) file.
If your goal is to run the latest Vitess code, the simplest solution is to use the bigger `base` image instead of `lite`.
@ -22,9 +22,9 @@ Then you can run our build script for the `lite` image which extracts the Vitess
1. Go to your `src/vitess.io/vitess` directory.
1. Usually, you won't need to [build your own bootstrap image](https://github.com/vitessio/vitess/blob/master/docker/bootstrap/README.md)
unless you edit [bootstrap.sh](https://github.com/vitessio/vitess/blob/master/bootstrap.sh)
or [vendor.json](https://github.com/vitessio/vitess/blob/master/vendor/vendor.json),
1. Usually, you won't need to [build your own bootstrap image](https://github.com/vitessio/vitess/blob/main/docker/bootstrap/README.md)
unless you edit [bootstrap.sh](https://github.com/vitessio/vitess/blob/main/bootstrap.sh)
or [vendor.json](https://github.com/vitessio/vitess/blob/main/vendor/vendor.json),
for example to add new dependencies. If you do need it then build the
bootstrap image, otherwise pull the image using one of the following
commands depending on the MySQL flavor you want:


@ -11,7 +11,7 @@ Life of A Query
A query is a request for information from the database, and in the case of Vitess it involves four components: the client application, VtGate, VtTablet and the MySQL instance. This doc explains the interactions that happen between and within these components.
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/life_of_a_query.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/life_of_a_query.png)
At a very high level, as the graph shows, first the client sends a query to VtGate. VtGate then resolves the query and routes it to the right VtTablets. For each VtTablet that receives the query, it does necessary validations and passes the query to the underlying MySQL instance. After gathering results from MySQL, VtTablet sends the response back to VtGate. Once VtGate receives responses from all VtTablets, it sends the combined result to the client. In the presence of VtTablet errors, VtGate will retry the query if errors are recoverable and it only fails the query if either errors are unrecoverable or the maximum number of retries has been reached.
@ -19,13 +19,13 @@ At a very high level, as the graph shows, first the client sends a query to VtGa
A client application first sends an rpc with an embedded sql query to VtGate. VtGate's rpc server unmarshals this rpc request, calls the appropriate VtGate method and returns its result back to the client.
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/life_of_a_query_client_to_vtgate.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/life_of_a_query_client_to_vtgate.png)
VtGate keeps an in-memory table that stores all available rpc methods for each service, e.g. VtGate uses "VTGate" as its service name and most of its methods defined in [go/vt/vtgate/vtgate.go](../go/vt/vtgate/vtgate.go) are used to serve rpc requests.
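To make the client-to-VtGate hop concrete, here is a minimal sketch using the Go SQL driver that this commit also touches (`vitess.io/vitess/go/vt/vitessdriver`). The address, target and table names are placeholders borrowed from the example client elsewhere in this diff; the point is only that an ordinary `database/sql` call is marshalled by the driver into a VTGate Execute RPC.

```go
package main

import (
	"fmt"
	"log"

	"vitess.io/vitess/go/vt/vitessdriver"
)

func main() {
	// Placeholder vtgate gRPC address; "@replica" targets replica tablets,
	// exactly as in the example client changed elsewhere in this commit.
	db, err := vitessdriver.Open("localhost:15991", "@replica")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// This travels to vtgate as an Execute RPC; vtgate plans the query,
	// routes it to the right tablets and merges the per-shard results.
	rows, err := db.Query("SELECT page, time_created_ns, message FROM messages")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var page, createdNs int64
		var message string
		if err := rows.Scan(&page, &createdNs, &message); err != nil {
			log.Fatal(err)
		}
		fmt.Println(page, createdNs, message)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```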
## From VtGate to VtTablet
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/life_of_a_query_vtgate_to_vttablet.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/life_of_a_query_vtgate_to_vttablet.png)
After receiving an rpc call from the client and having one of its Execute* methods invoked, VtGate needs to figure out which shards should receive the query and send it to each of them. In addition, VtGate talks to the topo server to get the necessary information to create a VtTablet connection for each shard. At this point, VtGate is able to send the query to the right VtTablets in parallel. VtGate also retries if a timeout happens or some VtTablets return recoverable errors.
@ -35,13 +35,13 @@ A ShardConn object represents a load balanced connection to a group of VtTablets
## From VtTablet to MySQL
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/life_of_a_query_vttablet_to_mysql.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/life_of_a_query_vttablet_to_mysql.png)
Once VtTablet receives an rpc call from VtGate, it does a few checks before passing the query to MySQL. First, it validates the current VtTablet state including the session id, then generates a query plan, applies predefined query rules and does ACL checks. It also checks whether the query hits the row cache and returns the result immediately if so. In addition, VtTablet consolidates duplicate queries that would otherwise execute simultaneously and shares results between them. At this point, VtTablet has no choice but to pass the query down to the MySQL layer and wait for the result.
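The consolidation mentioned above is essentially request deduplication: identical queries that arrive while one is already executing wait for, and share, that single result. A rough sketch of the idea using `golang.org/x/sync/singleflight` (purely illustrative; this is not VTTablet's actual consolidator code):

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// runQuery stands in for the expensive round trip to MySQL.
func runQuery(sql string) (interface{}, error) {
	return "rows for: " + sql, nil
}

func main() {
	var group singleflight.Group
	var wg sync.WaitGroup

	// Ten concurrent callers issue the same query; singleflight collapses
	// them into one execution and hands everyone the same result, which is
	// what consolidation does for identical in-flight queries.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			result, _, shared := group.Do("SELECT * FROM messages", func() (interface{}, error) {
				return runQuery("SELECT * FROM messages")
			})
			fmt.Println(result, "shared:", shared)
		}()
	}
	wg.Wait()
}
```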
## Putting it all together
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/life_of_a_query_all.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/life_of_a_query_all.png)
## TopoServer


@ -21,11 +21,11 @@ A boolean flag controlling whether the replication-lag-based throttling is enabl
* *tx-throttler-config*
A text-format representation of the [throttlerdata.Configuration](https://github.com/vitessio/vitess/blob/master/proto/throttlerdata.proto) protocol buffer
A text-format representation of the [throttlerdata.Configuration](https://github.com/vitessio/vitess/blob/main/proto/throttlerdata.proto) protocol buffer
that contains configuration options for the throttler.
The most important fields in that message are *target_replication_lag_sec* and
*max_replication_lag_sec* that specify the desired limits on the replication lag. See the comments in the protocol definition file for more details.
If this is not specified a [default](https://github.com/vitessio/vitess/tree/master/go/vt/vttablet/tabletserver/tabletenv/config.go) configuration will be used.
If this is not specified a [default](https://github.com/vitessio/vitess/tree/main/go/vt/vttablet/tabletserver/tabletenv/config.go) configuration will be used.
* *tx-throttler-healthcheck-cells*


@ -106,7 +106,7 @@ For #1 and #2, the Rollback workflow is initiated. For #3, the commit is resumed
The following diagram illustrates the life-cycle of a Vitess transaction.
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/TxLifecycle.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/TxLifecycle.png)
A transaction generally starts off as a single DB transaction. It becomes a distributed transaction as soon as more than one VTTablet is affected. If the app issues a rollback, then all participants are simply rolled back. If a BEC is issued, then all transactions are individually committed. These actions are the same irrespective of single or distributed transactions.
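Stated as code, the decision described above looks roughly like the following. This is an illustrative sketch only; the names (`participant`, `commitTransaction`) are invented here and do not correspond to the real VTGate transaction code. BEC is the best-effort commit mentioned in the text.

```go
package example

import "fmt"

// participant is a stand-in for a transaction open on one VTTablet.
type participant interface {
	Commit() error
	Prepare() error
	CommitPrepared() error
	RollbackPrepared() error
}

// commitTransaction sketches the life-cycle decision: a single participant is
// an ordinary commit; multiple participants are either committed one by one
// (best effort) or taken through prepare-then-commit (2PC).
func commitTransaction(participants []participant, use2PC bool) error {
	if len(participants) == 1 {
		return participants[0].Commit()
	}
	if !use2PC {
		// Best-effort commit: each participant commits independently, so a
		// failure part-way through can leave shards inconsistent.
		for _, p := range participants {
			if err := p.Commit(); err != nil {
				return err
			}
		}
		return nil
	}
	// 2PC: prepare everywhere first; only then commit the prepared work.
	for _, p := range participants {
		if err := p.Prepare(); err != nil {
			for _, q := range participants {
				_ = q.RollbackPrepared() // undo whatever was prepared
			}
			return fmt.Errorf("prepare failed: %w", err)
		}
	}
	for _, p := range participants {
		if err := p.CommitPrepared(); err != nil {
			return err
		}
	}
	return nil
}
```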
@ -132,7 +132,7 @@ In order to make 2PC work, the following pieces of functionality have to be buil
The diagram below shows how the various components interact.
![](https://raw.githubusercontent.com/vitessio/vitess/master/doc/TxInteractions.png)
![](https://raw.githubusercontent.com/vitessio/vitess/main/doc/TxInteractions.png)
The detailed design explains all the functionalities and interactions.


@ -6,7 +6,7 @@ The goal of this document is to describe the guiding principles that will be use
### Prerequisites
Before reading this doc you must be familiar with [vindexes](https://github.com/vitessio/vitess/blob/master/doc/V3VindexDesign.md), which is used as foundation for the arguments presented here.
Before reading this doc you must be familiar with [vindexes](https://github.com/vitessio/vitess/blob/main/doc/V3VindexDesign.md), which is used as foundation for the arguments presented here.
# Background
@ -1194,7 +1194,7 @@ The overall strategy is as follows:
In order to align ourselves with our priorities, we'll start off with a limited set of primitives, and then we can expand from there.
VTGate already has `Route` and `RouteMerge` as primitives. To this list, let's add `Join` and `LeftJoin`. Using these primitives, we should be able to cover priorities 1-3 (mentioned in the [Prioritization](https://github.com/vitessio/vitess/blob/master/doc/V3HighLevelDesign.md#prioritization) section). So, any constructs that will require VTGate to do additional work will not be supported. Here's a recap of what each primitive must do:
VTGate already has `Route` and `RouteMerge` as primitives. To this list, let's add `Join` and `LeftJoin`. Using these primitives, we should be able to cover priorities 1-3 (mentioned in the [Prioritization](https://github.com/vitessio/vitess/blob/main/doc/V3HighLevelDesign.md#prioritization) section). So, any constructs that will require VTGate to do additional work will not be supported. Here's a recap of what each primitive must do:
* `Route`: Sends a query to a single shard or unsharded keyspace.
* `RouteMerge`: Sends a (mostly) identical query to multiple shards and returns the combined results in no particular order.
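As a rough illustration of the scatter/gather behaviour of `RouteMerge`, here is a simplified sketch with invented types (`shardConn`, `routeMerge`); it is not the actual vtgate engine code.

```go
package example

// shardConn is a stand-in for a connection to one shard's tablet.
type shardConn interface {
	Execute(query string) ([][]string, error)
}

// routeMerge sends the same query to every target shard and concatenates the
// rows in no particular order, which is the contract described above. Field
// metadata, parallelism and error aggregation are omitted for brevity.
func routeMerge(shards []shardConn, query string) ([][]string, error) {
	var result [][]string
	for _, shard := range shards {
		rows, err := shard.Execute(query)
		if err != nil {
			return nil, err
		}
		result = append(result, rows...)
	}
	return result, nil
}
```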


@ -2,7 +2,7 @@
# Introduction
This document builds on top of [The V3 high level design](https://github.com/vitessio/vitess/blob/master/doc/V3HighLevelDesign.md). It discusses implementation of subquery support in greater detail.
This document builds on top of [The V3 high level design](https://github.com/vitessio/vitess/blob/main/doc/V3HighLevelDesign.md). It discusses implementation of subquery support in greater detail.


@ -34,7 +34,7 @@ there are some additional benefits:
underneath without changing much of the app.
The
[V3 design](https://github.com/vitessio/vitess/blob/master/doc/V3VindexDesign.md)
[V3 design](https://github.com/vitessio/vitess/blob/main/doc/V3VindexDesign.md)
is quite elaborate. If necessary, it will allow you to plug in custom indexes
and sharding schemes. However, it comes equipped with some pre-cooked recipes
that satisfy the immediate needs of the real-world:


@ -10,7 +10,7 @@ One can think of a vindex as a table that looks like this:
create my_vdx(id int, keyspace_id varbinary(255)) // id can be of any type.
```
Looking at the vindex interface defined [here](https://github.com/vitessio/vitess/blob/master/go/vt/vtgate/vindexes/vindex.go), we can come up with SQL syntax that represents them:
Looking at the vindex interface defined [here](https://github.com/vitessio/vitess/blob/main/go/vt/vtgate/vindexes/vindex.go), we can come up with SQL syntax that represents them:
* Map: `select id, keyspace_id from my_vdx where id = :id`.
* Create: `insert into my_vdx values(:id, :keyspace_id)`.
* Delete: `delete from my_vdx where id = :id and keyspace_id = :keyspace_id`.
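The "vindex as a table" analogy can be sketched as a toy in-memory lookup. This is only an illustration of the Map/Create/Delete operations above, not the real `Vindex` interface from `go/vt/vtgate/vindexes`, which backs the table with MySQL using the SQL shown above.

```go
package example

import "errors"

// lookupVindex is a toy id -> keyspace_id table kept in memory.
type lookupVindex struct {
	rows map[int][]byte
}

func newLookupVindex() *lookupVindex {
	return &lookupVindex{rows: make(map[int][]byte)}
}

// Map answers: which keyspace_id owns this id? (the SELECT above)
func (v *lookupVindex) Map(id int) ([]byte, error) {
	ksid, ok := v.rows[id]
	if !ok {
		return nil, errors.New("id not found")
	}
	return ksid, nil
}

// Create records a new id -> keyspace_id row (the INSERT above).
func (v *lookupVindex) Create(id int, keyspaceID []byte) {
	v.rows[id] = keyspaceID
}

// Delete removes the row (the DELETE above); a real implementation would
// also check that the stored keyspace_id matches.
func (v *lookupVindex) Delete(id int, keyspaceID []byte) {
	delete(v.rows, id)
}
```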


@ -70,4 +70,4 @@ data is stored only once, and fetched only if needed.
The following diagram illustrates where vitess fits in the spectrum of storage solutions:
![Spectrum](https://raw.github.com/vitessio/vitess/master/doc/VitessSpectrum.png)
![Spectrum](https://raw.github.com/vitessio/vitess/main/doc/VitessSpectrum.png)


@ -13,7 +13,7 @@ backward-incompatible way -- for example, when removing deprecated interfaces.
Our public API includes (but is not limited to):
* The VTGate [RPC interfaces](https://github.com/vitessio/vitess/tree/master/proto).
* The VTGate [RPC interfaces](https://github.com/vitessio/vitess/tree/main/proto).
* The interfaces exposed by the VTGate client library in each language.
Care must also be taken when changing the format of any data stored by a live


@ -107,7 +107,7 @@ If a scatter query is attempting to collect and process too many rows in memory
### Set Statement Support
Set statement support is added in Vitess. There are [some system variables](https://github.com/vitessio/vitess/blob/master/go/vt/sysvars/sysvars.go#L147,L190) which are disabled by default and can be enabled using the flag `-enable_system_settings` on VTGate. These system variables are set on the backing MySQL instance, and will force the connection to be dedicated instead of part of the connection pool.
Set statement support is added in Vitess. There are [some system variables](https://github.com/vitessio/vitess/blob/main/go/vt/sysvars/sysvars.go#L147,L190) which are disabled by default and can be enabled using the flag `-enable_system_settings` on VTGate. These system variables are set on the backing MySQL instance, and will force the connection to be dedicated instead of part of the connection pool.
* Disabled passthrough system variables by default. #6859
* Allow switching workload between OLAP and OLTP #4086 #6691
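To see what this means for an application, here is a hedged sketch of a client session against vtgate's MySQL protocol listener (the address, keyspace and credentials are placeholders). Once the session sets one of these variables, vtgate pins it to a dedicated connection instead of a pooled one, as described above.

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // vtgate speaks the MySQL protocol
)

func main() {
	ctx := context.Background()

	// Placeholder DSN pointing at vtgate's mysql listener.
	db, err := sql.Open("mysql", "user@tcp(127.0.0.1:15306)/commerce")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Use a single *sql.Conn so both statements run on the same session.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// With -enable_system_settings this is applied on the backing MySQL
	// server, and the session switches to a dedicated (reserved) connection.
	if _, err := conn.ExecContext(ctx, "set sql_mode = ''"); err != nil {
		log.Fatal(err)
	}
	if _, err := conn.ExecContext(ctx, "select 1 from dual"); err != nil {
		log.Fatal(err)
	}
}
```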


@ -100,7 +100,7 @@ Vitess 9.0 is not compatible with the previous release of the Vitess Kubernetes
### Set Statement Support
Set statement support has been added in Vitess. There are [some system variables](https://github.com/vitessio/vitess/blob/master/go/vt/sysvars/sysvars.go#L147,L190) which are disabled by default and can be enabled using flag `-enable_system_settings` on VTGate. These system variables are set on the mysql server. Because they change the mysql session, using them leads to the Vitess connection no longer using the connection pool and forcing dedicated connections.
Set statement support has been added in Vitess. There are [some system variables](https://github.com/vitessio/vitess/blob/main/go/vt/sysvars/sysvars.go#L147,L190) which are disabled by default and can be enabled using flag `-enable_system_settings` on VTGate. These system variables are set on the mysql server. Because they change the mysql session, using them leads to the Vitess connection no longer using the connection pool and forcing dedicated connections.
### VReplication


@ -1,11 +1,11 @@
# Vitess Docker Images
The Vitess Project publishes several Docker images in the [Docker Hub "vitess" repository](https://hub.docker.com/u/vitess/).
This file describes the purpose of the different images.
The Vitess Project publishes several Docker images in
the [Docker Hub "vitess" repository](https://hub.docker.com/u/vitess/). This file describes the purpose of the different
images.
**TL;DR:** Use the [vitess/lite](https://hub.docker.com/r/vitess/lite/) image for running Vitess.
Our Kubernetes Tutorial uses it as well.
Instead of using the `latest` tag, you can pin it to a known stable version e.g. `v4.0`.
**TL;DR:** Use the [vitess/lite](https://hub.docker.com/r/vitess/lite/) image for running Vitess. Our Kubernetes
Tutorial uses it as well. Instead of using the `latest` tag, you can pin it to a known stable version e.g. `v4.0`.
## Principles
@ -13,8 +13,10 @@ The structure of this directory and our Dockerfile files is guided by the follow
* The configuration of each Vitess image is in the directory `docker/<image>/`.
* Configurations for other images e.g. our internal tool Keytar (see below), can be in a different location.
* Images with more complex build steps have a `build.sh` script e.g. see [lite/build.sh](https://github.com/vitessio/vitess/blob/master/docker/lite/build.sh).
* Tags are used to provide (stable) versions e.g. see tag `v2.0` for the image [vitess/lite](https://hub.docker.com/r/vitess/lite/tags).
* Images with more complex build steps have a `build.sh` script e.g.
see [lite/build.sh](https://github.com/vitessio/vitess/blob/main/docker/lite/build.sh).
* Tags are used to provide (stable) versions e.g. see tag `v2.0` for the
image [vitess/lite](https://hub.docker.com/r/vitess/lite/tags).
* Where applicable, we provide a `latest` tag to reference the latest build of an image.
## Images
@ -29,14 +31,19 @@ Our list of images can be grouped into:
| Image | How (When) Updated | Description |
| --- | --- | --- |
| **bootstrap** | manual (after incompatible changes are made to [bootstrap.sh](https://github.com/vitessio/vitess/blob/master/bootstrap.sh) or [vendor/vendor.json](https://github.com/vitessio/vitess/blob/master/vendor/vendor.json) | Basis for all Vitess images. It is a snapshot of the checked out repository after running `./bootstrap.sh`. Used to cache dependencies. Avoids lengthy recompilation of dependencies if they did not change. Our internal test runner [`test.go`](https://github.com/vitessio/vitess/blob/master/test.go) uses it to test the code against different MySQL versions. |
| **base** | automatic (after every GitHub push to the master branch) | Contains all Vitess server binaries. Snapshot after running `make build`. |
| **root** | automatic (after every GitHub push to the master branch) | Same as **base** but with the default user set to "root". Required for Kubernetes. |
| **lite** | manual (updated with every Vitess release) | Stripped down version of **base** e.g. source code and build dependencies are removed. Default image in our Kubernetes templates for minimized startup time. |
| **bootstrap** | manual (after incompatible changes are made to [bootstrap.sh](https://github.com/vitessio/vitess/blob/main/bootstrap.sh) or [vendor/vendor.json](https://github.com/vitessio/vitess/blob/main/vendor/vendor.json) | Basis for all Vitess images. It is a snapshot of the checked out repository after running `./bootstrap.sh`. Used to cache dependencies. Avoids lengthy recompilation of dependencies if they did not change. Our internal test runner [`test.go`](https://github.com/vitessio/vitess/blob/master/test.go) uses it to test the code against different MySQL versions. |
| **base** | automatic (after every GitHub push to the master branch) | Contains all Vitess server binaries. Snapshot after running `make build`. |
| **root** | automatic (after every GitHub push to the master branch) | Same as **base** but with the default user set to "root". Required for Kubernetes. |
| **lite** | manual (updated with every Vitess release) | Stripped down version of **base** e.g. source code and build dependencies are removed. Default image in our Kubernetes templates for minimized startup time. |
All these Vitess images include a specific MySQL/MariaDB version ("flavor").
* We provide Dockerfile files for multiple flavors (`Dockerfile.<flavor>`).
* On Docker Hub we publish only images with MySQL 5.7 to minimize maintenance overhead and avoid confusion.
* We provide Dockerfile files for multiple flavors (`Dockerfile.<flavor>`).
* On Docker Hub we publish only images with MySQL 5.7 to minimize maintenance overhead and avoid confusion.
If you are looking for a stable version of Vitess, use the **lite** image with a fixed version. If you are looking for the latest Vitess code in binary form, use the "latest" tag of the **base** image.
If you are looking for a stable version of Vitess, use the **lite** image with a fixed version. If you are looking for
the latest Vitess code in binary form, use the "latest" tag of the **base** image.


@ -15,7 +15,7 @@ The `vitess/bootstrap` image comes in different flavors:
**NOTE: Unlike the base image that builds Vitess itself, this bootstrap image
will NOT be rebuilt automatically on every push to the Vitess master branch.**
To build a new bootstrap image, use the [build.sh](https://github.com/vitessio/vitess/blob/master/docker/bootstrap/build.sh)
To build a new bootstrap image, use the [build.sh](https://github.com/vitessio/vitess/blob/main/docker/bootstrap/build.sh)
script.
First build the `common` image, then any flavors you want. For example:


@ -72,7 +72,7 @@ func main() {
}
}
// Read it back from the master.
// Read it back from the primary.
fmt.Println("Reading from master...")
rows, err := db.Query("SELECT page, time_created_ns, message FROM messages")
if err != nil {
@ -94,7 +94,7 @@ func main() {
}
// Read from a replica.
// Note that this may be behind master due to replication lag.
// Note that this may be behind primary due to replication lag.
fmt.Println("Reading from replica...")
dbr, err := vitessdriver.Open(*server, "@replica")


@ -517,7 +517,7 @@ func applyShardPatches(
}
func generateDefaultShard(tabAlias int, shard string, keyspaceData keyspaceInfo, opts vtOptions) string {
aliases := []int{tabAlias + 1} // master alias, e.g. 201
aliases := []int{tabAlias + 1} // primary alias, e.g. 201
for i := 0; i < keyspaceData.replicaTablets; i++ {
aliases = append(aliases, tabAlias+2+i) // replica aliases, e.g. 202, 203, ...
}
@ -546,7 +546,7 @@ func generateExternalmaster(
opts vtOptions,
) string {
aliases := []int{tabAlias + 1} // master alias, e.g. 201
aliases := []int{tabAlias + 1} // primary alias, e.g. 201
for i := 0; i < keyspaceData.replicaTablets; i++ {
aliases = append(aliases, tabAlias+2+i) // replica aliases, e.g. 202, 203, ...
}


@ -319,7 +319,7 @@ func takeBackup(ctx context.Context, topoServer *topo.Server, backupStorage back
}
// Get the current primary replication position, and wait until we catch up
// to that point. We do this instead of looking at Seconds_Behind_Master
// to that point. We do this instead of looking at ReplicationLag
// because that value can
// sometimes lie and tell you there's 0 lag when actually replication is
// stopped. Also, if replication is making progress but is too slow to ever


@ -71,8 +71,8 @@ func (rts rTablets) Less(i, j int) bool {
return false
}
// the type proto has MASTER first, so sort by that. Will show
// the MASTER first, then each replica type sorted by
// the type proto has PRIMARY first, so sort by that. Will show
// the PRIMARY first, then each replica type sorted by
// replication position.
if l.Tablet.Type < r.Tablet.Type {
return true
@ -101,7 +101,7 @@ func (rts rTablets) Less(i, j int) bool {
//
// The sorting order is:
// 1. Tablets that do not have a replication Status.
// 2. Any tablets of type MASTER.
// 2. Any tablets of type PRIMARY.
// 3. Remaining tablets sorted by comparing replication positions.
func SortedReplicatingTablets(tabletMap map[string]*topodatapb.Tablet, replicationStatuses map[string]*replicationdatapb.Status) []*ReplicatingTablet {
rtablets := make([]*ReplicatingTablet, 0, len(tabletMap))


@ -92,7 +92,7 @@ type flavor interface {
setReplicationPositionCommands(pos Position) []string
// changeReplicationSourceArg returns the specific parameter to add to
// a "change master" command.
// a "change primary" command.
changeReplicationSourceArg() string
// status returns the result of the appropriate status command,


@ -149,7 +149,7 @@ func (shard *Shard) Rdonly() *Vttablet {
}
// Replica get the last but one tablet which is replica
// Mostly we have either 3 tablet setup [master, replica, rdonly]
// Mostly we have either 3 tablet setup [primary, replica, rdonly]
func (shard *Shard) Replica() *Vttablet {
for idx, tablet := range shard.Vttablets {
if tablet.Type == "replica" && idx > 0 {


@ -325,7 +325,7 @@ func TestShardCountForAllKeyspaces(t *testing.T) {
func testShardCountForKeyspace(t *testing.T, keyspace string, count int) {
srvKeyspace := getSrvKeyspace(t, cell, keyspace)
// for each served type MASTER REPLICA RDONLY, the shard ref count should match
// for each served type PRIMARY REPLICA RDONLY, the shard ref count should match
for _, partition := range srvKeyspace.Partitions {
if servedTypes[partition.ServedType] {
assert.Equal(t, len(partition.ShardReferences), count)
@ -342,7 +342,7 @@ func TestShardNameForAllKeyspaces(t *testing.T) {
func testShardNameForKeyspace(t *testing.T, keyspace string, shardNames []string) {
srvKeyspace := getSrvKeyspace(t, cell, keyspace)
// for each served type MASTER REPLICA RDONLY, the shard ref count should match
// for each served type PRIMARY REPLICA RDONLY, the shard ref count should match
for _, partition := range srvKeyspace.Partitions {
if servedTypes[partition.ServedType] {
for _, shardRef := range partition.ShardReferences {
@ -357,7 +357,7 @@ func TestKeyspaceToShardName(t *testing.T) {
var id []byte
srvKeyspace := getSrvKeyspace(t, cell, keyspaceShardedName)
// for each served type MASTER REPLICA RDONLY, the shard ref count should match
// for each served type PRIMARY REPLICA RDONLY, the shard ref count should match
for _, partition := range srvKeyspace.Partitions {
if partition.ServedType == topodata.TabletType_PRIMARY {
for _, shardRef := range partition.ShardReferences {


@ -94,7 +94,7 @@ func initCluster(shardNames []string, totalTabletsRequired int) {
MySQLPort: clusterInstance.GetAndReservePort(),
Alias: fmt.Sprintf("%s-%010d", clusterInstance.Cell, tabletUID),
}
if i == 0 { // Make the first one as master
if i == 0 { // Make the first one as primary
tablet.Type = "master"
}
// Start Mysqlctl process


@ -94,7 +94,7 @@ func initCluster(shardNames []string, totalTabletsRequired int) error {
MySQLPort: clusterInstance.GetAndReservePort(),
Alias: fmt.Sprintf("%s-%010d", clusterInstance.Cell, tabletUID),
}
if i == 0 { // Make the first one as master
if i == 0 { // Make the first one as primary
tablet.Type = "master"
}
// Start Mysqlctld process


@ -43,7 +43,7 @@ func TestMasterToSpareStateChangeImpossible(t *testing.T) {
setupReparentCluster(t)
defer teardownCluster()
// We cannot change a master to spare
// We cannot change a primary to spare
out, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("ChangeTabletType", tab1.Alias, "spare")
require.Error(t, err, out)
require.Contains(t, out, "type change PRIMARY -> SPARE is not an allowed transition for ChangeTabletType")


@ -203,7 +203,7 @@ func runHookAndAssert(t *testing.T, params []string, expectedStatus string, expe
}
func TestShardReplicationFix(t *testing.T) {
// make sure the replica is in the replication graph, 2 nodes: 1 master, 1 replica
// make sure the replica is in the replication graph, 2 nodes: 1 primary, 1 replica
defer cluster.PanicHandler(t)
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("GetShardReplication", cell, keyspaceShard)
require.Nil(t, err, "error should be Nil")


@ -44,7 +44,7 @@ func TestLockAndUnlock(t *testing.T) {
require.Nil(t, err)
defer replicaConn.Close()
// first make sure that our writes to the master make it to the replica
// first make sure that our writes to the primary make it to the replica
exec(t, conn, "delete from t1")
exec(t, conn, "insert into t1(id, value) values(1,'a'), (2,'b')")
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")]]`)
@ -52,7 +52,7 @@ func TestLockAndUnlock(t *testing.T) {
// now lock the replica
err = tmcLockTables(ctx, replicaTablet.GrpcPort)
require.Nil(t, err)
// make sure that writing to the master does not show up on the replica while locked
// make sure that writing to the primary does not show up on the replica while locked
exec(t, conn, "insert into t1(id, value) values(3,'c')")
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")]]`)
@ -139,7 +139,7 @@ func TestLockAndTimeout(t *testing.T) {
require.Nil(t, err)
defer replicaConn.Close()
// first make sure that our writes to the master make it to the replica
// first make sure that our writes to the primary make it to the replica
exec(t, masterConn, "insert into t1(id, value) values(1,'a')")
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")]]`)
@ -147,7 +147,7 @@ func TestLockAndTimeout(t *testing.T) {
err = tmcLockTables(ctx, replicaTablet.GrpcPort)
require.Nil(t, err)
// make sure that writing to the master does not show up on the replica while locked
// make sure that writing to the primary does not show up on the replica while locked
exec(t, masterConn, "insert into t1(id, value) values(2,'b')")
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")]]`)


@ -109,14 +109,14 @@ func TestTabletChange(t *testing.T) {
checkedExec(t, conn, "use @master")
checkedExec(t, conn, "set sql_mode = ''")
// this will create reserved connection on master on -80 and 80- shards.
// this will create reserved connection on primary on -80 and 80- shards.
checkedExec(t, conn, "select * from test")
// Change Master
err = clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "-keyspace_shard", fmt.Sprintf("%s/%s", keyspaceName, "-80"))
require.NoError(t, err)
// this should pass as there is new master tablet and is serving.
// this should pass as there is a new primary tablet and is serving.
_, err = exec(t, conn, "select * from test")
assert.NoError(t, err)
}


@ -97,7 +97,7 @@ func (t *HorizontalReshardingTask) Run(parameters map[string]string) ([]*automat
newTasks = append(newTasks, splitDiffTask)
}
for _, servedType := range []string{"rdonly", "replica", "master"} {
for _, servedType := range []string{"rdonly", "replica", "primary"} {
migrateServedTypesTasks := NewTaskContainer()
for _, sourceShard := range sourceShards {
AddTask(migrateServedTypesTasks, "MigrateServedTypesTask", map[string]string{


@ -97,7 +97,7 @@ func (t *VerticalSplitTask) Run(parameters map[string]string) ([]*automationpb.T
newTasks = append(newTasks, vSplitDiffTask)
}
for _, servedType := range []string{"rdonly", "replica", "master"} {
for _, servedType := range []string{"rdonly", "replica", "primary"} {
migrateServedTypesTasks := NewTaskContainer()
for _, shard := range shards {
AddTask(migrateServedTypesTasks, "MigrateServedFromTask", map[string]string{


@ -50,7 +50,7 @@ func TestVerticalSplitTask(t *testing.T) {
vtworker.RegisterResult([]string{"VerticalSplitDiff", "--min_healthy_rdonly_tablets=1", "destination_keyspace/0"}, "", nil)
vtctld.RegisterResult([]string{"MigrateServedFrom", "destination_keyspace/0", "rdonly"}, "", nil)
vtctld.RegisterResult([]string{"MigrateServedFrom", "destination_keyspace/0", "replica"}, "", nil)
vtctld.RegisterResult([]string{"MigrateServedFrom", "destination_keyspace/0", "master"},
vtctld.RegisterResult([]string{"MigrateServedFrom", "destination_keyspace/0", "primary"},
"ALL_DONE",
nil)


@ -23,7 +23,7 @@ import (
"vitess.io/vitess/go/vt/topo/topoproto"
)
// WaitForFilteredReplicationTask runs vtctl WaitForFilteredReplication to block until the destination master
// WaitForFilteredReplicationTask runs vtctl WaitForFilteredReplication to block until the destination primary
// (i.e. the receiving side of the filtered replication) has caught up to max_delay with the source shard.
type WaitForFilteredReplicationTask struct {
}


@ -322,7 +322,7 @@ func (bls *Streamer) parseEvents(ctx context.Context, events <-chan mysql.Binlog
// tells us the size of the event header.
if format.IsZero() {
// The only thing that should come before the FORMAT_DESCRIPTION_EVENT
// is a fake ROTATE_EVENT, which the master sends to tell us the name
// is a fake ROTATE_EVENT, which the primary sends to tell us the name
// of the current log file.
if ev.IsRotate() {
continue


@ -73,7 +73,7 @@ var (
//TODO(deepthi): change these vars back to unexported when discoveryGateway is removed
// AllowedTabletTypes is the list of allowed tablet types. e.g. {MASTER, REPLICA}
// AllowedTabletTypes is the list of allowed tablet types. e.g. {PRIMARY, REPLICA}
AllowedTabletTypes []topodata.TabletType
// TabletFilters are the keyspace|shard or keyrange filters to apply to the full set of tablets
TabletFilters flagutil.StringListValue
@ -192,12 +192,12 @@ type HealthCheck interface {
// GetHealthyTabletStats returns only the healthy tablets.
// The returned array is owned by the caller.
// For TabletType_PRIMARY, this will only return at most one entry,
// the most recent tablet of type master.
// the most recent tablet of type primary.
// This returns a copy of the data so that callers can access without
// synchronization
GetHealthyTabletStats(target *query.Target) []*TabletHealth
// Subscribe adds a listener. Used by vtgate buffer to learn about master changes.
// Subscribe adds a listener. Used by vtgate buffer to learn about primary changes.
Subscribe() chan *TabletHealth
// Unsubscribe removes a listener.
@ -261,7 +261,7 @@ type HealthCheckImpl struct {
// localCell.
// The localCell for this healthcheck
// callback.
// A function to call when there is a master change. Used to notify vtgate's buffer to stop buffering.
// A function to call when there is a primary change. Used to notify vtgate's buffer to stop buffering.
func NewHealthCheck(ctx context.Context, retryDelay, healthCheckTimeout time.Duration, topoServer *topo.Server, localCell, cellsToWatch string) *HealthCheckImpl {
log.Infof("loading tablets for cells: %v", cellsToWatch)
@ -467,7 +467,7 @@ func (hc *HealthCheckImpl) updateHealth(th *TabletHealth, prevTarget *query.Targ
if !trivialUpdate {
// We re-sort the healthy tablet list whenever we get a health update for tablets we can route to.
// Tablets from other cells for non-master targets should not trigger a re-sort;
// Tablets from other cells for non-primary targets should not trigger a re-sort;
// they should also be excluded from healthy list.
if th.Target.TabletType != topodata.TabletType_PRIMARY && hc.isIncluded(th.Target.TabletType, th.Tablet.Alias) {
hc.recomputeHealthy(targetKey)
@ -501,7 +501,7 @@ func (hc *HealthCheckImpl) recomputeHealthy(key keyspaceShardTabletType) {
hc.healthy[key] = FilterStatsByReplicationLag(allArray)
}
// Subscribe adds a listener. Used by vtgate buffer to learn about master changes.
// Subscribe adds a listener. Used by vtgate buffer to learn about primary changes.
func (hc *HealthCheckImpl) Subscribe() chan *TabletHealth {
hc.subMu.Lock()
defer hc.subMu.Unlock()
@ -590,7 +590,7 @@ func (hc *HealthCheckImpl) Close() error {
// GetHealthyTabletStats returns only the healthy tablets.
// The returned array is owned by the caller.
// For TabletType_PRIMARY, this will only return at most one entry,
// the most recent tablet of type master.
// the most recent tablet of type primary.
// This returns a copy of the data so that callers can access without
// synchronization
func (hc *HealthCheckImpl) GetHealthyTabletStats(target *query.Target) []*TabletHealth {
@ -606,7 +606,7 @@ func (hc *HealthCheckImpl) GetHealthyTabletStats(target *query.Target) []*Tablet
// getTabletStats returns all tablets for the given target.
// The returned array is owned by the caller.
// For TabletType_PRIMARY, this will only return at most one entry,
// the most recent tablet of type master.
// the most recent tablet of type primary.
func (hc *HealthCheckImpl) getTabletStats(target *query.Target) []*TabletHealth {
var result []*TabletHealth
hc.mu.Lock()


@ -397,7 +397,7 @@ func TestHealthCheckErrorOnPrimaryAfterExternalReparent(t *testing.T) {
// Stream error from tablet 1
fc1.errCh <- fmt.Errorf("some stream error")
<-resultChan
// tablet 2 should still be the master
// tablet 2 should still be the primary
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY})
mustMatch(t, health, a, "unexpected result")
}
@ -835,7 +835,7 @@ func TestGetHealthyTablets(t *testing.T) {
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_REPLICA})
assert.Equal(t, 1, len(a), "Wrong number of results")
// second tablet turns into a master
// second tablet turns into a primary
shr2 = &querypb.StreamHealthResponse{
TabletAlias: tablet2.Alias,
Target: &querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY},
@ -859,11 +859,11 @@ func TestGetHealthyTablets(t *testing.T) {
Stats: &querypb.RealtimeStats{ReplicationLagSeconds: 0, CpuUsage: 0.2},
PrimaryTermStartTime: 10,
}}
// check we have a master now
// check we have a primary now
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY})
mustMatch(t, want2, a, "unexpected result")
// reparent: old replica goes into master
// reparent: old replica goes into primary
shr = &querypb.StreamHealthResponse{
TabletAlias: tablet.Alias,
Target: &querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY},
@ -881,13 +881,13 @@ func TestGetHealthyTablets(t *testing.T) {
PrimaryTermStartTime: 20,
}}
// check we lost all replicas, and master is new one
// check we lost all replicas, and primary is new one
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_REPLICA})
assert.Empty(t, a, "Wrong number of results")
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY})
mustMatch(t, want, a, "unexpected result")
// old master sending an old ping should be ignored
// old primary sending an old ping should be ignored
input2 <- shr2
<-resultChan
a = hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY})
@ -899,7 +899,7 @@ func TestMasterInOtherCell(t *testing.T) {
hc := NewHealthCheck(context.Background(), 1*time.Millisecond, time.Hour, ts, "cell1", "cell1, cell2")
defer hc.Close()
// add a tablet as master in different cell
// add a tablet as primary in different cell
tablet := createTestTablet(1, "cell2", "host1")
tablet.Type = topodatapb.TabletType_PRIMARY
input := make(chan *querypb.StreamHealthResponse)
@ -939,13 +939,13 @@ func TestMasterInOtherCell(t *testing.T) {
case err := <-fc.cbErrCh:
require.Fail(t, "Unexpected error: %v", err)
case got := <-resultChan:
// check that we DO receive health check update for MASTER in other cell
// check that we DO receive health check update for PRIMARY in other cell
mustMatch(t, want, got, "Wrong TabletHealth data")
case <-ticker.C:
require.Fail(t, "Timed out waiting for HealthCheck update")
}
// check that MASTER tablet from other cell IS in healthy tablet list
// check that PRIMARY tablet from other cell IS in healthy tablet list
a := hc.GetHealthyTabletStats(&querypb.Target{Keyspace: "k", Shard: "s", TabletType: topodatapb.TabletType_PRIMARY})
require.Len(t, a, 1, "")
mustMatch(t, want, a[0], "Expecting healthy master")


@ -145,9 +145,9 @@ type LegacyTabletStats struct {
// Serving describes if the tablet can be serving traffic.
Serving bool
// TabletExternallyReparentedTimestamp is the last timestamp
// that this tablet was either elected the master, or received
// that this tablet was either elected the primary, or received
// a TabletExternallyReparented event. It is set to 0 if the
// tablet doesn't think it's a master.
// tablet doesn't think it's a primary.
TabletExternallyReparentedTimestamp int64
// Stats is the current health status, as received by the
// StreamHealth RPC (replication lag, ...).
@ -293,7 +293,7 @@ type LegacyHealthCheck interface {
RegisterStats()
// SetListener sets the listener for healthcheck
// updates. sendDownEvents is used when a tablet changes type
// (from replica to master for instance). If the listener
// (from replica to primary for instance). If the listener
// wants two events (Up=false on old type, Up=True on new
// type), sendDownEvents should be set. Otherwise, the
// healthcheck will only send one event (Up=true on new type).
@ -498,7 +498,7 @@ func (hc *LegacyHealthCheckImpl) updateHealth(ts *LegacyTabletStats, conn querys
hc.listener.StatsUpdate(&oldts)
}
// Track how often a tablet gets promoted to master. It is used for
// Track how often a tablet gets promoted to primary. It is used for
// comparing against the variables in go/vtgate/buffer/variables.go.
if oldts.Target.TabletType != topodatapb.TabletType_PRIMARY && ts.Target.TabletType == topodatapb.TabletType_PRIMARY {
hcMasterPromotedCounters.Add([]string{ts.Target.Keyspace, ts.Target.Shard}, 1)


@ -30,10 +30,10 @@ import (
// LegacyTabletStatsCache is a LegacyHealthCheckStatsListener that keeps both the
// current list of available LegacyTabletStats, and a serving list:
// - for master tablets, only the current master is kept.
// - for non-master tablets, we filter the list using FilterLegacyStatsByReplicationLag.
// - for primary tablets, only the current primary is kept.
// - for non-primary tablets, we filter the list using FilterLegacyStatsByReplicationLag.
// It keeps entries for all tablets in the cell(s) it's configured to serve for,
// and for the master independently of which cell it's in.
// and for the primary independently of which cell it's in.
// Note the healthy tablet computation is done when we receive a tablet
// update only, not at serving time.
// Also note the cache may not have the last entry received by the tablet.
@ -41,7 +41,7 @@ import (
// keep its new update.
type LegacyTabletStatsCache struct {
// cell is the cell we are keeping all tablets for.
// Note we keep track of all master tablets in all cells.
// Note we keep track of all primary tablets in all cells.
cell string
// ts is the topo server in use.
ts *topo.Server
@ -68,7 +68,7 @@ type legacyTabletStatsCacheEntry struct {
func (e *legacyTabletStatsCacheEntry) updateHealthyMapForMaster(ts *LegacyTabletStats) {
if ts.Up {
// We have an Up master.
// We have an Up primary.
if len(e.healthy) == 0 {
// We have a new Up server, just remember it.
e.healthy = append(e.healthy, ts)
@ -92,7 +92,7 @@ func (e *legacyTabletStatsCacheEntry) updateHealthyMapForMaster(ts *LegacyTablet
return
}
// We have a Down master, remove it only if it's exactly the same.
// We have a Down primary, remove it only if it's exactly the same.
if len(e.healthy) != 0 {
if ts.Key == e.healthy[0].Key {
// Same guy, remove it.
@ -285,7 +285,7 @@ func (tc *LegacyTabletStatsCache) GetTabletStats(keyspace, shard string, tabletT
// GetHealthyTabletStats returns only the healthy targets.
// The returned array is owned by the caller.
// For TabletType_PRIMARY, this will only return at most one entry,
// the most recent tablet of type master.
// the most recent tablet of type primary.
func (tc *LegacyTabletStatsCache) GetHealthyTabletStats(keyspace, shard string, tabletType topodatapb.TabletType) []LegacyTabletStats {
e := tc.getEntry(keyspace, shard, tabletType)
if e == nil {


@ -190,7 +190,7 @@ func TestLegacyTabletStatsCache(t *testing.T) {
t.Errorf("unexpected result: %v", a)
}
// second tablet turns into a master, we receive down + up
// second tablet turns into a primary, we receive down + up
ts2.Serving = true
ts2.Up = false
tsc.StatsUpdate(ts2)
@ -205,13 +205,13 @@ func TestLegacyTabletStatsCache(t *testing.T) {
t.Errorf("unexpected result: %v", a)
}
// check we have a master now
// check we have a primary now
a = tsc.GetTabletStats("k", "s", topodatapb.TabletType_PRIMARY)
if len(a) != 1 || !ts2.DeepEqual(&a[0]) {
t.Errorf("unexpected result: %v", a)
}
// reparent: old replica goes into master
// reparent: old replica goes into primary
ts1.Up = false
tsc.StatsUpdate(ts1)
ts1.Up = true
@ -219,7 +219,7 @@ func TestLegacyTabletStatsCache(t *testing.T) {
ts1.TabletExternallyReparentedTimestamp = 20
tsc.StatsUpdate(ts1)
// check we lost all replicas, and master is new one
// check we lost all replicas, and primary is new one
a = tsc.GetTabletStats("k", "s", topodatapb.TabletType_REPLICA)
if len(a) != 0 {
t.Errorf("unexpected result: %v", a)
@ -229,7 +229,7 @@ func TestLegacyTabletStatsCache(t *testing.T) {
t.Errorf("unexpected result: %v", a)
}
// old master sending an old ping should be ignored
// old primary sending an old ping should be ignored
tsc.StatsUpdate(ts2)
a = tsc.GetHealthyTabletStats("k", "s", topodatapb.TabletType_PRIMARY)
if len(a) != 1 || !ts1.DeepEqual(&a[0]) {


@ -59,9 +59,9 @@ type tabletHealthCheck struct {
// Serving describes if the tablet can be serving traffic.
Serving bool
// PrimaryTermStartTime is the last time at which
// this tablet was either elected the master, or received
// this tablet was either elected the primary, or received
// a TabletExternallyReparented event. It is set to 0 if the
// tablet doesn't think it's a master.
// tablet doesn't think it's a primary.
PrimaryTermStartTime int64
// Stats is the current health status, as received by the
// StreamHealth RPC (replication lag, ...).
@ -199,7 +199,7 @@ func (thc *tabletHealthCheck) processResponse(hc *HealthCheckImpl, shr *query.St
}
thc.setServingState(serving, reason)
// notify downstream for master change
// notify downstream for primary change
hc.updateHealth(thc.SimpleCopy(), prevTarget, trivialUpdate, thc.Serving)
return nil
}


@ -170,7 +170,7 @@ func TestPickRespectsTabletType(t *testing.T) {
tp, err := NewTabletPicker(te.topoServ, te.cells, te.keyspace, te.shard, "replica,rdonly")
require.NoError(t, err)
// In 20 attempts, master tablet must be never picked
// In 20 attempts, primary tablet must be never picked
for i := 0; i < 20; i++ {
tablet, err := tp.PickForStreaming(context.Background())
require.NoError(t, err)


@ -57,7 +57,7 @@ type BackupParams struct {
Concurrency int
// Extra env variables for pre-backup and post-backup transform hooks
HookExtraEnv map[string]string
// TopoServer, Keyspace and Shard are used to discover master tablet
// TopoServer, Keyspace and Shard are used to discover primary tablet
TopoServer *topo.Server
// Keyspace and Shard are used to infer the directory where backups should be stored
Keyspace string


@ -35,8 +35,8 @@ func CreateMysqldAndMycnf(tabletUID uint32, mysqlSocket string, mysqlPort int32)
// because reusing server-ids is not safe.
//
// For example, if a tablet comes back with an empty data dir, it will restore
// from backup and then connect to the master. But if this tablet has the same
// server-id as before, and if this tablet was recently a master, then it can
// from backup and then connect to the primary. But if this tablet has the same
// server-id as before, and if this tablet was recently a primary, then it can
// lose data by skipping binlog events due to replicate-same-server-id=FALSE,
// which is the default setting.
if err := mycnf.RandomizeMysqlServerID(); err != nil {


@ -219,7 +219,7 @@ const (
func redactPassword(input string) string {
i := strings.Index(input, masterPasswordStart)
// We have master password in the query, try to redact it
// We have primary password in the query, try to redact it
if i != -1 {
j := strings.Index(input[i+len(masterPasswordStart):], masterPasswordEnd)
if j == -1 {


@ -92,7 +92,7 @@ func (mysqld *Mysqld) WaitForReparentJournal(ctx context.Context, timeCreatedNS
}
}
// Promote will promote this server to be the new master.
// Promote will promote this server to be the new primary.
func (mysqld *Mysqld) Promote(hookExtraEnv map[string]string) (mysql.Position, error) {
ctx := context.TODO()
conn, err := getPoolReconnect(ctx, mysqld.dbaPool)
@ -104,10 +104,10 @@ func (mysqld *Mysqld) Promote(hookExtraEnv map[string]string) (mysql.Position, e
// Since we handle replication, just stop it.
cmds := []string{
conn.StopReplicationCommand(),
"RESET SLAVE ALL", // "ALL" makes it forget master host:port.
// When using semi-sync and GTID, a replica first connects to the new master with a given GTID set,
"RESET SLAVE ALL", // "ALL" makes it forget primary host:port.
// When using semi-sync and GTID, a replica first connects to the new primary with a given GTID set,
// it can take a long time to scan the current binlog file to find the corresponding position.
// This can cause commits that occur soon after the master is promoted to take a long time waiting
// This can cause commits that occur soon after the primary is promoted to take a long time waiting
// for a semi-sync ACK, since replication is not fully set up.
// More details in: https://github.com/vitessio/vitess/issues/4161
"FLUSH BINARY LOGS",


@ -38,7 +38,7 @@ import (
)
// WaitForReplicationStart waits until the deadline for replication to start.
// This validates the current master is correct and can be connected to.
// This validates the current primary is correct and can be connected to.
func WaitForReplicationStart(mysqld MysqlDaemon, replicaStartDeadline int) error {
var rowMap map[string]string
for replicaWait := 0; replicaWait < replicaStartDeadline; replicaWait++ {
@ -215,7 +215,7 @@ func (mysqld *Mysqld) WaitSourcePos(ctx context.Context, targetPos mysql.Positio
waitCommandName := "WaitUntilPositionCommand"
var query string
if targetPos.MatchesFlavor(mysql.FilePosFlavorID) {
// If we are the master, WaitUntilFilePositionCommand will fail.
// If we are the primary, WaitUntilFilePositionCommand will fail.
// But position is most likely reached. So, check the position
// first.
mpos, err := conn.PrimaryFilePosition()
@ -233,7 +233,7 @@ func (mysqld *Mysqld) WaitSourcePos(ctx context.Context, targetPos mysql.Positio
}
waitCommandName = "WaitUntilFilePositionCommand"
} else {
// If we are the master, WaitUntilPositionCommand will fail.
// If we are the primary, WaitUntilPositionCommand will fail.
// But position is most likely reached. So, check the position
// first.
mpos, err := conn.PrimaryPosition()
@ -280,7 +280,7 @@ func (mysqld *Mysqld) ReplicationStatus() (mysql.ReplicationStatus, error) {
return conn.ShowReplicationStatus()
}
// PrimaryStatus returns the master replication statuses
// PrimaryStatus returns the primary replication statuses
func (mysqld *Mysqld) PrimaryStatus(ctx context.Context) (mysql.PrimaryStatus, error) {
conn, err := getPoolReconnect(ctx, mysqld.dbaPool)
if err != nil {
@ -291,7 +291,7 @@ func (mysqld *Mysqld) PrimaryStatus(ctx context.Context) (mysql.PrimaryStatus, e
return conn.ShowPrimaryStatus()
}
// PrimaryPosition returns the master replication position.
// PrimaryPosition returns the primary replication position.
func (mysqld *Mysqld) PrimaryPosition() (mysql.Position, error) {
conn, err := getPoolReconnect(context.TODO(), mysqld.dbaPool)
if err != nil {
@ -316,7 +316,7 @@ func (mysqld *Mysqld) SetReplicationPosition(ctx context.Context, pos mysql.Posi
return mysqld.executeSuperQueryListConn(ctx, conn, cmds)
}
// SetReplicationSource makes the provided host / port the master. It optionally
// SetReplicationSource makes the provided host / port the primary. It optionally
// stops replication before, and starts it after.
func (mysqld *Mysqld) SetReplicationSource(ctx context.Context, masterHost string, masterPort int, replicationStopBefore bool, replicationStartAfter bool) error {
params, err := mysqld.dbcfgs.ReplConnector().MysqlParams()
@ -457,7 +457,7 @@ func (mysqld *Mysqld) DisableBinlogPlayback() error {
}
// SetSemiSyncEnabled enables or disables semi-sync replication for
// master and/or replica mode.
// primary and/or replica mode.
func (mysqld *Mysqld) SetSemiSyncEnabled(master, replica bool) error {
log.Infof("Setting semi-sync mode: master=%v, replica=%v", master, replica)
@ -479,7 +479,7 @@ func (mysqld *Mysqld) SetSemiSyncEnabled(master, replica bool) error {
return nil
}
// SemiSyncEnabled returns whether semi-sync is enabled for master or replica.
// SemiSyncEnabled returns whether semi-sync is enabled for primary or replica.
// If the semi-sync plugin is not loaded, we assume semi-sync is disabled.
func (mysqld *Mysqld) SemiSyncEnabled() (master, replica bool) {
vars, err := mysqld.fetchVariables(context.TODO(), "rpl_semi_sync_%_enabled")

View file

@ -70,7 +70,7 @@ func TestRedactPassword(t *testing.T) {
testRedacted(t, `START xxx USER = 'vt_repl', PASSWORD = 'AAA`,
`START xxx USER = 'vt_repl', PASSWORD = 'AAA`)
// both master password and password
// both primary password and password
testRedacted(t, `START xxx
MASTER_PASSWORD = 'AAA',
PASSWORD = 'BBB'

View file

@ -71,7 +71,7 @@ func NewUserPermission(fields []*querypb.Field, values []sqltypes.Value) *tablet
up.PasswordChecksum = crc64.Checksum(values[i].ToBytes(), hashTable)
case "password_last_changed":
// we skip this one, as the value may be
// different on master and replicas.
// different on primary and replicas.
default:
up.Privileges[field.Name] = values[i].ToString()
}

View file

@ -316,7 +316,7 @@ func Cli(command string, strict bool, instance string, destination string, owner
case registerCliCommand("repoint", "Classic file:pos relocation", `Make the given instance replicate from another instance without changing the binglog coordinates. Use with care`):
{
instanceKey, _ = inst.FigureInstanceKey(instanceKey, thisInstanceKey)
// destinationKey can be null, in which case the instance repoints to its existing master
// destinationKey can be null, in which case the instance repoints to its existing primary
instance, err := inst.Repoint(instanceKey, destinationKey, inst.GTIDHintNeutral)
if err != nil {
log.Fatale(err)

View file

@ -113,7 +113,7 @@ type Configuration struct {
DefaultInstancePort int // In case port was not specified on command line
SlaveLagQuery string // Synonym to ReplicationLagQuery
ReplicationLagQuery string // custom query to check on replica lag (e.g. heartbeat table). Must return a single row with a single numeric column, which is the lag.
ReplicationCredentialsQuery string // custom query to get replication credentials. Must return a single row, with two text columns: 1st is username, 2nd is password. This is optional, and can be used by orchestrator to configure replication after master takeover or setup of co-masters. You need to ensure the orchestrator user has the privileges to run this query
ReplicationCredentialsQuery string // custom query to get replication credentials. Must return a single row, with two text columns: 1st is username, 2nd is password. This is optional, and can be used by orchestrator to configure replication after primary takeover or setup of co-primaries. You need to ensure the orchestrator user has the privileges to run this query
DiscoverByShowSlaveHosts bool // Attempt SHOW SLAVE HOSTS before PROCESSLIST
UseSuperReadOnly bool // Should orchestrator set super_read_only any time it sets read_only
InstancePollSeconds uint // Number of seconds between instance reads
@ -137,7 +137,7 @@ type Configuration struct {
ProblemIgnoreHostnameFilters []string // Will minimize problem visualization for hostnames matching given regexp filters
VerifyReplicationFilters bool // Include replication filters check before approving topology refactoring
ReasonableMaintenanceReplicationLagSeconds int // Above this value move-up and move-below are blocked
CandidateInstanceExpireMinutes uint // Minutes after which a suggestion to use an instance as a candidate replica (to be preferably promoted on master failover) is expired.
CandidateInstanceExpireMinutes uint // Minutes after which a suggestion to use an instance as a candidate replica (to be preferably promoted on primary failover) is expired.
AuditLogFile string // Name of log file for audit operations. Disabled when empty.
AuditToSyslog bool // If true, audit messages are written to syslog
AuditToBackendDB bool // If true, audit messages are written to the backend DB's `audit` table (default: true)
@ -156,8 +156,8 @@ type Configuration struct {
AccessTokenUseExpirySeconds uint // Time by which an issued token must be used
AccessTokenExpiryMinutes uint // Time after which HTTP access token expires
ClusterNameToAlias map[string]string // map between regex matching cluster name to a human friendly alias
DetectClusterAliasQuery string // Optional query (executed on topology instance) that returns the alias of a cluster. Query will only be executed on cluster master (though until the topology's master is resovled it may execute on other/all replicas). If provided, must return one row, one column
DetectClusterDomainQuery string // Optional query (executed on topology instance) that returns the VIP/CNAME/Alias/whatever domain name for the master of this cluster. Query will only be executed on cluster master (though until the topology's master is resovled it may execute on other/all replicas). If provided, must return one row, one column
DetectClusterAliasQuery string // Optional query (executed on topology instance) that returns the alias of a cluster. Query will only be executed on cluster primary (though until the topology's primary is resolved it may execute on other/all replicas). If provided, must return one row, one column
DetectClusterDomainQuery string // Optional query (executed on topology instance) that returns the VIP/CNAME/Alias/whatever domain name for the primary of this cluster. Query will only be executed on cluster primary (though until the topology's primary is resolved it may execute on other/all replicas). If provided, must return one row, one column
DetectInstanceAliasQuery string // Optional query (executed on topology instance) that returns the alias of an instance. If provided, must return one row, one column
DetectPromotionRuleQuery string // Optional query (executed on topology instance) that returns the promotion rule of an instance. If provided, must return one row, one column.
DataCenterPattern string // Regexp pattern with one group, extracting the datacenter name from the hostname
@ -166,7 +166,7 @@ type Configuration struct {
DetectDataCenterQuery string // Optional query (executed on topology instance) that returns the data center of an instance. If provided, must return one row, one column. Overrides DataCenterPattern and useful for installations where DC cannot be inferred by hostname
DetectRegionQuery string // Optional query (executed on topology instance) that returns the region of an instance. If provided, must return one row, one column. Overrides RegionPattern and useful for installations where Region cannot be inferred by hostname
DetectPhysicalEnvironmentQuery string // Optional query (executed on topology instance) that returns the physical environment of an instance. If provided, must return one row, one column. Overrides PhysicalEnvironmentPattern and useful for installations where env cannot be inferred by hostname
DetectSemiSyncEnforcedQuery string // Optional query (executed on topology instance) to determine whether semi-sync is fully enforced for master writes (async fallback is not allowed under any circumstance). If provided, must return one row, one column, value 0 or 1.
DetectSemiSyncEnforcedQuery string // Optional query (executed on topology instance) to determine whether semi-sync is fully enforced for primary writes (async fallback is not allowed under any circumstance). If provided, must return one row, one column, value 0 or 1.
SupportFuzzyPoolHostnames bool // Should "submit-pool-instances" command be able to pass list of fuzzy instances (fuzzy means non-fqdn, but unique enough to recognize). Defaults 'true', implies more queries on backend db
InstancePoolExpiryMinutes uint // Time after which entries in database_instance_pool are expired (resubmit via `submit-pool-instances`)
PromotionIgnoreHostnameFilters []string // Orchestrator will not promote replicas with hostname matching pattern (via -c recovery; for example, avoid promoting dev-dedicated machines)
@ -198,7 +198,7 @@ type Configuration struct {
RecoveryPeriodBlockMinutes int // (supported for backwards compatibility but please use newer `RecoveryPeriodBlockSeconds` instead) The time for which an instance's recovery is kept "active", so as to avoid concurrent recoveries on same instance as well as flapping
RecoveryPeriodBlockSeconds int // (overrides `RecoveryPeriodBlockMinutes`) The time for which an instance's recovery is kept "active", so as to avoid concurrent recoveries on same instance as well as flapping
RecoveryIgnoreHostnameFilters []string // Recovery analysis will completely ignore hosts matching given patterns
RecoverMasterClusterFilters []string // Only do master recovery on clusters matching these regexp patterns (of course the ".*" pattern matches everything)
RecoverMasterClusterFilters []string // Only do primary recovery on clusters matching these regexp patterns (of course the ".*" pattern matches everything)
RecoverIntermediateMasterClusterFilters []string // Only do IM recovery on clusters matching these regexp patterns (of course the ".*" pattern matches everything)
ProcessesShellCommand string // Shell that executes command scripts
OnFailureDetectionProcesses []string // Processes to execute when detecting a failover scenario (before making a decision whether to failover or not). May and should use some of these placeholders: {failureType}, {instanceType}, {isMaster}, {isCoMaster}, {failureDescription}, {command}, {failedHost}, {failureCluster}, {failureClusterAlias}, {failureClusterDomain}, {failedPort}, {successorHost}, {successorPort}, {successorAlias}, {countReplicas}, {replicaHosts}, {isDowntimed}, {autoMasterRecovery}, {autoIntermediateMasterRecovery}
@ -206,35 +206,35 @@ type Configuration struct {
PreFailoverProcesses []string // Processes to execute before doing a failover (aborting operation should any once of them exits with non-zero code; order of execution undefined). May and should use some of these placeholders: {failureType}, {instanceType}, {isMaster}, {isCoMaster}, {failureDescription}, {command}, {failedHost}, {failureCluster}, {failureClusterAlias}, {failureClusterDomain}, {failedPort}, {countReplicas}, {replicaHosts}, {isDowntimed}
PostFailoverProcesses []string // Processes to execute after doing a failover (order of execution undefined). May and should use some of these placeholders: {failureType}, {instanceType}, {isMaster}, {isCoMaster}, {failureDescription}, {command}, {failedHost}, {failureCluster}, {failureClusterAlias}, {failureClusterDomain}, {failedPort}, {successorHost}, {successorPort}, {successorAlias}, {countReplicas}, {replicaHosts}, {isDowntimed}, {isSuccessful}, {lostReplicas}, {countLostReplicas}
PostUnsuccessfulFailoverProcesses []string // Processes to execute after a not-completely-successful failover (order of execution undefined). May and should use some of these placeholders: {failureType}, {instanceType}, {isMaster}, {isCoMaster}, {failureDescription}, {command}, {failedHost}, {failureCluster}, {failureClusterAlias}, {failureClusterDomain}, {failedPort}, {successorHost}, {successorPort}, {successorAlias}, {countReplicas}, {replicaHosts}, {isDowntimed}, {isSuccessful}, {lostReplicas}, {countLostReplicas}
PostMasterFailoverProcesses []string // Processes to execute after doing a master failover (order of execution undefined). Uses same placeholders as PostFailoverProcesses
PostIntermediateMasterFailoverProcesses []string // Processes to execute after doing a master failover (order of execution undefined). Uses same placeholders as PostFailoverProcesses
PostGracefulTakeoverProcesses []string // Processes to execute after runnign a graceful master takeover. Uses same placeholders as PostFailoverProcesses
PostMasterFailoverProcesses []string // Processes to execute after doing a primary failover (order of execution undefined). Uses same placeholders as PostFailoverProcesses
PostIntermediateMasterFailoverProcesses []string // Processes to execute after doing an intermediate primary failover (order of execution undefined). Uses same placeholders as PostFailoverProcesses
PostGracefulTakeoverProcesses []string // Processes to execute after running a graceful primary takeover. Uses same placeholders as PostFailoverProcesses
PostTakeMasterProcesses []string // Processes to execute after a successful Take-Master event has taken place
CoMasterRecoveryMustPromoteOtherCoMaster bool // When 'false', anything can get promoted (and candidates are prefered over others). When 'true', orchestrator will promote the other co-master or else fail
CoMasterRecoveryMustPromoteOtherCoMaster bool // When 'false', anything can get promoted (and candidates are preferred over others). When 'true', orchestrator will promote the other co-primary or else fail
DetachLostSlavesAfterMasterFailover bool // synonym to DetachLostReplicasAfterMasterFailover
DetachLostReplicasAfterMasterFailover bool // Should replicas that are not to be lost in master recovery (i.e. were more up-to-date than promoted replica) be forcibly detached
ApplyMySQLPromotionAfterMasterFailover bool // Should orchestrator take upon itself to apply MySQL master promotion: set read_only=0, detach replication, etc.
PreventCrossDataCenterMasterFailover bool // When true (default: false), cross-DC master failover are not allowed, orchestrator will do all it can to only fail over within same DC, or else not fail over at all.
PreventCrossRegionMasterFailover bool // When true (default: false), cross-region master failover are not allowed, orchestrator will do all it can to only fail over within same region, or else not fail over at all.
MasterFailoverLostInstancesDowntimeMinutes uint // Number of minutes to downtime any server that was lost after a master failover (including failed master & lost replicas). 0 to disable
DetachLostReplicasAfterMasterFailover bool // Should replicas that are not to be lost in primary recovery (i.e. were more up-to-date than promoted replica) be forcibly detached
ApplyMySQLPromotionAfterMasterFailover bool // Should orchestrator take upon itself to apply MySQL primary promotion: set read_only=0, detach replication, etc.
PreventCrossDataCenterMasterFailover bool // When true (default: false), cross-DC primary failover are not allowed, orchestrator will do all it can to only fail over within same DC, or else not fail over at all.
PreventCrossRegionMasterFailover bool // When true (default: false), cross-region primary failover are not allowed, orchestrator will do all it can to only fail over within same region, or else not fail over at all.
MasterFailoverLostInstancesDowntimeMinutes uint // Number of minutes to downtime any server that was lost after a primary failover (including failed primary & lost replicas). 0 to disable
MasterFailoverDetachSlaveMasterHost bool // synonym to MasterFailoverDetachReplicaMasterHost
MasterFailoverDetachReplicaMasterHost bool // Should orchestrator issue a detach-replica-master-host on newly promoted master (this makes sure the new master will not attempt to replicate old master if that comes back to life). Defaults 'false'. Meaningless if ApplyMySQLPromotionAfterMasterFailover is 'true'.
FailMasterPromotionOnLagMinutes uint // when > 0, fail a master promotion if the candidate replica is lagging >= configured number of minutes.
FailMasterPromotionIfSQLThreadNotUpToDate bool // when true, and a master failover takes place, if candidate master has not consumed all relay logs, promotion is aborted with error
DelayMasterPromotionIfSQLThreadNotUpToDate bool // when true, and a master failover takes place, if candidate master has not consumed all relay logs, delay promotion until the sql thread has caught up
MasterFailoverDetachReplicaMasterHost bool // Should orchestrator issue a detach-replica-master-host on newly promoted primary (this makes sure the new primary will not attempt to replicate old primary if that comes back to life). Defaults 'false'. Meaningless if ApplyMySQLPromotionAfterMasterFailover is 'true'.
FailMasterPromotionOnLagMinutes uint // when > 0, fail a primary promotion if the candidate replica is lagging >= configured number of minutes.
FailMasterPromotionIfSQLThreadNotUpToDate bool // when true, and a primary failover takes place, if candidate primary has not consumed all relay logs, promotion is aborted with error
DelayMasterPromotionIfSQLThreadNotUpToDate bool // when true, and a primary failover takes place, if candidate primary has not consumed all relay logs, delay promotion until the sql thread has caught up
PostponeSlaveRecoveryOnLagMinutes uint // Synonym to PostponeReplicaRecoveryOnLagMinutes
PostponeReplicaRecoveryOnLagMinutes uint // On crash recovery, replicas that are lagging more than given minutes are only resurrected late in the recovery process, after master/IM has been elected and processes executed. Value of 0 disables this feature
PostponeReplicaRecoveryOnLagMinutes uint // On crash recovery, replicas that are lagging more than given minutes are only resurrected late in the recovery process, after primary/IM has been elected and processes executed. Value of 0 disables this feature
OSCIgnoreHostnameFilters []string // OSC replicas recommendation will ignore replica hostnames matching given patterns
URLPrefix string // URL prefix to run orchestrator on non-root web path, e.g. /orchestrator to put it behind nginx.
DiscoveryIgnoreReplicaHostnameFilters []string // Regexp filters to apply to prevent auto-discovering new replicas. Usage: unreachable servers due to firewalls, applications which trigger binlog dumps
DiscoveryIgnoreMasterHostnameFilters []string // Regexp filters to apply to prevent auto-discovering a master. Usage: pointing your master temporarily to replicate seom data from external host
DiscoveryIgnoreMasterHostnameFilters []string // Regexp filters to apply to prevent auto-discovering a primary. Usage: pointing your primary temporarily to replicate some data from an external host
DiscoveryIgnoreHostnameFilters []string // Regexp filters to apply to prevent discovering instances of any kind
ConsulAddress string // Address where Consul HTTP api is found. Example: 127.0.0.1:8500
ConsulScheme string // Scheme (http or https) for Consul
ConsulAclToken string // ACL token used to write to Consul KV
ConsulCrossDataCenterDistribution bool // should orchestrator automatically auto-deduce all consul DCs and write KVs in all DCs
ZkAddress string // UNSUPPORTED YET. Address where (single or multiple) ZooKeeper servers are found, in `srv1[:port1][,srv2[:port2]...]` format. Default port is 2181. Example: srv-a,srv-b:12181,srv-c
KVClusterMasterPrefix string // Prefix to use for clusters' masters entries in KV stores (internal, consul, ZK), default: "mysql/master"
KVClusterMasterPrefix string // Prefix to use for clusters' primary entries in KV stores (internal, consul, ZK), default: "mysql/master"
WebMessage string // If provided, will be shown on all web pages below the title bar
MaxConcurrentReplicaOperations int // Maximum number of concurrent operations on replicas
InstanceDBExecContextTimeoutSeconds int // Timeout on context used while calling ExecContext on instance database
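To make the field descriptions above concrete, here is a minimal sketch of how a JSON configuration file could populate a few of them; the `partialConfig` type and the sample values are invented for illustration and cover only a tiny subset of the real Configuration struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// partialConfig holds a tiny, illustrative subset of the fields documented above.
type partialConfig struct {
	RecoverMasterClusterFilters          []string
	PreventCrossDataCenterMasterFailover bool
	KVClusterMasterPrefix                string
	DetectClusterAliasQuery              string
}

func main() {
	// Hypothetical config snippet; the query text is made up.
	raw := []byte(`{
	  "RecoverMasterClusterFilters": [".*"],
	  "PreventCrossDataCenterMasterFailover": true,
	  "KVClusterMasterPrefix": "mysql/master",
	  "DetectClusterAliasQuery": "SELECT alias FROM meta.cluster LIMIT 1"
	}`)
	var cfg partialConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```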

View file

@ -530,7 +530,7 @@ func (this *HttpAPI) MoveUpReplicas(params martini.Params, r render.Render, req
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Moved up %d replicas of %+v below %+v; %d errors: %+v", len(replicas), instanceKey, newMaster.Key, len(errs), errs), Details: replicas})
}
// Repoint positiones a replica under another (or same) master with exact same coordinates.
// Repoint positions a replica under another (or same) primary with exact same coordinates.
// Useful for binlog servers
func (this *HttpAPI) Repoint(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
@ -578,7 +578,7 @@ func (this *HttpAPI) RepointReplicas(params martini.Params, r render.Render, req
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Repointed %d replicas of %+v", len(replicas), instanceKey), Details: replicas})
}
// MakeCoMaster attempts to make an instance co-master with its own master
// MakeCoMaster attempts to make an instance co-primary with its own primary
func (this *HttpAPI) MakeCoMaster(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -599,7 +599,7 @@ func (this *HttpAPI) MakeCoMaster(params martini.Params, r render.Render, req *h
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Instance made co-master: %+v", instance.Key), Details: instance})
}
// ResetReplication makes a replica forget about its master, effectively breaking the replication
// ResetReplication makes a replica forget about its primary, effectively breaking the replication
func (this *HttpAPI) ResetReplication(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -620,7 +620,7 @@ func (this *HttpAPI) ResetReplication(params martini.Params, r render.Render, re
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Replica reset on %+v", instance.Key), Details: instance})
}
// DetachReplicaMasterHost detaches a replica from its master by setting an invalid
// DetachReplicaMasterHost detaches a replica from its primary by setting an invalid
// (yet revertible) host name
func (this *HttpAPI) DetachReplicaMasterHost(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
@ -643,7 +643,7 @@ func (this *HttpAPI) DetachReplicaMasterHost(params martini.Params, r render.Ren
}
// ReattachReplicaMasterHost reverts a detachReplicaMasterHost command
// by resoting the original master hostname in CHANGE MASTER TO
// by restoring the original primary hostname in CHANGE MASTER TO
func (this *HttpAPI) ReattachReplicaMasterHost(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -743,7 +743,7 @@ func (this *HttpAPI) ErrantGTIDResetMaster(params martini.Params, r render.Rende
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Removed errant GTID on %+v and issued a RESET MASTER", instance.Key), Details: instance})
}
// ErrantGTIDInjectEmpty removes errant transactions by injecting and empty transaction on the cluster's master
// ErrantGTIDInjectEmpty removes errant transactions by injecting an empty transaction on the cluster's primary
func (this *HttpAPI) ErrantGTIDInjectEmpty(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -1637,7 +1637,7 @@ func (this *HttpAPI) UntagAll(params martini.Params, r render.Render, req *http.
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("%s removed from %+v instances", tag.TagName, len(*untagged)), Details: untagged.GetInstanceKeys()})
}
// Write a cluster's master (or all clusters masters) to kv stores.
// SubmitMastersToKvStores writes a cluster's primary (or all clusters' primaries) to kv stores.
// This should generally only happen once in a lifetime of a cluster. Otherwise KV
// stores are updated via failovers.
func (this *HttpAPI) SubmitMastersToKvStores(params martini.Params, r render.Render, req *http.Request) {
@ -1654,7 +1654,7 @@ func (this *HttpAPI) SubmitMastersToKvStores(params martini.Params, r render.Ren
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Submitted %d masters", submittedCount), Details: kvPairs})
}
// Clusters provides list of known masters
// Masters provides a list of known primaries
func (this *HttpAPI) Masters(params martini.Params, r render.Render, req *http.Request) {
instances, err := inst.ReadWriteableClustersMasters()
@ -1666,7 +1666,7 @@ func (this *HttpAPI) Masters(params martini.Params, r render.Render, req *http.R
r.JSON(http.StatusOK, instances)
}
// ClusterMaster returns the writable master of a given cluster
// ClusterMaster returns the writable primary of a given cluster
func (this *HttpAPI) ClusterMaster(params martini.Params, r render.Render, req *http.Request) {
clusterName, err := figureClusterName(getClusterHint(params))
if err != nil {
@ -2304,7 +2304,7 @@ func (this *HttpAPI) Recover(params martini.Params, r render.Render, req *http.R
Respond(r, &APIResponse{Code: OK, Message: fmt.Sprintf("Recovery executed on %+v", instanceKey), Details: *promotedInstanceKey})
}
// GracefulMasterTakeover gracefully fails over a master onto its single replica.
// GracefulMasterTakeover gracefully fails over a primary onto its single replica.
func (this *HttpAPI) gracefulMasterTakeover(params martini.Params, r render.Render, req *http.Request, user auth.User, auto bool) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -2329,19 +2329,19 @@ func (this *HttpAPI) gracefulMasterTakeover(params martini.Params, r render.Rend
Respond(r, &APIResponse{Code: OK, Message: "graceful-master-takeover: successor promoted", Details: topologyRecovery})
}
// GracefulMasterTakeover gracefully fails over a master, either:
// GracefulMasterTakeover gracefully fails over a primary, either:
// - onto its single replica, or
// - onto a replica indicated by the user
func (this *HttpAPI) GracefulMasterTakeover(params martini.Params, r render.Render, req *http.Request, user auth.User) {
this.gracefulMasterTakeover(params, r, req, user, false)
}
// GracefulMasterTakeoverAuto gracefully fails over a master onto a replica of orchestrator's choosing
// GracefulMasterTakeoverAuto gracefully fails over a primary onto a replica of orchestrator's choosing
func (this *HttpAPI) GracefulMasterTakeoverAuto(params martini.Params, r render.Render, req *http.Request, user auth.User) {
this.gracefulMasterTakeover(params, r, req, user, true)
}
// ForceMasterFailover fails over a master (even if there's no particular problem with the master)
// ForceMasterFailover fails over a primary (even if there's no particular problem with the primary)
func (this *HttpAPI) ForceMasterFailover(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})
@ -2364,7 +2364,7 @@ func (this *HttpAPI) ForceMasterFailover(params martini.Params, r render.Render,
}
}
// ForceMasterTakeover fails over a master (even if there's no particular problem with the master)
// ForceMasterTakeover fails over a primary (even if there's no particular problem with the primary)
func (this *HttpAPI) ForceMasterTakeover(params martini.Params, r render.Render, req *http.Request, user auth.User) {
if !isAuthorizedForAction(req, user) {
Respond(r, &APIResponse{Code: ERROR, Message: "Unauthorized"})

View file

@ -226,7 +226,7 @@ func (this *ReplicationAnalysis) AnalysisString() string {
return strings.Join(result, ", ")
}
// Get a string description of the analyzed instance type (master? co-master? intermediate-master?)
// Get a string description of the analyzed instance type (primary? co-primary? intermediate-primary?)
func (this *ReplicationAnalysis) GetAnalysisInstanceType() AnalysisInstanceType {
if this.IsCoMaster {
return AnalysisInstanceTypeCoMaster

View file

@ -60,7 +60,7 @@ type clusterAnalysis struct {
masterKey *InstanceKey
}
// GetReplicationAnalysis will check for replication problems (dead master; unreachable master; etc)
// GetReplicationAnalysis will check for replication problems (dead primary; unreachable primary; etc)
func GetReplicationAnalysis(clusterName string, hints *ReplicationAnalysisHints) ([]ReplicationAnalysis, error) {
result := []ReplicationAnalysis{}
@ -550,7 +550,7 @@ func GetReplicationAnalysis(clusterName string, hints *ReplicationAnalysisHints)
a.Description = "Master cannot be reached by orchestrator but it has replicating replicas; possibly a network/host issue"
//
} else if a.IsMaster && !a.LastCheckValid && a.LastCheckPartialSuccess && a.CountReplicasFailingToConnectToMaster > 0 && a.CountValidReplicas > 0 && a.CountValidReplicatingReplicas > 0 {
// there's partial success, but also at least one replica is failing to connect to master
// there's partial success, but also at least one replica is failing to connect to primary
a.Analysis = UnreachableMaster
a.Description = "Master cannot be reached by orchestrator but it has replicating replicas; possibly a network/host issue"
//
@ -624,9 +624,9 @@ func GetReplicationAnalysis(clusterName string, hints *ReplicationAnalysisHints)
//
} else if !a.IsMaster && a.LastCheckValid && a.CountReplicas > 1 && a.CountValidReplicatingReplicas == 0 &&
a.CountReplicasFailingToConnectToMaster > 0 && a.CountReplicasFailingToConnectToMaster == a.CountValidReplicas {
// All replicas are either failing to connect to master (and at least one of these have to exist)
// All replicas are either failing to connect to primary (and at least one of these have to exist)
// or completely dead.
// Must have at least two replicas to reach such conclusion -- do note that the intermediate master is still
// Must have at least two replicas to reach such conclusion -- do note that the intermediate primary is still
// reachable to orchestrator, so we base our conclusion on replicas only at this point.
a.Analysis = AllIntermediateMasterReplicasFailingToConnectOrDead
a.Description = "Intermediate master is reachable but all of its replicas are failing to connect"

View file

@ -24,7 +24,7 @@ import (
"vitess.io/vitess/go/vt/orchestrator/db"
)
// RegisterCandidateInstance markes a given instance as suggested for successoring a master in the event of failover.
// RegisterCandidateInstance marks a given instance as suggested for succeeding a primary in the event of failover.
func RegisterCandidateInstance(candidate *CandidateDatabaseInstance) error {
if candidate.LastSuggestedString == "" {
candidate = candidate.WithCurrentTime()
@ -50,7 +50,7 @@ func RegisterCandidateInstance(candidate *CandidateDatabaseInstance) error {
return ExecDBWriteFunc(writeFunc)
}
// ExpireCandidateInstances removes stale master candidate suggestions.
// ExpireCandidateInstances removes stale primary candidate suggestions.
func ExpireCandidateInstances() error {
writeFunc := func() error {
_, err := db.ExecOrchestrator(`

View file

@ -39,8 +39,8 @@ func getClusterMasterKVPair(clusterAlias string, masterKey *InstanceKey) *kv.KVP
return kv.NewKVPair(GetClusterMasterKVKey(clusterAlias), masterKey.StringCode())
}
// GetClusterMasterKVPairs returns all KV pairs associated with a master. This includes the
// full identity of the master as well as a breakdown by hostname, port, ipv4, ipv6
// GetClusterMasterKVPairs returns all KV pairs associated with a primary. This includes the
// full identity of the primary as well as a breakdown by hostname, port, ipv4, ipv6
func GetClusterMasterKVPairs(clusterAlias string, masterKey *InstanceKey) (kvPairs [](*kv.KVPair)) {
masterKVPair := getClusterMasterKVPair(clusterAlias, masterKey)
if masterKVPair == nil {
@ -81,7 +81,7 @@ func mappedClusterNameToAlias(clusterName string) string {
type ClusterInfo struct {
ClusterName string
ClusterAlias string // Human friendly alias
ClusterDomain string // CNAME/VIP/A-record/whatever of the master of this cluster
ClusterDomain string // CNAME/VIP/A-record/whatever of the primary of this cluster
CountInstances uint
HeuristicLag int64
HasAutomatedMasterRecovery bool
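GetClusterMasterKVPairs above is documented as writing the primary's full identity plus a per-attribute breakdown. Below is a self-contained sketch of that key layout, assuming the default "mysql/master" prefix; the `kvPair` type and key suffixes are illustrative, not orchestrator's actual types:

```go
package main

import "fmt"

// kvPair is a stand-in for the KV pair type used by orchestrator's KV stores.
type kvPair struct {
	Key, Value string
}

// clusterPrimaryKVPairs sketches the layout described above: one entry for the
// full identity of the cluster's primary, plus a breakdown by hostname, port,
// and (if known) ipv4/ipv6.
func clusterPrimaryKVPairs(prefix, clusterAlias, hostname string, port int, ipv4, ipv6 string) []kvPair {
	base := fmt.Sprintf("%s/%s", prefix, clusterAlias)
	pairs := []kvPair{
		{base, fmt.Sprintf("%s:%d", hostname, port)},
		{base + "/hostname", hostname},
		{base + "/port", fmt.Sprintf("%d", port)},
	}
	if ipv4 != "" {
		pairs = append(pairs, kvPair{base + "/ipv4", ipv4})
	}
	if ipv6 != "" {
		pairs = append(pairs, kvPair{base + "/ipv6", ipv6})
	}
	return pairs
}

func main() {
	for _, p := range clusterPrimaryKVPairs("mysql/master", "commerce:0", "db-1", 3306, "10.0.0.5", "") {
		fmt.Printf("%s = %s\n", p.Key, p.Value)
	}
}
```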

View file

@ -195,7 +195,7 @@ func ReplaceAliasClusterName(oldClusterName string, newClusterName string) (err
return err
}
// ReadUnambiguousSuggestedClusterAliases reads potential master hostname:port who have suggested cluster aliases,
// ReadUnambiguousSuggestedClusterAliases reads potential primary hostname:port entries that have suggested cluster aliases,
// where no one else shares said suggested cluster alias. Such hostname:port are likely true owners
// of the alias.
func ReadUnambiguousSuggestedClusterAliases() (result map[string]InstanceKey, err error) {

View file

@ -147,7 +147,7 @@ func expireLostInRecoveryDowntime() error {
for _, instance := range instances {
// We _may_ expire this downtime, but only after a minute
// This is a graceful period, during which other servers can claim ownership of the alias,
// or can update their own cluster name to match a new master's name
// or can update their own cluster name to match a new primary's name
if instance.ElapsedDowntime < time.Minute {
continue
}
@ -159,10 +159,10 @@ func expireLostInRecoveryDowntime() error {
// back, alive, replicating in some topology
endDowntime = true
} else if instance.ReplicationDepth == 0 {
// instance makes the appearance of a master
// instance makes the appearance of a primary
if unambiguousKey, ok := unambiguousAliases[instance.SuggestedClusterAlias]; ok {
if unambiguousKey.Equals(&instance.Key) {
// This instance seems to be a master, which is valid, and has a suggested alias,
// This instance seems to be a primary, which is valid, and has a suggested alias,
// and is the _only_ one to have this suggested alias (i.e. no one took its place)
endDowntime = true
}

View file

@ -65,7 +65,7 @@ func PromotionRule(tablet *topodatapb.Tablet) CandidatePromotionRule {
return curDurabilityPolicy.promotionRule(tablet)
}
// MasterSemiSync returns the master semi-sync setting for the instance.
// MasterSemiSync returns the primary semi-sync setting for the instance.
// 0 means none. Non-zero specifies the number of required ackers.
func MasterSemiSync(instanceKey InstanceKey) int {
return curDurabilityPolicy.masterSemiSync(instanceKey)

View file

@ -93,7 +93,7 @@ type Instance struct {
IsCoMaster bool
HasReplicationCredentials bool
ReplicationCredentialsAvailable bool
SemiSyncAvailable bool // when both semi sync plugins (master & replica) are loaded
SemiSyncAvailable bool // when both semi sync plugins (primary & replica) are loaded
SemiSyncEnforced bool
SemiSyncMasterEnabled bool
SemiSyncReplicaEnabled bool
@ -314,20 +314,20 @@ func (this *Instance) IsReplica() bool {
return this.MasterKey.Hostname != "" && this.MasterKey.Hostname != "_" && this.MasterKey.Port != 0 && (this.ReadBinlogCoordinates.LogFile != "" || this.UsingGTID())
}
// IsMaster makes simple heuristics to decide whether this instance is a master (not replicating from any other server),
// IsMaster makes simple heuristics to decide whether this instance is a primary (not replicating from any other server),
// either via traditional async/semisync replication or group replication
func (this *Instance) IsMaster() bool {
// If traditional replication is configured, it is for sure not a master
// If traditional replication is configured, it is for sure not a primary
if this.IsReplica() {
return false
}
// If traditional replication is not configured, and it is also not part of a replication group, this host is
// a master
// a primary
if !this.IsReplicationGroupMember() {
return true
}
// If traditional replication is not configured, and this host is part of a group, it is only considered a
// master if it has the role of group Primary. Otherwise it is not a master.
// primary if it has the role of group Primary. Otherwise it is not a primary.
if this.ReplicationGroupMemberRole == GroupReplicationMemberRolePrimary {
return true
}
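The IsMaster comments above describe a small decision procedure. Here is a hedged, standalone sketch of that logic (the `instanceInfo` type and field names are made up; the real code operates on the Instance struct):

```go
package main

import "fmt"

// instanceInfo is a stand-in holding just the fields the heuristic above needs.
type instanceInfo struct {
	IsReplica       bool   // replicating from another server (async/semi-sync)
	IsGroupMember   bool   // part of a group replication group
	GroupMemberRole string // "PRIMARY" or "SECONDARY" when a group member
}

// isPrimary mirrors the decision described above: a replicating instance is
// never a primary; a non-group, non-replicating instance is a primary; a group
// member is a primary only when its group role is PRIMARY.
func isPrimary(i instanceInfo) bool {
	if i.IsReplica {
		return false
	}
	if !i.IsGroupMember {
		return true
	}
	return i.GroupMemberRole == "PRIMARY"
}

func main() {
	fmt.Println(isPrimary(instanceInfo{IsGroupMember: true, GroupMemberRole: "SECONDARY"})) // false
	fmt.Println(isPrimary(instanceInfo{}))                                                  // true
}
```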
@ -407,12 +407,12 @@ func (this *Instance) GetNextBinaryLog(binlogCoordinates BinlogCoordinates) (Bin
return binlogCoordinates.NextFileCoordinates()
}
// IsReplicaOf returns true if this instance claims to replicate from given master
// IsReplicaOf returns true if this instance claims to replicate from given primary
func (this *Instance) IsReplicaOf(master *Instance) bool {
return this.MasterKey.Equals(&master.Key)
}
// IsReplicaOf returns true if this i supposed master of given replica
// IsMasterOf returns true if this instance is the supposed primary of the given replica
func (this *Instance) IsMasterOf(replica *Instance) bool {
return replica.IsReplicaOf(this)
}
@ -440,7 +440,7 @@ func (this *Instance) CanReplicateFrom(other *Instance) (bool, error) {
if !other.LogReplicationUpdatesEnabled {
return false, fmt.Errorf("instance does not have log_slave_updates enabled: %+v", other.Key)
}
// OK for a master to not have log_slave_updates
// OK for a primary to not have log_slave_updates
// Not OK for a replica, for it has to relay the logs.
}
if this.IsSmallerMajorVersion(other) && !this.IsBinlogServer() {

View file

@ -687,7 +687,7 @@ func ReadTopologyInstanceBufferable(instanceKey *InstanceKey, bufferWrites bool,
instance.SuggestedClusterAlias = fmt.Sprintf("%v:%v", tablet.Keyspace, tablet.Shard)
if instance.ReplicationDepth == 0 && config.Config.DetectClusterDomainQuery != "" {
// Only need to do on masters
// Only need to do on primary tablets
domainName := ""
if err := db.QueryRow(config.Config.DetectClusterDomainQuery).Scan(&domainName); err != nil {
domainName = ""
@ -719,7 +719,7 @@ Cleanup:
if instanceFound {
if instance.IsCoMaster {
// Take co-master into account, and avoid infinite loop
// Take co-primary into account, and avoid infinite loop
instance.AncestryUUID = fmt.Sprintf("%s,%s", instance.MasterUUID, instance.ServerUUID)
} else {
instance.AncestryUUID = fmt.Sprintf("%s,%s", instance.AncestryUUID, instance.ServerUUID)
@ -729,18 +729,18 @@ Cleanup:
instance.AncestryUUID = fmt.Sprintf("%s,%s", instance.AncestryUUID, instance.ReplicationGroupName)
instance.AncestryUUID = strings.Trim(instance.AncestryUUID, ",")
if instance.ExecutedGtidSet != "" && instance.masterExecutedGtidSet != "" {
// Compare master & replica GTID sets, but ignore the sets that present the master's UUID.
// This is because orchestrator may pool master and replica at an inconvenient timing,
// such that the replica may _seems_ to have more entries than the master, when in fact
// it's just that the master's probing is stale.
// Compare primary & replica GTID sets, but ignore the sets that present the primary's UUID.
// This is because orchestrator may poll primary and replica at an inconvenient timing,
// such that the replica may _seem_ to have more entries than the primary, when in fact
// it's just that the primary's probing is stale.
redactedExecutedGtidSet, _ := NewOracleGtidSet(instance.ExecutedGtidSet)
for _, uuid := range strings.Split(instance.AncestryUUID, ",") {
if uuid != instance.ServerUUID {
redactedExecutedGtidSet.RemoveUUID(uuid)
}
if instance.IsCoMaster && uuid == instance.ServerUUID {
// If this is a co-master, then this server is likely to show its own generated GTIDs as errant,
// because its co-master has not applied them yet
// If this is a co-primary, then this server is likely to show its own generated GTIDs as errant,
// because its co-primary has not applied them yet
redactedExecutedGtidSet.RemoveUUID(uuid)
}
}
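The GTID-redaction comments above can be illustrated with a simplified, string-based sketch; the real code operates on OracleGtidSet values, so the helper below and its parsing are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// redactGTIDSet drops every entry of an executed GTID set whose server UUID
// belongs to the ancestry (the primary chain), so stale probing of the primary
// does not produce false "errant GTID" signals. A co-primary also drops its
// own UUID, since its peer may not have applied those transactions yet.
func redactGTIDSet(executedGtidSet string, ancestryUUIDs []string, ownUUID string, isCoPrimary bool) string {
	drop := map[string]bool{}
	for _, uuid := range ancestryUUIDs {
		if uuid != ownUUID {
			drop[uuid] = true
		}
		if isCoPrimary && uuid == ownUUID {
			drop[uuid] = true
		}
	}
	var kept []string
	for _, entry := range strings.Split(executedGtidSet, ",") {
		entry = strings.TrimSpace(entry)
		uuid := strings.SplitN(entry, ":", 2)[0]
		if !drop[uuid] {
			kept = append(kept, entry)
		}
	}
	return strings.Join(kept, ",")
}

func main() {
	// "aaa" belongs to the primary chain and is redacted; "bbb" (our own) is kept.
	fmt.Println(redactGTIDSet("aaa:1-100,bbb:1-5", []string{"aaa", "bbb"}, "bbb", false)) // bbb:1-5
}
```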
@ -807,7 +807,7 @@ func ReadReplicationGroupPrimary(instance *Instance) (err error) {
return err
}
// ReadInstanceClusterAttributes will return the cluster name for a given instance by looking at its master
// ReadInstanceClusterAttributes will return the cluster name for a given instance by looking at its primary
// and getting it from there.
// It is a non-recursive function and so-called-recursion is performed upon periodic reading of
// instances.
@ -819,7 +819,7 @@ func ReadInstanceClusterAttributes(instance *Instance) (err error) {
var masterOrGroupPrimaryExecutedGtidSet string
masterOrGroupPrimaryDataFound := false
// Read the cluster_name of the _master_ or _group_primary_ of our instance, derive it from there.
// Read the cluster_name of the _primary_ or _group_primary_ of our instance, derive it from there.
query := `
select
cluster_name,
@ -833,8 +833,8 @@ func ReadInstanceClusterAttributes(instance *Instance) (err error) {
where hostname=? and port=?
`
// For instances that are part of a replication group, if the host is not the group's primary, we use the
// information from the group primary. If it is the group primary, we use the information of its master
// (if it has any). If it is not a group member, we use the information from the host's master.
// information from the group primary. If it is the group primary, we use the information of its primary
// (if it has any). If it is not a group member, we use the information from the host's primary.
if instance.IsReplicationGroupSecondary() {
masterOrGroupPrimaryInstanceKey = instance.ReplicationGroupPrimaryInstanceKey
} else {
@ -863,17 +863,17 @@ func ReadInstanceClusterAttributes(instance *Instance) (err error) {
}
clusterNameByInstanceKey := instance.Key.StringCode()
if clusterName == "" {
// Nothing from master; we set it to be named after the instance itself
// Nothing from primary; we set it to be named after the instance itself
clusterName = clusterNameByInstanceKey
}
isCoMaster := false
if masterOrGroupPrimaryInstanceKey.Equals(&instance.Key) {
// co-master calls for special case, in fear of the infinite loop
// co-primary calls for special case, in fear of the infinite loop
isCoMaster = true
clusterNameByCoMasterKey := instance.MasterKey.StringCode()
if clusterName != clusterNameByInstanceKey && clusterName != clusterNameByCoMasterKey {
// Can be caused by a co-master topology failover
// Can be caused by a co-primary topology failover
log.Errorf("ReadInstanceClusterAttributes: in co-master topology %s is not in (%s, %s). Forcing it to become one of them", clusterName, clusterNameByInstanceKey, clusterNameByCoMasterKey)
clusterName = math.TernaryString(instance.Key.SmallerThan(&instance.MasterKey), clusterNameByInstanceKey, clusterNameByCoMasterKey)
}
@ -1153,9 +1153,9 @@ func ReadClusterInstances(clusterName string) ([](*Instance), error) {
return readInstancesByCondition(condition, sqlutils.Args(clusterName), "")
}
// ReadClusterWriteableMaster returns the/a writeable master of this cluster
// Typically, the cluster name indicates the master of the cluster. However, in circular
// master-master replication one master can assume the name of the cluster, and it is
// ReadClusterWriteableMaster returns the/a writeable primary of this cluster
// Typically, the cluster name indicates the primary of the cluster. However, in circular
// primary-primary replication one primary can assume the name of the cluster, and it is
// not guaranteed that it is the writeable one.
func ReadClusterWriteableMaster(clusterName string) ([](*Instance), error) {
condition := `
@ -1166,9 +1166,9 @@ func ReadClusterWriteableMaster(clusterName string) ([](*Instance), error) {
return readInstancesByCondition(condition, sqlutils.Args(clusterName), "replication_depth asc")
}
// ReadClusterMaster returns the master of this cluster.
// - if the cluster has co-masters, the/a writable one is returned
// - if the cluster has a single master, that master is retuened whether it is read-only or writable.
// ReadClusterMaster returns the primary of this cluster.
// - if the cluster has co-primaries, the/a writable one is returned
// - if the cluster has a single primary, that primary is returned whether it is read-only or writable.
func ReadClusterMaster(clusterName string) ([](*Instance), error) {
condition := `
cluster_name = ?
@ -1177,7 +1177,7 @@ func ReadClusterMaster(clusterName string) ([](*Instance), error) {
return readInstancesByCondition(condition, sqlutils.Args(clusterName), "read_only asc, replication_depth asc")
}
// ReadWriteableClustersMasters returns writeable masters of all clusters, but only one
// ReadWriteableClustersMasters returns writeable primaries of all clusters, but only one
// per cluster, in similar logic to ReadClusterWriteableMaster
func ReadWriteableClustersMasters() (instances [](*Instance), err error) {
condition := `
@ -1204,7 +1204,7 @@ func ReadClusterAliasInstances(clusterAlias string) ([](*Instance), error) {
return readInstancesByCondition(condition, sqlutils.Args(clusterAlias), "")
}
// ReadReplicaInstances reads replicas of a given master
// ReadReplicaInstances reads replicas of a given primary
func ReadReplicaInstances(masterKey *InstanceKey) ([](*Instance), error) {
condition := `
master_host = ?
@ -1233,7 +1233,7 @@ func ReadReplicaInstancesIncludingBinlogServerSubReplicas(masterKey *InstanceKey
return replicas, err
}
// ReadBinlogServerReplicaInstances reads direct replicas of a given master that are binlog servers
// ReadBinlogServerReplicaInstances reads direct replicas of a given primary that are binlog servers
func ReadBinlogServerReplicaInstances(masterKey *InstanceKey) ([](*Instance), error) {
condition := `
master_host = ?
@ -1452,7 +1452,7 @@ func filterOSCInstances(instances [](*Instance)) [](*Instance) {
}
// GetClusterOSCReplicas returns a heuristic list of replicas which are fit as control replicas for an OSC operation.
// These would be intermediate masters
// These would be intermediate primaries
func GetClusterOSCReplicas(clusterName string) ([](*Instance), error) {
var intermediateMasters [](*Instance)
result := [](*Instance){}
@ -1665,8 +1665,8 @@ func updateInstanceClusterName(instance *Instance) error {
return ExecDBWriteFunc(writeFunc)
}
// ReplaceClusterName replaces all occurances of oldClusterName with newClusterName
// It is called after a master failover
// ReplaceClusterName replaces all occurrences of oldClusterName with newClusterName
// It is called after a primary failover
func ReplaceClusterName(oldClusterName string, newClusterName string) error {
if oldClusterName == "" {
return log.Errorf("replaceClusterName: skipping empty oldClusterName")
@ -1723,7 +1723,7 @@ func ReviewUnseenInstances() error {
return err
}
// readUnseenMasterKeys will read list of masters that have never been seen, and yet whose replicas
// readUnseenMasterKeys will read the list of primaries that have never been seen, and yet whose replicas
// seem to be replicating.
func readUnseenMasterKeys() ([]InstanceKey, error) {
res := []InstanceKey{}
@ -1775,8 +1775,8 @@ func InjectSeed(instanceKey *InstanceKey) error {
return err
}
// InjectUnseenMasters will review masters of instances that are known to be replicating, yet which are not listed
// in database_instance. Since their replicas are listed as replicating, we can assume that such masters actually do
// InjectUnseenMasters will review primaries of instances that are known to be replicating, yet which are not listed
// in database_instance. Since their replicas are listed as replicating, we can assume that such primaries actually do
// exist: we shall therefore inject them with minimal details into the database_instance table.
func InjectUnseenMasters() error {
@ -2045,7 +2045,7 @@ func ReadClustersInfo(clusterName string) ([]ClusterInfo, error) {
return clusters, err
}
// Get a listing of KVPair for clusters masters, for all clusters or for a specific cluster.
// GetMastersKVPairs returns a listing of KVPair entries for cluster primaries, for all clusters or for a specific cluster.
func GetMastersKVPairs(clusterName string) (kvPairs [](*kv.KVPair), err error) {
clusterAliasMap := make(map[string]string)

View file

@ -129,10 +129,10 @@ func ASCIITopology(clusterName string, historyTimestampPattern string, tabulated
// Get entries:
var entries []string
if masterInstance != nil {
// Single master
// Single primary
entries = getASCIITopologyEntry(0, masterInstance, replicationMap, historyTimestampPattern == "", fillerCharacter, tabulated, printTags)
} else {
// Co-masters? For visualization we put each in its own branch while ignoring its other co-masters.
// Co-primaries? For visualization we put each in its own branch while ignoring its other co-primaries.
for _, instance := range instances {
if instance.IsCoMaster {
entries = append(entries, getASCIITopologyEntry(1, instance, replicationMap, historyTimestampPattern == "", fillerCharacter, tabulated, printTags)...)
@ -179,13 +179,13 @@ func shouldPostponeRelocatingReplica(replica *Instance, postponedFunctionsContai
}
// GetInstanceMaster synchronously reaches into the replication topology
// and retrieves master's data
// and retrieves primary's data
func GetInstanceMaster(instance *Instance) (*Instance, error) {
master, err := ReadTopologyInstance(&instance.MasterKey)
return master, err
}
// InstancesAreSiblings checks whether both instances are replicating from same master
// InstancesAreSiblings checks whether both instances are replicating from same primary
func InstancesAreSiblings(instance0, instance1 *Instance) bool {
if !instance0.IsReplica() {
return false
@ -200,7 +200,7 @@ func InstancesAreSiblings(instance0, instance1 *Instance) bool {
return instance0.MasterKey.Equals(&instance1.MasterKey)
}
// InstanceIsMasterOf checks whether an instance is the master of another
// InstanceIsMasterOf checks whether an instance is the primary of another
func InstanceIsMasterOf(allegedMaster, allegedReplica *Instance) bool {
if !allegedReplica.IsReplica() {
return false
@ -214,7 +214,7 @@ func InstanceIsMasterOf(allegedMaster, allegedReplica *Instance) bool {
// MoveUp will attempt moving instance indicated by instanceKey up the topology hierarchy.
// It will perform all safety and sanity checks and will tamper with this instance's replication
// as well as its master.
// as well as its primary.
func MoveUp(instanceKey *InstanceKey) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
if err != nil {
@ -278,7 +278,7 @@ func MoveUp(instanceKey *InstanceKey) (*Instance, error) {
}
}
// We can skip hostname unresolve; we just copy+paste whatever our master thinks of its master.
// We can skip hostname unresolve; we just copy+paste whatever our primary thinks of its primary.
_, err = ChangeMasterTo(instanceKey, &master.MasterKey, &master.ExecBinlogCoordinates, true, GTIDHintDeny)
if err != nil {
goto Cleanup
@ -446,7 +446,7 @@ func MoveBelow(instanceKey, siblingKey *InstanceKey) (*Instance, error) {
}
if sibling.IsBinlogServer() {
// Binlog server has same coordinates as master
// Binlog server has same coordinates as primary
// Easy solution!
return Repoint(instanceKey, &sibling.Key, GTIDHintDeny)
}
@ -686,7 +686,7 @@ func moveReplicasViaGTID(replicas [](*Instance), other *Instance, postponedFunct
return movedReplicas, unmovedReplicas, err, errs
}
// MoveReplicasGTID will (attempt to) move all replicas of given master below given instance.
// MoveReplicasGTID will (attempt to) move all replicas of given primary below given instance.
func MoveReplicasGTID(masterKey *InstanceKey, belowKey *InstanceKey, pattern string) (movedReplicas [](*Instance), unmovedReplicas [](*Instance), err error, errs []error) {
belowInstance, err := ReadTopologyInstance(belowKey)
if err != nil {
@ -712,8 +712,8 @@ func MoveReplicasGTID(masterKey *InstanceKey, belowKey *InstanceKey, pattern str
return movedReplicas, unmovedReplicas, err, errs
}
// Repoint connects a replica to a master using its exact same executing coordinates.
// The given masterKey can be null, in which case the existing master is used.
// Repoint connects a replica to a primary using its exact same executing coordinates.
// The given masterKey can be null, in which case the existing primary is used.
// Two use cases:
// - masterKey is nil: use case is corrupted relay logs on replica
// - masterKey is not nil: using Binlog servers (coordinates remain the same)
@ -733,9 +733,9 @@ func Repoint(instanceKey *InstanceKey, masterKey *InstanceKey, gtidHint Operatio
if masterKey == nil {
masterKey = &instance.MasterKey
}
// With repoint we *prefer* the master to be alive, but we don't strictly require it.
// The use case for the master being alive is with hostname-resolve or hostname-unresolve: asking the replica
// to reconnect to its same master while changing the MASTER_HOST in CHANGE MASTER TO due to DNS changes etc.
// With repoint we *prefer* the primary to be alive, but we don't strictly require it.
// The use case for the primary being alive is with hostname-resolve or hostname-unresolve: asking the replica
// to reconnect to its same primary while changing the MASTER_HOST in CHANGE MASTER TO due to DNS changes etc.
master, err := ReadTopologyInstance(masterKey)
masterIsAccessible := (err == nil)
if !masterIsAccessible {
@ -770,7 +770,7 @@ func Repoint(instanceKey *InstanceKey, masterKey *InstanceKey, gtidHint Operatio
goto Cleanup
}
// See above, we are relaxed about the master being accessible/inaccessible.
// See above, we are relaxed about the primary being accessible/inaccessible.
// If accessible, we wish to do hostname-unresolve. If inaccessible, we can skip the test and not fail the
// ChangeMasterTo operation. This is why we pass "!masterIsAccessible" below.
if instance.ExecBinlogCoordinates.IsEmpty() {
@ -793,7 +793,7 @@ Cleanup:
}
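The Repoint comments above distinguish a nil target (reuse the existing primary, e.g. after corrupted relay logs) from an explicit one (typically a binlog server, where coordinates stay the same). A trivial sketch of that dispatch, with hypothetical names:

```go
package main

import "fmt"

// repointTarget picks the primary a replica should be re-pointed at: the
// caller-supplied key when given, otherwise the replica's existing primary.
func repointTarget(requested, current string) string {
	if requested == "" {
		return current
	}
	return requested
}

func main() {
	fmt.Println(repointTarget("", "old-primary:3306"))                  // reuse existing primary
	fmt.Println(repointTarget("binlog-server:3306", "old-primary:3306")) // explicit binlog server target
}
```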
// RepointTo repoints list of replicas onto another master.
// RepointTo repoints list of replicas onto another primary.
// Binlog Server is the major use case
func RepointTo(replicas [](*Instance), belowKey *InstanceKey) ([](*Instance), error, []error) {
res := [](*Instance){}
@ -846,7 +846,7 @@ func RepointTo(replicas [](*Instance), belowKey *InstanceKey) ([](*Instance), er
return res, nil, errs
}
// RepointReplicasTo repoints replicas of a given instance (possibly filtered) onto another master.
// RepointReplicasTo repoints replicas of a given instance (possibly filtered) onto another primary.
// Binlog Server is the major use case
func RepointReplicasTo(instanceKey *InstanceKey, pattern string, belowKey *InstanceKey) ([](*Instance), error, []error) {
res := [](*Instance){}
@ -863,20 +863,20 @@ func RepointReplicasTo(instanceKey *InstanceKey, pattern string, belowKey *Insta
return res, nil, errs
}
if belowKey == nil {
// Default to existing master. All replicas are of the same master, hence just pick one.
// Default to existing primary. All replicas are of the same primary, hence just pick one.
belowKey = &replicas[0].MasterKey
}
log.Infof("Will repoint replicas of %+v to %+v", *instanceKey, *belowKey)
return RepointTo(replicas, belowKey)
}
// RepointReplicas repoints all replicas of a given instance onto its existing master.
// RepointReplicas repoints all replicas of a given instance onto its existing primary.
func RepointReplicas(instanceKey *InstanceKey, pattern string) ([](*Instance), error, []error) {
return RepointReplicasTo(instanceKey, pattern, nil)
}
// MakeCoMaster will attempt to make an instance co-master with its master, by making its master a replica of its own.
// This only works out if the master is not replicating; the master does not have a known master (it may have an unknown master).
// MakeCoMaster will attempt to make an instance co-primary with its primary, by making its primary a replica of its own.
// This only works out if the primary is not replicating; the primary does not have a known primary (it may have an unknown primary).
func MakeCoMaster(instanceKey *InstanceKey) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
if err != nil {
@ -905,16 +905,16 @@ func MakeCoMaster(instanceKey *InstanceKey) (*Instance, error) {
return instance, fmt.Errorf("instance %+v is not read-only; first make it read-only before making it co-master", instance.Key)
}
if master.IsCoMaster {
// We allow breaking of an existing co-master replication. Here's the breakdown:
// We allow breaking of an existing co-primary replication. Here's the breakdown:
// Ideally, this would not be allowed, and we would first require the user to RESET SLAVE on 'master'
// prior to making it participate as co-master with our 'instance'.
// prior to making it participate as co-primary with our 'instance'.
// However there's the problem that upon RESET SLAVE we lose the replication's user/password info.
// Thus, we come up with the following rule:
// If S replicates from M1, and M1<->M2 are co masters, we allow S to become co-master of M1 (S<->M1) if:
// If S replicates from M1, and M1<->M2 are co-primaries, we allow S to become co-primary of M1 (S<->M1) if:
// - M1 is writeable
// - M2 is read-only or is unreachable/invalid
// - S is read-only
// And so we will be replacing one read-only co-master with another.
// And so we will be replacing one read-only co-primary with another.
otherCoMaster, found, _ := ReadInstance(&master.MasterKey)
if found && otherCoMaster.IsLastCheckValid && !otherCoMaster.ReadOnly {
return instance, fmt.Errorf("master %+v is already co-master with %+v, and %+v is alive, and not read-only; cowardly refusing to demote it. Please set it as read-only beforehand", master.Key, otherCoMaster.Key, otherCoMaster.Key)
@ -942,10 +942,10 @@ func MakeCoMaster(instanceKey *InstanceKey) (*Instance, error) {
defer EndMaintenance(maintenanceToken)
}
// the coMaster used to be merely a replica. Just point master into *some* position
// the coMaster used to be merely a replica. Just point primary into *some* position
// within coMaster...
if master.IsReplica() {
// this is the case of a co-master. For masters, the StopReplication operation throws an error, and
// this is the case of a co-primary. For primaries, the StopReplication operation throws an error, and
// there's really no point in doing it.
master, err = StopReplication(&master.Key)
if err != nil {
@ -1021,7 +1021,7 @@ Cleanup:
return instance, err
}
// DetachReplicaMasterHost detaches a replica from its master by corrupting the Master_Host (in such way that is reversible)
// DetachReplicaMasterHost detaches a replica from its primary by corrupting the Master_Host (in such way that is reversible)
func DetachReplicaMasterHost(instanceKey *InstanceKey) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
if err != nil {
@ -1065,7 +1065,7 @@ Cleanup:
return instance, err
}
// ReattachReplicaMasterHost reattaches a replica back onto its master by undoing a DetachReplicaMasterHost operation
// ReattachReplicaMasterHost reattaches a replica back onto its primary by undoing a DetachReplicaMasterHost operation
func ReattachReplicaMasterHost(instanceKey *InstanceKey) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
if err != nil {
@ -1098,7 +1098,7 @@ func ReattachReplicaMasterHost(instanceKey *InstanceKey) (*Instance, error) {
if err != nil {
goto Cleanup
}
// Just in case this instance used to be a master:
// Just in case this instance used to be a primary:
ReplaceAliasClusterName(instanceKey.StringCode(), reattachedMasterKey.StringCode())
Cleanup:
@ -1322,7 +1322,7 @@ Cleanup:
return instance, err
}
// ErrantGTIDInjectEmpty will inject an empty transaction on the master of an instance's cluster in order to get rid
// ErrantGTIDInjectEmpty will inject an empty transaction on the primary of an instance's cluster in order to get rid
// of an errant transaction observed on the instance.
func ErrantGTIDInjectEmpty(instanceKey *InstanceKey) (instance *Instance, clusterMaster *Instance, countInjectedTransactions int64, err error) {
instance, err = ReadTopologyInstance(instanceKey)
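
For background on what "inject an empty transaction" means here: with MySQL GTIDs, covering an errant GTID on the primary is normally done with the statement sequence built below. This is a generic illustration, not this function's implementation; `emptyTransactionStatements` is a hypothetical helper.

package main

import "fmt"

// emptyTransactionStatements returns the statements that make gtid_executed
// on the primary cover the given errant GTID without changing any data.
func emptyTransactionStatements(errantGTID string) []string {
	return []string{
		fmt.Sprintf("SET GTID_NEXT = '%s'", errantGTID),
		"BEGIN",
		"COMMIT",
		"SET GTID_NEXT = 'AUTOMATIC'",
	}
}

func main() {
	for _, stmt := range emptyTransactionStatements("3e11fa47-71ca-11e1-9e33-c80aa9429562:23") {
		fmt.Println(stmt + ";")
	}
}
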
@ -1418,10 +1418,10 @@ func TakeMasterHook(successor *Instance, demoted *Instance) {
}
// TakeMaster will move an instance up the chain and cause its master to become its replica.
// It's almost a role change, just that other replicas of either 'instance' or its master are currently unaffected
// TakeMaster will move an instance up the chain and cause its primary to become its replica.
// It's almost a role change, just that other replicas of either 'instance' or its primary are currently unaffected
// (they continue replicate without change)
// Note that the master must itself be a replica; however the grandparent does not necessarily have to be reachable
// Note that the primary must itself be a replica; however the grandparent does not necessarily have to be reachable
// and can in fact be dead.
func TakeMaster(instanceKey *InstanceKey, allowTakingCoMaster bool) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
@ -1461,14 +1461,14 @@ func TakeMaster(instanceKey *InstanceKey, allowTakingCoMaster bool) (*Instance,
}
// instance and masterInstance are equal
// We skip name unresolve. It is OK if the master's master is dead, unreachable, does not resolve properly.
// We just copy+paste info from the master.
// We skip name unresolve. It is OK if the primary's primary is dead, unreachable, does not resolve properly.
// We just copy+paste info from the primary.
	// In particular, this is commonly called in DeadMaster recovery
instance, err = ChangeMasterTo(&instance.Key, &masterInstance.MasterKey, &masterInstance.ExecBinlogCoordinates, true, GTIDHintNeutral)
if err != nil {
goto Cleanup
}
// instance is now sibling of master
// instance is now sibling of primary
masterInstance, err = ChangeMasterTo(&masterInstance.Key, &instance.Key, &instance.SelfBinlogCoordinates, false, GTIDHintNeutral)
if err != nil {
goto Cleanup
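
The two ChangeMasterTo calls above implement a parent/child swap. A minimal model with hypothetical types (`server`, `takePrimary`), ignoring errors, GTID hints and maintenance windows:

package main

import "fmt"

type coords struct {
	file string
	pos  int64
}

type server struct {
	name       string
	primary    string // which server this one replicates from
	execCoords coords // how far it has applied from its primary
	selfCoords coords // its own binlog position
}

// takePrimary swaps child and parent: the child is pointed at the grandparent
// (copying the parent's executed coordinates), then the parent is pointed at
// the child's own coordinates. Siblings keep replicating from the parent.
func takePrimary(child, parent *server) {
	grandparent := parent.primary
	child.primary = grandparent // the grandparent may be dead; only coordinates are copied
	child.execCoords = parent.execCoords
	parent.primary = child.name
	parent.execCoords = child.selfCoords
}

func main() {
	p := &server{name: "db2", primary: "db1", execCoords: coords{"binlog.000007", 1200}}
	c := &server{name: "db3", primary: "db2", selfCoords: coords{"binlog.000003", 400}}
	takePrimary(c, p)
	fmt.Printf("%s -> %s, %s -> %s\n", c.name, c.primary, p.name, p.primary)
	// db3 -> db1, db2 -> db3
}
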
@ -1508,7 +1508,7 @@ func sortInstances(instances [](*Instance)) {
sortInstancesDataCenterHint(instances, "")
}
// getReplicasForSorting returns a list of replicas of a given master potentially for candidate choosing
// getReplicasForSorting returns a list of replicas of a given primary potentially for candidate choosing
func getReplicasForSorting(masterKey *InstanceKey, includeBinlogServerSubReplicas bool) (replicas [](*Instance), err error) {
if includeBinlogServerSubReplicas {
replicas, err = ReadReplicaInstancesIncludingBinlogServerSubReplicas(masterKey)
@ -1522,10 +1522,10 @@ func sortedReplicas(replicas [](*Instance), stopReplicationMethod StopReplicatio
return sortedReplicasDataCenterHint(replicas, stopReplicationMethod, "")
}
// sortedReplicas returns the list of replicas of some master, sorted by exec coordinates
// sortedReplicas returns the list of replicas of some primary, sorted by exec coordinates
// (most up-to-date replica first).
// This function assumes given `replicas` argument is indeed a list of instances all replicating
// from the same master (the result of `getReplicasForSorting()` is appropriate)
// from the same primary (the result of `getReplicasForSorting()` is appropriate)
func sortedReplicasDataCenterHint(replicas [](*Instance), stopReplicationMethod StopReplicationMethod, dataCenterHint string) [](*Instance) {
if len(replicas) <= 1 {
return replicas
@ -1541,7 +1541,7 @@ func sortedReplicasDataCenterHint(replicas [](*Instance), stopReplicationMethod
return replicas
}
// GetSortedReplicas reads list of replicas of a given master, and returns them sorted by exec coordinates
// GetSortedReplicas reads list of replicas of a given primary, and returns them sorted by exec coordinates
// (most up-to-date replica first).
func GetSortedReplicas(masterKey *InstanceKey, stopReplicationMethod StopReplicationMethod) (replicas [](*Instance), err error) {
if replicas, err = getReplicasForSorting(masterKey, false); err != nil {
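
"Sorted by exec coordinates (most up-to-date replica first)" can be pictured with plain sort.Slice over binlog coordinates. A self-contained sketch, assuming binlog file names have equal-width numeric suffixes so they compare lexically:

package main

import (
	"fmt"
	"sort"
)

type binlogCoords struct {
	LogFile string
	LogPos  int64
}

// less reports whether a is strictly behind b.
func (a binlogCoords) less(b binlogCoords) bool {
	if a.LogFile != b.LogFile {
		return a.LogFile < b.LogFile
	}
	return a.LogPos < b.LogPos
}

func main() {
	replicas := []binlogCoords{
		{"mysql-bin.000010", 120},
		{"mysql-bin.000011", 4},
		{"mysql-bin.000010", 9000},
	}
	// Most advanced first: a promotion candidate is picked from the front.
	sort.Slice(replicas, func(i, j int) bool { return replicas[j].less(replicas[i]) })
	fmt.Println(replicas) // [{mysql-bin.000011 4} {mysql-bin.000010 9000} {mysql-bin.000010 120}]
}
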
@ -1583,7 +1583,7 @@ func isGenerallyValidAsCandidateReplica(replica *Instance) bool {
}
// isValidAsCandidateMasterInBinlogServerTopology let's us know whether a given replica is generally
// valid to promote to be master.
// valid to promote to be primary.
func isValidAsCandidateMasterInBinlogServerTopology(replica *Instance) bool {
if !replica.IsLastCheckValid {
// something wrong with this replica right now. We shouldn't hope to be able to promote it
@ -1674,7 +1674,7 @@ func chooseCandidateReplica(replicas [](*Instance)) (candidateReplica *Instance,
}
}
if candidateReplica == nil {
// Unable to find a candidate that will master others.
	// Unable to find a candidate that can act as primary for the others.
// Instead, pick a (single) replica which is not banned.
for _, replica := range replicas {
replica := replica
@ -1710,7 +1710,7 @@ func chooseCandidateReplica(replicas [](*Instance)) (candidateReplica *Instance,
return candidateReplica, aheadReplicas, equalReplicas, laterReplicas, cannotReplicateReplicas, err
}
// GetCandidateReplica chooses the best replica to promote given a (possibly dead) master
// GetCandidateReplica chooses the best replica to promote given a (possibly dead) primary
func GetCandidateReplica(masterKey *InstanceKey, forRematchPurposes bool) (*Instance, [](*Instance), [](*Instance), [](*Instance), [](*Instance), error) {
var candidateReplica *Instance
aheadReplicas := [](*Instance){}
@ -1751,7 +1751,7 @@ func GetCandidateReplica(masterKey *InstanceKey, forRematchPurposes bool) (*Inst
return candidateReplica, aheadReplicas, equalReplicas, laterReplicas, cannotReplicateReplicas, nil
}
// GetCandidateReplicaOfBinlogServerTopology chooses the best replica to promote given a (possibly dead) master
// GetCandidateReplicaOfBinlogServerTopology chooses the best replica to promote given a (possibly dead) primary
func GetCandidateReplicaOfBinlogServerTopology(masterKey *InstanceKey) (candidateReplica *Instance, err error) {
replicas, err := getReplicasForSorting(masterKey, true)
if err != nil {
@ -1963,7 +1963,7 @@ func relocateBelowInternal(instance, other *Instance) (*Instance, error) {
return Repoint(&instance.Key, &other.Key, GTIDHintDeny)
}
// Relocate to its master, then repoint to the binlog server
// Relocate to its primary, then repoint to the binlog server
otherMaster, found, err := ReadInstance(&other.MasterKey)
if err != nil {
return instance, err
@ -1983,7 +1983,7 @@ func relocateBelowInternal(instance, other *Instance) (*Instance, error) {
}
if instance.IsBinlogServer() {
// Can only move within the binlog-server family tree
// And these have been covered just now: move up from a master binlog server, move below a binling binlog server.
	// And these have been covered just now: move up from a primary binlog server, move below a sibling binlog server.
// sure, the family can be more complex, but we keep these operations atomic
return nil, log.Errorf("Relocating binlog server %+v below %+v turns to be too complex; please do it manually", instance.Key, other.Key)
}
@ -2001,7 +2001,7 @@ func relocateBelowInternal(instance, other *Instance) (*Instance, error) {
}
// See if we need to MoveUp
if instanceMaster != nil && instanceMaster.MasterKey.Equals(&other.Key) {
// Moving to grandparent--handles co-mastering writable case
// Moving to grandparent--handles co-primary writable case
return MoveUp(&instance.Key)
}
if instanceMaster != nil && instanceMaster.IsBinlogServer() {


@ -566,7 +566,7 @@ func workaroundBug83713(instanceKey *InstanceKey) {
}
}
// ChangeMasterTo changes the given instance's master according to given input.
// ChangeMasterTo changes the given instance's primary according to given input.
// TODO(sougou): deprecate ReplicationCredentialsQuery, and all other credential discovery.
func ChangeMasterTo(instanceKey *InstanceKey, masterKey *InstanceKey, masterBinlogCoordinates *BinlogCoordinates, skipUnresolve bool, gtidHint OperationGTIDHint) (*Instance, error) {
user, password := config.Config.MySQLReplicaUser, config.Config.MySQLReplicaPassword
@ -617,7 +617,7 @@ func ChangeMasterTo(instanceKey *InstanceKey, masterKey *InstanceKey, masterBinl
// Is MariaDB; not using GTID, turn into GTID
mariadbGTIDHint := "slave_pos"
if !instance.ReplicationThreadsExist() {
// This instance is currently a master. As per https://mariadb.com/kb/en/change-master-to/#master_use_gtid
// This instance is currently a primary. As per https://mariadb.com/kb/en/change-master-to/#master_use_gtid
// we should be using current_pos.
// See also:
// - https://github.com/openark/orchestrator/issues/1146
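
The current_pos vs. slave_pos choice above is worth spelling out: a server with no replication threads has its own writes recorded only in gtid_current_pos, so using slave_pos would drop them. A tiny illustration with a hypothetical helper:

package main

import "fmt"

// mariadbGTIDHint sketches the master_use_gtid choice: a server that is
// currently a primary (no replication threads) must use current_pos, an
// existing replica keeps slave_pos.
func mariadbGTIDHint(replicationThreadsExist bool) string {
	if !replicationThreadsExist {
		// A former primary's own transactions are only in gtid_current_pos.
		return "current_pos"
	}
	return "slave_pos"
}

func main() {
	fmt.Printf("CHANGE MASTER TO master_use_gtid=%s\n", mariadbGTIDHint(false))
	fmt.Printf("CHANGE MASTER TO master_use_gtid=%s\n", mariadbGTIDHint(true))
}
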
@ -684,9 +684,9 @@ func ChangeMasterTo(instanceKey *InstanceKey, masterKey *InstanceKey, masterBinl
return instance, err
}
// SkipToNextBinaryLog changes master position to beginning of next binlog
// SkipToNextBinaryLog changes primary position to beginning of next binlog
// USE WITH CARE!
// Use case is binlog servers where the master was gone & replaced by another.
// Use case is binlog servers where the primary was gone & replaced by another.
func SkipToNextBinaryLog(instanceKey *InstanceKey) (*Instance, error) {
instance, err := ReadTopologyInstance(instanceKey)
if err != nil {


@ -40,8 +40,8 @@ var TopoServ *topo.Server
// ErrTabletAliasNil is a fixed error message.
var ErrTabletAliasNil = errors.New("tablet alias is nil")
// SwitchMaster makes the new tablet the master and proactively performs
// the necessary propagation to the old master. The propagation is best
// SwitchMaster makes the new tablet the primary and proactively performs
// the necessary propagation to the old primary. The propagation is best
// effort. If it fails, the tablet's shard sync will eventually converge.
// The proactive propagation allows a competing Orchestrator from discovering
// the successful action of a previous one, which reduces churn.
@ -87,7 +87,7 @@ func SwitchMaster(newMasterKey, oldMasterKey InstanceKey) error {
return nil
}
// ChangeTabletType designates the tablet that owns an instance as the master.
// ChangeTabletType designates the tablet that owns an instance as the primary.
func ChangeTabletType(instanceKey InstanceKey, tabletType topodatapb.TabletType) (*topodatapb.Tablet, error) {
if instanceKey.Hostname == "" {
return nil, errors.New("can't set tablet to master: instance is unspecified")


@ -176,7 +176,7 @@ func handleDiscoveryRequests() {
}
// DiscoverInstance will attempt to discover (poll) an instance (unless
// it is already up to date) and will also ensure that its master and
// it is already up to date) and will also ensure that its primary and
// replicas (if any) are also checked.
func DiscoverInstance(instanceKey inst.InstanceKey) {
if inst.InstanceIsForgotten(&instanceKey) {
@ -324,7 +324,7 @@ func onHealthTick() {
}
}
// Write a cluster's master (or all clusters masters) to kv stores.
// SubmitMastersToKvStores records a cluster's primary (or all clusters primaries) to kv stores.
// This should generally only happen once in a lifetime of a cluster. Otherwise KV
// stores are updated via failovers.
func SubmitMastersToKvStores(clusterName string, force bool) (kvPairs [](*kv.KVPair), submittedCount int, err error) {


@ -284,12 +284,12 @@ func TabletRefresh(instanceKey inst.InstanceKey) (*topodatapb.Tablet, error) {
return ti.Tablet, nil
}
// TabletDemoteMaster requests the master tablet to stop accepting transactions.
// TabletDemoteMaster requests the primary tablet to stop accepting transactions.
func TabletDemoteMaster(instanceKey inst.InstanceKey) error {
return tabletDemoteMaster(instanceKey, true)
}
// TabletUndoDemoteMaster requests the master tablet to undo the demote.
// TabletUndoDemoteMaster requests the primary tablet to undo the demote.
func TabletUndoDemoteMaster(instanceKey inst.InstanceKey) error {
return tabletDemoteMaster(instanceKey, false)
}


@ -423,7 +423,7 @@ func recoverDeadMasterInBinlogServerTopology(topologyRecovery *TopologyRecovery)
if err != nil {
return promotedReplica, log.Errore(err)
}
// Reconnect binlog servers to promoted replica (now master):
// Reconnect binlog servers to promoted replica (now primary):
promotedBinlogServer, err = inst.SkipToNextBinaryLog(&promotedBinlogServer.Key)
if err != nil {
return promotedReplica, log.Errore(err)
@ -434,9 +434,9 @@ func recoverDeadMasterInBinlogServerTopology(topologyRecovery *TopologyRecovery)
}
func() {
// Move binlog server replicas up to replicate from master.
// Move binlog server replicas up to replicate from primary.
// This can only be done once a BLS has skipped to the next binlog
// We postpone this operation. The master is already promoted and we're happy.
// We postpone this operation. The primary is already promoted and we're happy.
binlogServerReplicas, err := inst.ReadBinlogServerReplicaInstances(&promotedBinlogServer.Key)
if err != nil {
return
@ -452,8 +452,8 @@ func recoverDeadMasterInBinlogServerTopology(topologyRecovery *TopologyRecovery)
if err != nil {
return err
}
// Make sure the BLS has the "next binlog" -- the one the master flushed & purged to. Otherwise the BLS
// will request a binlog the master does not have
// Make sure the BLS has the "next binlog" -- the one the primary flushed & purged to. Otherwise the BLS
// will request a binlog the primary does not have
if binlogServerReplica.ExecBinlogCoordinates.SmallerThan(&promotedBinlogServer.ExecBinlogCoordinates) {
binlogServerReplica, err = inst.StartReplicationUntilMasterCoordinates(&binlogServerReplica.Key, &promotedBinlogServer.ExecBinlogCoordinates)
if err != nil {
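
The guard above ("make sure the BLS has the next binlog") is a catch-up-then-repoint pattern. A simplified, self-contained sketch; the coordinate type and `catchUpThenRepoint` are illustrative, and the real code delegates the waiting to StartReplicationUntilMasterCoordinates:

package main

import (
	"fmt"
	"time"
)

type coords struct {
	File string
	Pos  int64
}

func (c coords) smallerThan(o coords) bool {
	return c.File < o.File || (c.File == o.File && c.Pos < o.Pos)
}

// catchUpThenRepoint polls the replica's executed coordinates and only
// repoints it once it is no longer behind the target coordinates.
func catchUpThenRepoint(current func() coords, target coords, repoint func()) {
	for current().smallerThan(target) {
		time.Sleep(100 * time.Millisecond)
	}
	repoint()
}

func main() {
	pos := coords{"binlog.000042", 100}
	target := coords{"binlog.000042", 300}
	current := func() coords {
		pos.Pos += 100 // simulate the replica applying more events on each poll
		return pos
	}
	catchUpThenRepoint(current, target, func() { fmt.Println("repointed at", pos) })
}
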
@ -480,7 +480,7 @@ func GetMasterRecoveryType(analysisEntry *inst.ReplicationAnalysis) (masterRecov
return masterRecoveryType
}
// recoverDeadMaster recovers a dead master, complete logic inside
// recoverDeadMaster recovers a dead primary, complete logic inside
func recoverDeadMaster(topologyRecovery *TopologyRecovery, candidateInstanceKey *inst.InstanceKey, skipProcesses bool) (recoveryAttempted bool, promotedReplica *inst.Instance, lostReplicas [](*inst.Instance), err error) {
topologyRecovery.Type = MasterRecovery
analysisEntry := &topologyRecovery.AnalysisEntry
@ -617,7 +617,7 @@ func SuggestReplacementForPromotedReplica(topologyRecovery *TopologyRecovery, de
// Maybe we actually promoted such a replica. Does that mean we should keep it?
// Maybe we promoted a "neutral", and some "prefer" server is available.
// Maybe we promoted a "prefer_not"
// Maybe we promoted a server in a different DC than the master
// Maybe we promoted a server in a different DC than the primary
// There's many options. We may wish to replace the server we promoted with a better one.
AuditTopologyRecovery(topologyRecovery, "checking if should replace promoted replica with a better candidate")
if candidateInstanceKey == nil {
@ -651,11 +651,11 @@ func SuggestReplacementForPromotedReplica(topologyRecovery *TopologyRecovery, de
}
}
if candidateInstanceKey == nil {
// We cannot find a candidate in same DC and ENV as dead master
// We cannot find a candidate in same DC and ENV as dead primary
AuditTopologyRecovery(topologyRecovery, "+ checking if promoted replica is an OK candidate")
for _, candidateReplica := range candidateReplicas {
if promotedReplica.Key.Equals(&candidateReplica.Key) {
// Seems like we promoted a candidate replica (though not in same DC and ENV as dead master)
// Seems like we promoted a candidate replica (though not in same DC and ENV as dead primary)
if satisfied, reason := MasterFailoverGeographicConstraintSatisfied(&topologyRecovery.AnalysisEntry, candidateReplica); satisfied {
// Good enough. No further action required.
AuditTopologyRecovery(topologyRecovery, fmt.Sprintf("promoted replica %+v is a good candidate", promotedReplica.Key))
@ -709,7 +709,7 @@ func SuggestReplacementForPromotedReplica(topologyRecovery *TopologyRecovery, de
if candidateInstanceKey == nil {
// Still nothing? Then we didn't find a replica marked as "candidate". OK, further down the stream we have:
// find neutral instance in same dv&env as dead master
	// find neutral instance in same dc&env as dead primary
AuditTopologyRecovery(topologyRecovery, "+ searching for a neutral server to replace promoted server, in same DC and env as dead master")
for _, neutralReplica := range neutralReplicas {
if canTakeOverPromotedServerAsMaster(neutralReplica, promotedReplica) &&
@ -763,7 +763,7 @@ func SuggestReplacementForPromotedReplica(topologyRecovery *TopologyRecovery, de
return replacement, true, err
}
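
The search above walks a preference cascade. A condensed sketch that keeps only the ordering (same-DC declared candidate, then the already-promoted server if it is acceptably placed, then a same-DC neutral server); the types and field names are hypothetical:

package main

import "fmt"

type replica struct {
	Name       string
	DataCenter string
	Promotion  string // "candidate", "neutral", "prefer_not", ...
}

func suggestReplacement(promoted replica, deadPrimaryDC string, all []replica) replica {
	// 1. A declared candidate in the dead primary's data center wins.
	for _, r := range all {
		if r.Promotion == "candidate" && r.DataCenter == deadPrimaryDC {
			return r
		}
	}
	// 2. Otherwise, if what we already promoted is in the right place, keep it.
	if promoted.DataCenter == deadPrimaryDC {
		return promoted
	}
	// 3. Otherwise fall back to a neutral server in the right data center.
	for _, r := range all {
		if r.Promotion == "neutral" && r.DataCenter == deadPrimaryDC {
			return r
		}
	}
	return promoted // nothing better found; stay with the promoted replica
}

func main() {
	promoted := replica{"db7", "dc2", "neutral"}
	all := []replica{promoted, {"db3", "dc1", "neutral"}, {"db5", "dc1", "prefer_not"}}
	fmt.Println(suggestReplacement(promoted, "dc1", all).Name) // db3
}
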
// replacePromotedReplicaWithCandidate is called after a master (or co-master)
// replacePromotedReplicaWithCandidate is called after a primary (or co-primary)
// died and was replaced by some promotedReplica.
// But, is there an even better replica to promote?
// if candidateInstanceKey is given, then it is forced to be promoted over the promotedReplica
@ -920,7 +920,7 @@ func checkAndRecoverDeadMaster(analysisEntry inst.ReplicationAnalysis, candidate
AuditTopologyRecovery(topologyRecovery, fmt.Sprintf("- RecoverDeadMaster: applying read-only=0 on promoted master: success=%t", (err == nil)))
}
}
// Let's attempt, though we won't necessarily succeed, to set old master as read-only
// Let's attempt, though we won't necessarily succeed, to set old primary as read-only
go func() {
_, err := inst.SetReadOnly(&analysisEntry.AnalyzedInstanceKey, true)
AuditTopologyRecovery(topologyRecovery, fmt.Sprintf("- RecoverDeadMaster: applying read-only=1 on demoted master: success=%t", (err == nil)))
@ -961,7 +961,7 @@ func checkAndRecoverDeadMaster(analysisEntry inst.ReplicationAnalysis, candidate
attributes.SetGeneralAttribute(analysisEntry.ClusterDetails.ClusterDomain, promotedReplica.Key.StringCode())
if !skipProcesses {
// Execute post master-failover processes
// Execute post primary-failover processes
executeProcesses(config.Config.PostMasterFailoverProcesses, "PostMasterFailoverProcesses", topologyRecovery, false)
}
} else {
@ -1048,7 +1048,7 @@ func canTakeOverPromotedServerAsMaster(wantToTakeOver *inst.Instance, toBeTakenO
return true
}
// GetCandidateSiblingOfIntermediateMaster chooses the best sibling of a dead intermediate master
// GetCandidateSiblingOfIntermediateMaster chooses the best sibling of a dead intermediate primary
// to whom the IM's replicas can be moved.
func GetCandidateSiblingOfIntermediateMaster(topologyRecovery *TopologyRecovery, intermediateMasterInstance *inst.Instance) (*inst.Instance, error) {
@ -1107,7 +1107,7 @@ func GetCandidateSiblingOfIntermediateMaster(topologyRecovery *TopologyRecovery,
return nil, log.Errorf("topology_recovery: cannot find candidate sibling of %+v", intermediateMasterInstance.Key)
}
// RecoverDeadIntermediateMaster performs intermediate master recovery; complete logic inside
// RecoverDeadIntermediateMaster performs intermediate primary recovery; complete logic inside
func RecoverDeadIntermediateMaster(topologyRecovery *TopologyRecovery, skipProcesses bool) (successorInstance *inst.Instance, err error) {
topologyRecovery.Type = IntermediateMasterRecovery
analysisEntry := &topologyRecovery.AnalysisEntry
@ -1152,7 +1152,7 @@ func RecoverDeadIntermediateMaster(topologyRecovery *TopologyRecovery, skipProce
inst.AuditOperation("recover-dead-intermediate-master", failedInstanceKey, fmt.Sprintf("Relocated %d replicas under candidate sibling: %+v; %d errors: %+v", len(relocatedReplicas), candidateSibling.Key, len(errs), errs))
}
}
// Plan A: find a replacement intermediate master in same Data Center
// Plan A: find a replacement intermediate primary in same Data Center
if candidateSiblingOfIntermediateMaster != nil && candidateSiblingOfIntermediateMaster.DataCenter == intermediateMasterInstance.DataCenter {
relocateReplicasToCandidateSibling()
}
@ -1173,7 +1173,7 @@ func RecoverDeadIntermediateMaster(topologyRecovery *TopologyRecovery, skipProce
successorInstance = regroupPromotedReplica
}
}
// Plan C: try replacement intermediate master in other DC...
// Plan C: try replacement intermediate primary in other DC...
if candidateSiblingOfIntermediateMaster != nil && candidateSiblingOfIntermediateMaster.DataCenter != intermediateMasterInstance.DataCenter {
AuditTopologyRecovery(topologyRecovery, "- RecoverDeadIntermediateMaster: will next attempt relocating to another DC server")
relocateReplicasToCandidateSibling()
@ -1242,7 +1242,7 @@ func checkAndRecoverDeadIntermediateMaster(analysisEntry inst.ReplicationAnalysi
return true, topologyRecovery, err
}
// RecoverDeadCoMaster recovers a dead co-master, complete logic inside
// RecoverDeadCoMaster recovers a dead co-primary, complete logic inside
func RecoverDeadCoMaster(topologyRecovery *TopologyRecovery, skipProcesses bool) (promotedReplica *inst.Instance, lostReplicas [](*inst.Instance), err error) {
topologyRecovery.Type = CoMasterRecovery
analysisEntry := &topologyRecovery.AnalysisEntry
@ -1317,9 +1317,9 @@ func RecoverDeadCoMaster(topologyRecovery *TopologyRecovery, skipProcesses bool)
topologyRecovery.ParticipatingInstanceKeys.AddKey(promotedReplica.Key)
}
// OK, we may have someone promoted. Either this was the other co-master or another replica.
// Noting down that we DO NOT attempt to set a new co-master topology. We are good with remaining with a single master.
// I tried solving the "let's promote a replica and create a new co-master setup" but this turns so complex due to various factors.
// OK, we may have someone promoted. Either this was the other co-primary or another replica.
// Noting down that we DO NOT attempt to set a new co-primary topology. We are good with remaining with a single primary.
	// I tried solving the "let's promote a replica and create a new co-primary setup" but this turns out to be too complex due to various factors.
// I see this as risky and not worth the questionable benefit.
// Maybe future me is a smarter person and finds a simple solution. Unlikely. I'm getting dumber.
//
@ -1330,7 +1330,7 @@ func RecoverDeadCoMaster(topologyRecovery *TopologyRecovery, skipProcesses bool)
// !! This is an evil 3-node circle that must be broken.
// config.Config.ApplyMySQLPromotionAfterMasterFailover, if true, will cause it to break, because we would RESET SLAVE on S1
// but we want to make sure the circle is broken no matter what.
// So in the case we promoted not-the-other-co-master, we issue a detach-replica-master-host, which is a reversible operation
// So in the case we promoted not-the-other-co-primary, we issue a detach-replica-master-host, which is a reversible operation
if promotedReplica != nil && !promotedReplica.Key.Equals(otherCoMasterKey) {
_, err = inst.DetachReplicaMasterHost(&promotedReplica.Key)
topologyRecovery.AddError(log.Errore(err))
@ -1471,9 +1471,9 @@ func isInEmergencyOperationGracefulPeriod(instanceKey *inst.InstanceKey) bool {
// emergentlyRestartReplicationOnTopologyInstanceReplicas forces a stop slave + start slave on
// replicas of a given instance, in an attempt to cause them to re-evaluate their replication state.
// This can be useful in scenarios where the master has Too Many Connections, but long-time connected
// This can be useful in scenarios where the primary has Too Many Connections, but long-time connected
// replicas are not seeing this; when they stop+start replication, they need to re-authenticate and
// that's where we hope they realize the master is bad.
// that's where we hope they realize the primary is bad.
func emergentlyRestartReplicationOnTopologyInstanceReplicas(instanceKey *inst.InstanceKey, analysisCode inst.AnalysisCode) {
if existsInCacheError := emergencyRestartReplicaTopologyInstanceMap.Add(instanceKey.StringCode(), true, cache.DefaultExpiration); existsInCacheError != nil {
// While each replica's RestartReplication() is throttled on its own, it's also wasteful to
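
The throttling described above (don't hammer the same replica repeatedly) can be modelled with an expiring map guarding a stop/start round trip. A sketch with invented names, standing in for the cache the real code uses:

package main

import (
	"fmt"
	"sync"
	"time"
)

type throttle struct {
	mu   sync.Mutex
	seen map[string]time.Time
	ttl  time.Duration
}

// tryAcquire returns false if the instance was already handled within ttl.
func (t *throttle) tryAcquire(key string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if until, ok := t.seen[key]; ok && time.Now().Before(until) {
		return false
	}
	t.seen[key] = time.Now().Add(t.ttl)
	return true
}

func main() {
	t := &throttle{seen: map[string]time.Time{}, ttl: time.Minute}
	replicas := []string{"db4:3306", "db5:3306", "db4:3306"}
	for _, r := range replicas {
		if !t.tryAcquire(r) {
			fmt.Println("skipping", r, "(recently restarted)")
			continue
		}
		// In the real flow this is a stop-replication/start-replication round
		// trip that forces the replica to re-authenticate against the primary.
		fmt.Println("restarting replication on", r)
	}
}
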
@ -1520,7 +1520,7 @@ func getCheckAndRecoverFunction(analysisCode inst.AnalysisCode, analyzedInstance
isActionableRecovery bool,
) {
switch analysisCode {
// master
// primary
case inst.DeadMaster, inst.DeadMasterAndSomeReplicas:
if isInEmergencyOperationGracefulPeriod(analyzedInstanceKey) {
return checkAndRecoverGenericProblem, false
@ -1543,7 +1543,7 @@ func getCheckAndRecoverFunction(analysisCode inst.AnalysisCode, analyzedInstance
case inst.NotConnectedToMaster, inst.ConnectedToWrongMaster, inst.ReplicationStopped, inst.ReplicaIsWritable,
inst.ReplicaSemiSyncMustBeSet, inst.ReplicaSemiSyncMustNotBeSet:
return fixReplica, false
// intermediate master
// intermediate primary
case inst.DeadIntermediateMaster:
return checkAndRecoverDeadIntermediateMaster, true
case inst.DeadIntermediateMasterAndSomeReplicas:
@ -1554,12 +1554,12 @@ func getCheckAndRecoverFunction(analysisCode inst.AnalysisCode, analyzedInstance
return checkAndRecoverDeadIntermediateMaster, true
case inst.DeadIntermediateMasterAndReplicas:
return checkAndRecoverGenericProblem, false
// co-master
// co-primary
case inst.DeadCoMaster:
return checkAndRecoverDeadCoMaster, true
case inst.DeadCoMasterAndSomeReplicas:
return checkAndRecoverDeadCoMaster, true
// master, non actionable
// primary, non actionable
case inst.DeadMasterAndReplicas:
return checkAndRecoverGenericProblem, false
case inst.UnreachableMaster:
@ -1769,7 +1769,7 @@ func ForceExecuteRecovery(analysisEntry inst.ReplicationAnalysis, candidateInsta
return executeCheckAndRecoverFunction(analysisEntry, candidateInstanceKey, true, skipProcesses)
}
// ForceMasterFailover *trusts* master of given cluster is dead and initiates a failover
// ForceMasterFailover *trusts* primary of given cluster is dead and initiates a failover
func ForceMasterFailover(clusterName string) (topologyRecovery *TopologyRecovery, err error) {
clusterMasters, err := inst.ReadClusterMaster(clusterName)
if err != nil {
@ -1800,7 +1800,7 @@ func ForceMasterFailover(clusterName string) (topologyRecovery *TopologyRecovery
return topologyRecovery, nil
}
// ForceMasterTakeover *trusts* master of given cluster is dead and fails over to designated instance,
// ForceMasterTakeover *trusts* primary of given cluster is dead and fails over to designated instance,
// which has to be its direct child.
func ForceMasterTakeover(clusterName string, destination *inst.Instance) (topologyRecovery *TopologyRecovery, err error) {
clusterMasters, err := inst.ReadClusterWriteableMaster(clusterName)
@ -1861,7 +1861,7 @@ func getGracefulMasterTakeoverDesignatedInstance(clusterMasterKey *inst.Instance
return designatedInstance, nil
}
// Verify designated instance is a direct replica of master
// Verify designated instance is a direct replica of primary
for _, directReplica := range clusterMasterDirectReplicas {
if directReplica.Key.Equals(designatedKey) {
designatedInstance = directReplica
@ -1874,12 +1874,12 @@ func getGracefulMasterTakeoverDesignatedInstance(clusterMasterKey *inst.Instance
return designatedInstance, nil
}
// GracefulMasterTakeover will demote master of existing topology and promote its
// GracefulMasterTakeover will demote primary of existing topology and promote its
// direct replica instead.
// It expects that replica to have no siblings.
// This function is graceful in that it will first lock down the master, then wait
// This function is graceful in that it will first lock down the primary, then wait
// for the designated replica to catch up with last position.
// It will point old master at the newly promoted master at the correct coordinates.
// It will point old primary at the newly promoted primary at the correct coordinates.
func GracefulMasterTakeover(clusterName string, designatedKey *inst.InstanceKey, auto bool) (topologyRecovery *TopologyRecovery, promotedMasterCoordinates *inst.BinlogCoordinates, err error) {
clusterMasters, err := inst.ReadClusterMaster(clusterName)
if err != nil {
@ -1927,7 +1927,7 @@ func GracefulMasterTakeover(clusterName string, designatedKey *inst.InstanceKey,
log.Infof("GracefulMasterTakeover: Will let %+v take over its siblings", designatedInstance.Key)
relocatedReplicas, _, err, _ := inst.RelocateReplicas(&clusterMaster.Key, &designatedInstance.Key, "")
if len(relocatedReplicas) != len(clusterMasterDirectReplicas)-1 {
// We are unable to make designated instance master of all its siblings
// We are unable to make designated instance primary of all its siblings
relocatedReplicasKeyMap := inst.NewInstanceKeyMap()
relocatedReplicasKeyMap.AddInstances(relocatedReplicas)
// Let's see which replicas have not been relocated
@ -2004,7 +2004,7 @@ func GracefulMasterTakeover(clusterName string, designatedKey *inst.InstanceKey,
return topologyRecovery, promotedMasterCoordinates, err
}
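
Taken together, the comments above describe a fixed order of operations. A toy model with hypothetical types, reducing binlog positions to plain integers:

package main

import "fmt"

type srv struct {
	name     string
	readOnly bool
	primary  string
	pos      int64 // applied/executed position, simplified to an integer
}

func main() {
	oldPrimary := &srv{name: "db1", pos: 500}
	designated := &srv{name: "db2", primary: "db1", pos: 420}

	oldPrimary.readOnly = true // 1. lock down the old primary; position 500 is now final
	for designated.pos < oldPrimary.pos {
		designated.pos++ // 2. wait for the designated replica to catch up
	}
	designated.readOnly = false // 3. promote: the designated replica becomes read-write
	designated.primary = ""
	oldPrimary.primary = designated.name // 4. repoint the old primary at the new primary's coordinates

	fmt.Printf("%s now replicates from %s at position %d\n", oldPrimary.name, oldPrimary.primary, designated.pos)
}
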
// electNewMaster elects a new master while none were present before.
// electNewMaster elects a new primary while none were present before.
// TODO(sougou): this should be merged with recoverDeadMaster
func electNewMaster(analysisEntry inst.ReplicationAnalysis, candidateInstanceKey *inst.InstanceKey, forceInstanceRecovery bool, skipProcesses bool) (recoveryAttempted bool, topologyRecovery *TopologyRecovery, err error) {
topologyRecovery, err = AttemptRecoveryRegistration(&analysisEntry, false, true)
@ -2103,7 +2103,7 @@ func fixClusterAndMaster(analysisEntry inst.ReplicationAnalysis, candidateInstan
}
log.Infof("Analysis: %v, will fix incorrect mastership %+v", analysisEntry.Analysis, analysisEntry.AnalyzedInstanceKey)
// Reset replication on current master. This will prevent the comaster code-path.
// Reset replication on current primary. This will prevent the co-primary code-path.
// TODO(sougou): this should probably be done while holding a lock.
_, err = inst.ResetReplicationOperation(&analysisEntry.AnalyzedInstanceKey)
if err != nil {
@ -2124,7 +2124,7 @@ func fixClusterAndMaster(analysisEntry inst.ReplicationAnalysis, candidateInstan
return recoveryAttempted, topologyRecovery, err
}
// fixMaster sets the master as read-write.
// fixMaster sets the primary as read-write.
func fixMaster(analysisEntry inst.ReplicationAnalysis, candidateInstanceKey *inst.InstanceKey, forceInstanceRecovery bool, skipProcesses bool) (recoveryAttempted bool, topologyRecovery *TopologyRecovery, err error) {
topologyRecovery, err = AttemptRecoveryRegistration(&analysisEntry, false, true)
if topologyRecovery == nil {
@ -2156,7 +2156,7 @@ func fixMaster(analysisEntry inst.ReplicationAnalysis, candidateInstanceKey *ins
return true, topologyRecovery, nil
}
// fixReplica sets the replica as read-only and points it at the current master.
// fixReplica sets the replica as read-only and points it at the current primary.
func fixReplica(analysisEntry inst.ReplicationAnalysis, candidateInstanceKey *inst.InstanceKey, forceInstanceRecovery bool, skipProcesses bool) (recoveryAttempted bool, topologyRecovery *TopologyRecovery, err error) {
topologyRecovery, err = AttemptRecoveryRegistration(&analysisEntry, false, true)
if topologyRecovery == nil {


@ -3752,11 +3752,11 @@ type DemotePrimaryResponse struct {
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Position is deprecated, and is a string representation of a demoted masters executed position.
// Position is deprecated, and is a string representation of a demoted primary's executed position.
//
// Deprecated: Do not use.
DeprecatedPosition string `protobuf:"bytes,1,opt,name=deprecated_position,json=deprecatedPosition,proto3" json:"deprecated_position,omitempty"`
// PrimaryStatus represents the response from calling `SHOW MASTER STATUS` on a master that has been demoted.
// PrimaryStatus represents the response from calling `SHOW MASTER STATUS` on a primary that has been demoted.
PrimaryStatus *replicationdata.PrimaryStatus `protobuf:"bytes,2,opt,name=primary_status,json=primaryStatus,proto3" json:"primary_status,omitempty"`
}


@ -47,13 +47,13 @@ type TabletManagerClient interface {
ExecuteFetchAsApp(ctx context.Context, in *tabletmanagerdata.ExecuteFetchAsAppRequest, opts ...grpc.CallOption) (*tabletmanagerdata.ExecuteFetchAsAppResponse, error)
// ReplicationStatus returns the current replication status.
ReplicationStatus(ctx context.Context, in *tabletmanagerdata.ReplicationStatusRequest, opts ...grpc.CallOption) (*tabletmanagerdata.ReplicationStatusResponse, error)
// MasterStatus returns the current master status.
// MasterStatus returns the current primary status.
MasterStatus(ctx context.Context, in *tabletmanagerdata.PrimaryStatusRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PrimaryStatusResponse, error)
// PrimaryStatus returns the current master status.
// PrimaryStatus returns the current primary status.
PrimaryStatus(ctx context.Context, in *tabletmanagerdata.PrimaryStatusRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PrimaryStatusResponse, error)
// MasterPosition returns the current master position
// MasterPosition returns the current primary position
MasterPosition(ctx context.Context, in *tabletmanagerdata.PrimaryPositionRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PrimaryPositionResponse, error)
// PrimaryPosition returns the current master position
// PrimaryPosition returns the current primary position
PrimaryPosition(ctx context.Context, in *tabletmanagerdata.PrimaryPositionRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PrimaryPositionResponse, error)
// WaitForPosition waits for the position to be reached
WaitForPosition(ctx context.Context, in *tabletmanagerdata.WaitForPositionRequest, opts ...grpc.CallOption) (*tabletmanagerdata.WaitForPositionResponse, error)
@ -76,12 +76,12 @@ type TabletManagerClient interface {
ResetReplication(ctx context.Context, in *tabletmanagerdata.ResetReplicationRequest, opts ...grpc.CallOption) (*tabletmanagerdata.ResetReplicationResponse, error)
// Deprecated, use InitPrimary instead
InitMaster(ctx context.Context, in *tabletmanagerdata.InitPrimaryRequest, opts ...grpc.CallOption) (*tabletmanagerdata.InitPrimaryResponse, error)
// InitPrimary initializes the tablet as a master
// InitPrimary initializes the tablet as a primary
InitPrimary(ctx context.Context, in *tabletmanagerdata.InitPrimaryRequest, opts ...grpc.CallOption) (*tabletmanagerdata.InitPrimaryResponse, error)
// PopulateReparentJournal tells the tablet to add an entry to its
// reparent journal
PopulateReparentJournal(ctx context.Context, in *tabletmanagerdata.PopulateReparentJournalRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PopulateReparentJournalResponse, error)
// InitReplica tells the tablet to reparent to the master unconditionally
// InitReplica tells the tablet to reparent to the primary unconditionally
InitReplica(ctx context.Context, in *tabletmanagerdata.InitReplicaRequest, opts ...grpc.CallOption) (*tabletmanagerdata.InitReplicaResponse, error)
// Deprecated, see DemotePrimary instead
DemoteMaster(ctx context.Context, in *tabletmanagerdata.DemotePrimaryRequest, opts ...grpc.CallOption) (*tabletmanagerdata.DemotePrimaryResponse, error)
@ -91,18 +91,18 @@ type TabletManagerClient interface {
UndoDemoteMaster(ctx context.Context, in *tabletmanagerdata.UndoDemotePrimaryRequest, opts ...grpc.CallOption) (*tabletmanagerdata.UndoDemotePrimaryResponse, error)
// UndoDemotePrimary reverts all changes made by DemotePrimary
UndoDemotePrimary(ctx context.Context, in *tabletmanagerdata.UndoDemotePrimaryRequest, opts ...grpc.CallOption) (*tabletmanagerdata.UndoDemotePrimaryResponse, error)
// ReplicaWasPromoted tells the remote tablet it is now the master
// ReplicaWasPromoted tells the remote tablet it is now the primary
ReplicaWasPromoted(ctx context.Context, in *tabletmanagerdata.ReplicaWasPromotedRequest, opts ...grpc.CallOption) (*tabletmanagerdata.ReplicaWasPromotedResponse, error)
// SetMaster tells the replica to reparent
SetMaster(ctx context.Context, in *tabletmanagerdata.SetReplicationSourceRequest, opts ...grpc.CallOption) (*tabletmanagerdata.SetReplicationSourceResponse, error)
// SetReplicationSource tells the replica to reparent
SetReplicationSource(ctx context.Context, in *tabletmanagerdata.SetReplicationSourceRequest, opts ...grpc.CallOption) (*tabletmanagerdata.SetReplicationSourceResponse, error)
// ReplicaWasRestarted tells the remote tablet its master has changed
// ReplicaWasRestarted tells the remote tablet its primary has changed
ReplicaWasRestarted(ctx context.Context, in *tabletmanagerdata.ReplicaWasRestartedRequest, opts ...grpc.CallOption) (*tabletmanagerdata.ReplicaWasRestartedResponse, error)
// StopReplicationAndGetStatus stops MySQL replication, and returns the
// replication status
StopReplicationAndGetStatus(ctx context.Context, in *tabletmanagerdata.StopReplicationAndGetStatusRequest, opts ...grpc.CallOption) (*tabletmanagerdata.StopReplicationAndGetStatusResponse, error)
// PromoteReplica makes the replica the new master
// PromoteReplica makes the replica the new primary
PromoteReplica(ctx context.Context, in *tabletmanagerdata.PromoteReplicaRequest, opts ...grpc.CallOption) (*tabletmanagerdata.PromoteReplicaResponse, error)
Backup(ctx context.Context, in *tabletmanagerdata.BackupRequest, opts ...grpc.CallOption) (TabletManager_BackupClient, error)
// RestoreFromBackup deletes all local data and restores it from the latest backup.
@ -656,13 +656,13 @@ type TabletManagerServer interface {
ExecuteFetchAsApp(context.Context, *tabletmanagerdata.ExecuteFetchAsAppRequest) (*tabletmanagerdata.ExecuteFetchAsAppResponse, error)
// ReplicationStatus returns the current replication status.
ReplicationStatus(context.Context, *tabletmanagerdata.ReplicationStatusRequest) (*tabletmanagerdata.ReplicationStatusResponse, error)
// MasterStatus returns the current master status.
// MasterStatus returns the current primary status.
MasterStatus(context.Context, *tabletmanagerdata.PrimaryStatusRequest) (*tabletmanagerdata.PrimaryStatusResponse, error)
// PrimaryStatus returns the current master status.
// PrimaryStatus returns the current primary status.
PrimaryStatus(context.Context, *tabletmanagerdata.PrimaryStatusRequest) (*tabletmanagerdata.PrimaryStatusResponse, error)
// MasterPosition returns the current master position
// MasterPosition returns the current primary position
MasterPosition(context.Context, *tabletmanagerdata.PrimaryPositionRequest) (*tabletmanagerdata.PrimaryPositionResponse, error)
// PrimaryPosition returns the current master position
// PrimaryPosition returns the current primary position
PrimaryPosition(context.Context, *tabletmanagerdata.PrimaryPositionRequest) (*tabletmanagerdata.PrimaryPositionResponse, error)
// WaitForPosition waits for the position to be reached
WaitForPosition(context.Context, *tabletmanagerdata.WaitForPositionRequest) (*tabletmanagerdata.WaitForPositionResponse, error)
@ -685,12 +685,12 @@ type TabletManagerServer interface {
ResetReplication(context.Context, *tabletmanagerdata.ResetReplicationRequest) (*tabletmanagerdata.ResetReplicationResponse, error)
// Deprecated, use InitPrimary instead
InitMaster(context.Context, *tabletmanagerdata.InitPrimaryRequest) (*tabletmanagerdata.InitPrimaryResponse, error)
// InitPrimary initializes the tablet as a master
// InitPrimary initializes the tablet as a primary
InitPrimary(context.Context, *tabletmanagerdata.InitPrimaryRequest) (*tabletmanagerdata.InitPrimaryResponse, error)
// PopulateReparentJournal tells the tablet to add an entry to its
// reparent journal
PopulateReparentJournal(context.Context, *tabletmanagerdata.PopulateReparentJournalRequest) (*tabletmanagerdata.PopulateReparentJournalResponse, error)
// InitReplica tells the tablet to reparent to the master unconditionally
// InitReplica tells the tablet to reparent to the primary unconditionally
InitReplica(context.Context, *tabletmanagerdata.InitReplicaRequest) (*tabletmanagerdata.InitReplicaResponse, error)
// Deprecated, see DemotePrimary instead
DemoteMaster(context.Context, *tabletmanagerdata.DemotePrimaryRequest) (*tabletmanagerdata.DemotePrimaryResponse, error)
@ -700,18 +700,18 @@ type TabletManagerServer interface {
UndoDemoteMaster(context.Context, *tabletmanagerdata.UndoDemotePrimaryRequest) (*tabletmanagerdata.UndoDemotePrimaryResponse, error)
// UndoDemotePrimary reverts all changes made by DemotePrimary
UndoDemotePrimary(context.Context, *tabletmanagerdata.UndoDemotePrimaryRequest) (*tabletmanagerdata.UndoDemotePrimaryResponse, error)
// ReplicaWasPromoted tells the remote tablet it is now the master
// ReplicaWasPromoted tells the remote tablet it is now the primary
ReplicaWasPromoted(context.Context, *tabletmanagerdata.ReplicaWasPromotedRequest) (*tabletmanagerdata.ReplicaWasPromotedResponse, error)
// SetMaster tells the replica to reparent
SetMaster(context.Context, *tabletmanagerdata.SetReplicationSourceRequest) (*tabletmanagerdata.SetReplicationSourceResponse, error)
// SetReplicationSource tells the replica to reparent
SetReplicationSource(context.Context, *tabletmanagerdata.SetReplicationSourceRequest) (*tabletmanagerdata.SetReplicationSourceResponse, error)
// ReplicaWasRestarted tells the remote tablet its master has changed
// ReplicaWasRestarted tells the remote tablet its primary has changed
ReplicaWasRestarted(context.Context, *tabletmanagerdata.ReplicaWasRestartedRequest) (*tabletmanagerdata.ReplicaWasRestartedResponse, error)
// StopReplicationAndGetStatus stops MySQL replication, and returns the
// replication status
StopReplicationAndGetStatus(context.Context, *tabletmanagerdata.StopReplicationAndGetStatusRequest) (*tabletmanagerdata.StopReplicationAndGetStatusResponse, error)
// PromoteReplica makes the replica the new master
// PromoteReplica makes the replica the new primary
PromoteReplica(context.Context, *tabletmanagerdata.PromoteReplicaRequest) (*tabletmanagerdata.PromoteReplicaResponse, error)
Backup(*tabletmanagerdata.BackupRequest, TabletManager_BackupServer) error
// RestoreFromBackup deletes all local data and restores it from the latest backup.
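
For orientation, here is a minimal caller written against a trimmed-down mirror of one method above; real code would use the generated gRPC client and the tabletmanagerdata request/response types, which are only mimicked here:

package main

import (
	"context"
	"fmt"
)

type primaryPositionRequest struct{}
type primaryPositionResponse struct{ Position string }

// positionClient is a stand-in for the PrimaryPosition method shown above.
type positionClient interface {
	PrimaryPosition(ctx context.Context, in *primaryPositionRequest) (*primaryPositionResponse, error)
}

type fakeClient struct{}

func (fakeClient) PrimaryPosition(ctx context.Context, in *primaryPositionRequest) (*primaryPositionResponse, error) {
	return &primaryPositionResponse{Position: "MySQL56/3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5"}, nil
}

func main() {
	var c positionClient = fakeClient{}
	resp, err := c.PrimaryPosition(context.Background(), &primaryPositionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current primary position:", resp.Position)
}
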


@ -1848,7 +1848,7 @@ type DeleteTabletsRequest struct {
// TabletAliases is the list of tablets to delete.
TabletAliases []*topodata.TabletAlias `protobuf:"bytes,1,rep,name=tablet_aliases,json=tabletAliases,proto3" json:"tablet_aliases,omitempty"`
// AllowPrimary allows for the master/primary tablet of a shard to be deleted.
// AllowPrimary allows for the primary tablet of a shard to be deleted.
// Use with caution.
AllowPrimary bool `protobuf:"varint,2,opt,name=allow_primary,json=allowPrimary,proto3" json:"allow_primary,omitempty"`
}


@ -139,7 +139,7 @@ type Keyspace struct {
ShardingColumnType string `protobuf:"bytes,4,opt,name=sharding_column_type,json=shardingColumnType,proto3" json:"sharding_column_type,omitempty"`
// redirects all traffic to another keyspace. If set, shards is ignored.
ServedFrom string `protobuf:"bytes,5,opt,name=served_from,json=servedFrom,proto3" json:"served_from,omitempty"`
// number of replica tablets to instantiate. This includes the master tablet.
// number of replica tablets to instantiate. This includes the primary tablet.
ReplicaCount int32 `protobuf:"varint,6,opt,name=replica_count,json=replicaCount,proto3" json:"replica_count,omitempty"`
// number of rdonly tablets to instantiate.
RdonlyCount int32 `protobuf:"varint,7,opt,name=rdonly_count,json=rdonlyCount,proto3" json:"rdonly_count,omitempty"`


@ -287,7 +287,7 @@ func (client *fakeTabletManagerClient) ExecuteFetchAsDba(ctx context.Context, ta
// newFakeTopo returns a topo with:
// - a keyspace named 'test_keyspace'.
// - 3 shards named '1', '2', '3'.
// - A master tablet for each shard.
// - A primary tablet for each shard.
func newFakeTopo(t *testing.T) *topo.Server {
ts := memorytopo.NewServer("test_cell")
ctx := context.Background()


@ -83,7 +83,7 @@ func (exec *TabletExecutor) SkipPreflight() {
exec.skipPreflight = true
}
// Open opens a connection to the master for every shard.
// Open opens a connection to the primary for every shard.
func (exec *TabletExecutor) Open(ctx context.Context, keyspace string) error {
if !exec.isClosed {
return nil
@ -412,7 +412,7 @@ func (exec *TabletExecutor) executeOneTablet(
return
}
// Get a replication position that's guaranteed to be after the schema change
// was applied on the master.
// was applied on the primary.
pos, err := exec.wr.TabletManagerClient().MasterPosition(ctx, tablet)
if err != nil {
errChan <- ShardWithError{


@ -68,7 +68,7 @@ func TestTabletExecutorOpenWithEmptyMasterAlias(t *testing.T) {
Type: topodatapb.TabletType_REPLICA,
}
// This will create the Keyspace, Shard and Tablet record.
// Since this is a replica tablet, the Shard will have no master.
// Since this is a replica tablet, the Shard will have no primary.
if err := wr.InitTablet(ctx, tablet, false /*allowMasterOverride*/, true /*createShardAndKeyspace*/, false /*allowUpdate*/); err != nil {
t.Fatalf("InitTablet failed: %v", err)
}


@ -45,16 +45,16 @@ import (
// throttler adapts its throttling rate to the replication lag.
//
// The throttler is necessary because replicas apply transactions at a slower
// rate than masters and fall behind at high write throughput.
// rate than primaries and fall behind at high write throughput.
// (Mostly they fall behind because MySQL replication is single threaded but
// the write throughput on the master does not have to.)
// the write throughput on the primary does not have to be.)
//
// This demo simulates a client (writer), a master and a replica.
// The client writes to the master which in turn replicas everything to the
// This demo simulates a client (writer), a primary and a replica.
// The client writes to the primary which in turn replicates everything to the
// replica.
// The replica measures its replication lag via the timestamp which is part of
// each message.
// While the master has no rate limit, the replica is limited to
// While the primary has no rate limit, the replica is limited to
// --rate (see below) transactions/second. The client runs the resharding
// throttler which tries to throttle the client based on the observed
// replication lag.
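
A back-of-the-envelope model of that setup: the client writes at one rate, the replica applies at a lower rate, and the growing backlog divided by the replica's rate approximates the replication lag the throttler reacts to. Numbers are purely illustrative:

package main

import "fmt"

func main() {
	const clientRate = 300.0  // transactions/second written to the primary
	const replicaRate = 100.0 // transactions/second the replica can apply
	backlog := 0.0
	for second := 1; second <= 5; second++ {
		backlog += clientRate - replicaRate
		lag := backlog / replicaRate // seconds of replication lag
		fmt.Printf("t=%ds backlog=%.0f tx lag=%.1fs\n", second, backlog, lag)
	}
	// A lag-based throttler would cut the client rate toward replicaRate once
	// the lag crosses its target threshold.
}
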
@ -67,7 +67,7 @@ var (
replicaDegrationDuration = flag.Duration("replica_degration_duration", 10*time.Second, "duration a simulated degration should take")
)
// master simulates an *unthrottled* MySQL master which replicates every
// primary simulates an *unthrottled* MySQL primary which replicates every
// received "execute" call to a known "replica".
type master struct {
replica *replica
@ -79,13 +79,13 @@ func (m *master) execute(msg time.Time) {
}
// replica simulates a *throttled* MySQL replica.
// If it cannot keep up with applying the master writes, it will report a
// If it cannot keep up with applying the primary writes, it will report a
// replication lag > 0 seconds.
type replica struct {
fakeTablet *testlib.FakeTablet
qs *fakes.StreamHealthQueryService
// replicationStream is the incoming stream of messages from the master.
// replicationStream is the incoming stream of messages from the primary.
replicationStream chan time.Time
// throttler is used to enforce the maximum rate at which replica applies


@ -597,7 +597,7 @@ func (m *MaxReplicationLagModule) decreaseAndGuessRate(r *result, now time.Time,
}
// Find out the average rate (per second) at which we inserted data
// at the master during the observed timespan.
// at the primary during the observed timespan.
from := lagRecordBefore.time
to := lagRecordNow.time
avgMasterRate := m.actualRatesHistory.average(from, to)
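
A small sketch of "average rate at which we inserted data at the primary during the observed timespan": average the recorded per-interval rates that fall between the two lag records. The `ratesHistory` type here is hypothetical, not the module's own:

package main

import (
	"fmt"
	"time"
)

type ratesHistory map[time.Time]float64

// average returns the mean of the rates recorded between from and to, inclusive.
func (h ratesHistory) average(from, to time.Time) float64 {
	var sum float64
	var n int
	for ts, rate := range h {
		if !ts.Before(from) && !ts.After(to) {
			sum += rate
			n++
		}
	}
	if n == 0 {
		return 0
	}
	return sum / float64(n)
}

func main() {
	base := time.Date(2021, 8, 9, 12, 0, 0, 0, time.UTC)
	h := ratesHistory{
		base:                       200, // rows/s observed at the primary
		base.Add(1 * time.Second):  220,
		base.Add(2 * time.Second):  180,
		base.Add(10 * time.Second): 900, // outside the observed window, ignored
	}
	fmt.Println(h.average(base, base.Add(2*time.Second))) // 200
}
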


@ -156,7 +156,7 @@ type Conn interface {
//
// Master election methods. This is meant to have a small
// number of processes elect a master within a group. The
// number of processes elect a primary within a group. The
// backend storage for this can either be the global topo
// server, or a resilient quorum of individual cells, to
// reduce the load / dependency on the global topo server.
@ -198,7 +198,7 @@ type DirEntry struct {
// Ephemeral is set if the directory / file only contains
// data that was not set by the file API, like lock files
// or master-election related files.
// or primary-election related files.
// Only filled in if full is true.
Ephemeral bool
}
@ -284,7 +284,7 @@ type WatchData struct {
// case topo.ErrInterrupted:
// return
// default:
// log.Errorf("Got error while waiting for master, will retry in 5s: %v", err)
// log.Errorf("Got error while waiting for primary, will retry in 5s: %v", err)
// time.Sleep(5 * time.Second)
// }
// }
@ -303,16 +303,16 @@ type WatchData struct {
// })
type MasterParticipation interface {
// WaitForMastership makes the current process a candidate
// for election, and waits until this process is the master.
// After we become the master, we may lose mastership. In that case,
// for election, and waits until this process is the primary.
// After we become the primary, we may lose primaryship. In that case,
// the returned context will be canceled. If Stop was called,
// WaitForMastership will return nil, ErrInterrupted.
WaitForMastership() (context.Context, error)
// Stop is called when we don't want to participate in the
// master election any more. Typically, that is when the
// primary election any more. Typically, that is when the
// hosting process is terminating. We will relinquish
// mastership at that point, if we had it. Stop should
// primaryship at that point, if we had it. Stop should
// not return until everything has been done.
// The MasterParticipation object should be discarded
// after Stop has been called. Any call to WaitForMastership
@ -321,7 +321,7 @@ type MasterParticipation interface {
// nil, ErrInterrupted as soon as possible.
Stop()
// GetCurrentMasterID returns the current master id.
// GetCurrentMasterID returns the current primary id.
// This may not work after Stop has been called.
GetCurrentMasterID(ctx context.Context) (string, error)
}
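
A self-contained version of the election loop this interface is meant for, mirroring only the relevant methods locally; `alwaysElected` is a stub that fakes losing primaryship after a short delay:

package main

import (
	"context"
	"fmt"
	"time"
)

type masterParticipation interface {
	WaitForMastership() (context.Context, error)
	Stop()
}

type alwaysElected struct{}

func (alwaysElected) WaitForMastership() (context.Context, error) {
	ctx, cancel := context.WithCancel(context.Background())
	// Simulate losing primaryship after a short while; a real implementation
	// cancels when the underlying lock or lease is lost.
	time.AfterFunc(50*time.Millisecond, cancel)
	return ctx, nil
}

func (alwaysElected) Stop() {}

func main() {
	var mp masterParticipation = alwaysElected{}
	ctx, err := mp.WaitForMastership()
	if err != nil {
		fmt.Println("election interrupted:", err)
		return
	}
	fmt.Println("acting as primary until the context is canceled")
	<-ctx.Done() // primaryship lost (here: the simulated 50ms timeout)
	mp.Stop()
}
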


@ -90,7 +90,7 @@ func (mp *consulMasterParticipation) WaitForMastership() (context.Context, error
return nil, err
}
// We have the lock, keep mastership until we lose it.
// We have the lock, keep primaryship until we lose it.
lockCtx, lockCancel := context.WithCancel(context.Background())
go func() {
select {
@ -103,7 +103,7 @@ func (mp *consulMasterParticipation) WaitForMastership() (context.Context, error
case <-mp.stop:
// Stop was called. We stop the context first,
// so the running process is not thinking it
// is the master any more, then we unlock.
// is the primary any more, then we unlock.
lockCancel()
if err := l.Unlock(); err != nil {
log.Errorf("master election(%v) Unlock failed: %v", mp.name, err)


@ -86,7 +86,7 @@ func (mp *etcdMasterParticipation) WaitForMastership() (context.Context, error)
close(mp.done)
}()
// Try to get the mastership, by getting a lock.
// Try to get the primaryship, by getting a lock.
var err error
ld, err = mp.s.lock(lockCtx, electionPath, mp.id)
if err != nil {
@ -118,7 +118,7 @@ func (mp *etcdMasterParticipation) GetCurrentMasterID(ctx context.Context) (stri
return "", convertError(err, electionPath)
}
if len(resp.Kvs) == 0 {
// No key starts with this prefix, means nobody is the master.
// No key starts with this prefix, means nobody is the primary.
return "", nil
}
return string(resp.Kvs[0].Value), nil


@ -58,7 +58,7 @@ func convertError(err error, nodePath string) error {
// seem to be using the codes.Unavailable
// category. So changing all of them to ErrTimeout.
// The other reasons for codes.Unavailable are when
// etcd master election is failing, so timeout
// etcd primary election is failing, so timeout
// also sounds reasonable there.
return topo.NewError(topo.Timeout, nodePath)
}


@ -136,7 +136,7 @@ func (s *Server) Lock(ctx context.Context, dirPath, contents string) (topo.LockD
return s.lock(ctx, dirPath, contents)
}
// lock is used by both Lock() and master election.
// lock is used by both Lock() and primary election.
func (s *Server) lock(ctx context.Context, nodePath, contents string) (topo.LockDescriptor, error) {
nodePath = path.Join(s.root, nodePath, locksPath)


@ -90,7 +90,7 @@ func (mp *kubernetesMasterParticipation) WaitForMastership() (context.Context, e
close(mp.done)
}()
// Try to get the mastership, by getting a lock.
// Try to get the primaryship, by getting a lock.
var err error
ld, err = mp.s.lock(lockCtx, electionPath, mp.id, true)
if err != nil {
@ -113,7 +113,7 @@ func (mp *kubernetesMasterParticipation) Stop() {
func (mp *kubernetesMasterParticipation) GetCurrentMasterID(ctx context.Context) (string, error) {
id, _, err := mp.s.Get(ctx, mp.getElectionPath())
if err != nil {
// NoNode means nobody is the master
// NoNode means nobody is the primary
if topo.IsErrType(err, topo.NoNode) {
return "", nil
}


@ -40,7 +40,7 @@ func (s *Server) Lock(ctx context.Context, dirPath, contents string) (topo.LockD
return s.lock(ctx, dirPath, contents, false)
}
// lock is used by both Lock() and master election.
// lock is used by both Lock() and primary election.
// it blocks until the lock is taken, interrupted, or times out
func (s *Server) lock(ctx context.Context, nodePath, contents string, createMissing bool) (topo.LockDescriptor, error) {
// Satisfy the topo.Conn interface


@ -61,7 +61,7 @@ func (ki *KeyspaceInfo) GetServedFrom(tabletType topodatapb.TabletType) *topodat
// CheckServedFromMigration makes sure a requested migration is safe
func (ki *KeyspaceInfo) CheckServedFromMigration(tabletType topodatapb.TabletType, cells []string, keyspace string, remove bool) error {
// master is a special case with a few extra checks
// primary is a special case with a few extra checks
if tabletType == topodatapb.TabletType_PRIMARY {
if !remove {
return vterrors.Errorf(vtrpcpb.Code_FAILED_PRECONDITION, "cannot add master back to %v", ki.keyspace)
@ -242,7 +242,7 @@ func (ts *Server) FindAllShardsInKeyspace(ctx context.Context, keyspace string)
return result, nil
}
// GetServingShards returns all shards where the master is serving.
// GetServingShards returns all shards where the primary is serving.
func (ts *Server) GetServingShards(ctx context.Context, keyspace string) ([]*ShardInfo, error) {
shards, err := ts.GetShardNames(ctx, keyspace)
if err != nil {


@ -116,7 +116,7 @@ func TestUpdateServedFromMap(t *testing.T) {
t.Fatalf("migrate rdonly again should have failed: %v", err)
}
// finally migrate the master
// finally migrate the primary
if err := ki.UpdateServedFromMap(topodatapb.TabletType_PRIMARY, []string{"second"}, "source", true, allCells); err == nil || err.Error() != "cannot migrate only some cells for master removal in keyspace ks" {
t.Fatalf("migrate master with cells should have failed: %v", err)
}


@ -270,7 +270,7 @@ func (l *Lock) unlockKeyspace(ctx context.Context, ts *Server, keyspace string,
// * PlannedReparentShard
// * EmergencyReparentShard
// * operations that we don't want to conflict with re-parenting:
// * DeleteTablet when it's the shard's current master
// * DeleteTablet when it's the shard's current primary
//
func (ts *Server) LockShard(ctx context.Context, keyspace, shard, action string) (context.Context, func(*error), error) {
i, ok := ctx.Value(locksKey).(*locksInfo)
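Based only on the LockShard signature shown above, a caller would typically take the lock, run the reparent-sensitive work under the returned context, and hand the named error to the unlock function so lock-release failures surface too. The wrapper and action string below are illustrative, not code from the repository.

```go
package locksketch

import (
	"context"

	"vitess.io/vitess/go/vt/topo"
)

// withShardLock shows one way to use the LockShard signature above.
func withShardLock(ctx context.Context, ts *topo.Server, keyspace, shard string, work func(context.Context) error) (err error) {
	lockCtx, unlock, lockErr := ts.LockShard(ctx, keyspace, shard, "example: reparent-sensitive operation")
	if lockErr != nil {
		return lockErr
	}
	// unlock records any release failure into err as well.
	defer unlock(&err)

	// Anything that must not race with PlannedReparentShard or
	// EmergencyReparentShard runs here, under the shard lock.
	return work(lockCtx)
}
```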


@ -93,7 +93,7 @@ func (mp *cMasterParticipation) WaitForMastership() (context.Context, error) {
close(mp.done)
}()
// Try to get the mastership, by getting a lock.
// Try to get the primaryship, by getting a lock.
var err error
ld, err = mp.c.Lock(lockCtx, electionPath, mp.id)
if err != nil {


@ -144,7 +144,7 @@ type node struct {
// lockContents is the contents of the locks.
// For regular locks, it has the contents that was passed in.
// For master election, it has the id of the election leader.
// For primary election, it has the id of the election leader.
lockContents string
}


@ -47,7 +47,7 @@ import (
const (
blTablesAlreadyPresent = "one or more tables are already present in the blacklist"
blTablesNotPresent = "cannot remove tables since one or more do not exist in the blacklist"
blNoCellsForMaster = "you cannot specify cells for a master's tablet control"
blNoCellsForMaster = "you cannot specify cells for a primary's tablet control"
)
// Functions for dealing with shard representations in topology.
@ -177,17 +177,17 @@ func (si *ShardInfo) Version() Version {
return si.version
}
// HasMaster returns true if the Shard has an assigned Master.
// HasMaster returns true if the Shard has an assigned primary.
func (si *ShardInfo) HasMaster() bool {
return !topoproto.TabletAliasIsZero(si.Shard.PrimaryAlias)
}
// GetPrimaryTermStartTime returns the shard's master term start time as a Time value.
// GetPrimaryTermStartTime returns the shard's primary term start time as a Time value.
func (si *ShardInfo) GetPrimaryTermStartTime() time.Time {
return logutil.ProtoToTime(si.Shard.PrimaryTermStartTime)
}
// SetPrimaryTermStartTime sets the shard's master term start time as a Time value.
// SetPrimaryTermStartTime sets the shard's primary term start time as a Time value.
func (si *ShardInfo) SetPrimaryTermStartTime(t time.Time) {
si.Shard.PrimaryTermStartTime = logutil.TimeToProto(t)
}
@ -296,7 +296,7 @@ func (ts *Server) CreateShard(ctx context.Context, keyspace, shard string) (err
KeyRange: keyRange,
}
// Set master as serving only if its keyrange doesn't overlap
// Set primary as serving only if its keyrange doesn't overlap
// with other shards. This applies to unsharded keyspaces also
value.IsPrimaryServing = true
sis, err := ts.FindAllShardsInKeyspace(ctx, keyspace)
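The "serving only if its keyrange doesn't overlap" rule can be illustrated with a self-contained sketch; shardStub and rangesOverlap are simplified stand-ins for the real Shard record and key-range helpers, not Vitess code.

```go
package shardsketch

// shardStub is a simplified stand-in for the real Shard record; only the
// fields needed to illustrate the serving rule above are modeled.
type shardStub struct {
	KeyRangeStart, KeyRangeEnd string // "" means unbounded on that side
	IsPrimaryServing           bool
}

// rangesOverlap reports whether two [start, end) key ranges intersect.
func rangesOverlap(aStart, aEnd, bStart, bEnd string) bool {
	bStartsBeforeAEnds := aEnd == "" || bStart == "" || bStart < aEnd
	aStartsBeforeBEnds := bEnd == "" || aStart == "" || aStart < bEnd
	return bStartsBeforeAEnds && aStartsBeforeBEnds
}

// markServing applies the rule stated in the comment: the new shard's primary
// is serving only if no existing shard's key range overlaps with it.
func markServing(newShard *shardStub, existing []*shardStub) {
	newShard.IsPrimaryServing = true
	for _, s := range existing {
		if rangesOverlap(newShard.KeyRangeStart, newShard.KeyRangeEnd, s.KeyRangeStart, s.KeyRangeEnd) {
			newShard.IsPrimaryServing = false
			return
		}
	}
}
```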


@ -109,9 +109,9 @@ func IsRunningUpdateStream(tt topodatapb.TabletType) bool {
return false
}
// IsReplicaType returns if this type should be connected to a master db
// IsReplicaType returns if this type should be connected to a primary db
// and actively replicating?
// MASTER is not obviously (only support one level replication graph)
// PRIMARY is obviously not (we only support a one-level replication graph)
// BACKUP, RESTORE, DRAINED may or may not be, but we don't know for sure
func IsReplicaType(tt topodatapb.TabletType) bool {
switch tt {
@ -211,7 +211,7 @@ func (ti *TabletInfo) IsReplicaType() bool {
return IsReplicaType(ti.Type)
}
// GetPrimaryTermStartTime returns the tablet's master term start time as a Time value.
// GetPrimaryTermStartTime returns the tablet's primary term start time as a Time value.
func (ti *TabletInfo) GetPrimaryTermStartTime() time.Time {
return logutil.ProtoToTime(ti.Tablet.PrimaryTermStartTime)
}
@ -476,7 +476,7 @@ func (ts *Server) GetTabletsByCell(ctx context.Context, cell string) ([]*topodat
}
// ParseServingTabletType parses the tablet type into the enum, and makes sure
// that the enum is of serving type (MASTER, REPLICA, RDONLY/BATCH).
// that the enum is of serving type (PRIMARY, REPLICA, RDONLY/BATCH).
//
// Note: This function more closely belongs in topoproto, but that would create
// a circular import between packages topo and topoproto.
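As a restatement of the serving-type rule in the ParseServingTabletType comment (PRIMARY, REPLICA, RDONLY/BATCH), hedged as an illustration rather than the actual topo implementation:

```go
package tabletsketch

import topodatapb "vitess.io/vitess/go/vt/proto/topodata"

// isServingType restates the documented rule: only PRIMARY, REPLICA, and
// RDONLY (which the comment above groups with BATCH) serve queries.
func isServingType(tt topodatapb.TabletType) bool {
	switch tt {
	case topodatapb.TabletType_PRIMARY, topodatapb.TabletType_REPLICA, topodatapb.TabletType_RDONLY:
		return true
	default:
		return false
	}
}
```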


@ -61,10 +61,10 @@ func checkElection(t *testing.T, ts *topo.Server) {
t.Fatalf("cannot create mp1: %v", err)
}
// no master yet, check name
// no primary yet, check name
waitForMasterID(t, mp1, "")
// wait for id1 to be the master
// wait for id1 to be the primary
ctx1, err := mp1.WaitForMastership()
if err != nil {
t.Fatalf("mp1 cannot become master: %v", err)
@ -84,7 +84,7 @@ func checkElection(t *testing.T, ts *topo.Server) {
}
}
// get the current master name, better be id1
// get the current primary name, better be id1
waitForMasterID(t, mp1, id1)
// create a second MasterParticipation on same name
@ -94,7 +94,7 @@ func checkElection(t *testing.T, ts *topo.Server) {
t.Fatalf("cannot create mp2: %v", err)
}
// wait until mp2 gets to be the master in the background
// wait until mp2 gets to be the primary in the background
mp2IsMaster := make(chan error)
var mp2Context context.Context
go func() {
@ -103,7 +103,7 @@ func checkElection(t *testing.T, ts *topo.Server) {
mp2IsMaster <- err
}()
// ask mp2 for master name, should get id1
// ask mp2 for primary name, should get id1
waitForMasterID(t, mp2, id1)
// stop mp1
@ -118,13 +118,13 @@ func checkElection(t *testing.T, ts *topo.Server) {
t.Fatalf("shutting down mp1 didn't close ctx1 in time")
}
// now mp2 should be master
// now mp2 should be primary
err = <-mp2IsMaster
if err != nil {
t.Fatalf("mp2 awoke with error: %v", err)
}
// ask mp2 for master name, should get id2
// ask mp2 for primary name, should get id2
waitForMasterID(t, mp2, id2)
// stop mp2, we're done


@ -1106,7 +1106,7 @@ func TestMasterMigrateServedType(t *testing.T) {
t.Errorf("MigrateServedType() failure. Got %v, want: %v", string(got), string(want))
}
// migrating master type cleans up shard tablet controls records
// migrating primary type cleans up shard tablet controls records
targetKs = &topodatapb.SrvKeyspace{
Partitions: []*topodatapb.SrvKeyspace_KeyspacePartition{


@ -30,7 +30,7 @@ import (
"vitess.io/vitess/go/vt/topo"
)
// This file contains the master election code for zk2topo.Server.
// This file contains the primary election code for zk2topo.Server.
// NewMasterParticipation is part of the topo.Server interface.
// We use the full path: <root path>/election/<name>
@ -129,14 +129,14 @@ func (mp *zkMasterParticipation) WaitForMastership() (context.Context, error) {
return ctx, nil
}
// watchMastership is the background go routine we run while we are the master.
// watchMastership is the background goroutine we run while we are the primary.
// We will do two things:
// - watch for changes to the proposal file. If anything happens there,
// it most likely means we lost the ZK session, so we want to stop
// being the master.
// being the primary.
// - wait for mp.stop.
func (mp *zkMasterParticipation) watchMastership(ctx context.Context, conn *ZkConn, proposal string, cancel context.CancelFunc) {
// any interruption of this routine means we're not master any more.
// any interruption of this routine means we're not primary any more.
defer cancel()
// get to work watching our own proposal
@ -179,7 +179,7 @@ func (mp *zkMasterParticipation) GetCurrentMasterID(ctx context.Context) (string
return "", convertError(err, zkPath)
}
if len(children) == 0 {
// no current master
// no current primary
return "", nil
}
sort.Strings(children)
@ -188,7 +188,7 @@ func (mp *zkMasterParticipation) GetCurrentMasterID(ctx context.Context) (string
data, _, err := mp.zs.conn.Get(ctx, childPath)
if err != nil {
if err == zk.ErrNoNode {
// master terminated in front of our own eyes,
// primary terminated in front of our own eyes,
// try again
continue
}
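The watchMastership description above boils down to a two-way select; here is a sketch with hypothetical channels standing in for the ZooKeeper watch and mp.stop.

```go
package zksketch

import "context"

// watchPrimaryship sketches the loop described above: remain primary until the
// watched proposal node fires (most likely a lost ZooKeeper session) or an
// explicit stop is requested.
func watchPrimaryship(cancel context.CancelFunc, proposalEvents <-chan struct{}, stop <-chan struct{}) {
	// Any exit from this function means we are no longer the primary.
	defer cancel()
	select {
	case <-proposalEvents:
		// The proposal changed or disappeared: give up primaryship.
	case <-stop:
		// Stop() was called by the owner of this participation.
	}
}
```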


@ -109,7 +109,7 @@ func RebuildKeyspaceLocked(ctx context.Context, log logutil.Logger, ts *topo.Ser
// - check the ranges are compatible (no hole, covers everything)
for cell, srvKeyspace := range srvKeyspaceMap {
for _, si := range shards {
// We rebuild keyspace iff shard master is in a serving state.
// We rebuild keyspace iff shard primary is in a serving state.
if !si.GetIsPrimaryServing() {
continue
}


@ -26,7 +26,7 @@ topotools is used by wrangler, so it ends up in all tools using
wrangler (vtctl, vtctld, ...). It is also included by vttablet, so it contains:
- most of the logic to create a shard / keyspace (tablet's init code)
- some of the logic to perform a TabletExternallyReparented (RPC call
to master vttablet to let it know it's the master).
to primary vttablet to let it know it's the primary).
*/
package topotools
@ -66,7 +66,7 @@ func ConfigureTabletHook(hk *hook.Hook, tabletAlias *topodatapb.TabletAlias) {
// If successful, the updated tablet record is returned.
func ChangeType(ctx context.Context, ts *topo.Server, tabletAlias *topodatapb.TabletAlias, newType topodatapb.TabletType, PrimaryTermStartTime *vttime.Time) (*topodatapb.Tablet, error) {
var result *topodatapb.Tablet
// Always clear out the master timestamp if not master.
// Always clear out the primary timestamp if not primary.
if newType != topodatapb.TabletType_PRIMARY {
PrimaryTermStartTime = nil
}
@ -107,7 +107,7 @@ func CheckOwnership(oldTablet, newTablet *topodatapb.Tablet) error {
// is a primary before we allow its tablet record to be deleted. The canonical
// way to determine the only true primary in a shard is to list all the tablets
// and find the one with the highest PrimaryTermStartTime among the ones that
// claim to be master.
// claim to be primary.
//
// We err on the side of caution here, i.e. we should never return false for
// a true primary tablet, but it is okay to return true for a tablet that isn't
@ -115,7 +115,7 @@ func CheckOwnership(oldTablet, newTablet *topodatapb.Tablet) error {
// the system is in transition (a reparenting event is in progress and parts of
// the topo have not yet been updated).
func IsPrimaryTablet(ctx context.Context, ts *topo.Server, ti *topo.TabletInfo) (bool, error) {
// Tablet record claims to be non-master, we believe it
// Tablet record claims to be non-primary, so we believe it
if ti.Type != topodatapb.TabletType_PRIMARY {
return false, nil
}
@ -127,14 +127,14 @@ func IsPrimaryTablet(ctx context.Context, ts *topo.Server, ti *topo.TabletInfo)
return false, err
}
// Tablet record claims to be master, and shard record matches
// Tablet record claims to be primary, and shard record matches
if topoproto.TabletAliasEqual(si.PrimaryAlias, ti.Tablet.Alias) {
return true, nil
}
// Shard record has another tablet as master, so check PrimaryTermStartTime
// Shard record has another tablet as primary, so check PrimaryTermStartTime
// If tablet record's PrimaryTermStartTime is later than the one in the shard
// record, then the tablet is master
// record, then the tablet is primary
tabletMTST := ti.GetPrimaryTermStartTime()
shardMTST := si.GetPrimaryTermStartTime()
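The tie-break the comment describes reduces to a single time comparison; the standalone restatement below is a sketch of that rule (the diff cuts the function off before its return).

```go
package topotoolsketch

import "time"

// tabletIsNewerPrimary restates the tie-break in the comment above: when the
// shard record points at a different tablet, this tablet is still considered
// the primary only if its primary term started after the shard record's.
func tabletIsNewerPrimary(tabletTermStart, shardTermStart time.Time) bool {
	return tabletTermStart.After(shardTermStart)
}
```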


@ -115,11 +115,11 @@ func GetAllTabletsAcrossCells(ctx context.Context, ts *topo.Server) ([]*topo.Tab
}
// SortedTabletMap returns two maps:
// - The replicaMap contains all the non-master non-scrapped hosts.
// - The replicaMap contains all the non-primary non-scrapped hosts.
// This can be used as a list of replicas to fix up for reparenting
// - The masterMap contains all the tablets without parents
// (scrapped or not). This can be used to special case
// the old master, and any tablet in a weird state, left over, ...
// the old primary, and any tablet in a weird state, left over, ...
func SortedTabletMap(tabletMap map[string]*topo.TabletInfo) (map[string]*topo.TabletInfo, map[string]*topo.TabletInfo) {
replicaMap := make(map[string]*topo.TabletInfo)
masterMap := make(map[string]*topo.TabletInfo)
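A sketch of the two-map split described in the doc comment, using tablet type as the criterion; the real SortedTabletMap may test something slightly different, so treat this as an illustration of the contract only.

```go
package topotoolsketch

import (
	topodatapb "vitess.io/vitess/go/vt/proto/topodata"
	"vitess.io/vitess/go/vt/topo"
)

// sortTablets splits a tablet map the way the comment above describes:
// PRIMARY tablets land in primaryMap, everything else is treated as a
// replica to fix up during reparenting.
func sortTablets(tabletMap map[string]*topo.TabletInfo) (replicaMap, primaryMap map[string]*topo.TabletInfo) {
	replicaMap = make(map[string]*topo.TabletInfo)
	primaryMap = make(map[string]*topo.TabletInfo)
	for alias, ti := range tabletMap {
		if ti.Type == topodatapb.TabletType_PRIMARY {
			primaryMap[alias] = ti
		} else {
			replicaMap[alias] = ti
		}
	}
	return replicaMap, primaryMap
}
```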


@ -38,7 +38,7 @@ Using this SQL driver is as simple as:
// Use "db" via the Golang sql interface.
}
For a full example, please see: https://github.com/vitessio/vitess/blob/master/test/client.go
For a full example, please see: https://github.com/vitessio/vitess/blob/main/test/client.go
The full example is based on our tutorial for running Vitess locally: https://vitess.io/docs/get-started/local/
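For readers who want the shape of that example without following the link, a hedged sketch of driver usage follows; the vtgate address, target string, and exact Open signature depend on the Vitess version you run, so check them against your release.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"vitess.io/vitess/go/vt/vitessdriver"
)

func main() {
	// Connect through vtgate's gRPC port. "@primary" targets the primary
	// tablets; older releases used "@master" for the same purpose.
	db, err := vitessdriver.Open("localhost:15991", "@primary")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Use "db" via the standard database/sql interface.
	var now string
	if err := db.QueryRowContext(ctx, "SELECT NOW()").Scan(&now); err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", now)
}
```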
@ -61,21 +61,21 @@ The driver uses the V3 API which doesn't require you to specify routing
information. You just send the query as if Vitess was a regular database.
VTGate analyzes the query and uses additional metadata called VSchema
to perform the necessary routing. See the vtgate v3 Features doc for an overview:
https://github.com/vitessio/vitess/blob/master/doc/VTGateV3Features.md
https://github.com/vitessio/vitess/blob/main/doc/VTGateV3Features.md
As of 12/2015, the VSchema creation is not documented yet as we are in the
process of simplifying the VSchema definition and the overall process for
creating one.
If you want to create your own VSchema, we recommend to have a
look at the VSchema from the vtgate v3 demo:
https://github.com/vitessio/vitess/blob/master/examples/demo/schema
https://github.com/vitessio/vitess/blob/main/examples/demo/schema
(The demo itself is interactive and can be run by executing "./run.py" in the
"examples/demo/" directory.)
The vtgate v3 design doc, which we will also update and simplify in the future,
contains more details on the VSchema:
https://github.com/vitessio/vitess/blob/master/doc/V3VindexDesign.md
https://github.com/vitessio/vitess/blob/main/doc/V3VindexDesign.md
Isolation levels


@ -73,7 +73,7 @@ type comboTablet struct {
var tabletMap map[uint32]*comboTablet
// CreateTablet creates an individual tablet, with its tm, and adds
// it to the map. If it's a master tablet, it also issues a TER.
// it to the map. If it's a primary tablet, it also issues a TER.
func CreateTablet(ctx context.Context, ts *topo.Server, cell string, uid uint32, keyspace, shard, dbname string, tabletType topodatapb.TabletType, mysqld mysqlctl.MysqlDaemon, dbcfgs *dbconfigs.DBConfigs) error {
alias := &topodatapb.TabletAlias{
Cell: cell,
@ -296,7 +296,7 @@ func CreateKs(ctx context.Context, ts *topo.Server, tpb *vttestpb.VTTestTopology
replicas := int(kpb.ReplicaCount)
if replicas == 0 {
// 2 replicas in order to ensure the master cell has a master and a replica
// 2 replicas in order to ensure the primary cell has a primary and a replica
replicas = 2
}
rdonlys := int(kpb.RdonlyCount)
@ -321,7 +321,7 @@ func CreateKs(ctx context.Context, ts *topo.Server, tpb *vttestpb.VTTestTopology
if cell == tpb.Cells[0] {
replicas--
// create the master
// create the primary
if err := CreateTablet(ctx, ts, cell, uid, keyspace, shard, dbname, topodatapb.TabletType_PRIMARY, mysqld, dbcfgs.Clone()); err != nil {
return 0, err
}


@ -1020,7 +1020,7 @@ func (s *VtctldServer) GetTablets(ctx context.Context, req *vtctldatapb.GetTable
span.Annotate("strict", req.Strict)
// It is possible that an old primary has not yet updated its type in the
// topo. In that case, report its type as UNKNOWN. It used to be MASTER but
// topo. In that case, report its type as UNKNOWN. It used to be PRIMARY but
// is no longer the serving primary.
adjustTypeForStalePrimary := func(ti *topo.TabletInfo, mtst time.Time) {
if ti.Type == topodatapb.TabletType_PRIMARY && ti.GetPrimaryTermStartTime().Before(mtst) {
@ -1144,7 +1144,7 @@ func (s *VtctldServer) GetTablets(ctx context.Context, req *vtctldatapb.GetTable
}
}
// Collect true master term start times, and optionally filter out any
// Collect true primary term start times, and optionally filter out any
// tablets by keyspace according to the request.
PrimaryTermStartTimes := map[string]time.Time{}
filteredTablets := make([]*topo.TabletInfo, 0, len(allTablets))
@ -1168,7 +1168,7 @@ func (s *VtctldServer) GetTablets(ctx context.Context, req *vtctldatapb.GetTable
adjustedTablets := make([]*topodatapb.Tablet, len(filteredTablets))
// collect the tablets with adjusted master term start times. they've
// collect the tablets with adjusted primary term start times. they've
// already been filtered by the above loop, so no keyspace filtering
// here.
for i, ti := range filteredTablets {
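The hunk above cuts the adjustTypeForStalePrimary closure off mid-body; reconstructed from the visible condition and the surrounding comments, it plausibly looks like the sketch below (not the verbatim source).

```go
package vtctldsketch

import (
	"time"

	topodatapb "vitess.io/vitess/go/vt/proto/topodata"
	"vitess.io/vitess/go/vt/topo"
)

// adjustTypeForStalePrimary: a tablet that still claims PRIMARY but whose term
// began before the shard's true primary term (mtst) is reported as UNKNOWN,
// as the comment above describes.
func adjustTypeForStalePrimary(ti *topo.TabletInfo, mtst time.Time) {
	if ti.Type == topodatapb.TabletType_PRIMARY && ti.GetPrimaryTermStartTime().Before(mtst) {
		ti.Tablet.Type = topodatapb.TabletType_UNKNOWN
	}
}
```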
@ -1303,7 +1303,7 @@ func (s *VtctldServer) InitShardPrimaryLocked(
return err
}
// Check the master elect is in tabletMap.
// Check the primary elect is in tabletMap.
masterElectTabletAliasStr := topoproto.TabletAliasString(req.PrimaryElectTabletAlias)
masterElectTabletInfo, ok := tabletMap[masterElectTabletAliasStr]
if !ok {
@ -1311,7 +1311,7 @@ func (s *VtctldServer) InitShardPrimaryLocked(
}
ev.NewMaster = proto.Clone(masterElectTabletInfo.Tablet).(*topodatapb.Tablet)
// Check the master is the only master is the shard, or -force was used.
// Check the primary is the only primary in the shard, or -force was used.
_, masterTabletMap := topotools.SortedTabletMap(tabletMap)
if !topoproto.TabletAliasEqual(shardInfo.PrimaryAlias, req.PrimaryElectTabletAlias) {
if !req.Force {
@ -1372,7 +1372,7 @@ func (s *VtctldServer) InitShardPrimaryLocked(
return fmt.Errorf("lost topology lock, aborting: %v", err)
}
// Tell the new master to break its replicas, return its replication
// Tell the new primary to break its replicas, return its replication
// position
logger.Infof("initializing master on %v", topoproto.TabletAliasString(req.PrimaryElectTabletAlias))
event.DispatchUpdate(ev, "initializing master")
@ -1391,11 +1391,11 @@ func (s *VtctldServer) InitShardPrimaryLocked(
replCtx, replCancel := context.WithTimeout(ctx, waitReplicasTimeout)
defer replCancel()
// Now tell the new master to insert the reparent_journal row,
// and tell everybody else to become a replica of the new master,
// Now tell the new primary to insert the reparent_journal row,
// and tell everybody else to become a replica of the new primary,
// and wait for the row in the reparent_journal table.
// We start all these in parallel, to handle the semi-sync
// case: for the master to be able to commit its row in the
// case: for the primary to be able to commit its row in the
// reparent_journal table, it needs connected replicas.
event.DispatchUpdate(ev, "reparenting all tablets")
now := time.Now().UnixNano()
@ -1424,11 +1424,11 @@ func (s *VtctldServer) InitShardPrimaryLocked(
}
}
// After the master is done, we can update the shard record
// After the primary is done, we can update the shard record
// (note with semi-sync, it also means at least one replica is done).
wgMaster.Wait()
if masterErr != nil {
// The master failed, there is no way the
// The primary failed, there is no way the
// replicas will work. So we cancel them all.
logger.Warningf("master failed to PopulateReparentJournal, canceling replicas")
replCancel()
@ -1454,7 +1454,7 @@ func (s *VtctldServer) InitShardPrimaryLocked(
return err
}
// Create database if necessary on the master. replicas will get it too through
// Create database if necessary on the primary. replicas will get it too through
// replication. Since the user called InitShardPrimary, they've told us to
// assume that whatever data is on all the replicas is what they intended.
// If the database doesn't exist, it means the user intends for these tablets
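The coordination these comments walk through (start the primary and all replicas in parallel so semi-sync works, wait on the primary, cancel the replicas if it fails) condenses to the sketch below, with hypothetical work functions standing in for the real tablet-manager RPCs.

```go
package reparentsketch

import (
	"context"
	"sync"
)

// reparentAll starts the primary's work (e.g. populating the reparent journal)
// and every replica's work in parallel, waits on the primary first, and
// cancels the replica attempts if the primary failed.
func reparentAll(ctx context.Context, primaryWork func(context.Context) error, replicaWork []func(context.Context) error) error {
	replCtx, replCancel := context.WithCancel(ctx)
	defer replCancel()

	// Start all replicas in parallel.
	var wgReplicas sync.WaitGroup
	replicaErrs := make(chan error, len(replicaWork))
	for _, work := range replicaWork {
		wgReplicas.Add(1)
		go func(work func(context.Context) error) {
			defer wgReplicas.Done()
			replicaErrs <- work(replCtx)
		}(work)
	}

	// Start the primary in parallel too, but wait for it first: a semi-sync
	// primary can only commit once replicas are connected.
	var wgPrimary sync.WaitGroup
	var primaryErr error
	wgPrimary.Add(1)
	go func() {
		defer wgPrimary.Done()
		primaryErr = primaryWork(ctx)
	}()
	wgPrimary.Wait()

	if primaryErr != nil {
		// The primary failed, so the replicas cannot succeed: cancel them all.
		replCancel()
	}
	wgReplicas.Wait()
	close(replicaErrs)

	if primaryErr != nil {
		return primaryErr
	}
	for err := range replicaErrs {
		if err != nil {
			return err
		}
	}
	return nil
}
```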
@ -1869,7 +1869,7 @@ func (s *VtctldServer) TabletExternallyReparented(ctx context.Context, req *vtct
OldPrimary: shard.PrimaryAlias,
}
// If the externally reparented (new primary) tablet is already MASTER in
// If the externally reparented (new primary) tablet is already PRIMARY in
// the topo, this is a no-op.
if tablet.Type == topodatapb.TabletType_PRIMARY {
return resp, nil

Some files were not shown because too many files changed in this commit.