Mirrored from https://github.com/github/vitess-gh.git

merge main, resolve conflict

Signed-off-by: Shlomi Noach <2607934+shlomi-noach@users.noreply.github.com>

Commit 6ae9c51728
@ -21,6 +21,7 @@ Vitess 9.0 is not compatible with the previous release of the Vitess Kubernetes
* Bug fix regression in /healthz #7090
* Fix metadata related operation hangs when zk down #7228
* Fix accidentally-broken legacy vtctl output format #7285
* Healthcheck: use isIncluded correctly to fix replica/rdonly routing bug #6904

## Functionality Added or Changed

@ -68,26 +69,34 @@ Vitess 9.0 is not compatible with the previous release of the Vitess Kubernetes
* VTGate: Cache only dml and select plans #7196
* VTGate: Planning and Parsing Support for Alter Table #7199
* VTGate: Add FindAllShardsInKeyspace to vtctldserver #7201
* VTGate: Initial implementation of vtctld service #7128
* VTGate: improve-log: FAILED_PRECONDITION #7215
* VTGate: Planner refactoring #7103
* VTGate: Migrate `vtctlclient InitShardMaster` => `vtctldclient InitShardPrimary` #7220
* VTGate: Add Planning and Parsing Support for Truncate, Rename, Drop Index and Flush #7242
* VTGate: Fix create table format function to include if not exists #7250
* VTGate: Added default databases when calling 'show databases' #7256
* VTGate: Add Update.AddWhere to mirror Select.AddWhere #7277
* VTGate: Removed resolver usage from StreamExecute #7281
* VTGate: Adding a MySQL connection at Vtgate to run queries on it directly in case of testing mode #7291
* VTGate: Added vitess_version as variable #7295
* VTGate: Default to false for system settings to be changed per session at the database connection level #7299
* VTGate: Gen4: Add Limit clause support #7312
* VTGate: Gen4: Handling subquery in query graph #7313
* VTGate: Addition of @@enable_system_settings #7300
* VTGate: Route INFORMATION_SCHEMA queries #6932
* VTGate: Adds Planning and Parsing Support for Create Index of MySQL 5.7 #7024
* VTGate: Log sql which exceeds max memory rows #7055
* VTGate: Enable Client Session Tracking feature in mysql protocol #6783
* VTGate: Show columns from table_name targeted like select queries #6825
* VTGate: This PR adds logic to simplify subquery expressions that are simple to
* VTGate: Adding MySQL Check Constraints #6865
* VTGate: Manage read your own writes system settings #6871
* VTGate: Allow table_schema comparisons #6887
* VTGate: Additional options support for SELECT INTO and LOAD DATA #6872
* VTGate: Fixes vtgate which throws an error in case of empty statements #6947
* VTGate: [Forward Port] #6940 - Fix error handling in olap mode #6949
* VTGate: Adds Planning and Parsing Support for Create View of MySQL 5.7 #7060
* VTGate: fix error: cannot run Select on table "dual" #7118
* VTGate: Allow system table to be set as default database #7150
* VTGate: Move auto_increment from reserved to non reserved keyword #7162
* VTGate: Add only expr of aliasedExpr to weightstring function #7165
* VTGate: [9.0] don't try to compare varchars in vtgate #7271
* VTGate: Load Data From S3 #6823
* VTGate: Unnest simple subqueries #6831
* VTGate: Adding MySQL Check Constraints #6869
* VTExplain: Add sequence table support for vtexplain #7186
* VSchema: Support back-quoted names #7073
* Healthcheck: healthy list should be recomputed when a tablet is removed #7176
* Healthcheck: Hellcatlk wants to merge 1 commit into master from master #6953

### Set Statement Support

@ -108,18 +117,19 @@ Set statement support has been added in Vitess. There are [some system variables
* VReplication: MoveTables: delete routing rules and update vschema on Complete and Abort #7234
* VReplication: V2 Workflow Start: wait for streams to start and report errors if any while starting a workflow #7248
* VReplication: Ignore temp tables created by onlineddl #7159
* VReplication V2 Workflows: rename Abort to Cancel #7276
* VReplication DryRun: Report current dry run results for v2 commands #7255
* VReplication: Miscellaneous improvements #7275
* VReplication: Tablet throttle support "/throttle/check-self" available on all tablets #7319
* VStreamer Events: remove preceding zeroes from decimals in Row Events #7297
* Workflow Show: use timeUpdated to calculate vreplication lag #7342
* vtctl: Add missing err checks for VReplication v2 #7361
* VReplication: Set time zone to UTC while streaming rows #6845
* VReplication: Materialization and character sets: Add test to verify/demo a workaround for charset issues while using string functions in filters #6847
* VReplication: Tool to diagnose vreplication issues in production #6892
* VReplication: Allow multiple blacklists for master #6816
* VStreamer Field Event: add allowed values for set/enum #6981
* VDiff: lock keyspace while snapshotting, restart target in case of errors #7012
* VDiff: make enums comparable #6880
* VDiff: add ability to limit number of rows to compare #6890
* VDiff/Tablet Picker: fix issue where vdiff sleeps occasionally for tablet picker retry interval #6944
* [vtctld]: fix error state in Workflow Show #6970
* [vtctld] Workflow command: minor fixes #7008
* [vtctl] Add missing err checks for VReplication v2 #7361
* MoveTables: validate that source tables exist, move all tables #7018
* SwitchWrites bug: reverse replication workflows can have wrong start positions #7169

### VTTablet

@ -129,7 +139,6 @@ Set statement support has been added in Vitess. There are [some system variables
* VTTablet: Adds better errors when there are timeouts in resource pools #7002
* VTTablet: Return to re-using server IDs for binlog connections #6941
* VTTablet: Correctly initialize the TabletType stats #6989
* Backup: Use provided xtrabackup_root_path to find xbstream #7359
* Backup: Use pargzip instead of pgzip for compression. #7037
* Backup: Add s3 server-side encryption and decryption with customer provided key #7088

@ -155,7 +164,12 @@ Automatically terminate migrations run by a failed tablet
* Online DDL: Adding @@session_uuid to vtgate; used as 'context' #7263
* Online DDL: ignore errors if extracted gh-ost binary is identical to installed binary #6928
* Online DDL: Table lifecycle: skip time hint for unspecified states #7151

* Online DDL: Migration uses low priority throttling #6830
* Online DDL: Fix parsing of online-ddl command line options #6900
* OnlineDDL bugfix: make sure schema is applied on tablet #6910
* OnlineDDL: request_context/migration_context #7082
* OnlineDDL: Fix missed rename in onlineddl_test #7148
* OnlineDDL: Online DDL endtoend tests to support MacOS #7168

### VTadmin

@ -164,21 +178,8 @@ Automatically terminate migrations run by a failed tablet
* VTadmin: Add cluster protos to discovery and vtsql package constructors #7224
* VTadmin: Add static file service discovery implementation #7229
* VTadmin: Query vtadmin-api from vtadmin-web with fetch + react-query #7239
* VTadmin: Add vtctld proxy to vtadmin API, add GetKeyspaces endpoint #7266
* VTadmin: [vtctld] Expose vtctld gRPC port in local Docker example + update VTAdmin README #7306
* VTadmin: Add CSS variables + fonts to VTAdmin #7309
* VTadmin: Add React Router + a skeleton /debug page to VTAdmin #7310
* VTadmin: Add NavRail component #7316
* VTadmin: Add Button + Icon components #7350
* VTadmin: Move allow_alias option in MySqlFlag enum to precede the aliased IDs #7166
* [vtctld]: vtctldclient generator #7238
* [vtctld] Migrate cell getters #7302
* [vtctld] Migrate tablet getters #7311
* [vtctld] Migrate GetSchema #7346
* [vtctld] vtctldclient command pkg #7321
* [vtctld] Add GetSrvVSchema command #7334
* [vtctld] Migrate ListBackups as GetBackups in new vtctld server #7352
* [vtctld] Migrate GetVSchema to VtctldServer #7360

### Other

@ -187,12 +188,13 @@ Automatically terminate migrations run by a failed tablet
* Fix incorrect comments #7257
* Fix comment for IDPool #7212
* IsInternalOperationTableName: see if a table is used internally by vitess #7104
* Add timeout for mysqld_shutdown #6849
* Should receive healthcheck updates from all tablets in cells_to_watch #6852
* Workflow listall with no workflows was missing newline #6853
* Allow incomplete SNAPSHOT keyspaces #6863

## Examples / Tutorials

* Update demo #7205
* Delete select_commerce_data.sql #7245
* Docker/vttestserver: Add MYSQL_BIND_HOST env #7293
* Examples/operator: fix tags and add vtorc example #7358
* local docker: copy examples/common into /vt/common to match MoveTables user guide #7252
* Update docker-compose examples to take advantage of improvements in Vitess #7009
@ -202,7 +204,18 @@ Automatically terminate migrations run by a failed tablet
* Vitess Slack Guidelines v1.0 #6961
* Do vschema_customer_sharded.json before create_customer_sharded.sql #7210
* Added readme for the demo example #7226
* Pull Request template: link to contribution guide #7314
* Adding @shlomi-noach to CODEOWNERS #6855
* Add Rohit Nayak to maintainers #6903
* 7.0.3 Release Notes #6902
* 8_0_0 Release Notes #6958
* Update maintainers of Vitess #7093
* Updating Email Address #7095
* Update morgo changes #7105
* Move PR template to .github directory #7126
* Fix trivial typo #7179
* Add @ajm188 + @doeg to CODEOWNERS for vtctld service files #7202
* Add @ajm188 + @doeg as vtadmin codeowners #7223

## Build Environment Changes

@ -224,16 +237,24 @@ Automatically terminate migrations run by a failed tablet
* Add unit test case to improve test coverage for go/sqltypes/result.go #7227
* Update Golang to 1.15 #7204
* Add linter configuration #7247
* Tracking failed check runs #7026
* Github Actions CI Builds: convert matrix strategy for unit and cluster tests to individual tests #7258
* Add Update.AddWhere to mirror Select.AddWhere #7277
* Descriptive names for CI checks #7289
* Testing upgrade path from / downgrade path to v8.0.0 #7294
* Add mysqlctl to docker images #7326
* Modify targets to restore behavior of make install #6842
* Download zookeeper 3.4.14 from archive site #6865
* Bump junit from 4.12 to 4.13.1 in /java #6870
* Fix ListBackups for gcp and az to work with root directory #6873
* Pulling bootstrap resources from vitess-resources #6875
* [Java] Bump SNAPSHOT version to 9.0 after Vitess release 8.0 #6907
* Change dependencies for lite builds #6933
* Truncate logged query in dbconn.go. #6959
* [GO] go mod tidy #7137
* goimport proto files correctly #7264
* Cherry pick version of #7233 for release-9.0 #7265
* Update Java version to 9.0 #7369
* Adding curl as dependency #6965

## Functionality Neutral Changes

* Healthcheck: add unit test for multi-cell replica configurations #6978
* Healthcheck: Correct Health Check for Non-Serving Types #6908
* Adds timeout to checking for tablets. #7106
* Remove deprecated vtctl commands, flags and vttablet rpcs #7115
* Fixes comment to mention the existence of reference tables. #7122
@ -242,4 +263,26 @@ Automatically terminate migrations run by a failed tablet
* action_repository: no need for http.Request #7124
* Testing version upgrade/downgrade path from/to 8.0 #7323
* Use `context` from Go's standard library #7235
* Update `operator.yaml` backup engine description #6832
* Docker - upgrade to Debian Buster #6833
* Updating azblob to remove directory after removing backup #6836
* Fixing some flaky tests #6874
* Flaky test: attempt to fix TestConnection in go/test/endtoend/messaging #6879
* Stabilize test #6882
* Tablet streaming health fix: never silently skip health state changes #6885
* Add owners to /go/mysql #6886
* Fixes a bug in Load From statement #6911
* Query consolidator: fix to ignore leading margin comments #6917
* Updates to Contacts section as Reporting #7023
* Create pull_request_template #7027
* Fixed pull request template path #7062

## Backport
* Backport: [vtctld] Fix accidentally-broken legacy vtctl output format #7292
* Backport #7276: Vreplication V2 Workflows: rename Abort to Cancel #7339
* Backport #7297: VStreamer Events: remove preceding zeroes from decimals in Row Events
* Backport #7255: VReplication DryRun: Report current dry run results for v2 commands #7345
* Backport #7275: VReplication: Miscellaneous improvements #7349
* Backport 7342: Workflow Show: use timeUpdated to calculate vreplication lag #7354
* Backport 7361: vtctl: Add missing err checks for VReplication v2 #7363
* Backport 7297: VStreamer Events: remove preceding zeroes from decimals in Row Events #7340
go.sum
@ -693,6 +693,7 @@ github.com/spyzhov/ajson v0.4.2 h1:JMByd/jZApPKDvNsmO90X2WWGbmT2ahDFp73QhZbg3s=
|
|||
github.com/spyzhov/ajson v0.4.2/go.mod h1:63V+CGM6f1Bu/p4nLIN8885ojBdt88TbLoSFzyqMuVA=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
|
||||
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
|
||||
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
|
|
|
@ -23,30 +23,50 @@ import (
|
|||
"github.com/spf13/cobra"
|
||||
|
||||
"vitess.io/vitess/go/cmd/vtctldclient/cli"
|
||||
"vitess.io/vitess/go/vt/topo/topoproto"
|
||||
|
||||
vtctldatapb "vitess.io/vitess/go/vt/proto/vtctldata"
|
||||
)
|
||||
|
||||
// GetBackups makes a GetBackups gRPC call to a vtctld.
|
||||
var GetBackups = &cobra.Command{
|
||||
Use: "GetBackups keyspace shard",
|
||||
Args: cobra.ExactArgs(2),
|
||||
Use: "GetBackups <keyspace/shard>",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: commandGetBackups,
|
||||
}
|
||||
|
||||
func commandGetBackups(cmd *cobra.Command, args []string) error {
|
||||
cli.FinishedParsing(cmd)
|
||||
var getBackupsOptions = struct {
|
||||
Limit uint32
|
||||
OutputJSON bool
|
||||
}{}
|
||||
|
||||
keyspace := cmd.Flags().Arg(0)
|
||||
shard := cmd.Flags().Arg(1)
|
||||
func commandGetBackups(cmd *cobra.Command, args []string) error {
|
||||
keyspace, shard, err := topoproto.ParseKeyspaceShard(cmd.Flags().Arg(0))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cli.FinishedParsing(cmd)
|
||||
|
||||
resp, err := client.GetBackups(commandCtx, &vtctldatapb.GetBackupsRequest{
|
||||
Keyspace: keyspace,
|
||||
Shard: shard,
|
||||
Limit: getBackupsOptions.Limit,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if getBackupsOptions.OutputJSON {
|
||||
data, err := cli.MarshalJSON(resp)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fmt.Printf("%s\n", data)
|
||||
return nil
|
||||
}
|
||||
|
||||
names := make([]string, len(resp.Backups))
|
||||
for i, b := range resp.Backups {
|
||||
names[i] = b.Name
|
||||
|
@ -58,5 +78,7 @@ func commandGetBackups(cmd *cobra.Command, args []string) error {
|
|||
}
|
||||
|
||||
func init() {
|
||||
GetBackups.Flags().Uint32VarP(&getBackupsOptions.Limit, "limit", "l", 0, "Retrieve only the most recent N backups")
|
||||
GetBackups.Flags().BoolVarP(&getBackupsOptions.OutputJSON, "json", "j", false, "Output backup info in JSON format rather than a list of backups")
|
||||
Root.AddCommand(GetBackups)
|
||||
}
|
||||
|
|
|
@ -32,27 +32,69 @@ import (
|
|||
var (
|
||||
// ChangeTabletType makes a ChangeTabletType gRPC call to a vtctld.
|
||||
ChangeTabletType = &cobra.Command{
|
||||
Use: "ChangeTabletType [--dry-run] TABLET_ALIAS TABLET_TYPE",
|
||||
Args: cobra.ExactArgs(2),
|
||||
RunE: commandChangeTabletType,
|
||||
Use: "ChangeTabletType [--dry-run] <alias> <tablet-type>",
|
||||
Short: "Changes the db type for the specified tablet, if possible.",
|
||||
Long: `Changes the db type for the specified tablet, if possible.
|
||||
|
||||
This command is used primarily to arrange replicas, and it will not convert a primary.
|
||||
NOTE: This command automatically updates the serving graph.`,
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.ExactArgs(2),
|
||||
RunE: commandChangeTabletType,
|
||||
}
|
||||
// DeleteTablets makes a DeleteTablets gRPC call to a vtctld.
|
||||
DeleteTablets = &cobra.Command{
|
||||
Use: "DeleteTablets TABLET_ALIAS [ TABLET_ALIAS ... ]",
|
||||
Args: cobra.MinimumNArgs(1),
|
||||
RunE: commandDeleteTablets,
|
||||
Use: "DeleteTablets <alias> [ <alias> ... ]",
|
||||
Short: "Deletes tablet(s) from the topology.",
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.MinimumNArgs(1),
|
||||
RunE: commandDeleteTablets,
|
||||
}
|
||||
// GetTablet makes a GetTablet gRPC call to a vtctld.
|
||||
GetTablet = &cobra.Command{
|
||||
Use: "GetTablet alias",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: commandGetTablet,
|
||||
Use: "GetTablet <alias>",
|
||||
Short: "Outputs a JSON structure that contains information about the tablet.",
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: commandGetTablet,
|
||||
}
|
||||
// GetTablets makes a GetTablets gRPC call to a vtctld.
|
||||
GetTablets = &cobra.Command{
|
||||
Use: "GetTablets [--strict] [{--cell $c1 [--cell $c2 ...], --keyspace $ks [--shard $shard], --tablet-alias $alias}]",
|
||||
Args: cobra.NoArgs,
|
||||
RunE: commandGetTablets,
|
||||
Use: "GetTablets [--strict] [{--cell $c1 [--cell $c2 ...], --keyspace $ks [--shard $shard], --tablet-alias $alias}]",
|
||||
Short: "Looks up tablets according to filter criteria.",
|
||||
Long: `Looks up tablets according to the filter criteria.
|
||||
|
||||
If --tablet-alias is passed, none of the other filters (keyspace, shard, cell) may
|
||||
be passed, and tablets are looked up by tablet alias only.
|
||||
|
||||
If --keyspace is passed, then all tablets in the keyspace are retrieved. The
|
||||
--shard flag may also be passed to further narrow the set of tablets to that
|
||||
<keyspace/shard>. Passing --shard without also passing --keyspace will fail.
|
||||
|
||||
Passing --cell limits the set of tablets to those in the specified cells. The
|
||||
--cell flag accepts a CSV argument (e.g. --cell "c1,c2") and may be repeated
|
||||
(e.g. --cell "c1" --cell "c2").
|
||||
|
||||
Valid output formats are "awk" and "json".`,
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.NoArgs,
|
||||
RunE: commandGetTablets,
|
||||
}
|
||||
// RefreshState makes a RefreshState gRPC call to a vtctld.
|
||||
RefreshState = &cobra.Command{
|
||||
Use: "RefreshState <alias>",
|
||||
Short: "Reloads the tablet record on the specified tablet.",
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: commandRefreshState,
|
||||
}
|
||||
// RefreshStateByShard makes a RefreshStateByShard gRPC call to a vtctld.
|
||||
RefreshStateByShard = &cobra.Command{
|
||||
Use: "RefreshStateByShard [--cell <cell1> ...] <keyspace/shard>",
|
||||
Short: "Reloads the tablet record all tablets in the shard, optionally limited to the specified cells.",
|
||||
DisableFlagsInUseLine: true,
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: commandRefreshStateByShard,
|
||||
}
|
||||
)
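To make the filter rules described in the GetTablets help text concrete, a few hedged example invocations (cell, keyspace, shard, and alias values are placeholders, not taken from the change):

// Look up a single tablet by alias; no other filters may be combined with it:
//	vtctldclient GetTablets --tablet-alias zone1-0000000100
// All tablets in a keyspace, optionally narrowed to one shard:
//	vtctldclient GetTablets --keyspace commerce --shard -80
// Limit to specific cells (CSV or repeated flags) and switch the output format:
//	vtctldclient GetTablets --cell "zone1,zone2" --format json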
|
||||
|
||||
|
@ -218,6 +260,60 @@ func commandGetTablets(cmd *cobra.Command, args []string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func commandRefreshState(cmd *cobra.Command, args []string) error {
|
||||
alias, err := topoproto.ParseTabletAlias(cmd.Flags().Arg(0))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cli.FinishedParsing(cmd)
|
||||
|
||||
_, err = client.RefreshState(commandCtx, &vtctldatapb.RefreshStateRequest{
|
||||
TabletAlias: alias,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fmt.Printf("Refreshed state on %s\n", topoproto.TabletAliasString(alias))
|
||||
return nil
|
||||
}
|
||||
|
||||
var refreshStateByShardOptions = struct {
|
||||
Cells []string
|
||||
}{}
|
||||
|
||||
func commandRefreshStateByShard(cmd *cobra.Command, args []string) error {
|
||||
keyspace, shard, err := topoproto.ParseKeyspaceShard(cmd.Flags().Arg(0))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cli.FinishedParsing(cmd)
|
||||
|
||||
resp, err := client.RefreshStateByShard(commandCtx, &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: keyspace,
|
||||
Shard: shard,
|
||||
Cells: refreshStateByShardOptions.Cells,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
msg := &strings.Builder{}
|
||||
msg.WriteString(fmt.Sprintf("Refreshed state on %s/%s", keyspace, shard))
|
||||
if len(refreshStateByShardOptions.Cells) > 0 {
|
||||
msg.WriteString(fmt.Sprintf(" in cells %s", strings.Join(refreshStateByShardOptions.Cells, ", ")))
|
||||
}
|
||||
msg.WriteByte('\n')
|
||||
if resp.IsPartialRefresh {
|
||||
msg.WriteString("State refresh was partial; some tablets in the shard may not have succeeded.\n")
|
||||
}
|
||||
|
||||
fmt.Print(msg.String())
|
||||
return nil
|
||||
}
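For orientation, hedged examples of the two refresh commands defined above (the alias, cells, and keyspace/shard values are placeholders):

//	vtctldclient RefreshState zone1-0000000100
//	vtctldclient RefreshStateByShard --cells zone1,zone2 commerce/0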
|
||||
|
||||
func init() {
|
||||
ChangeTabletType.Flags().BoolVarP(&changeTabletTypeOptions.DryRun, "dry-run", "d", false, "Shows the proposed change without actually executing it")
|
||||
Root.AddCommand(ChangeTabletType)
|
||||
|
@ -234,4 +330,9 @@ func init() {
|
|||
GetTablets.Flags().StringVar(&getTabletsOptions.Format, "format", "awk", "Output format to use; valid choices are (json, awk)")
|
||||
GetTablets.Flags().BoolVar(&getTabletsOptions.Strict, "strict", false, "Require all cells to return successful tablet data. Without --strict, tablet listings may be partial.")
|
||||
Root.AddCommand(GetTablets)
|
||||
|
||||
Root.AddCommand(RefreshState)
|
||||
|
||||
RefreshStateByShard.Flags().StringSliceVarP(&refreshStateByShardOptions.Cells, "cells", "c", nil, "If specified, only call RefreshState on tablets in the specified cells. If empty, all cells are considered.")
|
||||
Root.AddCommand(RefreshStateByShard)
|
||||
}
|
||||
|
|
|
@ -0,0 +1,44 @@
|
|||
/*
|
||||
Copyright 2021 The Vitess Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package protoutil
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"vitess.io/vitess/go/vt/proto/vttime"
|
||||
)
|
||||
|
||||
// TimeFromProto converts a vttime.Time proto message into a time.Time object.
|
||||
func TimeFromProto(tpb *vttime.Time) time.Time {
|
||||
if tpb == nil {
|
||||
return time.Time{}
|
||||
}
|
||||
|
||||
return time.Unix(tpb.Seconds, int64(tpb.Nanoseconds))
|
||||
}
|
||||
|
||||
// TimeToProto converts a time.Time object into a vttime.Time proto message.
|
||||
func TimeToProto(t time.Time) *vttime.Time {
|
||||
secs, nanos := t.Unix(), t.UnixNano()
|
||||
|
||||
nsecs := secs * 1e9
|
||||
extraNanos := nanos - nsecs
|
||||
return &vttime.Time{
|
||||
Seconds: secs,
|
||||
Nanoseconds: int32(extraNanos),
|
||||
}
|
||||
}
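A minimal round-trip sketch of the two helpers above, assuming the package is imported as protoutil (the Seconds value is simply the Unix timestamp of the chosen instant):

	ts := time.Date(2021, time.June, 12, 13, 14, 15, 500, time.UTC)
	pb := protoutil.TimeToProto(ts)     // &vttime.Time{Seconds: 1623503655, Nanoseconds: 500}
	back := protoutil.TimeFromProto(pb) // back.Equal(ts) == true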
|
|
@ -0,0 +1,52 @@
|
|||
/*
|
||||
Copyright 2021 The Vitess Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package protoutil
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"vitess.io/vitess/go/test/utils"
|
||||
"vitess.io/vitess/go/vt/proto/vttime"
|
||||
)
|
||||
|
||||
func TestTimeFromProto(t *testing.T) {
|
||||
now := time.Date(2021, time.June, 12, 13, 14, 15, 0 /* nanos */, time.UTC)
|
||||
vtt := TimeToProto(now)
|
||||
|
||||
utils.MustMatch(t, now, TimeFromProto(vtt))
|
||||
|
||||
vtt.Nanoseconds = 100
|
||||
utils.MustMatch(t, now.Add(100*time.Nanosecond), TimeFromProto(vtt))
|
||||
|
||||
vtt.Nanoseconds = 1e9
|
||||
utils.MustMatch(t, now.Add(time.Second), TimeFromProto(vtt))
|
||||
|
||||
assert.True(t, TimeFromProto(nil).IsZero(), "expected Go time from nil vttime to be Zero")
|
||||
}
|
||||
|
||||
func TestTimeToProto(t *testing.T) {
|
||||
now := time.Date(2021, time.June, 12, 13, 14, 15, 0 /* nanos */, time.UTC)
|
||||
secs := now.Unix()
|
||||
utils.MustMatch(t, &vttime.Time{Seconds: secs}, TimeToProto(now))
|
||||
|
||||
// Testing secs/nanos conversions
|
||||
utils.MustMatch(t, &vttime.Time{Seconds: secs, Nanoseconds: 100}, TimeToProto(now.Add(100*time.Nanosecond)))
|
||||
utils.MustMatch(t, &vttime.Time{Seconds: secs + 1}, TimeToProto(now.Add(1e9*time.Nanosecond))) // this should rollover to a full second
|
||||
}
|
|
@ -250,6 +250,7 @@ func TestSchemaChange(t *testing.T) {
|
|||
uuid := testOnlineDDLStatement(t, alterTableSuccessfulStatement, "online", "vtgate", "vrepl_col")
|
||||
onlineddl.CheckMigrationStatus(t, &vtParams, shards, uuid, schema.OnlineDDLStatusComplete)
|
||||
testRows(t)
|
||||
testMigrationRowCount(t, uuid)
|
||||
onlineddl.CheckCancelMigration(t, &vtParams, shards, uuid, false)
|
||||
onlineddl.CheckRetryMigration(t, &vtParams, shards, uuid, false)
|
||||
})
|
||||
|
@ -258,6 +259,7 @@ func TestSchemaChange(t *testing.T) {
|
|||
uuid := testOnlineDDLStatement(t, alterTableTrivialStatement, "online", "vtctl", "vrepl_col")
|
||||
onlineddl.CheckMigrationStatus(t, &vtParams, shards, uuid, schema.OnlineDDLStatusComplete)
|
||||
testRows(t)
|
||||
testMigrationRowCount(t, uuid)
|
||||
onlineddl.CheckCancelMigration(t, &vtParams, shards, uuid, false)
|
||||
onlineddl.CheckRetryMigration(t, &vtParams, shards, uuid, false)
|
||||
})
|
||||
|
@ -372,6 +374,21 @@ func testRows(t *testing.T) {
|
|||
require.Equal(t, countInserts, row.AsInt64("c", 0))
|
||||
}
|
||||
|
||||
func testMigrationRowCount(t *testing.T, uuid string) {
|
||||
insertMutex.Lock()
|
||||
defer insertMutex.Unlock()
|
||||
|
||||
var totalRowsCopied uint64
|
||||
// count the sum of rows copied across all shards; that should equal the total number of rows inserted into the table
|
||||
rs := onlineddl.ReadMigrations(t, &vtParams, uuid)
|
||||
require.NotNil(t, rs)
|
||||
for _, row := range rs.Named().Rows {
|
||||
rowsCopied := row.AsUint64("rows_copied", 0)
|
||||
totalRowsCopied += rowsCopied
|
||||
}
|
||||
require.Equal(t, uint64(countInserts), totalRowsCopied)
|
||||
}
|
||||
|
||||
func testWithInitialSchema(t *testing.T) {
|
||||
// Create 4 tables
|
||||
var sqlQuery = "" //nolint
|
||||
|
|
go/test/endtoend/onlineddl/vrepl_suite/testdata/enum-to-varchar-rename/after_columns
|
@ -0,0 +1 @@
|
|||
id, i, e2
|
go/test/endtoend/onlineddl/vrepl_suite/testdata/enum-to-varchar-rename/alter
|
@ -0,0 +1 @@
|
|||
change e e2 varchar(32) not null default ''
|
go/test/endtoend/onlineddl/vrepl_suite/testdata/enum-to-varchar-rename/before_columns
|
@ -0,0 +1 @@
|
|||
id, i, e
|
go/test/endtoend/onlineddl/vrepl_suite/testdata/enum-to-varchar-rename/create.sql
|
@ -0,0 +1,26 @@
|
|||
drop table if exists onlineddl_test;
|
||||
create table onlineddl_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
e enum('red', 'green', 'blue', 'orange') null default null collate 'utf8_bin',
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
insert into onlineddl_test values (null, 7, 'red');
|
||||
|
||||
drop event if exists onlineddl_test;
|
||||
delimiter ;;
|
||||
create event onlineddl_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into onlineddl_test values (null, 11, 'red');
|
||||
insert into onlineddl_test values (null, 13, 'green');
|
||||
insert into onlineddl_test values (null, 17, 'blue');
|
||||
set @last_insert_id := last_insert_id();
|
||||
update onlineddl_test set e='orange' where id = @last_insert_id;
|
||||
end ;;
|
|
@ -0,0 +1 @@
|
|||
change e e varchar(32) not null default ''
|
go/test/endtoend/onlineddl/vrepl_suite/testdata/enum-to-varchar/create.sql
|
@ -0,0 +1,26 @@
|
|||
drop table if exists onlineddl_test;
|
||||
create table onlineddl_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
e enum('red', 'green', 'blue', 'orange') null default null collate 'utf8_bin',
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
insert into onlineddl_test values (null, 7, 'red');
|
||||
|
||||
drop event if exists onlineddl_test;
|
||||
delimiter ;;
|
||||
create event onlineddl_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into onlineddl_test values (null, 11, 'red');
|
||||
insert into onlineddl_test values (null, 13, 'green');
|
||||
insert into onlineddl_test values (null, 17, 'blue');
|
||||
set @last_insert_id := last_insert_id();
|
||||
update onlineddl_test set e='orange' where id = @last_insert_id;
|
||||
end ;;
|
|
@ -94,6 +94,7 @@ func loadMergedPRs(from, to string) (prs []string, authors []string, commitCount
|
|||
func parseGitLog(s string) (prs []string, authorCommits []string, commitCount int, err error) {
|
||||
rx := regexp.MustCompile(`(.+)\t(.+)\t(.+)\t(.+)`)
|
||||
mergePR := regexp.MustCompile(`Merge pull request #(\d+)`)
|
||||
squashPR := regexp.MustCompile(`\(#(\d+)\)`)
|
||||
authMap := map[string]string{} // here we will store email <-> gh user mappings
|
||||
lines := strings.Split(s, "\n")
|
||||
for _, line := range lines {
|
||||
|
@ -112,13 +113,19 @@ func parseGitLog(s string) (prs []string, authorCommits []string, commitCount in
|
|||
continue
|
||||
}
|
||||
|
||||
if len(parents) > lengthOfSingleSHA {
|
||||
// if we have two parents, it means this is a merge commit. we only count non-merge commits
|
||||
continue
|
||||
if len(parents) <= lengthOfSingleSHA {
|
||||
// we have a single parent, and the commit counts
|
||||
commitCount++
|
||||
if _, exists := authMap[authorEmail]; !exists {
|
||||
authMap[authorEmail] = sha
|
||||
}
|
||||
}
|
||||
commitCount++
|
||||
if _, exists := authMap[authorEmail]; !exists {
|
||||
authMap[authorEmail] = sha
|
||||
|
||||
squashed := squashPR.FindStringSubmatch(title)
|
||||
if len(squashed) == 2 {
|
||||
// this is a merged PR. remember the PR #
|
||||
prs = append(prs, squashed[1])
|
||||
continue
|
||||
}
|
||||
}
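For context, each line handed to parseGitLog is expected to match the four tab-separated capture groups of the regex above; my reading of the test input further down is roughly (field names are mine, not from the code):

//	<author email>\t<commit title>\t<commit sha>\t<parent sha(s)>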
|
||||
|
||||
|
|
|
@ -98,9 +98,9 @@ aquarapTEST@gmail.com Fix mysql80 docker build with dep. a28591577b8d432b9c5d78a
|
|||
TEST@planetscale.com Revert "docker/lite/install_dependencies.sh: Upgrade MySQL 8 to 8.0.24" 7858ff46545cff749b3663c92ae90ef27a5dfbc2 27a5dfbc2
|
||||
TEST@planetscale.com docker/lite/install_dependencies.sh: Upgrade MySQL 8 to 8.0.24 c91d46782933292941a846fef2590ff1a6fa193f a6fa193f`
|
||||
|
||||
prs, authorCommits, count, err := parseGitLog(in)
|
||||
prs, authorCommits, nonMergeCommits, err := parseGitLog(in)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{"7629", "7831", "7912", "7943", "7951", "7959", "7964", "7968", "7970"}, prs)
|
||||
assert.Equal(t, []string{"385d0b327", "3b744e782", "4a0a943b0", "538709da5", "616f5562c", "6b9a731a2", "e5242a88a", "edac2baf8"}, authorCommits)
|
||||
assert.Equal(t, 28, count)
|
||||
assert.Equal(t, prs, []string{"7629", "7831", "7912", "7934", "7943", "7951", "7959", "7964", "7968", "7970"})
|
||||
assert.Equal(t, authorCommits, []string{"385d0b327", "3b744e782", "4a0a943b0", "538709da5", "616f5562c", "6b9a731a2", "e5242a88a", "edac2baf8"})
|
||||
assert.Equal(t, 28, nonMergeCommits)
|
||||
}
|
||||
|
|
|
@ -653,6 +653,11 @@ func GenerateUpdatePos(uid uint32, pos mysql.Position, timeUpdated int64, txTime
|
|||
"update _vt.vreplication set pos=%v, time_updated=%v, rows_copied=%v, message='' where id=%v", strGTID, timeUpdated, rowsCopied, uid)
|
||||
}
|
||||
|
||||
// GenerateUpdateRowsCopied returns a statement to update the rows_copied value in the _vt.vreplication table.
|
||||
func GenerateUpdateRowsCopied(uid uint32, rowsCopied int64) string {
|
||||
return fmt.Sprintf("update _vt.vreplication set rows_copied=%v where id=%v", rowsCopied, uid)
|
||||
}
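For a concrete sense of the output, GenerateUpdateRowsCopied(1, 1000) simply substitutes into the format string above and yields:

//	update _vt.vreplication set rows_copied=1000 where id=1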
|
||||
|
||||
// GenerateUpdateTime returns a statement to update time_updated in the _vt.vreplication table.
|
||||
func GenerateUpdateTime(uid uint32, timeUpdated int64) (string, error) {
|
||||
if timeUpdated == 0 {
|
||||
|
|
|
@ -32,7 +32,10 @@ import (
|
|||
"vitess.io/vitess/go/vt/log"
|
||||
"vitess.io/vitess/go/vt/mysqlctl/backupstorage"
|
||||
"vitess.io/vitess/go/vt/proto/vtrpc"
|
||||
"vitess.io/vitess/go/vt/topo/topoproto"
|
||||
"vitess.io/vitess/go/vt/vterrors"
|
||||
|
||||
topodatapb "vitess.io/vitess/go/vt/proto/topodata"
|
||||
)
|
||||
|
||||
// This file handles the backup and restore related code
|
||||
|
@ -138,6 +141,39 @@ func Backup(ctx context.Context, params BackupParams) error {
|
|||
return finishErr
|
||||
}
|
||||
|
||||
// ParseBackupName parses the backup name for a given dir/name, according to
|
||||
// the format generated by mysqlctl.Backup. An error is returned only if the
|
||||
// backup name does not have the expected number of parts; errors parsing the
|
||||
// timestamp and tablet alias are logged, and a nil value is returned for those
|
||||
// fields in case of error.
|
||||
func ParseBackupName(dir string, name string) (backupTime *time.Time, alias *topodatapb.TabletAlias, err error) {
|
||||
parts := strings.Split(name, ".")
|
||||
if len(parts) != 3 {
|
||||
return nil, nil, vterrors.Errorf(vtrpc.Code_INVALID_ARGUMENT, "cannot parse backup name %s, expected <date>.<time>.<tablet_alias>", name)
|
||||
}
|
||||
|
||||
// parts[0]: date part of BackupTimestampFormat
|
||||
// parts[1]: time part of BackupTimestampFormat
|
||||
// parts[2]: tablet alias
|
||||
timestamp := strings.Join(parts[:2], ".")
|
||||
aliasStr := parts[2]
|
||||
|
||||
btime, err := time.Parse(BackupTimestampFormat, timestamp)
|
||||
if err != nil {
|
||||
log.Errorf("error parsing backup time for %s/%s: %s", dir, name, err)
|
||||
} else {
|
||||
backupTime = &btime
|
||||
}
|
||||
|
||||
alias, err = topoproto.ParseTabletAlias(aliasStr)
|
||||
if err != nil {
|
||||
log.Errorf("error parsing tablet alias for %s/%s: %s", dir, name, err)
|
||||
alias = nil
|
||||
}
|
||||
|
||||
return backupTime, alias, nil
|
||||
}
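As a worked example (the same backup name appears in the mysqlctlproto test further down), parsing a name in the expected <date>.<time>.<tablet_alias> layout would look roughly like (the directory value is a placeholder):

//	backupTime, alias, err := mysqlctl.ParseBackupName("a_dir", "2021-06-12.150405.zone1-100")
//	// backupTime -> 2021-06-12 15:04:05 UTC, alias -> cell "zone1" / uid 100, err == nil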
|
||||
|
||||
// checkNoDB makes sure there is no user data already there.
|
||||
// Used by Restore, as we do not want to destroy an existing DB.
|
||||
// The user's database name must be given since we ignore all others.
|
||||
|
|
|
@ -17,6 +17,8 @@ limitations under the License.
|
|||
package mysqlctlproto
|
||||
|
||||
import (
|
||||
"vitess.io/vitess/go/protoutil"
|
||||
"vitess.io/vitess/go/vt/mysqlctl"
|
||||
"vitess.io/vitess/go/vt/mysqlctl/backupstorage"
|
||||
|
||||
mysqlctlpb "vitess.io/vitess/go/vt/proto/mysqlctl"
|
||||
|
@ -24,8 +26,21 @@ import (
|
|||
|
||||
// BackupHandleToProto returns a BackupInfo proto from a BackupHandle.
|
||||
func BackupHandleToProto(bh backupstorage.BackupHandle) *mysqlctlpb.BackupInfo {
|
||||
return &mysqlctlpb.BackupInfo{
|
||||
bi := &mysqlctlpb.BackupInfo{
|
||||
Name: bh.Name(),
|
||||
Directory: bh.Directory(),
|
||||
}
|
||||
|
||||
btime, alias, err := mysqlctl.ParseBackupName(bi.Directory, bi.Name)
|
||||
if err != nil { // if bi.Name does not match expected format, don't parse any further fields
|
||||
return bi
|
||||
}
|
||||
|
||||
if btime != nil {
|
||||
bi.Time = protoutil.TimeToProto(*btime)
|
||||
}
|
||||
|
||||
bi.TabletAlias = alias
|
||||
|
||||
return bi
|
||||
}
|
||||
|
|
|
@ -0,0 +1,114 @@
|
|||
/*
|
||||
Copyright 2021 The Vitess Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package mysqlctlproto
|
||||
|
||||
import (
|
||||
"path"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"vitess.io/vitess/go/protoutil"
|
||||
"vitess.io/vitess/go/test/utils"
|
||||
"vitess.io/vitess/go/vt/mysqlctl/backupstorage"
|
||||
|
||||
mysqlctlpb "vitess.io/vitess/go/vt/proto/mysqlctl"
|
||||
topodatapb "vitess.io/vitess/go/vt/proto/topodata"
|
||||
)
|
||||
|
||||
type backupHandle struct {
|
||||
backupstorage.BackupHandle
|
||||
name string
|
||||
directory string
|
||||
}
|
||||
|
||||
func (bh *backupHandle) Name() string { return bh.name }
|
||||
func (bh *backupHandle) Directory() string { return bh.directory }
|
||||
func (bh *backupHandle) testname() string { return path.Join(bh.directory, bh.name) }
|
||||
|
||||
func TestBackupHandleToProto(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
now := time.Date(2021, time.June, 12, 15, 4, 5, 0, time.UTC)
|
||||
tests := []struct {
|
||||
bh *backupHandle
|
||||
want *mysqlctlpb.BackupInfo
|
||||
}{
|
||||
{
|
||||
bh: &backupHandle{
|
||||
name: "2021-06-12.150405.zone1-100",
|
||||
directory: "foo",
|
||||
},
|
||||
want: &mysqlctlpb.BackupInfo{
|
||||
Name: "2021-06-12.150405.zone1-100",
|
||||
Directory: "foo",
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Time: protoutil.TimeToProto(now),
|
||||
},
|
||||
},
|
||||
{
|
||||
bh: &backupHandle{
|
||||
name: "bar",
|
||||
directory: "foo",
|
||||
},
|
||||
want: &mysqlctlpb.BackupInfo{
|
||||
Name: "bar",
|
||||
Directory: "foo",
|
||||
},
|
||||
},
|
||||
{
|
||||
bh: &backupHandle{
|
||||
name: "invalid.time.zone1-100",
|
||||
directory: "foo",
|
||||
},
|
||||
want: &mysqlctlpb.BackupInfo{
|
||||
Name: "invalid.time.zone1-100",
|
||||
Directory: "foo",
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Time: nil,
|
||||
},
|
||||
},
|
||||
{
|
||||
bh: &backupHandle{
|
||||
name: "2021-06-12.150405.not_an_alias",
|
||||
directory: "foo",
|
||||
},
|
||||
want: &mysqlctlpb.BackupInfo{
|
||||
Name: "2021-06-12.150405.not_an_alias",
|
||||
Directory: "foo",
|
||||
TabletAlias: nil,
|
||||
Time: protoutil.TimeToProto(now),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
|
||||
t.Run(tt.bh.testname(), func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
got := BackupHandleToProto(tt.bh)
|
||||
utils.MustMatch(t, tt.want, got)
|
||||
})
|
||||
}
|
||||
}
|
|
@ -809,8 +809,14 @@ type Rule struct {
|
|||
// "exclude" value, which will cause the matched tables
|
||||
// to be excluded.
|
||||
// TODO(sougou): support this on vstreamer side also.
|
||||
Filter string `protobuf:"bytes,2,opt,name=filter,proto3" json:"filter,omitempty"`
|
||||
ConvertCharset map[string]*CharsetConversion `protobuf:"bytes,3,rep,name=convert_charset,json=convertCharset,proto3" json:"convert_charset,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
|
||||
Filter string `protobuf:"bytes,2,opt,name=filter,proto3" json:"filter,omitempty"`
|
||||
// Example: key="color", value="'red','green','blue'"
|
||||
ConvertEnumToText map[string]string `protobuf:"bytes,3,rep,name=convert_enum_to_text,json=convertEnumToText,proto3" json:"convert_enum_to_text,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
|
||||
// ConvertCharset: optional mapping, between column name and a CharsetConversion.
|
||||
// This hints to vreplication that columns are encoded from/to non-trivial charsets
|
||||
// The map is only populated when either the "from" or the "to" charset of a column is non-trivial;
|
||||
// trivial charsets are utf8 and ascii variants.
|
||||
ConvertCharset map[string]*CharsetConversion `protobuf:"bytes,4,rep,name=convert_charset,json=convertCharset,proto3" json:"convert_charset,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
|
||||
}
|
||||
|
||||
func (x *Rule) Reset() {
|
||||
|
@ -859,6 +865,13 @@ func (x *Rule) GetFilter() string {
|
|||
return ""
|
||||
}
|
||||
|
||||
func (x *Rule) GetConvertEnumToText() map[string]string {
|
||||
if x != nil {
|
||||
return x.ConvertEnumToText
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *Rule) GetConvertCharset() map[string]*CharsetConversion {
|
||||
if x != nil {
|
||||
return x.ConvertCharset
|
||||
|
@ -2478,15 +2491,25 @@ var file_binlogdata_proto_rawDesc = []byte{
|
|||
0x65, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x66, 0x72, 0x6f, 0x6d, 0x43, 0x68,
|
||||
0x61, 0x72, 0x73, 0x65, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x74, 0x6f, 0x5f, 0x63, 0x68, 0x61, 0x72,
|
||||
0x73, 0x65, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x74, 0x6f, 0x43, 0x68, 0x61,
|
||||
0x72, 0x73, 0x65, 0x74, 0x22, 0xe5, 0x01, 0x0a, 0x04, 0x52, 0x75, 0x6c, 0x65, 0x12, 0x14, 0x0a,
|
||||
0x72, 0x73, 0x65, 0x74, 0x22, 0x85, 0x03, 0x0a, 0x04, 0x52, 0x75, 0x6c, 0x65, 0x12, 0x14, 0x0a,
|
||||
0x05, 0x6d, 0x61, 0x74, 0x63, 0x68, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x6d, 0x61,
|
||||
0x74, 0x63, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x09, 0x52, 0x06, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x4d, 0x0a, 0x0f, 0x63,
|
||||
0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x5f, 0x63, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x18, 0x03,
|
||||
0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x62, 0x69, 0x6e, 0x6c, 0x6f, 0x67, 0x64, 0x61, 0x74,
|
||||
0x61, 0x2e, 0x52, 0x75, 0x6c, 0x65, 0x2e, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x43, 0x68,
|
||||
0x61, 0x72, 0x73, 0x65, 0x74, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x76,
|
||||
0x65, 0x72, 0x74, 0x43, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x1a, 0x60, 0x0a, 0x13, 0x43, 0x6f,
|
||||
0x01, 0x28, 0x09, 0x52, 0x06, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x58, 0x0a, 0x14, 0x63,
|
||||
0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x5f, 0x65, 0x6e, 0x75, 0x6d, 0x5f, 0x74, 0x6f, 0x5f, 0x74,
|
||||
0x65, 0x78, 0x74, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x27, 0x2e, 0x62, 0x69, 0x6e, 0x6c,
|
||||
0x6f, 0x67, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x75, 0x6c, 0x65, 0x2e, 0x43, 0x6f, 0x6e, 0x76,
|
||||
0x65, 0x72, 0x74, 0x45, 0x6e, 0x75, 0x6d, 0x54, 0x6f, 0x54, 0x65, 0x78, 0x74, 0x45, 0x6e, 0x74,
|
||||
0x72, 0x79, 0x52, 0x11, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x45, 0x6e, 0x75, 0x6d, 0x54,
|
||||
0x6f, 0x54, 0x65, 0x78, 0x74, 0x12, 0x4d, 0x0a, 0x0f, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74,
|
||||
0x5f, 0x63, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24,
|
||||
0x2e, 0x62, 0x69, 0x6e, 0x6c, 0x6f, 0x67, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x75, 0x6c, 0x65,
|
||||
0x2e, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x43, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x45,
|
||||
0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x43, 0x68, 0x61,
|
||||
0x72, 0x73, 0x65, 0x74, 0x1a, 0x44, 0x0a, 0x16, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x45,
|
||||
0x6e, 0x75, 0x6d, 0x54, 0x6f, 0x54, 0x65, 0x78, 0x74, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10,
|
||||
0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79,
|
||||
0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
|
||||
0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x60, 0x0a, 0x13, 0x43, 0x6f,
|
||||
0x6e, 0x76, 0x65, 0x72, 0x74, 0x43, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x45, 0x6e, 0x74, 0x72,
|
||||
0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03,
|
||||
0x6b, 0x65, 0x79, 0x12, 0x33, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01,
|
||||
|
@ -2749,7 +2772,7 @@ func file_binlogdata_proto_rawDescGZIP() []byte {
|
|||
}
|
||||
|
||||
var file_binlogdata_proto_enumTypes = make([]protoimpl.EnumInfo, 5)
|
||||
var file_binlogdata_proto_msgTypes = make([]protoimpl.MessageInfo, 30)
|
||||
var file_binlogdata_proto_msgTypes = make([]protoimpl.MessageInfo, 31)
|
||||
var file_binlogdata_proto_goTypes = []interface{}{
|
||||
(OnDDLAction)(0), // 0: binlogdata.OnDDLAction
|
||||
(VEventType)(0), // 1: binlogdata.VEventType
|
||||
|
@ -2785,78 +2808,80 @@ var file_binlogdata_proto_goTypes = []interface{}{
|
|||
(*VStreamResultsRequest)(nil), // 31: binlogdata.VStreamResultsRequest
|
||||
(*VStreamResultsResponse)(nil), // 32: binlogdata.VStreamResultsResponse
|
||||
(*BinlogTransaction_Statement)(nil), // 33: binlogdata.BinlogTransaction.Statement
|
||||
nil, // 34: binlogdata.Rule.ConvertCharsetEntry
|
||||
(*query.EventToken)(nil), // 35: query.EventToken
|
||||
(*topodata.KeyRange)(nil), // 36: topodata.KeyRange
|
||||
(topodata.TabletType)(0), // 37: topodata.TabletType
|
||||
(*query.Row)(nil), // 38: query.Row
|
||||
(*query.Field)(nil), // 39: query.Field
|
||||
(*vtrpc.CallerID)(nil), // 40: vtrpc.CallerID
|
||||
(*query.VTGateCallerID)(nil), // 41: query.VTGateCallerID
|
||||
(*query.Target)(nil), // 42: query.Target
|
||||
(*query.QueryResult)(nil), // 43: query.QueryResult
|
||||
nil, // 34: binlogdata.Rule.ConvertEnumToTextEntry
|
||||
nil, // 35: binlogdata.Rule.ConvertCharsetEntry
|
||||
(*query.EventToken)(nil), // 36: query.EventToken
|
||||
(*topodata.KeyRange)(nil), // 37: topodata.KeyRange
|
||||
(topodata.TabletType)(0), // 38: topodata.TabletType
|
||||
(*query.Row)(nil), // 39: query.Row
|
||||
(*query.Field)(nil), // 40: query.Field
|
||||
(*vtrpc.CallerID)(nil), // 41: vtrpc.CallerID
|
||||
(*query.VTGateCallerID)(nil), // 42: query.VTGateCallerID
|
||||
(*query.Target)(nil), // 43: query.Target
|
||||
(*query.QueryResult)(nil), // 44: query.QueryResult
|
||||
}
|
||||
var file_binlogdata_proto_depIdxs = []int32{
|
||||
33, // 0: binlogdata.BinlogTransaction.statements:type_name -> binlogdata.BinlogTransaction.Statement
|
||||
35, // 1: binlogdata.BinlogTransaction.event_token:type_name -> query.EventToken
|
||||
36, // 2: binlogdata.StreamKeyRangeRequest.key_range:type_name -> topodata.KeyRange
|
||||
36, // 1: binlogdata.BinlogTransaction.event_token:type_name -> query.EventToken
|
||||
37, // 2: binlogdata.StreamKeyRangeRequest.key_range:type_name -> topodata.KeyRange
|
||||
5, // 3: binlogdata.StreamKeyRangeRequest.charset:type_name -> binlogdata.Charset
|
||||
6, // 4: binlogdata.StreamKeyRangeResponse.binlog_transaction:type_name -> binlogdata.BinlogTransaction
|
||||
5, // 5: binlogdata.StreamTablesRequest.charset:type_name -> binlogdata.Charset
|
||||
6, // 6: binlogdata.StreamTablesResponse.binlog_transaction:type_name -> binlogdata.BinlogTransaction
|
||||
34, // 7: binlogdata.Rule.convert_charset:type_name -> binlogdata.Rule.ConvertCharsetEntry
|
||||
12, // 8: binlogdata.Filter.rules:type_name -> binlogdata.Rule
|
||||
4, // 9: binlogdata.Filter.fieldEventMode:type_name -> binlogdata.Filter.FieldEventMode
|
||||
37, // 10: binlogdata.BinlogSource.tablet_type:type_name -> topodata.TabletType
|
||||
36, // 11: binlogdata.BinlogSource.key_range:type_name -> topodata.KeyRange
|
||||
13, // 12: binlogdata.BinlogSource.filter:type_name -> binlogdata.Filter
|
||||
0, // 13: binlogdata.BinlogSource.on_ddl:type_name -> binlogdata.OnDDLAction
|
||||
38, // 14: binlogdata.RowChange.before:type_name -> query.Row
|
||||
38, // 15: binlogdata.RowChange.after:type_name -> query.Row
|
||||
15, // 16: binlogdata.RowEvent.row_changes:type_name -> binlogdata.RowChange
|
||||
39, // 17: binlogdata.FieldEvent.fields:type_name -> query.Field
|
||||
30, // 18: binlogdata.ShardGtid.table_p_ks:type_name -> binlogdata.TableLastPK
|
||||
18, // 19: binlogdata.VGtid.shard_gtids:type_name -> binlogdata.ShardGtid
|
||||
2, // 20: binlogdata.Journal.migration_type:type_name -> binlogdata.MigrationType
|
||||
18, // 21: binlogdata.Journal.shard_gtids:type_name -> binlogdata.ShardGtid
|
||||
20, // 22: binlogdata.Journal.participants:type_name -> binlogdata.KeyspaceShard
|
||||
1, // 23: binlogdata.VEvent.type:type_name -> binlogdata.VEventType
|
||||
16, // 24: binlogdata.VEvent.row_event:type_name -> binlogdata.RowEvent
|
||||
17, // 25: binlogdata.VEvent.field_event:type_name -> binlogdata.FieldEvent
|
||||
19, // 26: binlogdata.VEvent.vgtid:type_name -> binlogdata.VGtid
|
||||
21, // 27: binlogdata.VEvent.journal:type_name -> binlogdata.Journal
|
||||
29, // 28: binlogdata.VEvent.last_p_k_event:type_name -> binlogdata.LastPKEvent
|
||||
39, // 29: binlogdata.MinimalTable.fields:type_name -> query.Field
|
||||
23, // 30: binlogdata.MinimalSchema.tables:type_name -> binlogdata.MinimalTable
|
||||
40, // 31: binlogdata.VStreamRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
41, // 32: binlogdata.VStreamRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
42, // 33: binlogdata.VStreamRequest.target:type_name -> query.Target
|
||||
13, // 34: binlogdata.VStreamRequest.filter:type_name -> binlogdata.Filter
|
||||
30, // 35: binlogdata.VStreamRequest.table_last_p_ks:type_name -> binlogdata.TableLastPK
|
||||
22, // 36: binlogdata.VStreamResponse.events:type_name -> binlogdata.VEvent
|
||||
40, // 37: binlogdata.VStreamRowsRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
41, // 38: binlogdata.VStreamRowsRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
42, // 39: binlogdata.VStreamRowsRequest.target:type_name -> query.Target
|
||||
43, // 40: binlogdata.VStreamRowsRequest.lastpk:type_name -> query.QueryResult
|
||||
39, // 41: binlogdata.VStreamRowsResponse.fields:type_name -> query.Field
|
||||
39, // 42: binlogdata.VStreamRowsResponse.pkfields:type_name -> query.Field
|
||||
38, // 43: binlogdata.VStreamRowsResponse.rows:type_name -> query.Row
|
||||
38, // 44: binlogdata.VStreamRowsResponse.lastpk:type_name -> query.Row
|
||||
30, // 45: binlogdata.LastPKEvent.table_last_p_k:type_name -> binlogdata.TableLastPK
|
||||
43, // 46: binlogdata.TableLastPK.lastpk:type_name -> query.QueryResult
|
||||
40, // 47: binlogdata.VStreamResultsRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
41, // 48: binlogdata.VStreamResultsRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
42, // 49: binlogdata.VStreamResultsRequest.target:type_name -> query.Target
|
||||
39, // 50: binlogdata.VStreamResultsResponse.fields:type_name -> query.Field
|
||||
38, // 51: binlogdata.VStreamResultsResponse.rows:type_name -> query.Row
|
||||
3, // 52: binlogdata.BinlogTransaction.Statement.category:type_name -> binlogdata.BinlogTransaction.Statement.Category
|
||||
5, // 53: binlogdata.BinlogTransaction.Statement.charset:type_name -> binlogdata.Charset
|
||||
11, // 54: binlogdata.Rule.ConvertCharsetEntry.value:type_name -> binlogdata.CharsetConversion
|
||||
55, // [55:55] is the sub-list for method output_type
|
||||
55, // [55:55] is the sub-list for method input_type
|
||||
55, // [55:55] is the sub-list for extension type_name
|
||||
55, // [55:55] is the sub-list for extension extendee
|
||||
0, // [0:55] is the sub-list for field type_name
|
||||
34, // 7: binlogdata.Rule.convert_enum_to_text:type_name -> binlogdata.Rule.ConvertEnumToTextEntry
|
||||
35, // 8: binlogdata.Rule.convert_charset:type_name -> binlogdata.Rule.ConvertCharsetEntry
|
||||
12, // 9: binlogdata.Filter.rules:type_name -> binlogdata.Rule
|
||||
4, // 10: binlogdata.Filter.fieldEventMode:type_name -> binlogdata.Filter.FieldEventMode
|
||||
38, // 11: binlogdata.BinlogSource.tablet_type:type_name -> topodata.TabletType
|
||||
37, // 12: binlogdata.BinlogSource.key_range:type_name -> topodata.KeyRange
|
||||
13, // 13: binlogdata.BinlogSource.filter:type_name -> binlogdata.Filter
|
||||
0, // 14: binlogdata.BinlogSource.on_ddl:type_name -> binlogdata.OnDDLAction
|
||||
39, // 15: binlogdata.RowChange.before:type_name -> query.Row
|
||||
39, // 16: binlogdata.RowChange.after:type_name -> query.Row
|
||||
15, // 17: binlogdata.RowEvent.row_changes:type_name -> binlogdata.RowChange
|
||||
40, // 18: binlogdata.FieldEvent.fields:type_name -> query.Field
|
||||
30, // 19: binlogdata.ShardGtid.table_p_ks:type_name -> binlogdata.TableLastPK
|
||||
18, // 20: binlogdata.VGtid.shard_gtids:type_name -> binlogdata.ShardGtid
|
||||
2, // 21: binlogdata.Journal.migration_type:type_name -> binlogdata.MigrationType
|
||||
18, // 22: binlogdata.Journal.shard_gtids:type_name -> binlogdata.ShardGtid
|
||||
20, // 23: binlogdata.Journal.participants:type_name -> binlogdata.KeyspaceShard
|
||||
1, // 24: binlogdata.VEvent.type:type_name -> binlogdata.VEventType
|
||||
16, // 25: binlogdata.VEvent.row_event:type_name -> binlogdata.RowEvent
|
||||
17, // 26: binlogdata.VEvent.field_event:type_name -> binlogdata.FieldEvent
|
||||
19, // 27: binlogdata.VEvent.vgtid:type_name -> binlogdata.VGtid
|
||||
21, // 28: binlogdata.VEvent.journal:type_name -> binlogdata.Journal
|
||||
29, // 29: binlogdata.VEvent.last_p_k_event:type_name -> binlogdata.LastPKEvent
|
||||
40, // 30: binlogdata.MinimalTable.fields:type_name -> query.Field
|
||||
23, // 31: binlogdata.MinimalSchema.tables:type_name -> binlogdata.MinimalTable
|
||||
41, // 32: binlogdata.VStreamRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
42, // 33: binlogdata.VStreamRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
43, // 34: binlogdata.VStreamRequest.target:type_name -> query.Target
|
||||
13, // 35: binlogdata.VStreamRequest.filter:type_name -> binlogdata.Filter
|
||||
30, // 36: binlogdata.VStreamRequest.table_last_p_ks:type_name -> binlogdata.TableLastPK
|
||||
22, // 37: binlogdata.VStreamResponse.events:type_name -> binlogdata.VEvent
|
||||
41, // 38: binlogdata.VStreamRowsRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
42, // 39: binlogdata.VStreamRowsRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
43, // 40: binlogdata.VStreamRowsRequest.target:type_name -> query.Target
|
||||
44, // 41: binlogdata.VStreamRowsRequest.lastpk:type_name -> query.QueryResult
|
||||
40, // 42: binlogdata.VStreamRowsResponse.fields:type_name -> query.Field
|
||||
40, // 43: binlogdata.VStreamRowsResponse.pkfields:type_name -> query.Field
|
||||
39, // 44: binlogdata.VStreamRowsResponse.rows:type_name -> query.Row
|
||||
39, // 45: binlogdata.VStreamRowsResponse.lastpk:type_name -> query.Row
|
||||
30, // 46: binlogdata.LastPKEvent.table_last_p_k:type_name -> binlogdata.TableLastPK
|
||||
44, // 47: binlogdata.TableLastPK.lastpk:type_name -> query.QueryResult
|
||||
41, // 48: binlogdata.VStreamResultsRequest.effective_caller_id:type_name -> vtrpc.CallerID
|
||||
42, // 49: binlogdata.VStreamResultsRequest.immediate_caller_id:type_name -> query.VTGateCallerID
|
||||
43, // 50: binlogdata.VStreamResultsRequest.target:type_name -> query.Target
|
||||
40, // 51: binlogdata.VStreamResultsResponse.fields:type_name -> query.Field
|
||||
39, // 52: binlogdata.VStreamResultsResponse.rows:type_name -> query.Row
|
||||
3, // 53: binlogdata.BinlogTransaction.Statement.category:type_name -> binlogdata.BinlogTransaction.Statement.Category
|
||||
5, // 54: binlogdata.BinlogTransaction.Statement.charset:type_name -> binlogdata.Charset
|
||||
11, // 55: binlogdata.Rule.ConvertCharsetEntry.value:type_name -> binlogdata.CharsetConversion
|
||||
56, // [56:56] is the sub-list for method output_type
|
||||
56, // [56:56] is the sub-list for method input_type
|
||||
56, // [56:56] is the sub-list for extension type_name
|
||||
56, // [56:56] is the sub-list for extension extendee
|
||||
0, // [0:56] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_binlogdata_proto_init() }
|
||||
|
@ -3220,7 +3245,7 @@ func file_binlogdata_proto_init() {
|
|||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_binlogdata_proto_rawDesc,
|
||||
NumEnums: 5,
|
||||
NumMessages: 30,
|
||||
NumMessages: 31,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
|
|
|
@ -499,6 +499,25 @@ func (m *Rule) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
|
|||
dAtA[i] = 0xa
|
||||
i = encodeVarint(dAtA, i, uint64(baseI-i))
|
||||
i--
|
||||
dAtA[i] = 0x22
|
||||
}
|
||||
}
|
||||
if len(m.ConvertEnumToText) > 0 {
|
||||
for k := range m.ConvertEnumToText {
|
||||
v := m.ConvertEnumToText[k]
|
||||
baseI := i
|
||||
i -= len(v)
|
||||
copy(dAtA[i:], v)
|
||||
i = encodeVarint(dAtA, i, uint64(len(v)))
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
i -= len(k)
|
||||
copy(dAtA[i:], k)
|
||||
i = encodeVarint(dAtA, i, uint64(len(k)))
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
i = encodeVarint(dAtA, i, uint64(baseI-i))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
}
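// As a worked example of the hard-coded tag bytes above (added for
// illustration; not part of the generated code): each tag byte is
// (field_number << 3) | wire_type, so for the Rule.ConvertEnumToText map
// (field 3) every entry is written back-to-front as a nested message:
//
//	0x1a = (3 << 3) | 2  // the map entry itself, length-delimited
//	0x0a = (1 << 3) | 2  // entry key, field 1, string
//	0x12 = (2 << 3) | 2  // entry value, field 2, string
//
// The 0x22 written just above belongs to the ConvertCharset map, field 4 of Rule.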
|
||||
|
@ -2167,6 +2186,14 @@ func (m *Rule) SizeVT() (n int) {
|
|||
if l > 0 {
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
if len(m.ConvertEnumToText) > 0 {
|
||||
for k, v := range m.ConvertEnumToText {
|
||||
_ = k
|
||||
_ = v
|
||||
mapEntrySize := 1 + len(k) + sov(uint64(len(k))) + 1 + len(v) + sov(uint64(len(v)))
|
||||
n += mapEntrySize + 1 + sov(uint64(mapEntrySize))
|
||||
}
|
||||
}
|
||||
if len(m.ConvertCharset) > 0 {
|
||||
for k, v := range m.ConvertCharset {
|
||||
_ = k
|
||||
|
@ -3821,6 +3848,133 @@ func (m *Rule) UnmarshalVT(dAtA []byte) error {
|
|||
m.Filter = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ConvertEnumToText", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if m.ConvertEnumToText == nil {
|
||||
m.ConvertEnumToText = make(map[string]string)
|
||||
}
|
||||
var mapkey string
|
||||
var mapvalue string
|
||||
for iNdEx < postIndex {
|
||||
entryPreIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
if fieldNum == 1 {
|
||||
var stringLenmapkey uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLenmapkey |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLenmapkey := int(stringLenmapkey)
|
||||
if intStringLenmapkey < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postStringIndexmapkey := iNdEx + intStringLenmapkey
|
||||
if postStringIndexmapkey < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postStringIndexmapkey > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
|
||||
iNdEx = postStringIndexmapkey
|
||||
} else if fieldNum == 2 {
|
||||
var stringLenmapvalue uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLenmapvalue |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLenmapvalue := int(stringLenmapvalue)
|
||||
if intStringLenmapvalue < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postStringIndexmapvalue := iNdEx + intStringLenmapvalue
|
||||
if postStringIndexmapvalue < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postStringIndexmapvalue > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])
|
||||
iNdEx = postStringIndexmapvalue
|
||||
} else {
|
||||
iNdEx = entryPreIndex
|
||||
skippy, err := skip(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if (skippy < 0) || (iNdEx+skippy) < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if (iNdEx + skippy) > postIndex {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
m.ConvertEnumToText[mapkey] = mapvalue
|
||||
iNdEx = postIndex
|
||||
case 4:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ConvertCharset", wireType)
|
||||
}
|
||||
|
|
|
@ -29,6 +29,8 @@ import (
|
|||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
reflect "reflect"
|
||||
sync "sync"
|
||||
topodata "vitess.io/vitess/go/vt/proto/topodata"
|
||||
vttime "vitess.io/vitess/go/vt/proto/vttime"
|
||||
)
|
||||
|
||||
const (
|
||||
|
@ -38,6 +40,66 @@ const (
|
|||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
// Status is an enum representing the possible status of a backup.
type BackupInfo_Status int32

const (
	BackupInfo_UNKNOWN BackupInfo_Status = 0
	BackupInfo_INCOMPLETE BackupInfo_Status = 1
	BackupInfo_COMPLETE BackupInfo_Status = 2
	// A backup status of INVALID should be set if the backup is complete
	// but unusable in some way (partial upload, corrupt file, etc).
	BackupInfo_INVALID BackupInfo_Status = 3
	// A backup status of VALID should be set if the backup is both
	// complete and usable.
	BackupInfo_VALID BackupInfo_Status = 4
)

// Enum value maps for BackupInfo_Status.
var (
	BackupInfo_Status_name = map[int32]string{
		0: "UNKNOWN",
		1: "INCOMPLETE",
		2: "COMPLETE",
		3: "INVALID",
		4: "VALID",
	}
	BackupInfo_Status_value = map[string]int32{
		"UNKNOWN":    0,
		"INCOMPLETE": 1,
		"COMPLETE":   2,
		"INVALID":    3,
		"VALID":      4,
	}
)

func (x BackupInfo_Status) Enum() *BackupInfo_Status {
	p := new(BackupInfo_Status)
	*p = x
	return p
}

func (x BackupInfo_Status) String() string {
	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}

func (BackupInfo_Status) Descriptor() protoreflect.EnumDescriptor {
	return file_mysqlctl_proto_enumTypes[0].Descriptor()
}

func (BackupInfo_Status) Type() protoreflect.EnumType {
	return &file_mysqlctl_proto_enumTypes[0]
}

func (x BackupInfo_Status) Number() protoreflect.EnumNumber {
	return protoreflect.EnumNumber(x)
}

// Deprecated: Use BackupInfo_Status.Descriptor instead.
func (BackupInfo_Status) EnumDescriptor() ([]byte, []int) {
	return file_mysqlctl_proto_rawDescGZIP(), []int{10, 0}
}
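// A minimal usage sketch (illustrative only, not part of the generated file):
// callers can compare a backup's status against the generated constants; the
// `backups` slice here is hypothetical.
//
//	for _, b := range backups {
//		if b.GetStatus() == BackupInfo_INCOMPLETE {
//			fmt.Printf("backup %s in %s is incomplete\n", b.GetName(), b.GetDirectory())
//		}
//	}
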
type StartRequest struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
|
@ -442,8 +504,16 @@ type BackupInfo struct {
|
|||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
|
||||
Directory string `protobuf:"bytes,2,opt,name=directory,proto3" json:"directory,omitempty"`
|
||||
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
|
||||
Directory string `protobuf:"bytes,2,opt,name=directory,proto3" json:"directory,omitempty"`
|
||||
Keyspace string `protobuf:"bytes,3,opt,name=keyspace,proto3" json:"keyspace,omitempty"`
|
||||
Shard string `protobuf:"bytes,4,opt,name=shard,proto3" json:"shard,omitempty"`
|
||||
TabletAlias *topodata.TabletAlias `protobuf:"bytes,5,opt,name=tablet_alias,json=tabletAlias,proto3" json:"tablet_alias,omitempty"`
|
||||
Time *vttime.Time `protobuf:"bytes,6,opt,name=time,proto3" json:"time,omitempty"`
|
||||
// Engine is the name of the backupengine implementation used to create
|
||||
// this backup.
|
||||
Engine string `protobuf:"bytes,7,opt,name=engine,proto3" json:"engine,omitempty"`
|
||||
Status BackupInfo_Status `protobuf:"varint,8,opt,name=status,proto3,enum=mysqlctl.BackupInfo_Status" json:"status,omitempty"`
|
||||
}
|
||||
|
||||
func (x *BackupInfo) Reset() {
|
||||
|
@ -492,61 +562,124 @@ func (x *BackupInfo) GetDirectory() string {
|
|||
return ""
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetKeyspace() string {
|
||||
if x != nil {
|
||||
return x.Keyspace
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetShard() string {
|
||||
if x != nil {
|
||||
return x.Shard
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetTabletAlias() *topodata.TabletAlias {
|
||||
if x != nil {
|
||||
return x.TabletAlias
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetTime() *vttime.Time {
|
||||
if x != nil {
|
||||
return x.Time
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetEngine() string {
|
||||
if x != nil {
|
||||
return x.Engine
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *BackupInfo) GetStatus() BackupInfo_Status {
|
||||
if x != nil {
|
||||
return x.Status
|
||||
}
|
||||
return BackupInfo_UNKNOWN
|
||||
}
|
||||
|
||||
var File_mysqlctl_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_mysqlctl_proto_rawDesc = []byte{
|
||||
0x0a, 0x0e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x12, 0x08, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x22, 0x2f, 0x0a, 0x0c, 0x53, 0x74,
|
||||
0x61, 0x72, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1f, 0x0a, 0x0b, 0x6d, 0x79,
|
||||
0x73, 0x71, 0x6c, 0x64, 0x5f, 0x61, 0x72, 0x67, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52,
|
||||
0x0a, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x64, 0x41, 0x72, 0x67, 0x73, 0x22, 0x0f, 0x0a, 0x0d, 0x53,
|
||||
0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x39, 0x0a, 0x0f,
|
||||
0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12,
|
||||
0x26, 0x0a, 0x0f, 0x77, 0x61, 0x69, 0x74, 0x5f, 0x66, 0x6f, 0x72, 0x5f, 0x6d, 0x79, 0x73, 0x71,
|
||||
0x6c, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0d, 0x77, 0x61, 0x69, 0x74, 0x46, 0x6f,
|
||||
0x72, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x64, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x68, 0x75, 0x74, 0x64,
|
||||
0x6f, 0x77, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x18, 0x0a, 0x16, 0x52,
|
||||
0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x19, 0x0a, 0x17, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71,
|
||||
0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
|
||||
0x22, 0x15, 0x0a, 0x13, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x52, 0x65, 0x69, 0x6e, 0x69,
|
||||
0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
|
||||
0x16, 0x0a, 0x14, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x17, 0x0a, 0x15, 0x52, 0x65, 0x66, 0x72, 0x65,
|
||||
0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
|
||||
0x22, 0x3e, 0x0a, 0x0a, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12,
|
||||
0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61,
|
||||
0x6d, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x79, 0x18,
|
||||
0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x79,
|
||||
0x32, 0x8a, 0x03, 0x0a, 0x08, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x43, 0x74, 0x6c, 0x12, 0x3a, 0x0a,
|
||||
0x05, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12, 0x16, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74,
|
||||
0x6c, 0x2e, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x17,
|
||||
0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52,
|
||||
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x43, 0x0a, 0x08, 0x53, 0x68, 0x75,
|
||||
0x74, 0x64, 0x6f, 0x77, 0x6e, 0x12, 0x19, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c,
|
||||
0x2e, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
|
||||
0x1a, 0x1a, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x53, 0x68, 0x75, 0x74,
|
||||
0x64, 0x6f, 0x77, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x58,
|
||||
0x0a, 0x0f, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64,
|
||||
0x65, 0x12, 0x20, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x75, 0x6e,
|
||||
0x12, 0x08, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x1a, 0x0e, 0x74, 0x6f, 0x70, 0x6f,
|
||||
0x64, 0x61, 0x74, 0x61, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x0c, 0x76, 0x74, 0x74, 0x69,
|
||||
0x6d, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x2f, 0x0a, 0x0c, 0x53, 0x74, 0x61, 0x72,
|
||||
0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1f, 0x0a, 0x0b, 0x6d, 0x79, 0x73, 0x71,
|
||||
0x6c, 0x64, 0x5f, 0x61, 0x72, 0x67, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0a, 0x6d,
|
||||
0x79, 0x73, 0x71, 0x6c, 0x64, 0x41, 0x72, 0x67, 0x73, 0x22, 0x0f, 0x0a, 0x0d, 0x53, 0x74, 0x61,
|
||||
0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x39, 0x0a, 0x0f, 0x53, 0x68,
|
||||
0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x26, 0x0a,
|
||||
0x0f, 0x77, 0x61, 0x69, 0x74, 0x5f, 0x66, 0x6f, 0x72, 0x5f, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x64,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0d, 0x77, 0x61, 0x69, 0x74, 0x46, 0x6f, 0x72, 0x4d,
|
||||
0x79, 0x73, 0x71, 0x6c, 0x64, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77,
|
||||
0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x18, 0x0a, 0x16, 0x52, 0x75, 0x6e,
|
||||
0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x71, 0x75,
|
||||
0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52,
|
||||
0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65,
|
||||
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x4f, 0x0a, 0x0c, 0x52, 0x65, 0x69, 0x6e,
|
||||
0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x1d, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c,
|
||||
0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1e, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63,
|
||||
0x74, 0x6c, 0x2e, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52,
|
||||
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x52, 0x0a, 0x0d, 0x52, 0x65, 0x66,
|
||||
0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x1e, 0x2e, 0x6d, 0x79, 0x73,
|
||||
0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e,
|
||||
0x66, 0x69, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1f, 0x2e, 0x6d, 0x79, 0x73,
|
||||
0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e,
|
||||
0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x42, 0x27, 0x5a,
|
||||
0x25, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x76, 0x69, 0x74, 0x65, 0x73,
|
||||
0x73, 0x2f, 0x67, 0x6f, 0x2f, 0x76, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x6d, 0x79,
|
||||
0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
0x65, 0x73, 0x74, 0x22, 0x19, 0x0a, 0x17, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55,
|
||||
0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x15,
|
||||
0x0a, 0x13, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43,
|
||||
0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x16, 0x0a,
|
||||
0x14, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x17, 0x0a, 0x15, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68,
|
||||
0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0xe6,
|
||||
0x02, 0x0a, 0x0a, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a,
|
||||
0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d,
|
||||
0x65, 0x12, 0x1c, 0x0a, 0x09, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x79, 0x18, 0x02,
|
||||
0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x79, 0x12,
|
||||
0x1a, 0x0a, 0x08, 0x6b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28,
|
||||
0x09, 0x52, 0x08, 0x6b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x73,
|
||||
0x68, 0x61, 0x72, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x73, 0x68, 0x61, 0x72,
|
||||
0x64, 0x12, 0x38, 0x0a, 0x0c, 0x74, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x5f, 0x61, 0x6c, 0x69, 0x61,
|
||||
0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x6f, 0x70, 0x6f, 0x64, 0x61,
|
||||
0x74, 0x61, 0x2e, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x52, 0x0b,
|
||||
0x74, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x12, 0x20, 0x0a, 0x04, 0x74,
|
||||
0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x76, 0x74, 0x74, 0x69,
|
||||
0x6d, 0x65, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x04, 0x74, 0x69, 0x6d, 0x65, 0x12, 0x16, 0x0a,
|
||||
0x06, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x65,
|
||||
0x6e, 0x67, 0x69, 0x6e, 0x65, 0x12, 0x33, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
|
||||
0x08, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x1b, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c,
|
||||
0x2e, 0x42, 0x61, 0x63, 0x6b, 0x75, 0x70, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x53, 0x74, 0x61, 0x74,
|
||||
0x75, 0x73, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x22, 0x4b, 0x0a, 0x06, 0x53, 0x74,
|
||||
0x61, 0x74, 0x75, 0x73, 0x12, 0x0b, 0x0a, 0x07, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10,
|
||||
0x00, 0x12, 0x0e, 0x0a, 0x0a, 0x49, 0x4e, 0x43, 0x4f, 0x4d, 0x50, 0x4c, 0x45, 0x54, 0x45, 0x10,
|
||||
0x01, 0x12, 0x0c, 0x0a, 0x08, 0x43, 0x4f, 0x4d, 0x50, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x02, 0x12,
|
||||
0x0b, 0x0a, 0x07, 0x49, 0x4e, 0x56, 0x41, 0x4c, 0x49, 0x44, 0x10, 0x03, 0x12, 0x09, 0x0a, 0x05,
|
||||
0x56, 0x41, 0x4c, 0x49, 0x44, 0x10, 0x04, 0x32, 0x8a, 0x03, 0x0a, 0x08, 0x4d, 0x79, 0x73, 0x71,
|
||||
0x6c, 0x43, 0x74, 0x6c, 0x12, 0x3a, 0x0a, 0x05, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12, 0x16, 0x2e,
|
||||
0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x17, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c,
|
||||
0x2e, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00,
|
||||
0x12, 0x43, 0x0a, 0x08, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x12, 0x19, 0x2e, 0x6d,
|
||||
0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1a, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63,
|
||||
0x74, 0x6c, 0x2e, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f,
|
||||
0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x58, 0x0a, 0x0f, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71,
|
||||
0x6c, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x12, 0x20, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c,
|
||||
0x63, 0x74, 0x6c, 0x2e, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70, 0x67, 0x72,
|
||||
0x61, 0x64, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x6d, 0x79, 0x73,
|
||||
0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x75, 0x6e, 0x4d, 0x79, 0x73, 0x71, 0x6c, 0x55, 0x70,
|
||||
0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12,
|
||||
0x4f, 0x0a, 0x0c, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12,
|
||||
0x1d, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x69, 0x6e, 0x69,
|
||||
0x74, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1e,
|
||||
0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x69, 0x6e, 0x69, 0x74,
|
||||
0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00,
|
||||
0x12, 0x52, 0x0a, 0x0d, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69,
|
||||
0x67, 0x12, 0x1e, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x66,
|
||||
0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
|
||||
0x74, 0x1a, 0x1f, 0x2e, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x2e, 0x52, 0x65, 0x66,
|
||||
0x72, 0x65, 0x73, 0x68, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x22, 0x00, 0x42, 0x27, 0x5a, 0x25, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2e, 0x69,
|
||||
0x6f, 0x2f, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2f, 0x67, 0x6f, 0x2f, 0x76, 0x74, 0x2f, 0x70,
|
||||
0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x6d, 0x79, 0x73, 0x71, 0x6c, 0x63, 0x74, 0x6c, 0x62, 0x06, 0x70,
|
||||
0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
|
@ -561,36 +694,43 @@ func file_mysqlctl_proto_rawDescGZIP() []byte {
|
|||
return file_mysqlctl_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_mysqlctl_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
|
||||
var file_mysqlctl_proto_msgTypes = make([]protoimpl.MessageInfo, 11)
|
||||
var file_mysqlctl_proto_goTypes = []interface{}{
|
||||
(*StartRequest)(nil), // 0: mysqlctl.StartRequest
|
||||
(*StartResponse)(nil), // 1: mysqlctl.StartResponse
|
||||
(*ShutdownRequest)(nil), // 2: mysqlctl.ShutdownRequest
|
||||
(*ShutdownResponse)(nil), // 3: mysqlctl.ShutdownResponse
|
||||
(*RunMysqlUpgradeRequest)(nil), // 4: mysqlctl.RunMysqlUpgradeRequest
|
||||
(*RunMysqlUpgradeResponse)(nil), // 5: mysqlctl.RunMysqlUpgradeResponse
|
||||
(*ReinitConfigRequest)(nil), // 6: mysqlctl.ReinitConfigRequest
|
||||
(*ReinitConfigResponse)(nil), // 7: mysqlctl.ReinitConfigResponse
|
||||
(*RefreshConfigRequest)(nil), // 8: mysqlctl.RefreshConfigRequest
|
||||
(*RefreshConfigResponse)(nil), // 9: mysqlctl.RefreshConfigResponse
|
||||
(*BackupInfo)(nil), // 10: mysqlctl.BackupInfo
|
||||
(BackupInfo_Status)(0), // 0: mysqlctl.BackupInfo.Status
|
||||
(*StartRequest)(nil), // 1: mysqlctl.StartRequest
|
||||
(*StartResponse)(nil), // 2: mysqlctl.StartResponse
|
||||
(*ShutdownRequest)(nil), // 3: mysqlctl.ShutdownRequest
|
||||
(*ShutdownResponse)(nil), // 4: mysqlctl.ShutdownResponse
|
||||
(*RunMysqlUpgradeRequest)(nil), // 5: mysqlctl.RunMysqlUpgradeRequest
|
||||
(*RunMysqlUpgradeResponse)(nil), // 6: mysqlctl.RunMysqlUpgradeResponse
|
||||
(*ReinitConfigRequest)(nil), // 7: mysqlctl.ReinitConfigRequest
|
||||
(*ReinitConfigResponse)(nil), // 8: mysqlctl.ReinitConfigResponse
|
||||
(*RefreshConfigRequest)(nil), // 9: mysqlctl.RefreshConfigRequest
|
||||
(*RefreshConfigResponse)(nil), // 10: mysqlctl.RefreshConfigResponse
|
||||
(*BackupInfo)(nil), // 11: mysqlctl.BackupInfo
|
||||
(*topodata.TabletAlias)(nil), // 12: topodata.TabletAlias
|
||||
(*vttime.Time)(nil), // 13: vttime.Time
|
||||
}
|
||||
var file_mysqlctl_proto_depIdxs = []int32{
|
||||
0, // 0: mysqlctl.MysqlCtl.Start:input_type -> mysqlctl.StartRequest
|
||||
2, // 1: mysqlctl.MysqlCtl.Shutdown:input_type -> mysqlctl.ShutdownRequest
|
||||
4, // 2: mysqlctl.MysqlCtl.RunMysqlUpgrade:input_type -> mysqlctl.RunMysqlUpgradeRequest
|
||||
6, // 3: mysqlctl.MysqlCtl.ReinitConfig:input_type -> mysqlctl.ReinitConfigRequest
|
||||
8, // 4: mysqlctl.MysqlCtl.RefreshConfig:input_type -> mysqlctl.RefreshConfigRequest
|
||||
1, // 5: mysqlctl.MysqlCtl.Start:output_type -> mysqlctl.StartResponse
|
||||
3, // 6: mysqlctl.MysqlCtl.Shutdown:output_type -> mysqlctl.ShutdownResponse
|
||||
5, // 7: mysqlctl.MysqlCtl.RunMysqlUpgrade:output_type -> mysqlctl.RunMysqlUpgradeResponse
|
||||
7, // 8: mysqlctl.MysqlCtl.ReinitConfig:output_type -> mysqlctl.ReinitConfigResponse
|
||||
9, // 9: mysqlctl.MysqlCtl.RefreshConfig:output_type -> mysqlctl.RefreshConfigResponse
|
||||
5, // [5:10] is the sub-list for method output_type
|
||||
0, // [0:5] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
12, // 0: mysqlctl.BackupInfo.tablet_alias:type_name -> topodata.TabletAlias
|
||||
13, // 1: mysqlctl.BackupInfo.time:type_name -> vttime.Time
|
||||
0, // 2: mysqlctl.BackupInfo.status:type_name -> mysqlctl.BackupInfo.Status
|
||||
1, // 3: mysqlctl.MysqlCtl.Start:input_type -> mysqlctl.StartRequest
|
||||
3, // 4: mysqlctl.MysqlCtl.Shutdown:input_type -> mysqlctl.ShutdownRequest
|
||||
5, // 5: mysqlctl.MysqlCtl.RunMysqlUpgrade:input_type -> mysqlctl.RunMysqlUpgradeRequest
|
||||
7, // 6: mysqlctl.MysqlCtl.ReinitConfig:input_type -> mysqlctl.ReinitConfigRequest
|
||||
9, // 7: mysqlctl.MysqlCtl.RefreshConfig:input_type -> mysqlctl.RefreshConfigRequest
|
||||
2, // 8: mysqlctl.MysqlCtl.Start:output_type -> mysqlctl.StartResponse
|
||||
4, // 9: mysqlctl.MysqlCtl.Shutdown:output_type -> mysqlctl.ShutdownResponse
|
||||
6, // 10: mysqlctl.MysqlCtl.RunMysqlUpgrade:output_type -> mysqlctl.RunMysqlUpgradeResponse
|
||||
8, // 11: mysqlctl.MysqlCtl.ReinitConfig:output_type -> mysqlctl.ReinitConfigResponse
|
||||
10, // 12: mysqlctl.MysqlCtl.RefreshConfig:output_type -> mysqlctl.RefreshConfigResponse
|
||||
8, // [8:13] is the sub-list for method output_type
|
||||
3, // [3:8] is the sub-list for method input_type
|
||||
3, // [3:3] is the sub-list for extension type_name
|
||||
3, // [3:3] is the sub-list for extension extendee
|
||||
0, // [0:3] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_mysqlctl_proto_init() }
|
||||
|
@ -737,13 +877,14 @@ func file_mysqlctl_proto_init() {
|
|||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_mysqlctl_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumEnums: 1,
|
||||
NumMessages: 11,
|
||||
NumExtensions: 0,
|
||||
NumServices: 1,
|
||||
},
|
||||
GoTypes: file_mysqlctl_proto_goTypes,
|
||||
DependencyIndexes: file_mysqlctl_proto_depIdxs,
|
||||
EnumInfos: file_mysqlctl_proto_enumTypes,
|
||||
MessageInfos: file_mysqlctl_proto_msgTypes,
|
||||
}.Build()
|
||||
File_mysqlctl_proto = out.File
|
||||
|
|
|
@ -9,6 +9,8 @@ import (
|
|||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
io "io"
|
||||
bits "math/bits"
|
||||
topodata "vitess.io/vitess/go/vt/proto/topodata"
|
||||
vttime "vitess.io/vitess/go/vt/proto/vttime"
|
||||
)
|
||||
|
||||
const (
|
||||
|
@ -397,6 +399,56 @@ func (m *BackupInfo) MarshalToSizedBufferVT(dAtA []byte) (int, error) {
|
|||
i -= len(m.unknownFields)
|
||||
copy(dAtA[i:], m.unknownFields)
|
||||
}
|
||||
if m.Status != 0 {
|
||||
i = encodeVarint(dAtA, i, uint64(m.Status))
|
||||
i--
|
||||
dAtA[i] = 0x40
|
||||
}
|
||||
if len(m.Engine) > 0 {
|
||||
i -= len(m.Engine)
|
||||
copy(dAtA[i:], m.Engine)
|
||||
i = encodeVarint(dAtA, i, uint64(len(m.Engine)))
|
||||
i--
|
||||
dAtA[i] = 0x3a
|
||||
}
|
||||
if m.Time != nil {
|
||||
{
|
||||
size, err := m.Time.MarshalToSizedBufferVT(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarint(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x32
|
||||
}
|
||||
if m.TabletAlias != nil {
|
||||
{
|
||||
size, err := m.TabletAlias.MarshalToSizedBufferVT(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarint(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x2a
|
||||
}
|
||||
if len(m.Shard) > 0 {
|
||||
i -= len(m.Shard)
|
||||
copy(dAtA[i:], m.Shard)
|
||||
i = encodeVarint(dAtA, i, uint64(len(m.Shard)))
|
||||
i--
|
||||
dAtA[i] = 0x22
|
||||
}
|
||||
if len(m.Keyspace) > 0 {
|
||||
i -= len(m.Keyspace)
|
||||
copy(dAtA[i:], m.Keyspace)
|
||||
i = encodeVarint(dAtA, i, uint64(len(m.Keyspace)))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
if len(m.Directory) > 0 {
|
||||
i -= len(m.Directory)
|
||||
copy(dAtA[i:], m.Directory)
|
||||
|
@ -568,6 +620,29 @@ func (m *BackupInfo) SizeVT() (n int) {
|
|||
if l > 0 {
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
l = len(m.Keyspace)
|
||||
if l > 0 {
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
l = len(m.Shard)
|
||||
if l > 0 {
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
if m.TabletAlias != nil {
|
||||
l = m.TabletAlias.SizeVT()
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
if m.Time != nil {
|
||||
l = m.Time.SizeVT()
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
l = len(m.Engine)
|
||||
if l > 0 {
|
||||
n += 1 + l + sov(uint64(l))
|
||||
}
|
||||
if m.Status != 0 {
|
||||
n += 1 + sov(uint64(m.Status))
|
||||
}
|
||||
if m.unknownFields != nil {
|
||||
n += len(m.unknownFields)
|
||||
}
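// For reference (not part of the generated file): each `1 + l + sov(uint64(l))`
// term above is one tag byte, plus the varint-encoded length returned by sov,
// plus the payload itself, i.e. the wire size of a length-delimited field.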
|
||||
|
@ -1235,6 +1310,193 @@ func (m *BackupInfo) UnmarshalVT(dAtA []byte) error {
|
|||
}
|
||||
m.Directory = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Keyspace", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Keyspace = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 4:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Shard", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Shard = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 5:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field TabletAlias", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if m.TabletAlias == nil {
|
||||
m.TabletAlias = &topodata.TabletAlias{}
|
||||
}
|
||||
if err := m.TabletAlias.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 6:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if m.Time == nil {
|
||||
m.Time = &vttime.Time{}
|
||||
}
|
||||
if err := m.Time.UnmarshalVT(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 7:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Engine", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Engine = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 8:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
|
||||
}
|
||||
m.Status = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflow
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Status |= BackupInfo_Status(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skip(dAtA[iNdEx:])
|
||||
|
|
File diff not shown because it is too large
File diff not shown because it is too large
|
@ -51,7 +51,7 @@ var file_vtctlservice_proto_rawDesc = []byte{
|
|||
0x61, 0x6e, 0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x76, 0x74, 0x63,
|
||||
0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x65, 0x56, 0x74,
|
||||
0x63, 0x74, 0x6c, 0x43, 0x6f, 0x6d, 0x6d, 0x61, 0x6e, 0x64, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x22, 0x00, 0x30, 0x01, 0x32, 0xaf, 0x1c, 0x0a, 0x06, 0x56, 0x74, 0x63, 0x74, 0x6c,
|
||||
0x73, 0x65, 0x22, 0x00, 0x30, 0x01, 0x32, 0xea, 0x1d, 0x0a, 0x06, 0x56, 0x74, 0x63, 0x74, 0x6c,
|
||||
0x64, 0x12, 0x4e, 0x0a, 0x0b, 0x41, 0x64, 0x64, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f,
|
||||
0x12, 0x1d, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x41, 0x64, 0x64,
|
||||
0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
|
||||
|
@ -234,54 +234,66 @@ var file_vtctlservice_proto_rawDesc = []byte{
|
|||
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74,
|
||||
0x61, 0x2e, 0x52, 0x65, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x56, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61,
|
||||
0x47, 0x72, 0x61, 0x70, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12,
|
||||
0x63, 0x0a, 0x12, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63,
|
||||
0x65, 0x43, 0x65, 0x6c, 0x6c, 0x12, 0x24, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74,
|
||||
0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63, 0x65,
|
||||
0x43, 0x65, 0x6c, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x76, 0x74,
|
||||
0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4b, 0x65,
|
||||
0x79, 0x73, 0x70, 0x61, 0x63, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x22, 0x00, 0x12, 0x5a, 0x0a, 0x0f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68,
|
||||
0x61, 0x72, 0x64, 0x43, 0x65, 0x6c, 0x6c, 0x12, 0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64,
|
||||
0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68, 0x61, 0x72, 0x64, 0x43,
|
||||
0x65, 0x6c, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x76, 0x74, 0x63,
|
||||
0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68, 0x61,
|
||||
0x72, 0x64, 0x43, 0x65, 0x6c, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00,
|
||||
0x12, 0x57, 0x0a, 0x0e, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c,
|
||||
0x65, 0x74, 0x12, 0x20, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52,
|
||||
0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x52, 0x65, 0x71,
|
||||
0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61,
|
||||
0x2e, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x52,
|
||||
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x78, 0x0a, 0x19, 0x53, 0x68, 0x61,
|
||||
0x72, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x6f, 0x73,
|
||||
0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x2b, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61,
|
||||
0x74, 0x61, 0x2e, 0x53, 0x68, 0x61, 0x72, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74,
|
||||
0x69, 0x6f, 0x6e, 0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x71, 0x75,
|
||||
0x65, 0x73, 0x74, 0x1a, 0x2c, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e,
|
||||
0x53, 0x68, 0x61, 0x72, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e,
|
||||
0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
|
||||
0x65, 0x22, 0x00, 0x12, 0x7b, 0x0a, 0x1a, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x45, 0x78, 0x74,
|
||||
0x51, 0x0a, 0x0c, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12,
|
||||
0x1e, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x66, 0x72,
|
||||
0x65, 0x73, 0x68, 0x53, 0x74, 0x61, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
|
||||
0x1f, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x66, 0x72,
|
||||
0x65, 0x73, 0x68, 0x53, 0x74, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
|
||||
0x22, 0x00, 0x12, 0x66, 0x0a, 0x13, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x53, 0x74, 0x61,
|
||||
0x74, 0x65, 0x42, 0x79, 0x53, 0x68, 0x61, 0x72, 0x64, 0x12, 0x25, 0x2e, 0x76, 0x74, 0x63, 0x74,
|
||||
0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x66, 0x72, 0x65, 0x73, 0x68, 0x53, 0x74, 0x61,
|
||||
0x74, 0x65, 0x42, 0x79, 0x53, 0x68, 0x61, 0x72, 0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
|
||||
0x1a, 0x26, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x66,
|
||||
0x72, 0x65, 0x73, 0x68, 0x53, 0x74, 0x61, 0x74, 0x65, 0x42, 0x79, 0x53, 0x68, 0x61, 0x72, 0x64,
|
||||
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x63, 0x0a, 0x12, 0x52, 0x65,
|
||||
0x6d, 0x6f, 0x76, 0x65, 0x4b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63, 0x65, 0x43, 0x65, 0x6c, 0x6c,
|
||||
0x12, 0x24, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x6d,
|
||||
0x6f, 0x76, 0x65, 0x4b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x52,
|
||||
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61,
|
||||
0x74, 0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x4b, 0x65, 0x79, 0x73, 0x70, 0x61, 0x63,
|
||||
0x65, 0x43, 0x65, 0x6c, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12,
|
||||
0x5a, 0x0a, 0x0f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68, 0x61, 0x72, 0x64, 0x43, 0x65,
|
||||
0x6c, 0x6c, 0x12, 0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52,
|
||||
0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68, 0x61, 0x72, 0x64, 0x43, 0x65, 0x6c, 0x6c, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74,
|
||||
0x61, 0x2e, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x53, 0x68, 0x61, 0x72, 0x64, 0x43, 0x65, 0x6c,
|
||||
0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x57, 0x0a, 0x0e, 0x52,
|
||||
0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x12, 0x20, 0x2e,
|
||||
0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65,
|
||||
0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
|
||||
0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x52, 0x65, 0x70, 0x61,
|
||||
0x72, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x22, 0x00, 0x12, 0x78, 0x0a, 0x19, 0x53, 0x68, 0x61, 0x72, 0x64, 0x52, 0x65, 0x70,
|
||||
0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e,
|
||||
0x73, 0x12, 0x2b, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x53, 0x68,
|
||||
0x61, 0x72, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x6f,
|
||||
0x73, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2c,
|
||||
0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x53, 0x68, 0x61, 0x72, 0x64,
|
||||
0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x6f, 0x73, 0x69, 0x74,
|
||||
0x69, 0x6f, 0x6e, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x7b,
|
||||
0x0a, 0x1a, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x45, 0x78, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c,
|
||||
0x6c, 0x79, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x65, 0x64, 0x12, 0x2c, 0x2e, 0x76,
|
||||
0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x45,
|
||||
0x78, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x6c, 0x79, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65, 0x6e,
|
||||
0x74, 0x65, 0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2d, 0x2e, 0x76, 0x74, 0x63,
|
||||
0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x74, 0x45, 0x78, 0x74,
|
||||
0x65, 0x72, 0x6e, 0x61, 0x6c, 0x6c, 0x79, 0x52, 0x65, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x65,
|
||||
0x64, 0x12, 0x2c, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x54, 0x61,
|
||||
0x62, 0x6c, 0x65, 0x74, 0x45, 0x78, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x6c, 0x79, 0x52, 0x65,
|
||||
0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x65, 0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
|
||||
0x2d, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x54, 0x61, 0x62, 0x6c,
|
||||
0x65, 0x74, 0x45, 0x78, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x6c, 0x79, 0x52, 0x65, 0x70, 0x61,
|
||||
0x72, 0x65, 0x6e, 0x74, 0x65, 0x64, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00,
|
||||
0x12, 0x57, 0x0a, 0x0e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e,
|
||||
0x66, 0x6f, 0x12, 0x20, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x55,
|
||||
0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x71,
|
||||
0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61,
|
||||
0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x52,
|
||||
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x5d, 0x0a, 0x10, 0x55, 0x70, 0x64,
|
||||
0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x73, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x12, 0x22, 0x2e,
|
||||
0x64, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x12, 0x57, 0x0a, 0x0e, 0x55,
|
||||
0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x20, 0x2e,
|
||||
0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65,
|
||||
0x43, 0x65, 0x6c, 0x6c, 0x73, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
|
||||
0x74, 0x1a, 0x23, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x55, 0x70,
|
||||
0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x73, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x52, 0x65,
|
||||
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x42, 0x2b, 0x5a, 0x29, 0x76, 0x69, 0x74, 0x65,
|
||||
0x73, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2f, 0x67, 0x6f, 0x2f,
|
||||
0x76, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x73, 0x65,
|
||||
0x72, 0x76, 0x69, 0x63, 0x65, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
|
||||
0x21, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x55, 0x70, 0x64, 0x61,
|
||||
0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x22, 0x00, 0x12, 0x5d, 0x0a, 0x10, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65,
|
||||
0x6c, 0x6c, 0x73, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x12, 0x22, 0x2e, 0x76, 0x74, 0x63, 0x74, 0x6c,
|
||||
0x64, 0x61, 0x74, 0x61, 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x43, 0x65, 0x6c, 0x6c, 0x73,
|
||||
0x41, 0x6c, 0x69, 0x61, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23, 0x2e, 0x76,
|
||||
0x74, 0x63, 0x74, 0x6c, 0x64, 0x61, 0x74, 0x61, 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x43,
|
||||
0x65, 0x6c, 0x6c, 0x73, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
|
||||
0x65, 0x22, 0x00, 0x42, 0x2b, 0x5a, 0x29, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2e, 0x69, 0x6f,
|
||||
0x2f, 0x76, 0x69, 0x74, 0x65, 0x73, 0x73, 0x2f, 0x67, 0x6f, 0x2f, 0x76, 0x74, 0x2f, 0x70, 0x72,
|
||||
0x6f, 0x74, 0x6f, 0x2f, 0x76, 0x74, 0x63, 0x74, 0x6c, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65,
|
||||
0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var file_vtctlservice_proto_goTypes = []interface{}{
|
||||
|
@ -319,54 +331,58 @@ var file_vtctlservice_proto_goTypes = []interface{}{
|
|||
(*vtctldata.InitShardPrimaryRequest)(nil), // 31: vtctldata.InitShardPrimaryRequest
|
||||
(*vtctldata.PlannedReparentShardRequest)(nil), // 32: vtctldata.PlannedReparentShardRequest
|
||||
(*vtctldata.RebuildVSchemaGraphRequest)(nil), // 33: vtctldata.RebuildVSchemaGraphRequest
|
||||
(*vtctldata.RemoveKeyspaceCellRequest)(nil), // 34: vtctldata.RemoveKeyspaceCellRequest
|
||||
(*vtctldata.RemoveShardCellRequest)(nil), // 35: vtctldata.RemoveShardCellRequest
|
||||
(*vtctldata.ReparentTabletRequest)(nil), // 36: vtctldata.ReparentTabletRequest
|
||||
(*vtctldata.ShardReplicationPositionsRequest)(nil), // 37: vtctldata.ShardReplicationPositionsRequest
|
||||
(*vtctldata.TabletExternallyReparentedRequest)(nil), // 38: vtctldata.TabletExternallyReparentedRequest
|
||||
(*vtctldata.UpdateCellInfoRequest)(nil), // 39: vtctldata.UpdateCellInfoRequest
|
||||
(*vtctldata.UpdateCellsAliasRequest)(nil), // 40: vtctldata.UpdateCellsAliasRequest
|
||||
(*vtctldata.ExecuteVtctlCommandResponse)(nil), // 41: vtctldata.ExecuteVtctlCommandResponse
|
||||
(*vtctldata.AddCellInfoResponse)(nil), // 42: vtctldata.AddCellInfoResponse
|
||||
(*vtctldata.AddCellsAliasResponse)(nil), // 43: vtctldata.AddCellsAliasResponse
|
||||
(*vtctldata.ApplyRoutingRulesResponse)(nil), // 44: vtctldata.ApplyRoutingRulesResponse
|
||||
(*vtctldata.ApplyVSchemaResponse)(nil), // 45: vtctldata.ApplyVSchemaResponse
|
||||
(*vtctldata.ChangeTabletTypeResponse)(nil), // 46: vtctldata.ChangeTabletTypeResponse
|
||||
(*vtctldata.CreateKeyspaceResponse)(nil), // 47: vtctldata.CreateKeyspaceResponse
|
||||
(*vtctldata.CreateShardResponse)(nil), // 48: vtctldata.CreateShardResponse
|
||||
(*vtctldata.DeleteCellInfoResponse)(nil), // 49: vtctldata.DeleteCellInfoResponse
|
||||
(*vtctldata.DeleteCellsAliasResponse)(nil), // 50: vtctldata.DeleteCellsAliasResponse
|
||||
(*vtctldata.DeleteKeyspaceResponse)(nil), // 51: vtctldata.DeleteKeyspaceResponse
|
||||
(*vtctldata.DeleteShardsResponse)(nil), // 52: vtctldata.DeleteShardsResponse
|
||||
(*vtctldata.DeleteTabletsResponse)(nil), // 53: vtctldata.DeleteTabletsResponse
|
||||
(*vtctldata.EmergencyReparentShardResponse)(nil), // 54: vtctldata.EmergencyReparentShardResponse
|
||||
(*vtctldata.FindAllShardsInKeyspaceResponse)(nil), // 55: vtctldata.FindAllShardsInKeyspaceResponse
|
||||
(*vtctldata.GetBackupsResponse)(nil), // 56: vtctldata.GetBackupsResponse
|
||||
(*vtctldata.GetCellInfoResponse)(nil), // 57: vtctldata.GetCellInfoResponse
|
||||
(*vtctldata.GetCellInfoNamesResponse)(nil), // 58: vtctldata.GetCellInfoNamesResponse
|
||||
(*vtctldata.GetCellsAliasesResponse)(nil), // 59: vtctldata.GetCellsAliasesResponse
|
||||
(*vtctldata.GetKeyspaceResponse)(nil), // 60: vtctldata.GetKeyspaceResponse
|
||||
(*vtctldata.GetKeyspacesResponse)(nil), // 61: vtctldata.GetKeyspacesResponse
|
||||
(*vtctldata.GetRoutingRulesResponse)(nil), // 62: vtctldata.GetRoutingRulesResponse
|
||||
(*vtctldata.GetSchemaResponse)(nil), // 63: vtctldata.GetSchemaResponse
|
||||
(*vtctldata.GetShardResponse)(nil), // 64: vtctldata.GetShardResponse
|
||||
(*vtctldata.GetSrvKeyspacesResponse)(nil), // 65: vtctldata.GetSrvKeyspacesResponse
|
||||
(*vtctldata.GetSrvVSchemaResponse)(nil), // 66: vtctldata.GetSrvVSchemaResponse
|
||||
(*vtctldata.GetSrvVSchemasResponse)(nil), // 67: vtctldata.GetSrvVSchemasResponse
|
||||
(*vtctldata.GetTabletResponse)(nil), // 68: vtctldata.GetTabletResponse
|
||||
(*vtctldata.GetTabletsResponse)(nil), // 69: vtctldata.GetTabletsResponse
|
||||
(*vtctldata.GetVSchemaResponse)(nil), // 70: vtctldata.GetVSchemaResponse
|
||||
(*vtctldata.GetWorkflowsResponse)(nil), // 71: vtctldata.GetWorkflowsResponse
|
||||
(*vtctldata.InitShardPrimaryResponse)(nil), // 72: vtctldata.InitShardPrimaryResponse
|
||||
(*vtctldata.PlannedReparentShardResponse)(nil), // 73: vtctldata.PlannedReparentShardResponse
|
||||
(*vtctldata.RebuildVSchemaGraphResponse)(nil), // 74: vtctldata.RebuildVSchemaGraphResponse
|
||||
(*vtctldata.RemoveKeyspaceCellResponse)(nil), // 75: vtctldata.RemoveKeyspaceCellResponse
|
||||
(*vtctldata.RemoveShardCellResponse)(nil), // 76: vtctldata.RemoveShardCellResponse
|
||||
(*vtctldata.ReparentTabletResponse)(nil), // 77: vtctldata.ReparentTabletResponse
|
||||
(*vtctldata.ShardReplicationPositionsResponse)(nil), // 78: vtctldata.ShardReplicationPositionsResponse
|
||||
(*vtctldata.TabletExternallyReparentedResponse)(nil), // 79: vtctldata.TabletExternallyReparentedResponse
|
||||
(*vtctldata.UpdateCellInfoResponse)(nil), // 80: vtctldata.UpdateCellInfoResponse
|
||||
(*vtctldata.UpdateCellsAliasResponse)(nil), // 81: vtctldata.UpdateCellsAliasResponse
|
||||
(*vtctldata.RefreshStateRequest)(nil), // 34: vtctldata.RefreshStateRequest
|
||||
(*vtctldata.RefreshStateByShardRequest)(nil), // 35: vtctldata.RefreshStateByShardRequest
|
||||
(*vtctldata.RemoveKeyspaceCellRequest)(nil), // 36: vtctldata.RemoveKeyspaceCellRequest
|
||||
(*vtctldata.RemoveShardCellRequest)(nil), // 37: vtctldata.RemoveShardCellRequest
|
||||
(*vtctldata.ReparentTabletRequest)(nil), // 38: vtctldata.ReparentTabletRequest
|
||||
(*vtctldata.ShardReplicationPositionsRequest)(nil), // 39: vtctldata.ShardReplicationPositionsRequest
|
||||
(*vtctldata.TabletExternallyReparentedRequest)(nil), // 40: vtctldata.TabletExternallyReparentedRequest
|
||||
(*vtctldata.UpdateCellInfoRequest)(nil), // 41: vtctldata.UpdateCellInfoRequest
|
||||
(*vtctldata.UpdateCellsAliasRequest)(nil), // 42: vtctldata.UpdateCellsAliasRequest
|
||||
(*vtctldata.ExecuteVtctlCommandResponse)(nil), // 43: vtctldata.ExecuteVtctlCommandResponse
|
||||
(*vtctldata.AddCellInfoResponse)(nil), // 44: vtctldata.AddCellInfoResponse
|
||||
(*vtctldata.AddCellsAliasResponse)(nil), // 45: vtctldata.AddCellsAliasResponse
|
||||
(*vtctldata.ApplyRoutingRulesResponse)(nil), // 46: vtctldata.ApplyRoutingRulesResponse
|
||||
(*vtctldata.ApplyVSchemaResponse)(nil), // 47: vtctldata.ApplyVSchemaResponse
|
||||
(*vtctldata.ChangeTabletTypeResponse)(nil), // 48: vtctldata.ChangeTabletTypeResponse
|
||||
(*vtctldata.CreateKeyspaceResponse)(nil), // 49: vtctldata.CreateKeyspaceResponse
|
||||
(*vtctldata.CreateShardResponse)(nil), // 50: vtctldata.CreateShardResponse
|
||||
(*vtctldata.DeleteCellInfoResponse)(nil), // 51: vtctldata.DeleteCellInfoResponse
|
||||
(*vtctldata.DeleteCellsAliasResponse)(nil), // 52: vtctldata.DeleteCellsAliasResponse
|
||||
(*vtctldata.DeleteKeyspaceResponse)(nil), // 53: vtctldata.DeleteKeyspaceResponse
|
||||
(*vtctldata.DeleteShardsResponse)(nil), // 54: vtctldata.DeleteShardsResponse
|
||||
(*vtctldata.DeleteTabletsResponse)(nil), // 55: vtctldata.DeleteTabletsResponse
|
||||
(*vtctldata.EmergencyReparentShardResponse)(nil), // 56: vtctldata.EmergencyReparentShardResponse
|
||||
(*vtctldata.FindAllShardsInKeyspaceResponse)(nil), // 57: vtctldata.FindAllShardsInKeyspaceResponse
|
||||
(*vtctldata.GetBackupsResponse)(nil), // 58: vtctldata.GetBackupsResponse
|
||||
(*vtctldata.GetCellInfoResponse)(nil), // 59: vtctldata.GetCellInfoResponse
|
||||
(*vtctldata.GetCellInfoNamesResponse)(nil), // 60: vtctldata.GetCellInfoNamesResponse
|
||||
(*vtctldata.GetCellsAliasesResponse)(nil), // 61: vtctldata.GetCellsAliasesResponse
|
||||
(*vtctldata.GetKeyspaceResponse)(nil), // 62: vtctldata.GetKeyspaceResponse
|
||||
(*vtctldata.GetKeyspacesResponse)(nil), // 63: vtctldata.GetKeyspacesResponse
|
||||
(*vtctldata.GetRoutingRulesResponse)(nil), // 64: vtctldata.GetRoutingRulesResponse
|
||||
(*vtctldata.GetSchemaResponse)(nil), // 65: vtctldata.GetSchemaResponse
|
||||
(*vtctldata.GetShardResponse)(nil), // 66: vtctldata.GetShardResponse
|
||||
(*vtctldata.GetSrvKeyspacesResponse)(nil), // 67: vtctldata.GetSrvKeyspacesResponse
|
||||
(*vtctldata.GetSrvVSchemaResponse)(nil), // 68: vtctldata.GetSrvVSchemaResponse
|
||||
(*vtctldata.GetSrvVSchemasResponse)(nil), // 69: vtctldata.GetSrvVSchemasResponse
|
||||
(*vtctldata.GetTabletResponse)(nil), // 70: vtctldata.GetTabletResponse
|
||||
(*vtctldata.GetTabletsResponse)(nil), // 71: vtctldata.GetTabletsResponse
|
||||
(*vtctldata.GetVSchemaResponse)(nil), // 72: vtctldata.GetVSchemaResponse
|
||||
(*vtctldata.GetWorkflowsResponse)(nil), // 73: vtctldata.GetWorkflowsResponse
|
||||
(*vtctldata.InitShardPrimaryResponse)(nil), // 74: vtctldata.InitShardPrimaryResponse
|
||||
(*vtctldata.PlannedReparentShardResponse)(nil), // 75: vtctldata.PlannedReparentShardResponse
|
||||
(*vtctldata.RebuildVSchemaGraphResponse)(nil), // 76: vtctldata.RebuildVSchemaGraphResponse
|
||||
(*vtctldata.RefreshStateResponse)(nil), // 77: vtctldata.RefreshStateResponse
|
||||
(*vtctldata.RefreshStateByShardResponse)(nil), // 78: vtctldata.RefreshStateByShardResponse
|
||||
(*vtctldata.RemoveKeyspaceCellResponse)(nil), // 79: vtctldata.RemoveKeyspaceCellResponse
|
||||
(*vtctldata.RemoveShardCellResponse)(nil), // 80: vtctldata.RemoveShardCellResponse
|
||||
(*vtctldata.ReparentTabletResponse)(nil), // 81: vtctldata.ReparentTabletResponse
|
||||
(*vtctldata.ShardReplicationPositionsResponse)(nil), // 82: vtctldata.ShardReplicationPositionsResponse
|
||||
(*vtctldata.TabletExternallyReparentedResponse)(nil), // 83: vtctldata.TabletExternallyReparentedResponse
|
||||
(*vtctldata.UpdateCellInfoResponse)(nil), // 84: vtctldata.UpdateCellInfoResponse
|
||||
(*vtctldata.UpdateCellsAliasResponse)(nil), // 85: vtctldata.UpdateCellsAliasResponse
|
||||
}
|
||||
var file_vtctlservice_proto_depIdxs = []int32{
|
||||
0, // 0: vtctlservice.Vtctl.ExecuteVtctlCommand:input_type -> vtctldata.ExecuteVtctlCommandRequest
|
||||
|
@ -403,56 +419,60 @@ var file_vtctlservice_proto_depIdxs = []int32{
|
|||
31, // 31: vtctlservice.Vtctld.InitShardPrimary:input_type -> vtctldata.InitShardPrimaryRequest
|
||||
32, // 32: vtctlservice.Vtctld.PlannedReparentShard:input_type -> vtctldata.PlannedReparentShardRequest
|
||||
33, // 33: vtctlservice.Vtctld.RebuildVSchemaGraph:input_type -> vtctldata.RebuildVSchemaGraphRequest
|
||||
34, // 34: vtctlservice.Vtctld.RemoveKeyspaceCell:input_type -> vtctldata.RemoveKeyspaceCellRequest
|
||||
35, // 35: vtctlservice.Vtctld.RemoveShardCell:input_type -> vtctldata.RemoveShardCellRequest
|
||||
36, // 36: vtctlservice.Vtctld.ReparentTablet:input_type -> vtctldata.ReparentTabletRequest
|
||||
37, // 37: vtctlservice.Vtctld.ShardReplicationPositions:input_type -> vtctldata.ShardReplicationPositionsRequest
|
||||
38, // 38: vtctlservice.Vtctld.TabletExternallyReparented:input_type -> vtctldata.TabletExternallyReparentedRequest
|
||||
39, // 39: vtctlservice.Vtctld.UpdateCellInfo:input_type -> vtctldata.UpdateCellInfoRequest
|
||||
40, // 40: vtctlservice.Vtctld.UpdateCellsAlias:input_type -> vtctldata.UpdateCellsAliasRequest
|
||||
41, // 41: vtctlservice.Vtctl.ExecuteVtctlCommand:output_type -> vtctldata.ExecuteVtctlCommandResponse
|
||||
42, // 42: vtctlservice.Vtctld.AddCellInfo:output_type -> vtctldata.AddCellInfoResponse
|
||||
43, // 43: vtctlservice.Vtctld.AddCellsAlias:output_type -> vtctldata.AddCellsAliasResponse
|
||||
44, // 44: vtctlservice.Vtctld.ApplyRoutingRules:output_type -> vtctldata.ApplyRoutingRulesResponse
|
||||
45, // 45: vtctlservice.Vtctld.ApplyVSchema:output_type -> vtctldata.ApplyVSchemaResponse
|
||||
46, // 46: vtctlservice.Vtctld.ChangeTabletType:output_type -> vtctldata.ChangeTabletTypeResponse
|
||||
47, // 47: vtctlservice.Vtctld.CreateKeyspace:output_type -> vtctldata.CreateKeyspaceResponse
|
||||
48, // 48: vtctlservice.Vtctld.CreateShard:output_type -> vtctldata.CreateShardResponse
|
||||
49, // 49: vtctlservice.Vtctld.DeleteCellInfo:output_type -> vtctldata.DeleteCellInfoResponse
|
||||
50, // 50: vtctlservice.Vtctld.DeleteCellsAlias:output_type -> vtctldata.DeleteCellsAliasResponse
|
||||
51, // 51: vtctlservice.Vtctld.DeleteKeyspace:output_type -> vtctldata.DeleteKeyspaceResponse
|
||||
52, // 52: vtctlservice.Vtctld.DeleteShards:output_type -> vtctldata.DeleteShardsResponse
|
||||
53, // 53: vtctlservice.Vtctld.DeleteTablets:output_type -> vtctldata.DeleteTabletsResponse
|
||||
54, // 54: vtctlservice.Vtctld.EmergencyReparentShard:output_type -> vtctldata.EmergencyReparentShardResponse
|
||||
55, // 55: vtctlservice.Vtctld.FindAllShardsInKeyspace:output_type -> vtctldata.FindAllShardsInKeyspaceResponse
|
||||
56, // 56: vtctlservice.Vtctld.GetBackups:output_type -> vtctldata.GetBackupsResponse
|
||||
57, // 57: vtctlservice.Vtctld.GetCellInfo:output_type -> vtctldata.GetCellInfoResponse
|
||||
58, // 58: vtctlservice.Vtctld.GetCellInfoNames:output_type -> vtctldata.GetCellInfoNamesResponse
|
||||
59, // 59: vtctlservice.Vtctld.GetCellsAliases:output_type -> vtctldata.GetCellsAliasesResponse
|
||||
60, // 60: vtctlservice.Vtctld.GetKeyspace:output_type -> vtctldata.GetKeyspaceResponse
|
||||
61, // 61: vtctlservice.Vtctld.GetKeyspaces:output_type -> vtctldata.GetKeyspacesResponse
|
||||
62, // 62: vtctlservice.Vtctld.GetRoutingRules:output_type -> vtctldata.GetRoutingRulesResponse
|
||||
63, // 63: vtctlservice.Vtctld.GetSchema:output_type -> vtctldata.GetSchemaResponse
|
||||
64, // 64: vtctlservice.Vtctld.GetShard:output_type -> vtctldata.GetShardResponse
|
||||
65, // 65: vtctlservice.Vtctld.GetSrvKeyspaces:output_type -> vtctldata.GetSrvKeyspacesResponse
|
||||
66, // 66: vtctlservice.Vtctld.GetSrvVSchema:output_type -> vtctldata.GetSrvVSchemaResponse
|
||||
67, // 67: vtctlservice.Vtctld.GetSrvVSchemas:output_type -> vtctldata.GetSrvVSchemasResponse
|
||||
68, // 68: vtctlservice.Vtctld.GetTablet:output_type -> vtctldata.GetTabletResponse
|
||||
69, // 69: vtctlservice.Vtctld.GetTablets:output_type -> vtctldata.GetTabletsResponse
|
||||
70, // 70: vtctlservice.Vtctld.GetVSchema:output_type -> vtctldata.GetVSchemaResponse
|
||||
71, // 71: vtctlservice.Vtctld.GetWorkflows:output_type -> vtctldata.GetWorkflowsResponse
|
||||
72, // 72: vtctlservice.Vtctld.InitShardPrimary:output_type -> vtctldata.InitShardPrimaryResponse
|
||||
73, // 73: vtctlservice.Vtctld.PlannedReparentShard:output_type -> vtctldata.PlannedReparentShardResponse
|
||||
74, // 74: vtctlservice.Vtctld.RebuildVSchemaGraph:output_type -> vtctldata.RebuildVSchemaGraphResponse
|
||||
75, // 75: vtctlservice.Vtctld.RemoveKeyspaceCell:output_type -> vtctldata.RemoveKeyspaceCellResponse
|
||||
76, // 76: vtctlservice.Vtctld.RemoveShardCell:output_type -> vtctldata.RemoveShardCellResponse
|
||||
77, // 77: vtctlservice.Vtctld.ReparentTablet:output_type -> vtctldata.ReparentTabletResponse
|
||||
78, // 78: vtctlservice.Vtctld.ShardReplicationPositions:output_type -> vtctldata.ShardReplicationPositionsResponse
|
||||
79, // 79: vtctlservice.Vtctld.TabletExternallyReparented:output_type -> vtctldata.TabletExternallyReparentedResponse
|
||||
80, // 80: vtctlservice.Vtctld.UpdateCellInfo:output_type -> vtctldata.UpdateCellInfoResponse
|
||||
81, // 81: vtctlservice.Vtctld.UpdateCellsAlias:output_type -> vtctldata.UpdateCellsAliasResponse
|
||||
41, // [41:82] is the sub-list for method output_type
|
||||
0, // [0:41] is the sub-list for method input_type
|
||||
34, // 34: vtctlservice.Vtctld.RefreshState:input_type -> vtctldata.RefreshStateRequest
|
||||
35, // 35: vtctlservice.Vtctld.RefreshStateByShard:input_type -> vtctldata.RefreshStateByShardRequest
|
||||
36, // 36: vtctlservice.Vtctld.RemoveKeyspaceCell:input_type -> vtctldata.RemoveKeyspaceCellRequest
|
||||
37, // 37: vtctlservice.Vtctld.RemoveShardCell:input_type -> vtctldata.RemoveShardCellRequest
|
||||
38, // 38: vtctlservice.Vtctld.ReparentTablet:input_type -> vtctldata.ReparentTabletRequest
|
||||
39, // 39: vtctlservice.Vtctld.ShardReplicationPositions:input_type -> vtctldata.ShardReplicationPositionsRequest
|
||||
40, // 40: vtctlservice.Vtctld.TabletExternallyReparented:input_type -> vtctldata.TabletExternallyReparentedRequest
|
||||
41, // 41: vtctlservice.Vtctld.UpdateCellInfo:input_type -> vtctldata.UpdateCellInfoRequest
|
||||
42, // 42: vtctlservice.Vtctld.UpdateCellsAlias:input_type -> vtctldata.UpdateCellsAliasRequest
|
||||
43, // 43: vtctlservice.Vtctl.ExecuteVtctlCommand:output_type -> vtctldata.ExecuteVtctlCommandResponse
|
||||
44, // 44: vtctlservice.Vtctld.AddCellInfo:output_type -> vtctldata.AddCellInfoResponse
|
||||
45, // 45: vtctlservice.Vtctld.AddCellsAlias:output_type -> vtctldata.AddCellsAliasResponse
|
||||
46, // 46: vtctlservice.Vtctld.ApplyRoutingRules:output_type -> vtctldata.ApplyRoutingRulesResponse
|
||||
47, // 47: vtctlservice.Vtctld.ApplyVSchema:output_type -> vtctldata.ApplyVSchemaResponse
|
||||
48, // 48: vtctlservice.Vtctld.ChangeTabletType:output_type -> vtctldata.ChangeTabletTypeResponse
|
||||
49, // 49: vtctlservice.Vtctld.CreateKeyspace:output_type -> vtctldata.CreateKeyspaceResponse
|
||||
50, // 50: vtctlservice.Vtctld.CreateShard:output_type -> vtctldata.CreateShardResponse
|
||||
51, // 51: vtctlservice.Vtctld.DeleteCellInfo:output_type -> vtctldata.DeleteCellInfoResponse
|
||||
52, // 52: vtctlservice.Vtctld.DeleteCellsAlias:output_type -> vtctldata.DeleteCellsAliasResponse
|
||||
53, // 53: vtctlservice.Vtctld.DeleteKeyspace:output_type -> vtctldata.DeleteKeyspaceResponse
|
||||
54, // 54: vtctlservice.Vtctld.DeleteShards:output_type -> vtctldata.DeleteShardsResponse
|
||||
55, // 55: vtctlservice.Vtctld.DeleteTablets:output_type -> vtctldata.DeleteTabletsResponse
|
||||
56, // 56: vtctlservice.Vtctld.EmergencyReparentShard:output_type -> vtctldata.EmergencyReparentShardResponse
|
||||
57, // 57: vtctlservice.Vtctld.FindAllShardsInKeyspace:output_type -> vtctldata.FindAllShardsInKeyspaceResponse
|
||||
58, // 58: vtctlservice.Vtctld.GetBackups:output_type -> vtctldata.GetBackupsResponse
|
||||
59, // 59: vtctlservice.Vtctld.GetCellInfo:output_type -> vtctldata.GetCellInfoResponse
|
||||
60, // 60: vtctlservice.Vtctld.GetCellInfoNames:output_type -> vtctldata.GetCellInfoNamesResponse
|
||||
61, // 61: vtctlservice.Vtctld.GetCellsAliases:output_type -> vtctldata.GetCellsAliasesResponse
|
||||
62, // 62: vtctlservice.Vtctld.GetKeyspace:output_type -> vtctldata.GetKeyspaceResponse
|
||||
63, // 63: vtctlservice.Vtctld.GetKeyspaces:output_type -> vtctldata.GetKeyspacesResponse
|
||||
64, // 64: vtctlservice.Vtctld.GetRoutingRules:output_type -> vtctldata.GetRoutingRulesResponse
|
||||
65, // 65: vtctlservice.Vtctld.GetSchema:output_type -> vtctldata.GetSchemaResponse
|
||||
66, // 66: vtctlservice.Vtctld.GetShard:output_type -> vtctldata.GetShardResponse
|
||||
67, // 67: vtctlservice.Vtctld.GetSrvKeyspaces:output_type -> vtctldata.GetSrvKeyspacesResponse
|
||||
68, // 68: vtctlservice.Vtctld.GetSrvVSchema:output_type -> vtctldata.GetSrvVSchemaResponse
|
||||
69, // 69: vtctlservice.Vtctld.GetSrvVSchemas:output_type -> vtctldata.GetSrvVSchemasResponse
|
||||
70, // 70: vtctlservice.Vtctld.GetTablet:output_type -> vtctldata.GetTabletResponse
|
||||
71, // 71: vtctlservice.Vtctld.GetTablets:output_type -> vtctldata.GetTabletsResponse
|
||||
72, // 72: vtctlservice.Vtctld.GetVSchema:output_type -> vtctldata.GetVSchemaResponse
|
||||
73, // 73: vtctlservice.Vtctld.GetWorkflows:output_type -> vtctldata.GetWorkflowsResponse
|
||||
74, // 74: vtctlservice.Vtctld.InitShardPrimary:output_type -> vtctldata.InitShardPrimaryResponse
|
||||
75, // 75: vtctlservice.Vtctld.PlannedReparentShard:output_type -> vtctldata.PlannedReparentShardResponse
|
||||
76, // 76: vtctlservice.Vtctld.RebuildVSchemaGraph:output_type -> vtctldata.RebuildVSchemaGraphResponse
|
||||
77, // 77: vtctlservice.Vtctld.RefreshState:output_type -> vtctldata.RefreshStateResponse
|
||||
78, // 78: vtctlservice.Vtctld.RefreshStateByShard:output_type -> vtctldata.RefreshStateByShardResponse
|
||||
79, // 79: vtctlservice.Vtctld.RemoveKeyspaceCell:output_type -> vtctldata.RemoveKeyspaceCellResponse
|
||||
80, // 80: vtctlservice.Vtctld.RemoveShardCell:output_type -> vtctldata.RemoveShardCellResponse
|
||||
81, // 81: vtctlservice.Vtctld.ReparentTablet:output_type -> vtctldata.ReparentTabletResponse
|
||||
82, // 82: vtctlservice.Vtctld.ShardReplicationPositions:output_type -> vtctldata.ShardReplicationPositionsResponse
|
||||
83, // 83: vtctlservice.Vtctld.TabletExternallyReparented:output_type -> vtctldata.TabletExternallyReparentedResponse
|
||||
84, // 84: vtctlservice.Vtctld.UpdateCellInfo:output_type -> vtctldata.UpdateCellInfoResponse
|
||||
85, // 85: vtctlservice.Vtctld.UpdateCellsAlias:output_type -> vtctldata.UpdateCellsAliasResponse
|
||||
43, // [43:86] is the sub-list for method output_type
|
||||
0, // [0:43] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
|
|
|
@ -237,6 +237,10 @@ type VtctldClient interface {
|
|||
// VSchema objects in the provided cells (or all cells in the topo if none
|
||||
// provided).
|
||||
RebuildVSchemaGraph(ctx context.Context, in *vtctldata.RebuildVSchemaGraphRequest, opts ...grpc.CallOption) (*vtctldata.RebuildVSchemaGraphResponse, error)
|
||||
// RefreshState reloads the tablet record on the specified tablet.
|
||||
RefreshState(ctx context.Context, in *vtctldata.RefreshStateRequest, opts ...grpc.CallOption) (*vtctldata.RefreshStateResponse, error)
|
||||
// RefreshStateByShard calls RefreshState on all the tablets in the given shard.
|
||||
RefreshStateByShard(ctx context.Context, in *vtctldata.RefreshStateByShardRequest, opts ...grpc.CallOption) (*vtctldata.RefreshStateByShardResponse, error)
|
||||
// RemoveKeyspaceCell removes the specified cell from the Cells list for all
|
||||
// shards in the specified keyspace, as well as from the SrvKeyspace for that
|
||||
// keyspace in that cell.
|
||||
|
@ -576,6 +580,24 @@ func (c *vtctldClient) RebuildVSchemaGraph(ctx context.Context, in *vtctldata.Re
|
|||
return out, nil
|
||||
}
|
||||
|
||||
func (c *vtctldClient) RefreshState(ctx context.Context, in *vtctldata.RefreshStateRequest, opts ...grpc.CallOption) (*vtctldata.RefreshStateResponse, error) {
|
||||
out := new(vtctldata.RefreshStateResponse)
|
||||
err := c.cc.Invoke(ctx, "/vtctlservice.Vtctld/RefreshState", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *vtctldClient) RefreshStateByShard(ctx context.Context, in *vtctldata.RefreshStateByShardRequest, opts ...grpc.CallOption) (*vtctldata.RefreshStateByShardResponse, error) {
|
||||
out := new(vtctldata.RefreshStateByShardResponse)
|
||||
err := c.cc.Invoke(ctx, "/vtctlservice.Vtctld/RefreshStateByShard", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *vtctldClient) RemoveKeyspaceCell(ctx context.Context, in *vtctldata.RemoveKeyspaceCellRequest, opts ...grpc.CallOption) (*vtctldata.RemoveKeyspaceCellResponse, error) {
|
||||
out := new(vtctldata.RemoveKeyspaceCellResponse)
|
||||
err := c.cc.Invoke(ctx, "/vtctlservice.Vtctld/RemoveKeyspaceCell", in, out, opts...)
|
||||
|
@ -748,6 +770,10 @@ type VtctldServer interface {
|
|||
// VSchema objects in the provided cells (or all cells in the topo if none
|
||||
// provided).
|
||||
RebuildVSchemaGraph(context.Context, *vtctldata.RebuildVSchemaGraphRequest) (*vtctldata.RebuildVSchemaGraphResponse, error)
|
||||
// RefreshState reloads the tablet record on the specified tablet.
|
||||
RefreshState(context.Context, *vtctldata.RefreshStateRequest) (*vtctldata.RefreshStateResponse, error)
|
||||
// RefreshStateByShard calls RefreshState on all the tablets in the given shard.
|
||||
RefreshStateByShard(context.Context, *vtctldata.RefreshStateByShardRequest) (*vtctldata.RefreshStateByShardResponse, error)
|
||||
// RemoveKeyspaceCell removes the specified cell from the Cells list for all
|
||||
// shards in the specified keyspace, as well as from the SrvKeyspace for that
|
||||
// keyspace in that cell.
|
||||
|
@ -886,6 +912,12 @@ func (UnimplementedVtctldServer) PlannedReparentShard(context.Context, *vtctldat
|
|||
func (UnimplementedVtctldServer) RebuildVSchemaGraph(context.Context, *vtctldata.RebuildVSchemaGraphRequest) (*vtctldata.RebuildVSchemaGraphResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method RebuildVSchemaGraph not implemented")
|
||||
}
|
||||
func (UnimplementedVtctldServer) RefreshState(context.Context, *vtctldata.RefreshStateRequest) (*vtctldata.RefreshStateResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method RefreshState not implemented")
|
||||
}
|
||||
func (UnimplementedVtctldServer) RefreshStateByShard(context.Context, *vtctldata.RefreshStateByShardRequest) (*vtctldata.RefreshStateByShardResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method RefreshStateByShard not implemented")
|
||||
}
|
||||
func (UnimplementedVtctldServer) RemoveKeyspaceCell(context.Context, *vtctldata.RemoveKeyspaceCellRequest) (*vtctldata.RemoveKeyspaceCellResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method RemoveKeyspaceCell not implemented")
|
||||
}
|
||||
|
@ -1514,6 +1546,42 @@ func _Vtctld_RebuildVSchemaGraph_Handler(srv interface{}, ctx context.Context, d
|
|||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Vtctld_RefreshState_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(vtctldata.RefreshStateRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(VtctldServer).RefreshState(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/vtctlservice.Vtctld/RefreshState",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(VtctldServer).RefreshState(ctx, req.(*vtctldata.RefreshStateRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Vtctld_RefreshStateByShard_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(vtctldata.RefreshStateByShardRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(VtctldServer).RefreshStateByShard(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/vtctlservice.Vtctld/RefreshStateByShard",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(VtctldServer).RefreshStateByShard(ctx, req.(*vtctldata.RefreshStateByShardRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Vtctld_RemoveKeyspaceCell_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(vtctldata.RemoveKeyspaceCellRequest)
|
||||
if err := dec(in); err != nil {
|
||||
|
@ -1779,6 +1847,14 @@ var Vtctld_ServiceDesc = grpc.ServiceDesc{
|
|||
MethodName: "RebuildVSchemaGraph",
|
||||
Handler: _Vtctld_RebuildVSchemaGraph_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "RefreshState",
|
||||
Handler: _Vtctld_RefreshState_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "RefreshStateByShard",
|
||||
Handler: _Vtctld_RefreshStateByShard_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "RemoveKeyspaceCell",
|
||||
Handler: _Vtctld_RemoveKeyspaceCell_Handler,
|
||||
|
|
|
@ -19,8 +19,10 @@ package schema
|
|||
import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"vitess.io/vitess/go/textutil"
|
||||
"vitess.io/vitess/go/vt/sqlparser"
|
||||
)
|
||||
|
||||
|
@ -51,6 +53,8 @@ var (
|
|||
}
|
||||
createTableRegexp = regexp.MustCompile(`(?s)(?i)(CREATE\s+TABLE\s+)` + "`" + `([^` + "`" + `]+)` + "`" + `(\s*[(].*$)`)
|
||||
revertStatementRegexp = regexp.MustCompile(`(?i)^revert\s+([\S]*)$`)
|
||||
|
||||
enumValuesRegexp = regexp.MustCompile("(?i)^enum[(](.*)[)]$")
|
||||
)
|
||||
|
||||
// ReplaceTableNameInCreateTableStatement returns a modified CREATE TABLE statement, such that the table name is replaced with the given name.
|
||||
|
@ -101,3 +105,35 @@ func legacyParseRevertUUID(sql string) (uuid string, err error) {
|
|||
}
|
||||
return uuid, nil
|
||||
}
|
||||
|
||||
// ParseEnumValues parses the comma delimited part of an enum column definition
|
||||
func ParseEnumValues(enumColumnType string) string {
|
||||
if submatch := enumValuesRegexp.FindStringSubmatch(enumColumnType); len(submatch) > 0 {
|
||||
return submatch[1]
|
||||
}
|
||||
return enumColumnType
|
||||
}
|
||||
|
||||
// ParseEnumTokens parses the comma delimited part of an enum column definition and
|
||||
// returns the (unquoted) text values
|
||||
func ParseEnumTokens(enumValues string) []string {
|
||||
enumValues = ParseEnumValues(enumValues)
|
||||
tokens := textutil.SplitDelimitedList(enumValues)
|
||||
for i := range tokens {
|
||||
if strings.HasPrefix(tokens[i], `'`) && strings.HasSuffix(tokens[i], `'`) {
|
||||
tokens[i] = strings.Trim(tokens[i], `'`)
|
||||
}
|
||||
}
|
||||
return tokens
|
||||
}
|
||||
|
||||
// ParseEnumTokensMap parses the comma delimited part of an enum column definition
|
||||
// and returns a map where ["1"] is the first token, and ["<n>"] is the last token
|
||||
func ParseEnumTokensMap(enumValues string) map[string]string {
|
||||
tokens := ParseEnumTokens(enumValues)
|
||||
tokensMap := map[string]string{}
|
||||
for i, token := range tokens {
|
||||
tokensMap[strconv.Itoa(i+1)] = token
|
||||
}
|
||||
return tokensMap
|
||||
}
|
||||
|
|
|
@ -109,3 +109,58 @@ func TestLegacyParseRevertUUID(t *testing.T) {
|
|||
assert.Error(t, err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseEnumValues(t *testing.T) {
|
||||
{
|
||||
inputs := []string{
|
||||
`enum('x-small','small','medium','large','x-large')`,
|
||||
`ENUM('x-small','small','medium','large','x-large')`,
|
||||
`'x-small','small','medium','large','x-large'`,
|
||||
}
|
||||
for _, input := range inputs {
|
||||
enumValues := ParseEnumValues(input)
|
||||
assert.Equal(t, `'x-small','small','medium','large','x-large'`, enumValues)
|
||||
}
|
||||
}
|
||||
{
|
||||
inputs := []string{
|
||||
``,
|
||||
`abc`,
|
||||
`func('x-small','small','medium','large','x-large')`,
|
||||
}
|
||||
for _, input := range inputs {
|
||||
enumValues := ParseEnumValues(input)
|
||||
assert.Equal(t, input, enumValues)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseEnumTokens(t *testing.T) {
|
||||
inputs := []string{
|
||||
`enum('x-small','small','medium','large','x-large')`,
|
||||
`'x-small','small','medium','large','x-large'`,
|
||||
}
|
||||
for _, input := range inputs {
|
||||
enumTokens := ParseEnumTokens(input)
|
||||
expect := []string{"x-small", "small", "medium", "large", "x-large"}
|
||||
assert.Equal(t, expect, enumTokens)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseEnumTokensMap(t *testing.T) {
|
||||
inputs := []string{
|
||||
`enum('x-small','small','medium','large','x-large')`,
|
||||
`'x-small','small','medium','large','x-large'`,
|
||||
}
|
||||
for _, input := range inputs {
|
||||
enumTokensMap := ParseEnumTokensMap(input)
|
||||
expect := map[string]string{
|
||||
"1": "x-small",
|
||||
"2": "small",
|
||||
"3": "medium",
|
||||
"4": "large",
|
||||
"5": "x-large",
|
||||
}
|
||||
assert.Equal(t, expect, enumTokensMap)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -33,7 +33,10 @@ import (
|
|||
// It only returns errors from looking up the tablet map from the topology;
|
||||
// errors returned from any RefreshState RPCs are logged and then ignored. Also,
|
||||
// any tablets without a .Hostname set in the topology are skipped.
|
||||
func RefreshTabletsByShard(ctx context.Context, ts *topo.Server, tmc tmclient.TabletManagerClient, si *topo.ShardInfo, cells []string, logger logutil.Logger) error {
|
||||
//
|
||||
// However, a partial result from the topology, or an error from any RefreshState
// RPC, will cause a boolean flag to be returned indicating only partial success.
|
||||
func RefreshTabletsByShard(ctx context.Context, ts *topo.Server, tmc tmclient.TabletManagerClient, si *topo.ShardInfo, cells []string, logger logutil.Logger) (isPartialRefresh bool, err error) {
|
||||
logger.Infof("RefreshTabletsByShard called on shard %v/%v", si.Keyspace(), si.ShardName())
|
||||
|
||||
tabletMap, err := ts.GetTabletMapForShardByCell(ctx, si.Keyspace(), si.ShardName(), cells)
|
||||
|
@ -42,12 +45,16 @@ func RefreshTabletsByShard(ctx context.Context, ts *topo.Server, tmc tmclient.Ta
|
|||
// keep going
|
||||
case topo.IsErrType(err, topo.PartialResult):
|
||||
logger.Warningf("RefreshTabletsByShard: got partial result for shard %v/%v, may not refresh all tablets everywhere", si.Keyspace(), si.ShardName())
|
||||
isPartialRefresh = true
|
||||
default:
|
||||
return err
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Any errors from this point onward are ignored.
|
||||
var wg sync.WaitGroup
|
||||
var (
|
||||
m sync.Mutex
|
||||
wg sync.WaitGroup
|
||||
)
|
||||
for _, ti := range tabletMap {
|
||||
if ti.Hostname == "" {
|
||||
// The tablet is not running, we don't have the host
|
||||
|
@ -66,12 +73,15 @@ func RefreshTabletsByShard(ctx context.Context, ts *topo.Server, tmc tmclient.Ta
|
|||
|
||||
if err := tmc.RefreshState(ctx, ti.Tablet); err != nil {
|
||||
logger.Warningf("RefreshTabletsByShard: failed to refresh %v: %v", ti.AliasString(), err)
|
||||
m.Lock()
|
||||
isPartialRefresh = true
|
||||
m.Unlock()
|
||||
}
|
||||
}(ti)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
return nil
|
||||
return isPartialRefresh, nil
|
||||
}
|
||||
|
||||
// UpdateShardRecords updates the shard records based on 'from' or 'to'
|
||||
|
@ -111,7 +121,7 @@ func UpdateShardRecords(
|
|||
// For 'to' shards, refresh to make them serve. The 'from' shards will
|
||||
// be refreshed after traffic has migrated.
|
||||
if !isFrom {
|
||||
if err := RefreshTabletsByShard(ctx, ts, tmc, si, cells, logger); err != nil {
|
||||
if _, err := RefreshTabletsByShard(ctx, ts, tmc, si, cells, logger); err != nil {
|
||||
logger.Warningf("RefreshTabletsByShard(%v/%v, cells=%v) failed with %v; continuing ...", si.Keyspace(), si.ShardName(), cells, err)
|
||||
}
|
||||
}
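// The sketch below is not part of this change; it shows how a caller outside
// this package might consume the new two-value return of RefreshTabletsByShard:
// errors still abort, while the boolean only signals that some tablets were
// missed. The ctx/ts/tmc/si/cells/logger parameters are assumed to be whatever
// the caller already has in scope.
func refreshShardSketch(ctx context.Context, ts *topo.Server, tmc tmclient.TabletManagerClient, si *topo.ShardInfo, cells []string, logger logutil.Logger) error {
	isPartial, err := topotools.RefreshTabletsByShard(ctx, ts, tmc, si, cells, logger)
	if err != nil {
		return err
	}
	if isPartial {
		// Only some tablets were reached; callers decide whether that is fatal.
		logger.Warningf("shard %v/%v was only partially refreshed", si.Keyspace(), si.ShardName())
	}
	return nil
}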
|
||||
|
|
|
@ -325,6 +325,24 @@ func (client *gRPCVtctldClient) RebuildVSchemaGraph(ctx context.Context, in *vtc
|
|||
return client.c.RebuildVSchemaGraph(ctx, in, opts...)
|
||||
}
|
||||
|
||||
// RefreshState is part of the vtctlservicepb.VtctldClient interface.
|
||||
func (client *gRPCVtctldClient) RefreshState(ctx context.Context, in *vtctldatapb.RefreshStateRequest, opts ...grpc.CallOption) (*vtctldatapb.RefreshStateResponse, error) {
|
||||
if client.c == nil {
|
||||
return nil, status.Error(codes.Unavailable, connClosedMsg)
|
||||
}
|
||||
|
||||
return client.c.RefreshState(ctx, in, opts...)
|
||||
}
|
||||
|
||||
// RefreshStateByShard is part of the vtctlservicepb.VtctldClient interface.
|
||||
func (client *gRPCVtctldClient) RefreshStateByShard(ctx context.Context, in *vtctldatapb.RefreshStateByShardRequest, opts ...grpc.CallOption) (*vtctldatapb.RefreshStateByShardResponse, error) {
|
||||
if client.c == nil {
|
||||
return nil, status.Error(codes.Unavailable, connClosedMsg)
|
||||
}
|
||||
|
||||
return client.c.RefreshStateByShard(ctx, in, opts...)
|
||||
}
|
||||
|
||||
// RemoveKeyspaceCell is part of the vtctlservicepb.VtctldClient interface.
|
||||
func (client *gRPCVtctldClient) RemoveKeyspaceCell(ctx context.Context, in *vtctldatapb.RemoveKeyspaceCellRequest, opts ...grpc.CallOption) (*vtctldatapb.RemoveKeyspaceCellResponse, error) {
|
||||
if client.c == nil {
|
||||
|
|
|
@ -654,6 +654,9 @@ func (s *VtctldServer) GetBackups(ctx context.Context, req *vtctldatapb.GetBacku
|
|||
|
||||
span.Annotate("keyspace", req.Keyspace)
|
||||
span.Annotate("shard", req.Shard)
|
||||
span.Annotate("limit", req.Limit)
|
||||
span.Annotate("detailed", req.Detailed)
|
||||
span.Annotate("detailed_limit", req.DetailedLimit)
|
||||
|
||||
bs, err := backupstorage.GetBackupStorage()
|
||||
if err != nil {
|
||||
|
@ -669,15 +672,42 @@ func (s *VtctldServer) GetBackups(ctx context.Context, req *vtctldatapb.GetBacku
|
|||
return nil, err
|
||||
}
|
||||
|
||||
resp := &vtctldatapb.GetBackupsResponse{
|
||||
Backups: make([]*mysqlctlpb.BackupInfo, len(bhs)),
|
||||
totalBackups := len(bhs)
|
||||
if req.Limit > 0 {
|
||||
totalBackups = int(req.Limit)
|
||||
}
|
||||
|
||||
totalDetailedBackups := len(bhs)
|
||||
if req.DetailedLimit > 0 {
|
||||
totalDetailedBackups = int(req.DetailedLimit)
|
||||
}
|
||||
|
||||
backups := make([]*mysqlctlpb.BackupInfo, 0, totalBackups)
|
||||
backupsToSkip := len(bhs) - totalBackups
|
||||
backupsToSkipDetails := totalBackups - totalDetailedBackups
|
||||
|
||||
for i, bh := range bhs {
|
||||
resp.Backups[i] = mysqlctlproto.BackupHandleToProto(bh)
|
||||
if i < backupsToSkip {
|
||||
continue
|
||||
}
|
||||
|
||||
bi := mysqlctlproto.BackupHandleToProto(bh)
|
||||
bi.Keyspace = req.Keyspace
|
||||
bi.Shard = req.Shard
|
||||
|
||||
if req.Detailed {
|
||||
if i >= backupsToSkipDetails { // nolint:staticcheck
|
||||
// (TODO:@ajm188) Update backupengine/backupstorage implementations
|
||||
// to get Status info for backups.
|
||||
}
|
||||
}
|
||||
|
||||
backups = append(backups, bi)
|
||||
}
|
||||
|
||||
return resp, nil
|
||||
return &vtctldatapb.GetBackupsResponse{
|
||||
Backups: backups,
|
||||
}, nil
|
||||
}
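// getBackupsLimitSketch is a hypothetical illustration (not part of this change)
// of the limiting behavior added above: when Limit is set, only the most recent
// Limit backups (the trailing entries of the storage listing) are returned, and
// DetailedLimit bounds how many of those will carry detailed status once the
// TODO above is implemented. Field names come from GetBackupsRequest as used here;
// the keyspace and shard values are placeholders.
func getBackupsLimitSketch(ctx context.Context, s *VtctldServer) (*vtctldatapb.GetBackupsResponse, error) {
	return s.GetBackups(ctx, &vtctldatapb.GetBackupsRequest{
		Keyspace:      "testkeyspace",
		Shard:         "-",
		Limit:         1, // keep only the single most recent backup
		Detailed:      true,
		DetailedLimit: 1,
	})
}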
|
||||
|
||||
// GetCellInfoNames is part of the vtctlservicepb.VtctldServer interface.
|
||||
|
@ -1521,6 +1551,64 @@ func (s *VtctldServer) RebuildVSchemaGraph(ctx context.Context, req *vtctldatapb
|
|||
return &vtctldatapb.RebuildVSchemaGraphResponse{}, nil
|
||||
}
|
||||
|
||||
// RefreshState is part of the vtctldservicepb.VtctldServer interface.
|
||||
func (s *VtctldServer) RefreshState(ctx context.Context, req *vtctldatapb.RefreshStateRequest) (*vtctldatapb.RefreshStateResponse, error) {
|
||||
if req.TabletAlias == nil {
|
||||
return nil, vterrors.Errorf(vtrpc.Code_INVALID_ARGUMENT, "RefreshState requires a tablet alias")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, *topo.RemoteOperationTimeout)
|
||||
defer cancel()
|
||||
|
||||
tablet, err := s.ts.GetTablet(ctx, req.TabletAlias)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get tablet %s: %w", topoproto.TabletAliasString(req.TabletAlias), err)
|
||||
}
|
||||
|
||||
if err := s.tmc.RefreshState(ctx, tablet.Tablet); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &vtctldatapb.RefreshStateResponse{}, nil
|
||||
}
|
||||
|
||||
// RefreshStateByShard is part of the vtctldservicepb.VtctldServer interface.
|
||||
func (s *VtctldServer) RefreshStateByShard(ctx context.Context, req *vtctldatapb.RefreshStateByShardRequest) (*vtctldatapb.RefreshStateByShardResponse, error) {
|
||||
if req.Keyspace == "" {
|
||||
return nil, vterrors.Errorf(vtrpc.Code_INVALID_ARGUMENT, "RefreshStateByShard requires a keyspace")
|
||||
}
|
||||
|
||||
if req.Shard == "" {
|
||||
return nil, vterrors.Errorf(vtrpc.Code_INVALID_ARGUMENT, "RefreshStateByShard requires a shard")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, *topo.RemoteOperationTimeout)
|
||||
defer cancel()
|
||||
|
||||
si, err := s.ts.GetShard(ctx, req.Keyspace, req.Shard)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get shard %s/%s/: %w", req.Keyspace, req.Shard, err)
|
||||
}
|
||||
|
||||
isPartial, err := topotools.RefreshTabletsByShard(ctx, s.ts, s.tmc, si, req.Cells, logutil.NewCallbackLogger(func(e *logutilpb.Event) {
|
||||
switch e.Level {
|
||||
case logutilpb.Level_WARNING:
|
||||
log.Warningf(e.Value)
|
||||
case logutilpb.Level_ERROR:
|
||||
log.Errorf(e.Value)
|
||||
default:
|
||||
log.Infof(e.Value)
|
||||
}
|
||||
}))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &vtctldatapb.RefreshStateByShardResponse{
|
||||
IsPartialRefresh: isPartial,
|
||||
}, nil
|
||||
}
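// The sketch below is not part of this change; it shows how a vtctld client
// might drive the two new RPCs. The client value is assumed to be a
// vtctlservicepb.VtctldClient such as the gRPC client extended earlier in this
// change, and the alias/keyspace/shard values are placeholders.
func refreshSketch(ctx context.Context, client vtctlservicepb.VtctldClient) error {
	if _, err := client.RefreshState(ctx, &vtctldatapb.RefreshStateRequest{
		TabletAlias: &topodatapb.TabletAlias{Cell: "zone1", Uid: 100},
	}); err != nil {
		return err
	}

	resp, err := client.RefreshStateByShard(ctx, &vtctldatapb.RefreshStateByShardRequest{
		Keyspace: "ks",
		Shard:    "-",
		Cells:    []string{"zone1"},
	})
	if err != nil {
		return err
	}
	if resp.IsPartialRefresh {
		log.Warningf("only some tablets in ks/- were refreshed")
	}
	return nil
}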
|
||||
|
||||
// RemoveKeyspaceCell is part of the vtctlservicepb.VtctldServer interface.
|
||||
func (s *VtctldServer) RemoveKeyspaceCell(ctx context.Context, req *vtctldatapb.RemoveKeyspaceCellRequest) (*vtctldatapb.RemoveKeyspaceCellResponse, error) {
|
||||
span, ctx := trace.NewSpan(ctx, "VtctldServer.RemoveKeyspaceCell")
|
||||
|
|
|
@ -2738,10 +2738,14 @@ func TestGetBackups(t *testing.T) {
|
|||
{
|
||||
Directory: "testkeyspace/-",
|
||||
Name: "backup1",
|
||||
Keyspace: "testkeyspace",
|
||||
Shard: "-",
|
||||
},
|
||||
{
|
||||
Directory: "testkeyspace/-",
|
||||
Name: "backup2",
|
||||
Keyspace: "testkeyspace",
|
||||
Shard: "-",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
@ -2774,6 +2778,53 @@ func TestGetBackups(t *testing.T) {
|
|||
})
|
||||
assert.Error(t, err)
|
||||
})
|
||||
|
||||
t.Run("parsing times and aliases", func(t *testing.T) {
|
||||
testutil.BackupStorage.Backups["ks2/-80"] = []string{
|
||||
"2021-06-11.123456.zone1-101",
|
||||
}
|
||||
|
||||
resp, err := vtctld.GetBackups(ctx, &vtctldatapb.GetBackupsRequest{
|
||||
Keyspace: "ks2",
|
||||
Shard: "-80",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
expected := &vtctldatapb.GetBackupsResponse{
|
||||
Backups: []*mysqlctlpb.BackupInfo{
|
||||
{
|
||||
Directory: "ks2/-80",
|
||||
Name: "2021-06-11.123456.zone1-101",
|
||||
Keyspace: "ks2",
|
||||
Shard: "-80",
|
||||
Time: protoutil.TimeToProto(time.Date(2021, time.June, 11, 12, 34, 56, 0, time.UTC)),
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 101,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
utils.MustMatch(t, expected, resp)
|
||||
})
|
||||
|
||||
t.Run("limiting", func(t *testing.T) {
|
||||
unlimited, err := vtctld.GetBackups(ctx, &vtctldatapb.GetBackupsRequest{
|
||||
Keyspace: "testkeyspace",
|
||||
Shard: "-",
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
limited, err := vtctld.GetBackups(ctx, &vtctldatapb.GetBackupsRequest{
|
||||
Keyspace: "testkeyspace",
|
||||
Shard: "-",
|
||||
Limit: 1,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, len(limited.Backups), 1, "expected limited backups to have length 1")
|
||||
assert.Less(t, len(limited.Backups), len(unlimited.Backups), "expected limited backups to be less than unlimited")
|
||||
utils.MustMatch(t, limited.Backups[0], unlimited.Backups[len(unlimited.Backups)-1], "expected limiting to keep N most recent")
|
||||
})
|
||||
}
|
||||
|
||||
func TestGetKeyspace(t *testing.T) {
|
||||
|
@ -4535,6 +4586,294 @@ func TestRebuildVSchemaGraph(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestRefreshState(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx := context.Background()
|
||||
tests := []struct {
|
||||
name string
|
||||
ts *topo.Server
|
||||
tablet *topodatapb.Tablet
|
||||
refreshStateError error
|
||||
req *vtctldatapb.RefreshStateRequest
|
||||
shouldErr bool
|
||||
}{
|
||||
{
|
||||
name: "success",
|
||||
ts: memorytopo.NewServer("zone1"),
|
||||
tablet: &topodatapb.Tablet{
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
},
|
||||
refreshStateError: nil,
|
||||
req: &vtctldatapb.RefreshStateRequest{
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "tablet alias nil",
|
||||
ts: memorytopo.NewServer(),
|
||||
req: &vtctldatapb.RefreshStateRequest{},
|
||||
shouldErr: true,
|
||||
},
|
||||
{
|
||||
name: "tablet not found",
|
||||
ts: memorytopo.NewServer("zone1"),
|
||||
tablet: &topodatapb.Tablet{
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
},
|
||||
refreshStateError: nil,
|
||||
req: &vtctldatapb.RefreshStateRequest{
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 400,
|
||||
},
|
||||
},
|
||||
shouldErr: true,
|
||||
},
|
||||
{
|
||||
name: "RefreshState failed",
|
||||
ts: memorytopo.NewServer("zone1"),
|
||||
tablet: &topodatapb.Tablet{
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
},
|
||||
refreshStateError: fmt.Errorf("%w: RefreshState failed", assert.AnError),
|
||||
req: &vtctldatapb.RefreshStateRequest{
|
||||
TabletAlias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
},
|
||||
shouldErr: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
var tmc testutil.TabletManagerClient
|
||||
if tt.tablet != nil {
|
||||
testutil.AddTablet(ctx, t, tt.ts, tt.tablet, nil)
|
||||
tmc.RefreshStateResults = map[string]error{
|
||||
topoproto.TabletAliasString(tt.tablet.Alias): tt.refreshStateError,
|
||||
}
|
||||
}
|
||||
|
||||
vtctld := testutil.NewVtctldServerWithTabletManagerClient(t, tt.ts, &tmc, func(ts *topo.Server) vtctlservicepb.VtctldServer {
|
||||
return NewVtctldServer(ts)
|
||||
})
|
||||
_, err := vtctld.RefreshState(ctx, tt.req)
|
||||
if tt.shouldErr {
|
||||
assert.Error(t, err)
|
||||
return
|
||||
}
|
||||
|
||||
assert.NoError(t, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestRefreshStateByShard(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx := context.Background()
|
||||
tests := []struct {
|
||||
name string
|
||||
ts *topo.Server
|
||||
tablets []*topodatapb.Tablet
|
||||
refreshStateErrors []error // must have len(tablets)
|
||||
req *vtctldatapb.RefreshStateByShardRequest
|
||||
expected *vtctldatapb.RefreshStateByShardResponse
|
||||
shouldErr bool
|
||||
}{
|
||||
{
|
||||
name: "success",
|
||||
ts: memorytopo.NewServer("zone1", "zone2"),
|
||||
tablets: []*topodatapb.Tablet{
|
||||
{
|
||||
Hostname: "zone1-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
{
|
||||
Hostname: "zone2-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone2",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
},
|
||||
refreshStateErrors: []error{
|
||||
nil, // zone1-100
|
||||
nil, // zone2-100
|
||||
},
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
expected: &vtctldatapb.RefreshStateByShardResponse{},
|
||||
},
|
||||
{
|
||||
name: "cell filtering",
|
||||
ts: memorytopo.NewServer("zone1", "zone2"),
|
||||
tablets: []*topodatapb.Tablet{
|
||||
{
|
||||
Hostname: "zone1-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
{
|
||||
Hostname: "zone2-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone2",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
},
|
||||
refreshStateErrors: []error{
|
||||
nil,
|
||||
fmt.Errorf("%w: RefreshState failed on zone2-100", assert.AnError),
|
||||
},
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
Cells: []string{"zone1"}, // If we didn't filter, we would get IsPartialRefresh=true because of the failure in zone2.
|
||||
},
|
||||
expected: &vtctldatapb.RefreshStateByShardResponse{
|
||||
IsPartialRefresh: false,
|
||||
},
|
||||
shouldErr: false,
|
||||
},
|
||||
{
|
||||
name: "partial result",
|
||||
ts: memorytopo.NewServer("zone1", "zone2"),
|
||||
tablets: []*topodatapb.Tablet{
|
||||
{
|
||||
Hostname: "zone1-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
{
|
||||
Hostname: "zone2-100",
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone2",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
},
|
||||
refreshStateErrors: []error{
|
||||
nil,
|
||||
fmt.Errorf("%w: RefreshState failed on zone2-100", assert.AnError),
|
||||
},
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: "ks",
|
||||
Shard: "-",
|
||||
},
|
||||
expected: &vtctldatapb.RefreshStateByShardResponse{
|
||||
IsPartialRefresh: true,
|
||||
},
|
||||
shouldErr: false,
|
||||
},
|
||||
{
|
||||
name: "missing keyspace argument",
|
||||
ts: memorytopo.NewServer(),
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{},
|
||||
shouldErr: true,
|
||||
},
|
||||
{
|
||||
name: "missing shard argument",
|
||||
ts: memorytopo.NewServer(),
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: "ks",
|
||||
},
|
||||
shouldErr: true,
|
||||
},
|
||||
{
|
||||
name: "shard not found",
|
||||
ts: memorytopo.NewServer("zone1"),
|
||||
tablets: []*topodatapb.Tablet{
|
||||
{
|
||||
Alias: &topodatapb.TabletAlias{
|
||||
Cell: "zone1",
|
||||
Uid: 100,
|
||||
},
|
||||
Keyspace: "ks",
|
||||
Shard: "-80",
|
||||
},
|
||||
},
|
||||
refreshStateErrors: []error{nil},
|
||||
req: &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: "ks2",
|
||||
Shard: "-",
|
||||
},
|
||||
shouldErr: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
require.Equal(t, len(tt.tablets), len(tt.refreshStateErrors), "Invalid test case: must have one refreshStateError for each tablet")
|
||||
|
||||
tmc := &testutil.TabletManagerClient{
|
||||
RefreshStateResults: make(map[string]error, len(tt.tablets)),
|
||||
}
|
||||
testutil.AddTablets(ctx, t, tt.ts, nil, tt.tablets...)
|
||||
for i, tablet := range tt.tablets {
|
||||
key := topoproto.TabletAliasString(tablet.Alias)
|
||||
tmc.RefreshStateResults[key] = tt.refreshStateErrors[i]
|
||||
}
|
||||
|
||||
vtctld := testutil.NewVtctldServerWithTabletManagerClient(t, tt.ts, tmc, func(ts *topo.Server) vtctlservicepb.VtctldServer {
|
||||
return NewVtctldServer(ts)
|
||||
})
|
||||
resp, err := vtctld.RefreshStateByShard(ctx, tt.req)
|
||||
if tt.shouldErr {
|
||||
assert.Error(t, err)
|
||||
return
|
||||
}
|
||||
|
||||
require.NoError(t, err)
|
||||
utils.MustMatch(t, tt.expected, resp)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoveKeyspaceCell(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
|
|
|
@ -162,6 +162,8 @@ type TabletManagerClient struct {
|
|||
Result string
|
||||
Error error
|
||||
}
|
||||
// keyed by tablet alias.
|
||||
RefreshStateResults map[string]error
|
||||
ReplicationStatusDelays map[string]time.Duration
|
||||
ReplicationStatusResults map[string]struct {
|
||||
Position *replicationdatapb.Status
|
||||
|
@ -366,6 +368,20 @@ func (fake *TabletManagerClient) PromoteReplica(ctx context.Context, tablet *top
|
|||
return "", assert.AnError
|
||||
}
|
||||
|
||||
// RefreshState is part of the tmclient.TabletManagerClient interface.
|
||||
func (fake *TabletManagerClient) RefreshState(ctx context.Context, tablet *topodatapb.Tablet) error {
|
||||
if fake.RefreshStateResults == nil {
|
||||
return fmt.Errorf("%w: no RefreshState results on fake TabletManagerClient", assert.AnError)
|
||||
}
|
||||
|
||||
key := topoproto.TabletAliasString(tablet.Alias)
|
||||
if err, ok := fake.RefreshStateResults[key]; ok {
|
||||
return err
|
||||
}
|
||||
|
||||
return fmt.Errorf("%w: no RefreshState result set for tablet %s", assert.AnError, key)
|
||||
}
|
||||
|
||||
// ReplicationStatus is part of the tmclient.TabletManagerClient interface.
|
||||
func (fake *TabletManagerClient) ReplicationStatus(ctx context.Context, tablet *topodatapb.Tablet) (*replicationdatapb.Status, error) {
|
||||
if fake.ReplicationStatusResults == nil {
|
||||
|
|
|
@ -941,11 +941,11 @@ func commandRefreshState(ctx context.Context, wr *wrangler.Wrangler, subFlags *f
|
|||
if err != nil {
|
||||
return err
|
||||
}
|
||||
tabletInfo, err := wr.TopoServer().GetTablet(ctx, tabletAlias)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return wr.TabletManagerClient().RefreshState(ctx, tabletInfo.Tablet)
|
||||
|
||||
_, err = wr.VtctldServer().RefreshState(ctx, &vtctldatapb.RefreshStateRequest{
|
||||
TabletAlias: tabletAlias,
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
func commandRefreshStateByShard(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
|
||||
|
@ -961,16 +961,18 @@ func commandRefreshStateByShard(ctx context.Context, wr *wrangler.Wrangler, subF
|
|||
if err != nil {
|
||||
return err
|
||||
}
|
||||
si, err := wr.TopoServer().GetShard(ctx, keyspace, shard)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var cells []string
|
||||
if *cellsStr != "" {
|
||||
cells = strings.Split(*cellsStr, ",")
|
||||
}
|
||||
return wr.RefreshTabletsByShard(ctx, si, cells)
|
||||
|
||||
_, err = wr.VtctldServer().RefreshStateByShard(ctx, &vtctldatapb.RefreshStateByShardRequest{
|
||||
Keyspace: keyspace,
|
||||
Shard: shard,
|
||||
Cells: cells,
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
func commandRunHealthCheck(ctx context.Context, wr *wrangler.Wrangler, subFlags *flag.FlagSet, args []string) error {
|
||||
|
|
|
@ -20,7 +20,9 @@ import (
|
|||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"google.golang.org/protobuf/encoding/prototext"
|
||||
|
@ -465,6 +467,147 @@ func (s *Server) GetWorkflows(ctx context.Context, req *vtctldatapb.GetWorkflows
|
|||
if err := scanWorkflow(ctx, workflow, row, tablet); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Sort shard streams by stream_id ASC, to support an optimization
|
||||
// in fetchStreamLogs below.
|
||||
for _, shardStreams := range workflow.ShardStreams {
|
||||
sort.Slice(shardStreams.Streams, func(i, j int) bool {
|
||||
return shardStreams.Streams[i].Id < shardStreams.Streams[j].Id
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
wg sync.WaitGroup
|
||||
vrepLogQuery = strings.TrimSpace(`
|
||||
SELECT
|
||||
id,
|
||||
vrepl_id,
|
||||
type,
|
||||
state,
|
||||
message,
|
||||
created_at,
|
||||
updated_at,
|
||||
count
|
||||
FROM
|
||||
_vt.vreplication_log
|
||||
ORDER BY
|
||||
vrepl_id ASC,
|
||||
id ASC
|
||||
`)
|
||||
)
|
||||
|
||||
fetchStreamLogs := func(ctx context.Context, workflow *vtctldatapb.Workflow) {
|
||||
defer wg.Done()
|
||||
|
||||
results, err := vx.WithWorkflow(workflow.Name).QueryContext(ctx, vrepLogQuery)
|
||||
if err != nil {
|
||||
// Note that we do not return here. If there are any query results
|
||||
// in the map (i.e. some tablets returned successfully), we will
|
||||
// still try to read log rows from them on a best-effort basis. But,
|
||||
// we will also pre-emptively record the top-level fetch error on
|
||||
// every stream in every shard in the workflow. Further processing
|
||||
// below may override the error message for certain streams.
|
||||
for _, streams := range workflow.ShardStreams {
|
||||
for _, stream := range streams.Streams {
|
||||
stream.LogFetchError = err.Error()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for target, p3qr := range results {
|
||||
qr := sqltypes.Proto3ToResult(p3qr)
|
||||
shardStreamKey := fmt.Sprintf("%s/%s", target.Shard, target.AliasString())
|
||||
|
||||
ss, ok := workflow.ShardStreams[shardStreamKey]
|
||||
if !ok || ss == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
streams := ss.Streams
|
||||
streamIdx := 0
|
||||
markErrors := func(err error) {
|
||||
if streamIdx >= len(streams) {
|
||||
return
|
||||
}
|
||||
|
||||
streams[streamIdx].LogFetchError = err.Error()
|
||||
}
|
||||
|
||||
for _, row := range qr.Rows {
|
||||
id, err := evalengine.ToInt64(row[0])
|
||||
if err != nil {
|
||||
markErrors(err)
|
||||
continue
|
||||
}
|
||||
|
||||
streamID, err := evalengine.ToInt64(row[1])
|
||||
if err != nil {
|
||||
markErrors(err)
|
||||
continue
|
||||
}
|
||||
|
||||
typ := row[2].ToString()
|
||||
state := row[3].ToString()
|
||||
message := row[4].ToString()
|
||||
|
||||
createdAt, err := time.Parse("2006-01-02 15:04:05", row[5].ToString())
|
||||
if err != nil {
|
||||
markErrors(err)
|
||||
continue
|
||||
}
|
||||
|
||||
updatedAt, err := time.Parse("2006-01-02 15:04:05", row[6].ToString())
|
||||
if err != nil {
|
||||
markErrors(err)
|
||||
continue
|
||||
}
|
||||
|
||||
count, err := evalengine.ToInt64(row[7])
|
||||
if err != nil {
|
||||
markErrors(err)
|
||||
continue
|
||||
}
|
||||
|
||||
streamLog := &vtctldatapb.Workflow_Stream_Log{
|
||||
Id: id,
|
||||
StreamId: streamID,
|
||||
Type: typ,
|
||||
State: state,
|
||||
CreatedAt: &vttime.Time{
|
||||
Seconds: createdAt.Unix(),
|
||||
},
|
||||
UpdatedAt: &vttime.Time{
|
||||
Seconds: updatedAt.Unix(),
|
||||
},
|
||||
Message: message,
|
||||
Count: count,
|
||||
}
|
||||
|
||||
// Earlier, in the main loop where we called scanWorkflow for
|
||||
// each _vt.vreplication row, we also sorted each ShardStreams
|
||||
// slice by ascending id, and our _vt.vreplication_log query
|
||||
// ordered by (stream_id ASC, id ASC), so we can walk the
|
||||
// streams in index order in O(n) amortized over all the rows
|
||||
// for this tablet.
|
||||
for streamIdx < len(streams) {
|
||||
stream := streams[streamIdx]
|
||||
if stream.Id < streamLog.StreamId {
|
||||
streamIdx++
|
||||
continue
|
||||
}
|
||||
|
||||
if stream.Id > streamLog.StreamId {
|
||||
log.Warningf("Found stream log for nonexistent stream: %+v", streamLog)
|
||||
break
|
||||
}
|
||||
|
||||
// stream.Id == streamLog.StreamId
|
||||
stream.Logs = append(stream.Logs, streamLog)
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -508,9 +651,16 @@ func (s *Server) GetWorkflows(ctx context.Context, req *vtctldatapb.GetWorkflows
|
|||
|
||||
workflow.MaxVReplicationLag = int64(maxVReplicationLag)
|
||||
|
||||
// Fetch logs for all streams associated with this workflow in the background.
|
||||
wg.Add(1)
|
||||
go fetchStreamLogs(ctx, workflow)
|
||||
|
||||
workflows = append(workflows, workflow)
|
||||
}
|
||||
|
||||
// Wait for all the log fetchers to finish.
|
||||
wg.Wait()
|
||||
|
||||
return &vtctldatapb.GetWorkflowsResponse{
|
||||
Workflows: workflows,
|
||||
}, nil
|
||||
|
|
|
@ -31,17 +31,28 @@ import (
|
|||
querypb "vitess.io/vitess/go/vt/proto/query"
|
||||
)
|
||||
|
||||
// QueryPlan wraps a planned query produced by a QueryPlanner. It is safe to
|
||||
// execute a QueryPlan repeatedly and in multiple goroutines.
|
||||
type QueryPlan struct {
|
||||
// QueryPlan defines the interface for executing a prepared vexec query on one
|
||||
// or more tablets. Implementations should ensure that it is safe to call the
|
||||
// various Execute* methods repeatedly and in multiple goroutines.
|
||||
type QueryPlan interface {
|
||||
// Execute executes the planned query on a single target.
|
||||
Execute(ctx context.Context, target *topo.TabletInfo) (*querypb.QueryResult, error)
|
||||
// ExecuteScatter executes the planned query on the specified targets concurrently,
|
||||
// returning a mapping of the target tablet to a querypb.QueryResult.
|
||||
ExecuteScatter(ctx context.Context, targets ...*topo.TabletInfo) (map[*topo.TabletInfo]*querypb.QueryResult, error)
|
||||
}
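// The sketch below is not part of this change; it shows how callers are expected
// to use the interface: obtain a plan from a QueryPlanner and execute it without
// caring whether it is a FixedQueryPlan or a PerTargetQueryPlan. The planner,
// statement, and targets are assumed to be provided by the caller.
func executePlanSketch(ctx context.Context, planner QueryPlanner, stmt sqlparser.Statement, targets []*topo.TabletInfo) error {
	plan, err := planner.PlanQuery(stmt)
	if err != nil {
		return err
	}

	results, err := plan.ExecuteScatter(ctx, targets...)
	if err != nil {
		return err
	}
	for target, qr := range results {
		log.Infof("%s: %d rows affected", target.AliasString(), qr.RowsAffected)
	}
	return nil
}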
|
||||
|
||||
// FixedQueryPlan wraps a planned query produced by a QueryPlanner. It executes
|
||||
// the same query with the same bind vals, regardless of the target.
|
||||
type FixedQueryPlan struct {
|
||||
ParsedQuery *sqlparser.ParsedQuery
|
||||
|
||||
workflow string
|
||||
tmc tmclient.TabletManagerClient
|
||||
}
|
||||
|
||||
// Execute executes a QueryPlan on a single target.
|
||||
func (qp *QueryPlan) Execute(ctx context.Context, target *topo.TabletInfo) (qr *querypb.QueryResult, err error) {
|
||||
// Execute is part of the QueryPlan interface.
|
||||
func (qp *FixedQueryPlan) Execute(ctx context.Context, target *topo.TabletInfo) (qr *querypb.QueryResult, err error) {
|
||||
if qp.ParsedQuery == nil {
|
||||
return nil, fmt.Errorf("%w: call PlanQuery on a query planner first", ErrUnpreparedQuery)
|
||||
}
|
||||
|
@ -62,10 +73,10 @@ func (qp *QueryPlan) Execute(ctx context.Context, target *topo.TabletInfo) (qr *
|
|||
return qr, nil
|
||||
}
|
||||
|
||||
// ExecuteScatter executes a QueryPlan on multiple targets concurrently,
|
||||
// returning a mapping of target tablet to querypb.QueryResult. Errors from
|
||||
// individual targets are aggregated into a singular error.
|
||||
func (qp *QueryPlan) ExecuteScatter(ctx context.Context, targets ...*topo.TabletInfo) (map[*topo.TabletInfo]*querypb.QueryResult, error) {
|
||||
// ExecuteScatter is part of the QueryPlan interface. For a FixedQueryPlan, the
|
||||
// exact same query is executed on each target, and errors from individual
|
||||
// targets are aggregated into a singular error.
|
||||
func (qp *FixedQueryPlan) ExecuteScatter(ctx context.Context, targets ...*topo.TabletInfo) (map[*topo.TabletInfo]*querypb.QueryResult, error) {
|
||||
if qp.ParsedQuery == nil {
|
||||
// This check is an "optimization" on error handling. We check here,
|
||||
// even though we will check this during the individual Execute calls,
|
||||
|
@ -105,3 +116,88 @@ func (qp *QueryPlan) ExecuteScatter(ctx context.Context, targets ...*topo.Tablet
|
|||
|
||||
return results, rec.AggrError(vterrors.Aggregate)
|
||||
}
|
||||
|
||||
// PerTargetQueryPlan implements the QueryPlan interface. Unlike FixedQueryPlan,
// it executes a different query, keyed by tablet alias, on each target.
//
// It is the caller's responsibility to ensure that the shape of the QueryResult
// (i.e. fields returned) is consistent for each target's planned query, but
// this is not enforced.
|
||||
type PerTargetQueryPlan struct {
|
||||
ParsedQueries map[string]*sqlparser.ParsedQuery
|
||||
|
||||
tmc tmclient.TabletManagerClient
|
||||
}
|
||||
|
||||
// Execute is part of the QueryPlan interface.
|
||||
//
|
||||
// It returns ErrUnpreparedQuery if there is no ParsedQuery for the target's
|
||||
// tablet alias.
|
||||
func (qp *PerTargetQueryPlan) Execute(ctx context.Context, target *topo.TabletInfo) (qr *querypb.QueryResult, err error) {
|
||||
if qp.ParsedQueries == nil {
|
||||
return nil, fmt.Errorf("%w: call PlanQuery on a query planner first", ErrUnpreparedQuery)
|
||||
}
|
||||
|
||||
targetAliasStr := target.AliasString()
|
||||
query, ok := qp.ParsedQueries[targetAliasStr]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("%w: no prepared query for target %s", ErrUnpreparedQuery, targetAliasStr)
|
||||
}
|
||||
|
||||
defer func() {
|
||||
if err != nil {
|
||||
log.Warningf("Result on %v: %v", targetAliasStr, err)
|
||||
return
|
||||
}
|
||||
}()
|
||||
|
||||
qr, err = qp.tmc.VReplicationExec(ctx, target.Tablet, query.Query)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return qr, nil
|
||||
}
|
||||
|
||||
// ExecuteScatter is part of the QueryPlan interface.
|
||||
func (qp *PerTargetQueryPlan) ExecuteScatter(ctx context.Context, targets ...*topo.TabletInfo) (map[*topo.TabletInfo]*querypb.QueryResult, error) {
|
||||
if qp.ParsedQueries == nil {
|
||||
// This check is an "optimization" on error handling. We check here,
|
||||
// even though we will check this during the individual Execute calls,
|
||||
// so that we return one error, rather than the same error aggregated
|
||||
// len(targets) times.
|
||||
return nil, fmt.Errorf("%w: call PlanQuery on a query planner first", ErrUnpreparedQuery)
|
||||
}
|
||||
|
||||
var (
|
||||
m sync.Mutex
|
||||
wg sync.WaitGroup
|
||||
rec concurrency.AllErrorRecorder
|
||||
results = make(map[*topo.TabletInfo]*querypb.QueryResult, len(targets))
|
||||
)
|
||||
|
||||
for _, target := range targets {
|
||||
wg.Add(1)
|
||||
|
||||
go func(ctx context.Context, target *topo.TabletInfo) {
|
||||
defer wg.Done()
|
||||
|
||||
qr, err := qp.Execute(ctx, target)
|
||||
if err != nil {
|
||||
rec.RecordError(err)
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
m.Lock()
|
||||
defer m.Unlock()
|
||||
|
||||
results[target] = qr
|
||||
}(ctx, target)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
|
||||
return results, rec.AggrError(vterrors.Aggregate)
|
||||
}
|
||||
|
|
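Because PlanQuery now returns the QueryPlan interface rather than a concrete struct, FixedQueryPlan and PerTargetQueryPlan are interchangeable from the caller's point of view. The sketch below is not part of this change; runWorkflowQuery and its logging are hypothetical, and it only assumes the signatures shown in this diff:

func runWorkflowQuery(ctx context.Context, planner QueryPlanner, stmt sqlparser.Statement, targets []*topo.TabletInfo) error {
	// PlanQuery returns the QueryPlan interface; the concrete type may be a
	// FixedQueryPlan (same query on every target) or a PerTargetQueryPlan
	// (queries keyed by tablet alias), depending on the planner.
	qp, err := planner.PlanQuery(stmt)
	if err != nil {
		return err
	}

	// ExecuteScatter fans the plan out to every target and aggregates errors.
	results, err := qp.ExecuteScatter(ctx, targets...)
	if err != nil {
		return err
	}

	for tablet, qr := range results {
		log.Infof("%v: %d rows affected", tablet.AliasString(), qr.RowsAffected)
	}

	return nil
}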
|
@ -38,7 +38,7 @@ func TestQueryPlanExecute(t *testing.T) {
|
|||
|
||||
tests := []struct {
|
||||
name string
|
||||
plan QueryPlan
|
||||
plan FixedQueryPlan
|
||||
target *topo.TabletInfo
|
||||
expected *querypb.QueryResult
|
||||
shouldErr bool
|
||||
|
@ -46,7 +46,7 @@ func TestQueryPlanExecute(t *testing.T) {
|
|||
}{
|
||||
{
|
||||
name: "success",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: &sqlparser.ParsedQuery{
|
||||
Query: "SELECT id FROM _vt.vreplication",
|
||||
},
|
||||
|
@ -80,7 +80,7 @@ func TestQueryPlanExecute(t *testing.T) {
|
|||
},
|
||||
{
|
||||
name: "no rows affected",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: &sqlparser.ParsedQuery{
|
||||
Query: "SELECT id FROM _vt.vreplication",
|
||||
},
|
||||
|
@ -114,7 +114,7 @@ func TestQueryPlanExecute(t *testing.T) {
|
|||
},
|
||||
{
|
||||
name: "error",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: &sqlparser.ParsedQuery{
|
||||
Query: "SELECT id FROM _vt.vreplication",
|
||||
},
|
||||
|
@ -144,7 +144,7 @@ func TestQueryPlanExecute(t *testing.T) {
|
|||
},
|
||||
{
|
||||
name: "unprepared query",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: nil,
|
||||
},
|
||||
shouldErr: true,
|
||||
|
@ -182,7 +182,7 @@ func TestQueryPlanExecuteScatter(t *testing.T) {
|
|||
|
||||
tests := []struct {
|
||||
name string
|
||||
plan QueryPlan
|
||||
plan FixedQueryPlan
|
||||
targets []*topo.TabletInfo
|
||||
// This is different from our actual return type because guaranteeing
|
||||
// exact pointers in this table-driven style is a bit tough.
|
||||
|
@ -192,7 +192,7 @@ func TestQueryPlanExecuteScatter(t *testing.T) {
|
|||
}{
|
||||
{
|
||||
name: "success",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: &sqlparser.ParsedQuery{
|
||||
Query: "SELECT id FROM _vt.vreplication",
|
||||
},
|
||||
|
@ -248,7 +248,7 @@ func TestQueryPlanExecuteScatter(t *testing.T) {
|
|||
},
|
||||
{
|
||||
name: "some targets fail",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: &sqlparser.ParsedQuery{
|
||||
Query: "SELECT id FROM _vt.vreplication",
|
||||
},
|
||||
|
@ -294,7 +294,7 @@ func TestQueryPlanExecuteScatter(t *testing.T) {
|
|||
},
|
||||
{
|
||||
name: "unprepared query",
|
||||
plan: QueryPlan{
|
||||
plan: FixedQueryPlan{
|
||||
ParsedQuery: nil,
|
||||
},
|
||||
shouldErr: true,
|
||||
|
|
|
@ -62,7 +62,7 @@ type QueryPlanner interface {
|
|||
|
||||
// PlanQuery constructs and returns a QueryPlan for a given statement. The
|
||||
// resulting QueryPlan is suitable for repeated, concurrent use.
|
||||
PlanQuery(stmt sqlparser.Statement) (*QueryPlan, error)
|
||||
PlanQuery(stmt sqlparser.Statement) (QueryPlan, error)
|
||||
// QueryParams returns a struct of column parameters the QueryPlanner uses.
|
||||
// It is used primarily to abstract the adding of default WHERE clauses to
|
||||
// queries by a private function of this package, and may be removed from
|
||||
|
@ -116,7 +116,7 @@ func NewVReplicationQueryPlanner(tmc tmclient.TabletManagerClient, workflow stri
|
|||
//
|
||||
// For DELETE queries, USING, PARTITION, ORDER BY, and LIMIT clauses are not
|
||||
// supported.
|
||||
func (planner *VReplicationQueryPlanner) PlanQuery(stmt sqlparser.Statement) (plan *QueryPlan, err error) {
|
||||
func (planner *VReplicationQueryPlanner) PlanQuery(stmt sqlparser.Statement) (plan QueryPlan, err error) {
|
||||
switch stmt := stmt.(type) {
|
||||
case *sqlparser.Select:
|
||||
plan, err = planner.planSelect(stmt)
|
||||
|
@ -152,7 +152,7 @@ func (planner *VReplicationQueryPlanner) QueryParams() QueryParams {
|
|||
}
|
||||
}
|
||||
|
||||
func (planner *VReplicationQueryPlanner) planDelete(del *sqlparser.Delete) (*QueryPlan, error) {
|
||||
func (planner *VReplicationQueryPlanner) planDelete(del *sqlparser.Delete) (*FixedQueryPlan, error) {
|
||||
if del.Targets != nil {
|
||||
return nil, fmt.Errorf(
|
||||
"%w: DELETE must not have USING clause (have: %v): %v",
|
||||
|
@ -186,27 +186,27 @@ func (planner *VReplicationQueryPlanner) planDelete(del *sqlparser.Delete) (*Que
|
|||
buf := sqlparser.NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", del)
|
||||
|
||||
return &QueryPlan{
|
||||
return &FixedQueryPlan{
|
||||
ParsedQuery: buf.ParsedQuery(),
|
||||
workflow: planner.workflow,
|
||||
tmc: planner.tmc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (planner *VReplicationQueryPlanner) planSelect(sel *sqlparser.Select) (*QueryPlan, error) {
|
||||
func (planner *VReplicationQueryPlanner) planSelect(sel *sqlparser.Select) (*FixedQueryPlan, error) {
|
||||
sel.Where = addDefaultWheres(planner, sel.Where)
|
||||
|
||||
buf := sqlparser.NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", sel)
|
||||
|
||||
return &QueryPlan{
|
||||
return &FixedQueryPlan{
|
||||
ParsedQuery: buf.ParsedQuery(),
|
||||
workflow: planner.workflow,
|
||||
tmc: planner.tmc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (planner *VReplicationQueryPlanner) planUpdate(upd *sqlparser.Update) (*QueryPlan, error) {
|
||||
func (planner *VReplicationQueryPlanner) planUpdate(upd *sqlparser.Update) (*FixedQueryPlan, error) {
|
||||
if upd.OrderBy != nil || upd.Limit != nil {
|
||||
return nil, fmt.Errorf(
|
||||
"%w: UPDATE must not have explicit ordering (have: %v) or limit clauses (have: %v): %v",
|
||||
|
@ -235,13 +235,145 @@ func (planner *VReplicationQueryPlanner) planUpdate(upd *sqlparser.Update) (*Que
|
|||
buf := sqlparser.NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", upd)
|
||||
|
||||
return &QueryPlan{
|
||||
return &FixedQueryPlan{
|
||||
ParsedQuery: buf.ParsedQuery(),
|
||||
workflow: planner.workflow,
|
||||
tmc: planner.tmc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// VReplicationLogQueryPlanner implements the QueryPlanner interface for queries
|
||||
// on the _vt.vreplication_log table.
|
||||
type VReplicationLogQueryPlanner struct {
|
||||
tmc tmclient.TabletManagerClient
|
||||
tabletStreamIDs map[string][]int64
|
||||
}
|
||||
|
||||
// NewVReplicationLogQueryPlanner returns a new VReplicationLogQueryPlanner. The
|
||||
// tabletStreamIDs map determines what stream_ids are expected to have vrep_log
|
||||
// rows, keyed by tablet alias string.
|
||||
func NewVReplicationLogQueryPlanner(tmc tmclient.TabletManagerClient, tabletStreamIDs map[string][]int64) *VReplicationLogQueryPlanner {
|
||||
return &VReplicationLogQueryPlanner{
|
||||
tmc: tmc,
|
||||
tabletStreamIDs: tabletStreamIDs,
|
||||
}
|
||||
}
|
||||
|
||||
// PlanQuery is part of the QueryPlanner interface.
|
||||
//
|
||||
// For vreplication_log query planners, only SELECT queries are supported.
|
||||
func (planner *VReplicationLogQueryPlanner) PlanQuery(stmt sqlparser.Statement) (plan QueryPlan, err error) {
|
||||
switch stmt := stmt.(type) {
|
||||
case *sqlparser.Select:
|
||||
plan, err = planner.planSelect(stmt)
|
||||
case *sqlparser.Insert:
|
||||
err = ErrUnsupportedQuery
|
||||
case *sqlparser.Update:
|
||||
err = ErrUnsupportedQuery
|
||||
case *sqlparser.Delete:
|
||||
err = ErrUnsupportedQuery
|
||||
default:
|
||||
err = ErrUnsupportedQuery
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%w: %s", err, sqlparser.String(stmt))
|
||||
}
|
||||
|
||||
return plan, nil
|
||||
}
|
||||
|
||||
// QueryParams is part of the QueryPlanner interface.
|
||||
func (planner *VReplicationLogQueryPlanner) QueryParams() QueryParams {
|
||||
return QueryParams{}
|
||||
}
|
||||
|
||||
func (planner *VReplicationLogQueryPlanner) planSelect(sel *sqlparser.Select) (QueryPlan, error) {
|
||||
where := sel.Where
|
||||
cols := extractWhereComparisonColumns(where)
|
||||
hasVReplIDCol := false
|
||||
|
||||
for _, col := range cols {
|
||||
if col == "vrepl_id" {
|
||||
hasVReplIDCol = true
|
||||
}
|
||||
}
|
||||
|
||||
if hasVReplIDCol { // we're not injecting per-target parameters; return a FixedQueryPlan
|
||||
buf := sqlparser.NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", sel)
|
||||
|
||||
return &FixedQueryPlan{
|
||||
ParsedQuery: buf.ParsedQuery(),
|
||||
tmc: planner.tmc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Construct a where clause to filter by vrepl_id, parameterized by target
|
||||
// streamIDs.
|
||||
queriesByTarget := make(map[string]*sqlparser.ParsedQuery, len(planner.tabletStreamIDs))
|
||||
for target, streamIDs := range planner.tabletStreamIDs {
|
||||
targetWhere := &sqlparser.Where{
|
||||
Type: sqlparser.WhereClause,
|
||||
}
|
||||
|
||||
var expr sqlparser.Expr
|
||||
switch len(streamIDs) {
|
||||
case 0: // WHERE vreplication_log.vrepl_id IN () => WHERE 1 != 1
|
||||
one := sqlparser.NewIntLiteral("1")
|
||||
expr = &sqlparser.ComparisonExpr{
|
||||
Operator: sqlparser.NotEqualOp,
|
||||
Left: one,
|
||||
Right: one,
|
||||
}
|
||||
case 1: // WHERE vreplication_log.vrepl_id = ?
|
||||
expr = &sqlparser.ComparisonExpr{
|
||||
Operator: sqlparser.EqualOp,
|
||||
Left: &sqlparser.ColName{
|
||||
Name: sqlparser.NewColIdent("vrepl_id"),
|
||||
},
|
||||
Right: sqlparser.NewIntLiteral(fmt.Sprintf("%d", streamIDs[0])),
|
||||
}
|
||||
default: // WHERE vreplication_log.vrepl_id IN (?)
|
||||
vals := []sqlparser.Expr{}
|
||||
for _, streamID := range streamIDs {
|
||||
vals = append(vals, sqlparser.NewIntLiteral(fmt.Sprintf("%d", streamID)))
|
||||
}
|
||||
|
||||
var tuple sqlparser.ValTuple = vals
|
||||
expr = &sqlparser.ComparisonExpr{
|
||||
Operator: sqlparser.InOp,
|
||||
Left: &sqlparser.ColName{
|
||||
Name: sqlparser.NewColIdent("vrepl_id"),
|
||||
},
|
||||
Right: tuple,
|
||||
}
|
||||
}
|
||||
|
||||
switch where {
|
||||
case nil:
|
||||
targetWhere.Expr = expr
|
||||
default:
|
||||
targetWhere.Expr = &sqlparser.AndExpr{
|
||||
Left: expr,
|
||||
Right: where.Expr,
|
||||
}
|
||||
}
|
||||
|
||||
sel.Where = targetWhere
|
||||
|
||||
buf := sqlparser.NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", sel)
|
||||
|
||||
queriesByTarget[target] = buf.ParsedQuery()
|
||||
}
|
||||
|
||||
return &PerTargetQueryPlan{
|
||||
ParsedQueries: queriesByTarget,
|
||||
tmc: planner.tmc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func addDefaultWheres(planner QueryPlanner, where *sqlparser.Where) *sqlparser.Where {
|
||||
cols := extractWhereComparisonColumns(where)
|
||||
|
||||
|
|
|
@ -21,7 +21,9 @@ import (
|
|||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"vitess.io/vitess/go/vt/sqlparser"
|
||||
"vitess.io/vitess/go/vt/vtctl/workflow/vexec/testutil"
|
||||
)
|
||||
|
||||
|
@ -122,7 +124,9 @@ func TestVReplicationQueryPlanner_planSelect(t *testing.T) {
|
|||
qp, err := planner.PlanQuery(stmt)
|
||||
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), qp.ParsedQuery)
|
||||
fixedqp, ok := qp.(*FixedQueryPlan)
|
||||
require.True(t, ok, "VReplicationQueryPlanner should always return a FixedQueryPlan from PlanQuery, got %T", qp)
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), fixedqp.ParsedQuery)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
@ -179,7 +183,9 @@ func TestVReplicationQueryPlanner_planUpdate(t *testing.T) {
|
|||
return
|
||||
}
|
||||
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), qp.ParsedQuery)
|
||||
fixedqp, ok := qp.(*FixedQueryPlan)
|
||||
require.True(t, ok, "VReplicationQueryPlanner should always return a FixedQueryPlan from PlanQuery, got %T", qp)
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), fixedqp.ParsedQuery)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
@ -238,7 +244,140 @@ func TestVReplicationQueryPlanner_planDelete(t *testing.T) {
|
|||
return
|
||||
}
|
||||
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), qp.ParsedQuery)
|
||||
fixedqp, ok := qp.(*FixedQueryPlan)
|
||||
require.True(t, ok, "VReplicationQueryPlanner should always return a FixedQueryPlan from PlanQuery, got %T", qp)
|
||||
assert.Equal(t, testutil.ParsedQueryFromString(t, tt.expectedPlannedQuery), fixedqp.ParsedQuery)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestVReplicationLogQueryPlanner(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
t.Run("planSelect", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
targetStreamIDs map[string][]int64
|
||||
query string
|
||||
assertion func(t *testing.T, plan QueryPlan)
|
||||
shouldErr bool
|
||||
}{
|
||||
{
|
||||
targetStreamIDs: map[string][]int64{
|
||||
"a": {1, 2},
|
||||
},
|
||||
query: "select * from _vt.vreplication_log",
|
||||
assertion: func(t *testing.T, plan QueryPlan) {
|
||||
t.Helper()
|
||||
qp, ok := plan.(*PerTargetQueryPlan)
|
||||
if !ok {
|
||||
require.FailNow(t, "failed type check", "expected plan to be PerTargetQueryPlan, got %T: %v", plan, plan)
|
||||
}
|
||||
|
||||
expected := map[string]string{
|
||||
"a": "select * from _vt.vreplication_log where vrepl_id in (1, 2)",
|
||||
}
|
||||
assertQueryMapsMatch(t, expected, qp.ParsedQueries)
|
||||
},
|
||||
},
|
||||
{
|
||||
targetStreamIDs: map[string][]int64{
|
||||
"a": nil,
|
||||
},
|
||||
query: "select * from _vt.vreplication_log",
|
||||
assertion: func(t *testing.T, plan QueryPlan) {
|
||||
t.Helper()
|
||||
qp, ok := plan.(*PerTargetQueryPlan)
|
||||
if !ok {
|
||||
require.FailNow(t, "failed type check", "expected plan to be PerTargetQueryPlan, got %T: %v", plan, plan)
|
||||
}
|
||||
|
||||
expected := map[string]string{
|
||||
"a": "select * from _vt.vreplication_log where 1 != 1",
|
||||
}
|
||||
assertQueryMapsMatch(t, expected, qp.ParsedQueries)
|
||||
},
|
||||
},
|
||||
{
|
||||
targetStreamIDs: map[string][]int64{
|
||||
"a": {1},
|
||||
},
|
||||
query: "select * from _vt.vreplication_log",
|
||||
assertion: func(t *testing.T, plan QueryPlan) {
|
||||
t.Helper()
|
||||
qp, ok := plan.(*PerTargetQueryPlan)
|
||||
if !ok {
|
||||
require.FailNow(t, "failed type check", "expected plan to be PerTargetQueryPlan, got %T: %v", plan, plan)
|
||||
}
|
||||
|
||||
expected := map[string]string{
|
||||
"a": "select * from _vt.vreplication_log where vrepl_id = 1",
|
||||
}
|
||||
assertQueryMapsMatch(t, expected, qp.ParsedQueries)
|
||||
},
|
||||
},
|
||||
{
|
||||
query: "select * from _vt.vreplication_log where vrepl_id = 1",
|
||||
assertion: func(t *testing.T, plan QueryPlan) {
|
||||
t.Helper()
|
||||
qp, ok := plan.(*FixedQueryPlan)
|
||||
if !ok {
|
||||
require.FailNow(t, "failed type check", "expected plan to be FixedQueryPlan, got %T: %v", plan, plan)
|
||||
}
|
||||
|
||||
assert.Equal(t, "select * from _vt.vreplication_log where vrepl_id = 1", qp.ParsedQuery.Query)
|
||||
},
|
||||
},
|
||||
{
|
||||
targetStreamIDs: map[string][]int64{
|
||||
"a": {1, 2},
|
||||
},
|
||||
query: "select * from _vt.vreplication_log where foo = 'bar'",
|
||||
assertion: func(t *testing.T, plan QueryPlan) {
|
||||
t.Helper()
|
||||
qp, ok := plan.(*PerTargetQueryPlan)
|
||||
if !ok {
|
||||
require.FailNow(t, "failed type check", "expected plan to be PerTargetQueryPlan, got %T: %v", plan, plan)
|
||||
}
|
||||
|
||||
expected := map[string]string{
|
||||
"a": "select * from _vt.vreplication_log where vrepl_id in (1, 2) and foo = 'bar'",
|
||||
}
|
||||
assertQueryMapsMatch(t, expected, qp.ParsedQueries)
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
planner := NewVReplicationLogQueryPlanner(nil, tt.targetStreamIDs)
|
||||
stmt, err := sqlparser.Parse(tt.query)
|
||||
require.NoError(t, err, "could not parse query %q", tt.query)
|
||||
qp, err := planner.planSelect(stmt.(*sqlparser.Select))
|
||||
if tt.shouldErr {
|
||||
assert.Error(t, err)
|
||||
return
|
||||
}
|
||||
|
||||
tt.assertion(t, qp)
|
||||
})
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func assertQueryMapsMatch(t *testing.T, expected map[string]string, actual map[string]*sqlparser.ParsedQuery, msgAndArgs ...interface{}) {
|
||||
t.Helper()
|
||||
|
||||
actualQueryMap := make(map[string]string, len(actual))
|
||||
for k, v := range actual {
|
||||
actualQueryMap[k] = v.Query
|
||||
}
|
||||
|
||||
assert.Equal(t, expected, actualQueryMap, msgAndArgs...)
|
||||
}
|
||||
|
|
|
@ -21,9 +21,11 @@ import (
|
|||
"errors"
|
||||
"fmt"
|
||||
|
||||
"vitess.io/vitess/go/sqltypes"
|
||||
"vitess.io/vitess/go/vt/sqlparser"
|
||||
"vitess.io/vitess/go/vt/topo"
|
||||
"vitess.io/vitess/go/vt/topo/topoproto"
|
||||
"vitess.io/vitess/go/vt/vtgate/evalengine"
|
||||
"vitess.io/vitess/go/vt/vttablet/tmclient"
|
||||
|
||||
querypb "vitess.io/vitess/go/vt/proto/query"
|
||||
|
@ -37,6 +39,9 @@ const (
|
|||
// SchemaMigrationsTableName is the unqualified name of the schema
|
||||
// migrations table supported by vexec.
|
||||
SchemaMigrationsTableName = "schema_migrations"
|
||||
// VReplicationLogTableName is the unqualified name of the vreplication_log
|
||||
// table supported by vexec.
|
||||
VReplicationLogTableName = "vreplication_log"
|
||||
// VReplicationTableName is the unqualified name of the vreplication table
|
||||
// supported by vexec.
|
||||
VReplicationTableName = "vreplication"
|
||||
|
@ -208,6 +213,30 @@ func (vx *VExec) GetPlanner(ctx context.Context, table string) (QueryPlanner, er
|
|||
switch table {
|
||||
case qualifiedTableName(VReplicationTableName):
|
||||
return NewVReplicationQueryPlanner(vx.tmc, vx.workflow, vx.primaries[0].DbName()), nil
|
||||
case qualifiedTableName(VReplicationLogTableName):
|
||||
results, err := vx.QueryContext(ctx, "select id from _vt.vreplication")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tabletStreamIDMap := make(map[string][]int64, len(results))
|
||||
|
||||
for tablet, p3qr := range results {
|
||||
qr := sqltypes.Proto3ToResult(p3qr)
|
||||
aliasStr := tablet.AliasString()
|
||||
tabletStreamIDMap[aliasStr] = make([]int64, len(qr.Rows))
|
||||
|
||||
for i, row := range qr.Rows {
|
||||
id, err := evalengine.ToInt64(row[0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tabletStreamIDMap[aliasStr][i] = id
|
||||
}
|
||||
}
|
||||
|
||||
return NewVReplicationLogQueryPlanner(vx.tmc, tabletStreamIDMap), nil
|
||||
case qualifiedTableName(SchemaMigrationsTableName):
|
||||
return nil, errors.New("Schema Migrations not yet supported in new workflow package")
|
||||
default:
|
||||
|
@ -215,6 +244,18 @@ func (vx *VExec) GetPlanner(ctx context.Context, table string) (QueryPlanner, er
|
|||
}
|
||||
}
|
||||
|
||||
// WithWorkflow returns a copy of VExec with the Workflow field updated. It is
// used so callers can reuse a VExec's primaries list without needing to
// initialize a new VExec instance.
|
||||
func (vx *VExec) WithWorkflow(workflow string) *VExec {
|
||||
return &VExec{
|
||||
ts: vx.ts,
|
||||
tmc: vx.tmc,
|
||||
primaries: vx.primaries,
|
||||
workflow: workflow,
|
||||
}
|
||||
}
|
||||
|
||||
func extractTableName(stmt sqlparser.Statement) (string, error) {
|
||||
switch stmt := stmt.(type) {
|
||||
case *sqlparser.Update:
|
||||
|
|
|
@ -676,7 +676,7 @@ func (e *Executor) cutOverVReplMigration(ctx context.Context, s *VReplStream) er
|
|||
}()
|
||||
|
||||
// Tables are now swapped! Migration is successful
|
||||
_ = e.onSchemaMigrationStatus(ctx, onlineDDL.UUID, schema.OnlineDDLStatusComplete, false, progressPctFull, etaSecondsNow, rowsCopiedUnknown)
|
||||
_ = e.onSchemaMigrationStatus(ctx, onlineDDL.UUID, schema.OnlineDDLStatusComplete, false, progressPctFull, etaSecondsNow, s.rowsCopied)
|
||||
return nil
|
||||
|
||||
// deferred function will re-enable writes now
|
||||
|
@ -789,7 +789,9 @@ func (e *Executor) ExecuteWithVReplication(ctx context.Context, onlineDDL *schem
|
|||
if err := v.analyze(ctx, conn); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := e.updateMigrationTableRows(ctx, onlineDDL.UUID, v.tableRows); err != nil {
|
||||
return err
|
||||
}
|
||||
if revertMigration == nil {
|
||||
// Original ALTER TABLE request for vreplication
|
||||
if err := e.validateTableForAlterAction(ctx, onlineDDL); err != nil {
|
||||
|
@ -2110,6 +2112,7 @@ func (e *Executor) readVReplStream(ctx context.Context, uuid string, okIfMissing
|
|||
transactionTimestamp: row.AsInt64("transaction_timestamp", 0),
|
||||
state: row.AsString("state", ""),
|
||||
message: row.AsString("message", ""),
|
||||
rowsCopied: row.AsInt64("rows_copied", 0),
|
||||
bls: &binlogdatapb.BinlogSource{},
|
||||
}
|
||||
if err := prototext.Unmarshal([]byte(s.source), s.bls); err != nil {
|
||||
|
@ -2174,6 +2177,7 @@ func (e *Executor) isVReplMigrationReadyToCutOver(ctx context.Context, s *VReplS
|
|||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
|
@ -2232,6 +2236,11 @@ func (e *Executor) reviewRunningMigrations(ctx context.Context) (countRunnning i
|
|||
// migrationMutex lock and it's now safe to ensure vreplMigrationRunning is 1
|
||||
atomic.StoreInt64(&e.vreplMigrationRunning, 1)
|
||||
_ = e.updateMigrationTimestamp(ctx, "liveness_timestamp", uuid)
|
||||
|
||||
_ = e.updateRowsCopied(ctx, uuid, s.rowsCopied)
|
||||
_ = e.updateMigrationProgressByRowsCopied(ctx, uuid, s.rowsCopied)
|
||||
_ = e.updateMigrationETASecondsByProgress(ctx, uuid)
|
||||
|
||||
isReady, err := e.isVReplMigrationReadyToCutOver(ctx, s)
|
||||
if err != nil {
|
||||
return countRunnning, cancellable, err
|
||||
|
@ -2579,7 +2588,7 @@ func (e *Executor) updateMySQLTable(ctx context.Context, uuid string, tableName
|
|||
return err
|
||||
}
|
||||
|
||||
func (e *Executor) updateETASeconds(ctx context.Context, uuid string, etaSeconds int64) error {
|
||||
func (e *Executor) updateMigrationETASeconds(ctx context.Context, uuid string, etaSeconds int64) error {
|
||||
query, err := sqlparser.ParseAndBind(sqlUpdateMigrationETASeconds,
|
||||
sqltypes.Int64BindVariable(etaSeconds),
|
||||
sqltypes.StringBindVariable(uuid),
|
||||
|
@ -2609,6 +2618,41 @@ func (e *Executor) updateMigrationProgress(ctx context.Context, uuid string, pro
|
|||
return err
|
||||
}
|
||||
|
||||
func (e *Executor) updateMigrationProgressByRowsCopied(ctx context.Context, uuid string, rowsCopied int64) error {
|
||||
query, err := sqlparser.ParseAndBind(sqlUpdateMigrationProgressByRowsCopied,
|
||||
sqltypes.Int64BindVariable(rowsCopied),
|
||||
sqltypes.StringBindVariable(uuid),
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = e.execQuery(ctx, query)
|
||||
return err
|
||||
}
|
||||
|
||||
func (e *Executor) updateMigrationETASecondsByProgress(ctx context.Context, uuid string) error {
|
||||
query, err := sqlparser.ParseAndBind(sqlUpdateMigrationETASecondsByProgress,
|
||||
sqltypes.StringBindVariable(uuid),
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = e.execQuery(ctx, query)
|
||||
return err
|
||||
}
|
||||
|
||||
func (e *Executor) updateMigrationTableRows(ctx context.Context, uuid string, tableRows int64) error {
|
||||
query, err := sqlparser.ParseAndBind(sqlUpdateMigrationTableRows,
|
||||
sqltypes.Int64BindVariable(tableRows),
|
||||
sqltypes.StringBindVariable(uuid),
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = e.execQuery(ctx, query)
|
||||
return err
|
||||
}
|
||||
|
||||
func (e *Executor) updateRowsCopied(ctx context.Context, uuid string, rowsCopied int64) error {
|
||||
if rowsCopied <= 0 {
|
||||
// Number of rows can only be positive. Zero or negative must mean "no information" and
|
||||
|
@ -2769,7 +2813,7 @@ func (e *Executor) onSchemaMigrationStatus(ctx context.Context,
|
|||
if err = e.updateMigrationProgress(ctx, uuid, progressPct); err != nil {
|
||||
return err
|
||||
}
|
||||
if err = e.updateETASeconds(ctx, uuid, etaSeconds); err != nil {
|
||||
if err = e.updateMigrationETASeconds(ctx, uuid, etaSeconds); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := e.updateRowsCopied(ctx, uuid, rowsCopied); err != nil {
|
||||
|
|
|
@ -57,6 +57,7 @@ const (
|
|||
alterSchemaMigrationsTableTableCompleteIndex = "ALTER TABLE _vt.schema_migrations add KEY table_complete_idx (migration_status, keyspace(64), mysql_table(64), completed_timestamp)"
|
||||
alterSchemaMigrationsTableETASeconds = "ALTER TABLE _vt.schema_migrations add column eta_seconds bigint NOT NULL DEFAULT -1"
|
||||
alterSchemaMigrationsTableRowsCopied = "ALTER TABLE _vt.schema_migrations add column rows_copied bigint unsigned NOT NULL DEFAULT 0"
|
||||
alterSchemaMigrationsTableTableRows = "ALTER TABLE _vt.schema_migrations add column table_rows bigint NOT NULL DEFAULT 0"
|
||||
|
||||
sqlInsertMigration = `INSERT IGNORE INTO _vt.schema_migrations (
|
||||
migration_uuid,
|
||||
|
@ -151,6 +152,32 @@ const (
|
|||
WHERE
|
||||
migration_uuid=%a
|
||||
`
|
||||
sqlUpdateMigrationTableRows = `UPDATE _vt.schema_migrations
|
||||
SET table_rows=%a
|
||||
WHERE
|
||||
migration_uuid=%a
|
||||
`
|
||||
sqlUpdateMigrationProgressByRowsCopied = `UPDATE _vt.schema_migrations
|
||||
SET
|
||||
progress=CASE
|
||||
WHEN table_rows=0 THEN 100
|
||||
ELSE LEAST(100, 100*%a/table_rows)
|
||||
END
|
||||
WHERE
|
||||
migration_uuid=%a
|
||||
`
|
||||
sqlUpdateMigrationETASecondsByProgress = `UPDATE _vt.schema_migrations
|
||||
SET
|
||||
eta_seconds=CASE
|
||||
WHEN progress=0 THEN -1
|
||||
WHEN table_rows=0 THEN 0
|
||||
ELSE GREATEST(0,
|
||||
TIMESTAMPDIFF(SECOND, started_timestamp, NOW())*((100/progress)-1)
|
||||
)
|
||||
END
|
||||
WHERE
|
||||
migration_uuid=%a
|
||||
`
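To make the two formulas above concrete (a worked example, not taken from the source): with table_rows=2000 and rows_copied=500, progress = LEAST(100, 100*500/2000) = 25; if 120 seconds have elapsed since started_timestamp, eta_seconds = GREATEST(0, 120*((100/25)-1)) = 360. A small Go sketch mirroring the same CASE logic, for illustration only:

// progressAndETA is a hypothetical helper mirroring the two UPDATE statements
// above; it is not part of this change.
func progressAndETA(rowsCopied, tableRows, elapsedSeconds int64) (progressPct float64, etaSeconds int64) {
	if tableRows == 0 {
		progressPct = 100
	} else {
		progressPct = 100 * float64(rowsCopied) / float64(tableRows)
		if progressPct > 100 {
			progressPct = 100
		}
	}

	switch {
	case progressPct == 0:
		etaSeconds = -1 // ETA unknown until some progress is reported
	case tableRows == 0:
		etaSeconds = 0
	default:
		eta := float64(elapsedSeconds) * ((100 / progressPct) - 1)
		if eta < 0 {
			eta = 0
		}
		etaSeconds = int64(eta)
	}

	return progressPct, etaSeconds
}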
|
||||
sqlRetryMigrationWhere = `UPDATE _vt.schema_migrations
|
||||
SET
|
||||
migration_status='queued',
|
||||
|
@ -330,6 +357,7 @@ const (
|
|||
sqlDropTable = "DROP TABLE `%a`"
|
||||
sqlAlterTableOptions = "ALTER TABLE `%a` %s"
|
||||
sqlShowColumnsFrom = "SHOW COLUMNS FROM `%a`"
|
||||
sqlShowTableStatus = "SHOW TABLE STATUS LIKE '%a'"
|
||||
sqlGetAutoIncrement = `
|
||||
SELECT
|
||||
AUTO_INCREMENT
|
||||
|
@ -351,7 +379,8 @@ const (
|
|||
time_updated,
|
||||
transaction_timestamp,
|
||||
state,
|
||||
message
|
||||
message,
|
||||
rows_copied
|
||||
FROM _vt.vreplication
|
||||
WHERE
|
||||
workflow=%a
|
||||
|
@ -402,4 +431,5 @@ var applyDDL = []string{
|
|||
alterSchemaMigrationsTableTableCompleteIndex,
|
||||
alterSchemaMigrationsTableETASeconds,
|
||||
alterSchemaMigrationsTableRowsCopied,
|
||||
alterSchemaMigrationsTableTableRows,
|
||||
}
|
||||
|
|
|
@ -36,6 +36,7 @@ import (
|
|||
"vitess.io/vitess/go/vt/dbconnpool"
|
||||
binlogdatapb "vitess.io/vitess/go/vt/proto/binlogdata"
|
||||
vtrpcpb "vitess.io/vitess/go/vt/proto/vtrpc"
|
||||
"vitess.io/vitess/go/vt/schema"
|
||||
"vitess.io/vitess/go/vt/sqlparser"
|
||||
"vitess.io/vitess/go/vt/vterrors"
|
||||
"vitess.io/vitess/go/vt/vttablet/onlineddl/vrepl"
|
||||
|
@ -52,6 +53,7 @@ type VReplStream struct {
|
|||
transactionTimestamp int64
|
||||
state string
|
||||
message string
|
||||
rowsCopied int64
|
||||
bls *binlogdatapb.BinlogSource
|
||||
}
|
||||
|
||||
|
@ -65,6 +67,7 @@ type VRepl struct {
|
|||
targetTable string
|
||||
pos string
|
||||
alterOptions string
|
||||
tableRows int64
|
||||
|
||||
sharedPKColumns *vrepl.ColumnList
|
||||
|
||||
|
@ -73,8 +76,9 @@ type VRepl struct {
|
|||
sharedColumnsMap map[string]string
|
||||
sourceAutoIncrement uint64
|
||||
|
||||
filterQuery string
|
||||
bls *binlogdatapb.BinlogSource
|
||||
filterQuery string
|
||||
enumToTextMap map[string]string
|
||||
bls *binlogdatapb.BinlogSource
|
||||
|
||||
parser *vrepl.AlterTableParser
|
||||
|
||||
|
@ -92,6 +96,7 @@ func NewVRepl(workflow, keyspace, shard, dbName, sourceTable, targetTable, alter
|
|||
targetTable: targetTable,
|
||||
alterOptions: alterOptions,
|
||||
parser: vrepl.NewAlterTableParser(),
|
||||
enumToTextMap: map[string]string{},
|
||||
convertCharset: map[string](*binlogdatapb.CharsetConversion){},
|
||||
}
|
||||
}
|
||||
|
@ -177,6 +182,21 @@ func (v *VRepl) readTableColumns(ctx context.Context, conn *dbconnpool.DBConnect
|
|||
return vrepl.NewColumnList(columnNames), vrepl.NewColumnList(virtualColumnNames), vrepl.NewColumnList(pkColumnNames), nil
|
||||
}
|
||||
|
||||
// readTableStatus reads table status information
|
||||
func (v *VRepl) readTableStatus(ctx context.Context, conn *dbconnpool.DBConnection, tableName string) (tableRows int64, err error) {
|
||||
parsed := sqlparser.BuildParsedQuery(sqlShowTableStatus, tableName)
|
||||
rs, err := conn.ExecuteFetch(parsed.Query, math.MaxInt64, true)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
row := rs.Named().Row()
|
||||
if row == nil {
|
||||
return 0, vterrors.Errorf(vtrpcpb.Code_INTERNAL, "Cannot SHOW TABLE STATUS LIKE '%s'", tableName)
|
||||
}
|
||||
tableRows, err = row.ToInt64("Rows")
|
||||
return tableRows, err
|
||||
}
|
||||
|
||||
// applyColumnTypes
|
||||
func (v *VRepl) applyColumnTypes(ctx context.Context, conn *dbconnpool.DBConnection, tableName string, columnsLists ...*vrepl.ColumnList) error {
|
||||
query, err := sqlparser.ParseAndBind(sqlSelectColumnTypes,
|
||||
|
@ -221,7 +241,7 @@ func (v *VRepl) applyColumnTypes(ctx context.Context, conn *dbconnpool.DBConnect
|
|||
}
|
||||
if strings.HasPrefix(columnType, "enum") {
|
||||
column.Type = vrepl.EnumColumnType
|
||||
column.EnumValues = vrepl.ParseEnumValues(columnType)
|
||||
column.EnumValues = schema.ParseEnumValues(columnType)
|
||||
}
|
||||
if strings.HasPrefix(columnType, "binary") {
|
||||
column.Type = vrepl.BinaryColumnType
|
||||
|
@ -340,7 +360,11 @@ func (v *VRepl) analyzeAlter(ctx context.Context) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func (v *VRepl) analyzeTables(ctx context.Context, conn *dbconnpool.DBConnection) error {
|
||||
func (v *VRepl) analyzeTables(ctx context.Context, conn *dbconnpool.DBConnection) (err error) {
|
||||
v.tableRows, err = v.readTableStatus(ctx, conn, v.sourceTable)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// columns:
|
||||
sourceColumns, sourceVirtualColumns, sourcePKColumns, err := v.readTableColumns(ctx, conn, v.sourceTable)
|
||||
if err != nil {
|
||||
|
@ -366,6 +390,17 @@ func (v *VRepl) analyzeTables(ctx context.Context, conn *dbconnpool.DBConnection
|
|||
return err
|
||||
}
|
||||
|
||||
for i := range v.sourceSharedColumns.Columns() {
|
||||
sourceColumn := v.sourceSharedColumns.Columns()[i]
|
||||
mappedColumn := v.targetSharedColumns.Columns()[i]
|
||||
if sourceColumn.Type == vrepl.EnumColumnType && mappedColumn.Type != vrepl.EnumColumnType && mappedColumn.Charset != "" {
|
||||
// A column is converted from ENUM type to textual type
|
||||
v.targetSharedColumns.SetEnumToTextConversion(mappedColumn.Name, sourceColumn.EnumValues)
|
||||
|
||||
v.enumToTextMap[sourceColumn.Name] = sourceColumn.EnumValues
|
||||
}
|
||||
}
|
||||
|
||||
v.sourceAutoIncrement, err = v.readAutoIncrement(ctx, conn, v.sourceTable)
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -389,10 +424,10 @@ func (v *VRepl) generateFilterQuery(ctx context.Context) error {
|
|||
if i > 0 {
|
||||
sb.WriteString(", ")
|
||||
}
|
||||
switch sourceCol.Type {
|
||||
case vrepl.JSONColumnType:
|
||||
switch {
|
||||
case sourceCol.Type == vrepl.JSONColumnType:
|
||||
sb.WriteString(fmt.Sprintf("convert(%s using utf8mb4)", escapeName(name)))
|
||||
case vrepl.StringColumnType:
|
||||
case sourceCol.Type == vrepl.StringColumnType:
|
||||
targetCol := v.targetSharedColumns.GetColumn(targetName)
|
||||
if targetCol == nil {
|
||||
return vterrors.Errorf(vtrpcpb.Code_INTERNAL, "Cannot find target column %s", targetName)
|
||||
|
@ -418,6 +453,8 @@ func (v *VRepl) generateFilterQuery(ctx context.Context) error {
|
|||
}
|
||||
// We will always read strings as utf8mb4.
|
||||
sb.WriteString(fmt.Sprintf("convert(%s using utf8mb4)", escapeName(name)))
|
||||
case sourceCol.EnumToTextConversion:
|
||||
sb.WriteString(fmt.Sprintf("CONCAT(%s)", escapeName(name)))
|
||||
default:
|
||||
sb.WriteString(escapeName(name))
|
||||
}
|
||||
|
@ -445,6 +482,10 @@ func (v *VRepl) analyzeBinlogSource(ctx context.Context) {
|
|||
if len(v.convertCharset) > 0 {
|
||||
rule.ConvertCharset = v.convertCharset
|
||||
}
|
||||
if len(v.enumToTextMap) > 0 {
|
||||
rule.ConvertEnumToText = v.enumToTextMap
|
||||
}
|
||||
|
||||
bls.Filter.Rules = append(bls.Filter.Rules, rule)
|
||||
v.bls = bls
|
||||
}
|
||||
|
|
|
@ -19,7 +19,6 @@ var (
|
|||
dropColumnRegexp = regexp.MustCompile(`(?i)\bdrop\s+(column\s+|)([\S]+)$`)
|
||||
renameTableRegexp = regexp.MustCompile(`(?i)\brename\s+(to|as)\s+`)
|
||||
autoIncrementRegexp = regexp.MustCompile(`(?i)\bauto_increment[\s]*[=]?[\s]*([0-9]+)`)
|
||||
enumValuesRegexp = regexp.MustCompile("^enum[(](.*)[)]$")
|
||||
)
|
||||
|
||||
// AlterTableParser is a parser tool for ALTER TABLE statements
|
||||
|
@ -199,11 +198,3 @@ func (p *AlterTableParser) GetAlterStatementOptions() string {
|
|||
func (p *AlterTableParser) ColumnRenameMap() map[string]string {
|
||||
return p.columnRenameMap
|
||||
}
|
||||
|
||||
// ParseEnumValues parses the comma delimited part of an enum column definition
|
||||
func ParseEnumValues(enumColumnType string) string {
|
||||
if submatch := enumValuesRegexp.FindStringSubmatch(enumColumnType); len(submatch) > 0 {
|
||||
return submatch[1]
|
||||
}
|
||||
return enumColumnType
|
||||
}
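For illustration (hypothetical values, not from the diff), ParseEnumValues strips only the enum(...) wrapper and returns the quoted, comma-separated values; non-enum definitions pass through unchanged:

// Hypothetical usage of ParseEnumValues:
values := ParseEnumValues("enum('red','green','blue')")
// values == "'red','green','blue'"

unchanged := ParseEnumValues("varchar(32)")
// unchanged == "varchar(32)"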
|
||||
|
|
|
@ -173,6 +173,17 @@ func (l *ColumnList) Len() int {
|
|||
return len(l.columns)
|
||||
}
|
||||
|
||||
// SetEnumToTextConversion tells this column list that an enum is converted to text
|
||||
func (l *ColumnList) SetEnumToTextConversion(columnName string, enumValues string) {
|
||||
l.GetColumn(columnName).EnumToTextConversion = true
|
||||
l.GetColumn(columnName).EnumValues = enumValues
|
||||
}
|
||||
|
||||
// IsEnumToTextConversion tells whether an enum was converted to text
|
||||
func (l *ColumnList) IsEnumToTextConversion(columnName string) bool {
|
||||
return l.GetColumn(columnName).EnumToTextConversion
|
||||
}
|
||||
|
||||
// UniqueKey is the combination of a key's name and columns
|
||||
type UniqueKey struct {
|
||||
Name string
|
||||
|
|
|
@ -93,6 +93,9 @@ var tabletTypesStr = flag.String("vreplication_tablet_type", "MASTER,REPLICA", "
|
|||
// stop replicating.
|
||||
var waitRetryTime = 1 * time.Second
|
||||
|
||||
// How frequently vcopier will update _vt.vreplication rows_copied
|
||||
var rowsCopiedUpdateInterval = 30 * time.Second
|
||||
|
||||
// Engine is the engine for handling vreplication.
|
||||
type Engine struct {
|
||||
// mu synchronizes isOpen, cancelRetry, controllers and wg.
|
||||
|
|
|
@ -188,10 +188,11 @@ type TablePlan struct {
|
|||
// If the plan is an insertIgnore type, then Insert
|
||||
// and Update contain 'insert ignore' statements and
|
||||
// Delete is nil.
|
||||
Insert *sqlparser.ParsedQuery
|
||||
Update *sqlparser.ParsedQuery
|
||||
Delete *sqlparser.ParsedQuery
|
||||
Fields []*querypb.Field
|
||||
Insert *sqlparser.ParsedQuery
|
||||
Update *sqlparser.ParsedQuery
|
||||
Delete *sqlparser.ParsedQuery
|
||||
Fields []*querypb.Field
|
||||
EnumValuesMap map[string](map[string]string)
|
||||
// PKReferences is used to check if an event changed
|
||||
// a primary key column (row move).
|
||||
PKReferences []string
|
||||
|
@ -313,6 +314,18 @@ func (tp *TablePlan) bindFieldVal(field *querypb.Field, val *sqltypes.Value) (*q
|
|||
}
|
||||
return sqltypes.StringBindVariable(valString), nil
|
||||
}
|
||||
if enumValues, ok := tp.EnumValuesMap[field.Name]; ok && !val.IsNull() {
// The fact that this field has an EnumValuesMap entry means we must
// use the enum's text value as opposed to the enum's numerical value.
// One known use case is Online DDL, when a column is converted from
// ENUM to VARCHAR/TEXT.
enumValue, enumValueOK := enumValues[val.ToString()]
if !enumValueOK {
return nil, vterrors.Errorf(vtrpcpb.Code_INTERNAL, "Invalid enum value: %v for field %s", val, field.Name)
}
// get the enum text for this value
return sqltypes.StringBindVariable(enumValue), nil
}
|
||||
return sqltypes.ValueBindVariable(*val), nil
|
||||
}
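As a concrete illustration (the "color" column and its values are invented, and the ordinal keys are an assumption about how schema.ParseEnumTokensMap keys its result), an EnumValuesMap entry for a column converted from ENUM to text looks roughly like this:

// Hypothetical sketch only; not part of this change.
tp := &TablePlan{
	EnumValuesMap: map[string](map[string]string){
		"color": {"1": "red", "2": "green", "3": "blue"},
	},
}
// For a binlog row event carrying the numeric value "2" in the color field,
// bindFieldVal would bind the textual value "green" rather than the ordinal.
_ = tp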
|
||||
|
||||
|
|
|
@ -22,13 +22,12 @@ import (
|
|||
"sort"
|
||||
"strings"
|
||||
|
||||
"vitess.io/vitess/go/vt/binlog/binlogplayer"
|
||||
|
||||
querypb "vitess.io/vitess/go/vt/proto/query"
|
||||
|
||||
"vitess.io/vitess/go/sqltypes"
|
||||
"vitess.io/vitess/go/vt/binlog/binlogplayer"
|
||||
"vitess.io/vitess/go/vt/key"
|
||||
binlogdatapb "vitess.io/vitess/go/vt/proto/binlogdata"
|
||||
querypb "vitess.io/vitess/go/vt/proto/query"
|
||||
"vitess.io/vitess/go/vt/schema"
|
||||
"vitess.io/vitess/go/vt/sqlparser"
|
||||
)
|
||||
|
||||
|
@ -144,7 +143,7 @@ func buildReplicatorPlan(filter *binlogdatapb.Filter, colInfoMap map[string][]*C
|
|||
if rule == nil {
|
||||
continue
|
||||
}
|
||||
tablePlan, err := buildTablePlan(tableName, rule.Filter, colInfoMap, lastpk, rule.ConvertCharset, stats)
|
||||
tablePlan, err := buildTablePlan(tableName, rule, colInfoMap, lastpk, stats)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -183,7 +182,8 @@ func MatchTable(tableName string, filter *binlogdatapb.Filter) (*binlogdatapb.Ru
|
|||
return nil, nil
|
||||
}
|
||||
|
||||
func buildTablePlan(tableName, filter string, colInfoMap map[string][]*ColumnInfo, lastpk *sqltypes.Result, convertCharset map[string](*binlogdatapb.CharsetConversion), stats *binlogplayer.Stats) (*TablePlan, error) {
|
||||
func buildTablePlan(tableName string, rule *binlogdatapb.Rule, colInfoMap map[string][]*ColumnInfo, lastpk *sqltypes.Result, stats *binlogplayer.Stats) (*TablePlan, error) {
|
||||
filter := rule.Filter
|
||||
query := filter
|
||||
// generate equivalent select statement if filter is empty or a keyrange.
|
||||
switch {
|
||||
|
@ -206,6 +206,12 @@ func buildTablePlan(tableName, filter string, colInfoMap map[string][]*ColumnInf
|
|||
Match: fromTable,
|
||||
}
|
||||
|
||||
enumValuesMap := map[string](map[string]string){}
|
||||
for k, v := range rule.ConvertEnumToText {
|
||||
tokensMap := schema.ParseEnumTokensMap(v)
|
||||
enumValuesMap[k] = tokensMap
|
||||
}
|
||||
|
||||
if expr, ok := sel.SelectExprs[0].(*sqlparser.StarExpr); ok {
|
||||
// If it's a "select *", we return a partial plan, and complete
|
||||
// it when we get back field info from the stream.
|
||||
|
@ -221,8 +227,10 @@ func buildTablePlan(tableName, filter string, colInfoMap map[string][]*ColumnInf
|
|||
SendRule: sendRule,
|
||||
Lastpk: lastpk,
|
||||
Stats: stats,
|
||||
ConvertCharset: convertCharset,
|
||||
EnumValuesMap: enumValuesMap,
|
||||
ConvertCharset: rule.ConvertCharset,
|
||||
}
|
||||
|
||||
return tablePlan, nil
|
||||
}
|
||||
|
||||
|
@ -273,7 +281,8 @@ func buildTablePlan(tableName, filter string, colInfoMap map[string][]*ColumnInf
|
|||
|
||||
tablePlan := tpb.generate()
|
||||
tablePlan.SendRule = sendRule
|
||||
tablePlan.ConvertCharset = convertCharset
|
||||
tablePlan.EnumValuesMap = enumValuesMap
|
||||
tablePlan.ConvertCharset = rule.ConvertCharset
|
||||
return tablePlan, nil
|
||||
}
|
||||
|
||||
|
|
|
@ -218,6 +218,9 @@ func (vc *vcopier) copyTable(ctx context.Context, tableName string, copyState ma
|
|||
lastpkpb = sqltypes.ResultToProto3(lastpkqr)
|
||||
}
|
||||
|
||||
rowsCopiedTicker := time.NewTicker(rowsCopiedUpdateInterval)
|
||||
defer rowsCopiedTicker.Stop()
|
||||
|
||||
var pkfields []*querypb.Field
|
||||
var updateCopyState *sqlparser.ParsedQuery
|
||||
var bv map[string]*querypb.BindVariable
|
||||
|
@ -225,6 +228,9 @@ func (vc *vcopier) copyTable(ctx context.Context, tableName string, copyState ma
|
|||
err = vc.vr.sourceVStreamer.VStreamRows(ctx, initialPlan.SendRule.Filter, lastpkpb, func(rows *binlogdatapb.VStreamRowsResponse) error {
|
||||
for {
|
||||
select {
|
||||
case <-rowsCopiedTicker.C:
|
||||
update := binlogplayer.GenerateUpdateRowsCopied(vc.vr.id, vc.vr.stats.CopyRowCount.Get())
|
||||
_, _ = vc.vr.dbClient.Execute(update)
|
||||
case <-ctx.Done():
|
||||
return io.EOF
|
||||
default:
|
||||
|
@ -234,7 +240,6 @@ func (vc *vcopier) copyTable(ctx context.Context, tableName string, copyState ma
|
|||
break
|
||||
}
|
||||
}
|
||||
|
||||
if vc.tablePlan == nil {
|
||||
if len(rows.Fields) == 0 {
|
||||
return fmt.Errorf("expecting field event first, got: %v", rows)
|
||||
|
|
|
@ -1359,7 +1359,8 @@ func (wr *Wrangler) SetKeyspaceServedFrom(ctx context.Context, keyspace string,
|
|||
|
||||
// RefreshTabletsByShard calls RefreshState on all the tablets in a given shard.
|
||||
func (wr *Wrangler) RefreshTabletsByShard(ctx context.Context, si *topo.ShardInfo, cells []string) error {
|
||||
return topotools.RefreshTabletsByShard(ctx, wr.ts, wr.tmc, si, cells, wr.Logger())
|
||||
_, err := topotools.RefreshTabletsByShard(ctx, wr.ts, wr.tmc, si, cells, wr.Logger())
|
||||
return err
|
||||
}
|
||||
|
||||
// DeleteKeyspace will do all the necessary changes in the topology server
|
||||
|
|
|
@ -146,8 +146,19 @@ message Rule {
|
|||
// to be excluded.
|
||||
// TODO(sougou): support this on vstreamer side also.
|
||||
string filter = 2;
|
||||
// ConvertEnumToText is optional. For each enum column name, it lists the column's textual values.
// When reading the binary log, all enum values are numeric, but sometimes it
// is useful or necessary to know what the textual mappings are.
// Online DDL provides one such use case.
|
||||
|
||||
map<string, CharsetConversion> convert_charset = 3;
|
||||
// Example: key="color", value="'red','green','blue'"
|
||||
map<string, string> convert_enum_to_text = 3;
|
||||
|
||||
// ConvertCharset: optional mapping between a column name and a CharsetConversion.
// It hints to vreplication that the column is encoded from/to a non-trivial charset.
// The map is only populated when either the "from" or the "to" charset of a column
// is non-trivial; trivial charsets are the utf8 and ascii variants.
|
||||
map<string, CharsetConversion> convert_charset = 4;
|
||||
}
|
||||
|
||||
// Filter represents a list of ordered rules. The first
|
||||
|
|
|
@ -22,6 +22,9 @@ option go_package = "vitess.io/vitess/go/vt/proto/mysqlctl";
|
|||
|
||||
package mysqlctl;
|
||||
|
||||
import "topodata.proto";
|
||||
import "vttime.proto";
|
||||
|
||||
message StartRequest{
|
||||
repeated string mysqld_args = 1;
|
||||
}
|
||||
|
@ -59,4 +62,36 @@ service MysqlCtl {
|
|||
message BackupInfo {
|
||||
string name = 1;
|
||||
string directory = 2;
|
||||
|
||||
string keyspace = 3;
|
||||
string shard = 4;
|
||||
|
||||
// The following fields will be extracted from the .Name field. If an error
|
||||
// occurs during extraction/parsing, these fields may not be set, but
|
||||
// VtctldServer.GetBackups will not fail.
|
||||
|
||||
topodata.TabletAlias tablet_alias = 5;
|
||||
vttime.Time time = 6;
|
||||
|
||||
// The following fields may or may not be currently set. Work is in flight
// to fully support these fields in all backupengine/storage implementations.
|
||||
// See https://github.com/vitessio/vitess/issues/8332.
|
||||
|
||||
// Engine is the name of the backupengine implementation used to create
|
||||
// this backup.
|
||||
string engine = 7;
|
||||
Status status = 8;
|
||||
|
||||
// Status is an enum representing the possible status of a backup.
|
||||
enum Status {
|
||||
UNKNOWN = 0;
|
||||
INCOMPLETE = 1;
|
||||
COMPLETE = 2;
|
||||
// A backup status of INVALID should be set if the backup is complete
|
||||
// but unusable in some way (partial upload, corrupt file, etc).
|
||||
INVALID = 3;
|
||||
// A backup status of VALID should be set if the backup is both
|
||||
// complete and usable.
|
||||
VALID = 4;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -117,11 +117,32 @@ message Workflow {
|
|||
vttime.Time time_updated = 10;
|
||||
string message = 11;
|
||||
repeated CopyState copy_states = 12;
|
||||
repeated Log logs = 13;
|
||||
// LogFetchError is set if we fail to fetch some logs for this stream. We
|
||||
// will never fail to fetch workflows because we cannot fetch the logs, but
|
||||
// we will still forward log-fetch errors to the caller, should that be
|
||||
// relevant to the context in which they are fetching workflows.
|
||||
//
|
||||
// Note that this field being set does not necessarily mean that Logs is nil;
|
||||
// if there are N logs that exist for the stream, and we fail to fetch the
|
||||
// ith log, we will still return logs in [0, i) + (i, N].
|
||||
string log_fetch_error = 14;
|
||||
|
||||
message CopyState {
|
||||
string table = 1;
|
||||
string last_pk = 2;
|
||||
}
|
||||
|
||||
message Log {
|
||||
int64 id = 1;
|
||||
int64 stream_id = 2;
|
||||
string type = 3;
|
||||
string state = 4;
|
||||
vttime.Time created_at = 5;
|
||||
vttime.Time updated_at = 6;
|
||||
string message = 7;
|
||||
int64 count = 8;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -337,6 +358,20 @@ message FindAllShardsInKeyspaceResponse {
|
|||
message GetBackupsRequest {
|
||||
string keyspace = 1;
|
||||
string shard = 2;
|
||||
// Limit, if nonzero, will return only the N most recent backups.
|
||||
uint32 limit = 3;
|
||||
// Detailed indicates whether to use the backupengine, if supported, to
|
||||
// populate additional fields, such as Engine and Status, on BackupInfo
|
||||
// objects in the response. If not set, or if the backupengine does not
|
||||
// support populating these fields, Engine will always be empty, and Status
|
||||
// will always be UNKNOWN.
|
||||
bool detailed = 4;
|
||||
// DetailedLimit, if nonzero, will only populate additional fields (see Detailed)
|
||||
// on the N most recent backups. The Limit field still dictates the total
|
||||
// number of backup info objects returned, so, in reality, min(Limit, DetailedLimit)
|
||||
// backup infos will have additional fields set, and any remaining backups
|
||||
// will not.
|
||||
uint32 detailed_limit = 5;
|
||||
}
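As a short illustration of how the two limits interact (hypothetical values; Go field names assume the usual protoc-gen-go naming for this message):

// Ask for the 10 most recent backups, but only populate Engine/Status on the
// 3 newest of those; min(Limit, DetailedLimit) backups carry the extra detail.
req := &vtctldatapb.GetBackupsRequest{
	Keyspace:      "commerce",
	Shard:         "-",
	Limit:         10,
	Detailed:      true,
	DetailedLimit: 3,
}
_ = req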
|
||||
|
||||
message GetBackupsResponse {
|
||||
|
@ -558,6 +593,23 @@ message RebuildVSchemaGraphRequest {
|
|||
message RebuildVSchemaGraphResponse {
|
||||
}
|
||||
|
||||
message RefreshStateRequest {
|
||||
topodata.TabletAlias tablet_alias = 1;
|
||||
}
|
||||
|
||||
message RefreshStateResponse {
|
||||
}
|
||||
|
||||
message RefreshStateByShardRequest {
|
||||
string keyspace = 1;
|
||||
string shard = 2;
|
||||
repeated string cells = 3;
|
||||
}
|
||||
|
||||
message RefreshStateByShardResponse {
|
||||
bool is_partial_refresh = 1;
|
||||
}
|
||||
|
||||
message RemoveKeyspaceCellRequest {
|
||||
string keyspace = 1;
|
||||
string cell = 2;
|
||||
|
|
|
@ -136,6 +136,10 @@ service Vtctld {
|
|||
// VSchema objects in the provided cells (or all cells in the topo none
|
||||
// provided).
|
||||
rpc RebuildVSchemaGraph(vtctldata.RebuildVSchemaGraphRequest) returns (vtctldata.RebuildVSchemaGraphResponse) {};
|
||||
// RefreshState reloads the tablet record on the specified tablet.
|
||||
rpc RefreshState(vtctldata.RefreshStateRequest) returns (vtctldata.RefreshStateResponse) {};
|
||||
// RefreshStateByShard calls RefreshState on all the tablets in the given shard.
|
||||
rpc RefreshStateByShard(vtctldata.RefreshStateByShardRequest) returns (vtctldata.RefreshStateByShardResponse) {};
|
||||
// RemoveKeyspaceCell removes the specified cell from the Cells list for all
|
||||
// shards in the specified keyspace, as well as from the SrvKeyspace for that
|
||||
// keyspace in that cell.
|
||||
|
|
|
@ -24307,6 +24307,12 @@ export namespace vtctldata {
|
|||
|
||||
/** Stream copy_states */
|
||||
copy_states?: (vtctldata.Workflow.Stream.ICopyState[]|null);
|
||||
|
||||
/** Stream logs */
|
||||
logs?: (vtctldata.Workflow.Stream.ILog[]|null);
|
||||
|
||||
/** Stream log_fetch_error */
|
||||
log_fetch_error?: (string|null);
|
||||
}
|
||||
|
||||
/** Represents a Stream. */
|
||||
|
@ -24354,6 +24360,12 @@ export namespace vtctldata {
|
|||
/** Stream copy_states. */
|
||||
public copy_states: vtctldata.Workflow.Stream.ICopyState[];
|
||||
|
||||
/** Stream logs. */
|
||||
public logs: vtctldata.Workflow.Stream.ILog[];
|
||||
|
||||
/** Stream log_fetch_error. */
|
||||
public log_fetch_error: string;
|
||||
|
||||
/**
|
||||
* Creates a new Stream instance using the specified properties.
|
||||
* @param [properties] Properties to set
|
||||
|
@ -24522,6 +24534,138 @@ export namespace vtctldata {
|
|||
*/
|
||||
public toJSON(): { [k: string]: any };
|
||||
}
|
||||
|
||||
/** Properties of a Log. */
|
||||
interface ILog {
|
||||
|
||||
/** Log id */
|
||||
id?: (number|Long|null);
|
||||
|
||||
/** Log stream_id */
|
||||
stream_id?: (number|Long|null);
|
||||
|
||||
/** Log type */
|
||||
type?: (string|null);
|
||||
|
||||
/** Log state */
|
||||
state?: (string|null);
|
||||
|
||||
/** Log created_at */
|
||||
created_at?: (vttime.ITime|null);
|
||||
|
||||
/** Log updated_at */
|
||||
updated_at?: (vttime.ITime|null);
|
||||
|
||||
/** Log message */
|
||||
message?: (string|null);
|
||||
|
||||
/** Log count */
|
||||
count?: (number|Long|null);
|
||||
}
|
||||
|
||||
/** Represents a Log. */
|
||||
class Log implements ILog {
|
||||
|
||||
/**
|
||||
* Constructs a new Log.
|
||||
* @param [properties] Properties to set
|
||||
*/
|
||||
constructor(properties?: vtctldata.Workflow.Stream.ILog);
|
||||
|
||||
/** Log id. */
|
||||
public id: (number|Long);
|
||||
|
||||
/** Log stream_id. */
|
||||
public stream_id: (number|Long);
|
||||
|
||||
/** Log type. */
|
||||
public type: string;
|
||||
|
||||
/** Log state. */
|
||||
public state: string;
|
||||
|
||||
/** Log created_at. */
|
||||
public created_at?: (vttime.ITime|null);
|
||||
|
||||
/** Log updated_at. */
|
||||
public updated_at?: (vttime.ITime|null);
|
||||
|
||||
/** Log message. */
|
||||
public message: string;
|
||||
|
||||
/** Log count. */
|
||||
public count: (number|Long);
|
||||
|
||||
/**
|
||||
* Creates a new Log instance using the specified properties.
|
||||
* @param [properties] Properties to set
|
||||
* @returns Log instance
|
||||
*/
|
||||
public static create(properties?: vtctldata.Workflow.Stream.ILog): vtctldata.Workflow.Stream.Log;
|
||||
|
||||
/**
|
||||
* Encodes the specified Log message. Does not implicitly {@link vtctldata.Workflow.Stream.Log.verify|verify} messages.
|
||||
* @param message Log message or plain object to encode
|
||||
* @param [writer] Writer to encode to
|
||||
* @returns Writer
|
||||
*/
|
||||
public static encode(message: vtctldata.Workflow.Stream.ILog, writer?: $protobuf.Writer): $protobuf.Writer;
|
||||
|
||||
/**
|
||||
* Encodes the specified Log message, length delimited. Does not implicitly {@link vtctldata.Workflow.Stream.Log.verify|verify} messages.
|
||||
* @param message Log message or plain object to encode
|
||||
* @param [writer] Writer to encode to
|
||||
* @returns Writer
|
||||
*/
|
||||
public static encodeDelimited(message: vtctldata.Workflow.Stream.ILog, writer?: $protobuf.Writer): $protobuf.Writer;
|
||||
|
||||
/**
|
||||
* Decodes a Log message from the specified reader or buffer.
|
||||
* @param reader Reader or buffer to decode from
|
||||
* @param [length] Message length if known beforehand
|
||||
* @returns Log
|
||||
* @throws {Error} If the payload is not a reader or valid buffer
|
||||
* @throws {$protobuf.util.ProtocolError} If required fields are missing
|
||||
*/
|
||||
public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.Workflow.Stream.Log;
|
||||
|
||||
/**
|
||||
* Decodes a Log message from the specified reader or buffer, length delimited.
|
||||
* @param reader Reader or buffer to decode from
|
||||
* @returns Log
|
||||
* @throws {Error} If the payload is not a reader or valid buffer
|
||||
* @throws {$protobuf.util.ProtocolError} If required fields are missing
|
||||
*/
|
||||
public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.Workflow.Stream.Log;
|
||||
|
||||
/**
|
||||
* Verifies a Log message.
|
||||
* @param message Plain object to verify
|
||||
* @returns `null` if valid, otherwise the reason why it is not
|
||||
*/
|
||||
public static verify(message: { [k: string]: any }): (string|null);
|
||||
|
||||
/**
|
||||
* Creates a Log message from a plain object. Also converts values to their respective internal types.
|
||||
* @param object Plain object
|
||||
* @returns Log
|
||||
*/
|
||||
public static fromObject(object: { [k: string]: any }): vtctldata.Workflow.Stream.Log;
|
||||
|
||||
/**
|
||||
* Creates a plain object from a Log message. Also converts values to other types if specified.
|
||||
* @param message Log
|
||||
* @param [options] Conversion options
|
||||
* @returns Plain object
|
||||
*/
|
||||
public static toObject(message: vtctldata.Workflow.Stream.Log, options?: $protobuf.IConversionOptions): { [k: string]: any };
|
||||
|
||||
/**
|
||||
* Converts this Log to JSON.
|
||||
* @returns JSON object
|
||||
*/
|
||||
public toJSON(): { [k: string]: any };
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -25071,6 +25215,216 @@ export namespace vtctldata {
|
|||
public toJSON(): { [k: string]: any };
|
||||
}
|
||||
|
||||
/** Properties of an ApplyVSchemaRequest. */
|
||||
    interface IApplyVSchemaRequest {

        /** ApplyVSchemaRequest keyspace */
        keyspace?: (string|null);

        /** ApplyVSchemaRequest skip_rebuild */
        skip_rebuild?: (boolean|null);

        /** ApplyVSchemaRequest dry_run */
        dry_run?: (boolean|null);

        /** ApplyVSchemaRequest cells */
        cells?: (string[]|null);

        /** ApplyVSchemaRequest v_schema */
        v_schema?: (vschema.IKeyspace|null);

        /** ApplyVSchemaRequest sql */
        sql?: (string|null);
    }

    /** Represents an ApplyVSchemaRequest. */
    class ApplyVSchemaRequest implements IApplyVSchemaRequest {

        /**
         * Constructs a new ApplyVSchemaRequest.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IApplyVSchemaRequest);

        /** ApplyVSchemaRequest keyspace. */
        public keyspace: string;

        /** ApplyVSchemaRequest skip_rebuild. */
        public skip_rebuild: boolean;

        /** ApplyVSchemaRequest dry_run. */
        public dry_run: boolean;

        /** ApplyVSchemaRequest cells. */
        public cells: string[];

        /** ApplyVSchemaRequest v_schema. */
        public v_schema?: (vschema.IKeyspace|null);

        /** ApplyVSchemaRequest sql. */
        public sql: string;

        /**
         * Creates a new ApplyVSchemaRequest instance using the specified properties.
         * @param [properties] Properties to set
         * @returns ApplyVSchemaRequest instance
         */
        public static create(properties?: vtctldata.IApplyVSchemaRequest): vtctldata.ApplyVSchemaRequest;

        /**
         * Encodes the specified ApplyVSchemaRequest message. Does not implicitly {@link vtctldata.ApplyVSchemaRequest.verify|verify} messages.
         * @param message ApplyVSchemaRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IApplyVSchemaRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified ApplyVSchemaRequest message, length delimited. Does not implicitly {@link vtctldata.ApplyVSchemaRequest.verify|verify} messages.
         * @param message ApplyVSchemaRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IApplyVSchemaRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes an ApplyVSchemaRequest message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns ApplyVSchemaRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.ApplyVSchemaRequest;

        /**
         * Decodes an ApplyVSchemaRequest message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns ApplyVSchemaRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.ApplyVSchemaRequest;

        /**
         * Verifies an ApplyVSchemaRequest message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates an ApplyVSchemaRequest message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns ApplyVSchemaRequest
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.ApplyVSchemaRequest;

        /**
         * Creates a plain object from an ApplyVSchemaRequest message. Also converts values to other types if specified.
         * @param message ApplyVSchemaRequest
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.ApplyVSchemaRequest, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this ApplyVSchemaRequest to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }

    /** Properties of an ApplyVSchemaResponse. */
    interface IApplyVSchemaResponse {

        /** ApplyVSchemaResponse v_schema */
        v_schema?: (vschema.IKeyspace|null);
    }

    /** Represents an ApplyVSchemaResponse. */
    class ApplyVSchemaResponse implements IApplyVSchemaResponse {

        /**
         * Constructs a new ApplyVSchemaResponse.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IApplyVSchemaResponse);

        /** ApplyVSchemaResponse v_schema. */
        public v_schema?: (vschema.IKeyspace|null);

        /**
         * Creates a new ApplyVSchemaResponse instance using the specified properties.
         * @param [properties] Properties to set
         * @returns ApplyVSchemaResponse instance
         */
        public static create(properties?: vtctldata.IApplyVSchemaResponse): vtctldata.ApplyVSchemaResponse;

        /**
         * Encodes the specified ApplyVSchemaResponse message. Does not implicitly {@link vtctldata.ApplyVSchemaResponse.verify|verify} messages.
         * @param message ApplyVSchemaResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IApplyVSchemaResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified ApplyVSchemaResponse message, length delimited. Does not implicitly {@link vtctldata.ApplyVSchemaResponse.verify|verify} messages.
         * @param message ApplyVSchemaResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IApplyVSchemaResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes an ApplyVSchemaResponse message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns ApplyVSchemaResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.ApplyVSchemaResponse;

        /**
         * Decodes an ApplyVSchemaResponse message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns ApplyVSchemaResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.ApplyVSchemaResponse;

        /**
         * Verifies an ApplyVSchemaResponse message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates an ApplyVSchemaResponse message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns ApplyVSchemaResponse
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.ApplyVSchemaResponse;

        /**
         * Creates a plain object from an ApplyVSchemaResponse message. Also converts values to other types if specified.
         * @param message ApplyVSchemaResponse
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.ApplyVSchemaResponse, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this ApplyVSchemaResponse to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }
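
    // Illustrative sketch (not part of the generated definitions): assuming the
    // vtctldata namespace above is imported from the compiled protobuf module, an
    // ApplyVSchema round trip over the declared static helpers might look like
    // this. The keyspace name, cell, and VSchema payload are hypothetical values.
    //
    //   const req = vtctldata.ApplyVSchemaRequest.create({
    //       keyspace: "commerce",
    //       dry_run: true,
    //       cells: ["zone1"],
    //       v_schema: { sharded: false },
    //   });
    //   const reqBytes = vtctldata.ApplyVSchemaRequest.encode(req).finish();
    //
    //   // ...after the RPC returns respBytes (a Uint8Array):
    //   const resp = vtctldata.ApplyVSchemaResponse.decode(respBytes);
    //   console.log(vtctldata.ApplyVSchemaResponse.toObject(resp));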

    /** Properties of a ChangeTabletTypeRequest. */
    interface IChangeTabletTypeRequest {

@@ -27023,6 +27377,15 @@ export namespace vtctldata {

        /** GetBackupsRequest shard */
        shard?: (string|null);

        /** GetBackupsRequest limit */
        limit?: (number|null);

        /** GetBackupsRequest detailed */
        detailed?: (boolean|null);

        /** GetBackupsRequest detailed_limit */
        detailed_limit?: (number|null);
    }

    /** Represents a GetBackupsRequest. */

@@ -27040,6 +27403,15 @@ export namespace vtctldata {
        /** GetBackupsRequest shard. */
        public shard: string;

        /** GetBackupsRequest limit. */
        public limit: number;

        /** GetBackupsRequest detailed. */
        public detailed: boolean;

        /** GetBackupsRequest detailed_limit. */
        public detailed_limit: number;

        /**
         * Creates a new GetBackupsRequest instance using the specified properties.
         * @param [properties] Properties to set

@@ -30549,6 +30921,372 @@ export namespace vtctldata {
        public toJSON(): { [k: string]: any };
    }
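
    // Illustrative sketch (not part of the generated definitions): the new
    // GetBackupsRequest fields added in the hunks above could be populated like
    // this. The keyspace/shard values are hypothetical, and the exact server-side
    // semantics of limit/detailed/detailed_limit follow the proto definitions.
    //
    //   const req = vtctldata.GetBackupsRequest.create({
    //       keyspace: "commerce",
    //       shard: "-80",
    //       limit: 10,
    //       detailed: true,
    //       detailed_limit: 5,
    //   });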

    /** Properties of a RefreshStateRequest. */
    interface IRefreshStateRequest {

        /** RefreshStateRequest tablet_alias */
        tablet_alias?: (topodata.ITabletAlias|null);
    }

    /** Represents a RefreshStateRequest. */
    class RefreshStateRequest implements IRefreshStateRequest {

        /**
         * Constructs a new RefreshStateRequest.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IRefreshStateRequest);

        /** RefreshStateRequest tablet_alias. */
        public tablet_alias?: (topodata.ITabletAlias|null);

        /**
         * Creates a new RefreshStateRequest instance using the specified properties.
         * @param [properties] Properties to set
         * @returns RefreshStateRequest instance
         */
        public static create(properties?: vtctldata.IRefreshStateRequest): vtctldata.RefreshStateRequest;

        /**
         * Encodes the specified RefreshStateRequest message. Does not implicitly {@link vtctldata.RefreshStateRequest.verify|verify} messages.
         * @param message RefreshStateRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IRefreshStateRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified RefreshStateRequest message, length delimited. Does not implicitly {@link vtctldata.RefreshStateRequest.verify|verify} messages.
         * @param message RefreshStateRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IRefreshStateRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes a RefreshStateRequest message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns RefreshStateRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.RefreshStateRequest;

        /**
         * Decodes a RefreshStateRequest message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns RefreshStateRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.RefreshStateRequest;

        /**
         * Verifies a RefreshStateRequest message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates a RefreshStateRequest message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns RefreshStateRequest
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.RefreshStateRequest;

        /**
         * Creates a plain object from a RefreshStateRequest message. Also converts values to other types if specified.
         * @param message RefreshStateRequest
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.RefreshStateRequest, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this RefreshStateRequest to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }

    /** Properties of a RefreshStateResponse. */
    interface IRefreshStateResponse {
    }

    /** Represents a RefreshStateResponse. */
    class RefreshStateResponse implements IRefreshStateResponse {

        /**
         * Constructs a new RefreshStateResponse.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IRefreshStateResponse);

        /**
         * Creates a new RefreshStateResponse instance using the specified properties.
         * @param [properties] Properties to set
         * @returns RefreshStateResponse instance
         */
        public static create(properties?: vtctldata.IRefreshStateResponse): vtctldata.RefreshStateResponse;

        /**
         * Encodes the specified RefreshStateResponse message. Does not implicitly {@link vtctldata.RefreshStateResponse.verify|verify} messages.
         * @param message RefreshStateResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IRefreshStateResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified RefreshStateResponse message, length delimited. Does not implicitly {@link vtctldata.RefreshStateResponse.verify|verify} messages.
         * @param message RefreshStateResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IRefreshStateResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes a RefreshStateResponse message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns RefreshStateResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.RefreshStateResponse;

        /**
         * Decodes a RefreshStateResponse message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns RefreshStateResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.RefreshStateResponse;

        /**
         * Verifies a RefreshStateResponse message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates a RefreshStateResponse message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns RefreshStateResponse
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.RefreshStateResponse;

        /**
         * Creates a plain object from a RefreshStateResponse message. Also converts values to other types if specified.
         * @param message RefreshStateResponse
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.RefreshStateResponse, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this RefreshStateResponse to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }
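
    // Illustrative sketch (not part of the generated definitions): RefreshStateRequest
    // nests a topodata.TabletAlias, and fromObject converts a plain object into the
    // typed message. The cell name and uid shown here are hypothetical.
    //
    //   const req = vtctldata.RefreshStateRequest.fromObject({
    //       tablet_alias: { cell: "zone1", uid: 100 },
    //   });
    //   const bytes = vtctldata.RefreshStateRequest.encode(req).finish();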

    /** Properties of a RefreshStateByShardRequest. */
    interface IRefreshStateByShardRequest {

        /** RefreshStateByShardRequest keyspace */
        keyspace?: (string|null);

        /** RefreshStateByShardRequest shard */
        shard?: (string|null);

        /** RefreshStateByShardRequest cells */
        cells?: (string[]|null);
    }

    /** Represents a RefreshStateByShardRequest. */
    class RefreshStateByShardRequest implements IRefreshStateByShardRequest {

        /**
         * Constructs a new RefreshStateByShardRequest.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IRefreshStateByShardRequest);

        /** RefreshStateByShardRequest keyspace. */
        public keyspace: string;

        /** RefreshStateByShardRequest shard. */
        public shard: string;

        /** RefreshStateByShardRequest cells. */
        public cells: string[];

        /**
         * Creates a new RefreshStateByShardRequest instance using the specified properties.
         * @param [properties] Properties to set
         * @returns RefreshStateByShardRequest instance
         */
        public static create(properties?: vtctldata.IRefreshStateByShardRequest): vtctldata.RefreshStateByShardRequest;

        /**
         * Encodes the specified RefreshStateByShardRequest message. Does not implicitly {@link vtctldata.RefreshStateByShardRequest.verify|verify} messages.
         * @param message RefreshStateByShardRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IRefreshStateByShardRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified RefreshStateByShardRequest message, length delimited. Does not implicitly {@link vtctldata.RefreshStateByShardRequest.verify|verify} messages.
         * @param message RefreshStateByShardRequest message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IRefreshStateByShardRequest, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes a RefreshStateByShardRequest message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns RefreshStateByShardRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.RefreshStateByShardRequest;

        /**
         * Decodes a RefreshStateByShardRequest message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns RefreshStateByShardRequest
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.RefreshStateByShardRequest;

        /**
         * Verifies a RefreshStateByShardRequest message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates a RefreshStateByShardRequest message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns RefreshStateByShardRequest
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.RefreshStateByShardRequest;

        /**
         * Creates a plain object from a RefreshStateByShardRequest message. Also converts values to other types if specified.
         * @param message RefreshStateByShardRequest
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.RefreshStateByShardRequest, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this RefreshStateByShardRequest to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }

    /** Properties of a RefreshStateByShardResponse. */
    interface IRefreshStateByShardResponse {

        /** RefreshStateByShardResponse is_partial_refresh */
        is_partial_refresh?: (boolean|null);
    }

    /** Represents a RefreshStateByShardResponse. */
    class RefreshStateByShardResponse implements IRefreshStateByShardResponse {

        /**
         * Constructs a new RefreshStateByShardResponse.
         * @param [properties] Properties to set
         */
        constructor(properties?: vtctldata.IRefreshStateByShardResponse);

        /** RefreshStateByShardResponse is_partial_refresh. */
        public is_partial_refresh: boolean;

        /**
         * Creates a new RefreshStateByShardResponse instance using the specified properties.
         * @param [properties] Properties to set
         * @returns RefreshStateByShardResponse instance
         */
        public static create(properties?: vtctldata.IRefreshStateByShardResponse): vtctldata.RefreshStateByShardResponse;

        /**
         * Encodes the specified RefreshStateByShardResponse message. Does not implicitly {@link vtctldata.RefreshStateByShardResponse.verify|verify} messages.
         * @param message RefreshStateByShardResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encode(message: vtctldata.IRefreshStateByShardResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Encodes the specified RefreshStateByShardResponse message, length delimited. Does not implicitly {@link vtctldata.RefreshStateByShardResponse.verify|verify} messages.
         * @param message RefreshStateByShardResponse message or plain object to encode
         * @param [writer] Writer to encode to
         * @returns Writer
         */
        public static encodeDelimited(message: vtctldata.IRefreshStateByShardResponse, writer?: $protobuf.Writer): $protobuf.Writer;

        /**
         * Decodes a RefreshStateByShardResponse message from the specified reader or buffer.
         * @param reader Reader or buffer to decode from
         * @param [length] Message length if known beforehand
         * @returns RefreshStateByShardResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decode(reader: ($protobuf.Reader|Uint8Array), length?: number): vtctldata.RefreshStateByShardResponse;

        /**
         * Decodes a RefreshStateByShardResponse message from the specified reader or buffer, length delimited.
         * @param reader Reader or buffer to decode from
         * @returns RefreshStateByShardResponse
         * @throws {Error} If the payload is not a reader or valid buffer
         * @throws {$protobuf.util.ProtocolError} If required fields are missing
         */
        public static decodeDelimited(reader: ($protobuf.Reader|Uint8Array)): vtctldata.RefreshStateByShardResponse;

        /**
         * Verifies a RefreshStateByShardResponse message.
         * @param message Plain object to verify
         * @returns `null` if valid, otherwise the reason why it is not
         */
        public static verify(message: { [k: string]: any }): (string|null);

        /**
         * Creates a RefreshStateByShardResponse message from a plain object. Also converts values to their respective internal types.
         * @param object Plain object
         * @returns RefreshStateByShardResponse
         */
        public static fromObject(object: { [k: string]: any }): vtctldata.RefreshStateByShardResponse;

        /**
         * Creates a plain object from a RefreshStateByShardResponse message. Also converts values to other types if specified.
         * @param message RefreshStateByShardResponse
         * @param [options] Conversion options
         * @returns Plain object
         */
        public static toObject(message: vtctldata.RefreshStateByShardResponse, options?: $protobuf.IConversionOptions): { [k: string]: any };

        /**
         * Converts this RefreshStateByShardResponse to JSON.
         * @returns JSON object
         */
        public toJSON(): { [k: string]: any };
    }
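
    // Illustrative sketch (not part of the generated definitions): a refresh scoped
    // to a subset of cells, then reading the is_partial_refresh flag off the decoded
    // response. The keyspace/shard/cell names are hypothetical.
    //
    //   const req = vtctldata.RefreshStateByShardRequest.create({
    //       keyspace: "commerce",
    //       shard: "0",
    //       cells: ["zone1"],
    //   });
    //   // ...after the RPC returns respBytes (a Uint8Array):
    //   const resp = vtctldata.RefreshStateByShardResponse.decode(respBytes);
    //   if (resp.is_partial_refresh) {
    //       console.log("refresh was partial");
    //   }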

    /** Properties of a RemoveKeyspaceCellRequest. */
    interface IRemoveKeyspaceCellRequest {

@@ -35879,6 +36617,24 @@ export namespace mysqlctl {

        /** BackupInfo directory */
        directory?: (string|null);

        /** BackupInfo keyspace */
        keyspace?: (string|null);

        /** BackupInfo shard */
        shard?: (string|null);

        /** BackupInfo tablet_alias */
        tablet_alias?: (topodata.ITabletAlias|null);

        /** BackupInfo time */
        time?: (vttime.ITime|null);

        /** BackupInfo engine */
        engine?: (string|null);

        /** BackupInfo status */
        status?: (mysqlctl.BackupInfo.Status|null);
    }

    /** Represents a BackupInfo. */

@@ -35896,6 +36652,24 @@ export namespace mysqlctl {
        /** BackupInfo directory. */
        public directory: string;

        /** BackupInfo keyspace. */
        public keyspace: string;

        /** BackupInfo shard. */
        public shard: string;

        /** BackupInfo tablet_alias. */
        public tablet_alias?: (topodata.ITabletAlias|null);

        /** BackupInfo time. */
        public time?: (vttime.ITime|null);

        /** BackupInfo engine. */
        public engine: string;

        /** BackupInfo status. */
        public status: mysqlctl.BackupInfo.Status;

        /**
         * Creates a new BackupInfo instance using the specified properties.
         * @param [properties] Properties to set

@@ -35966,4 +36740,16 @@ export namespace mysqlctl {
         */
        public toJSON(): { [k: string]: any };
    }

    namespace BackupInfo {

        /** Status enum. */
        enum Status {
            UNKNOWN = 0,
            INCOMPLETE = 1,
            COMPLETE = 2,
            INVALID = 3,
            VALID = 4
        }
    }
}
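
// Illustrative sketch (not part of the generated definitions): the BackupInfo.Status
// enum added above can be compared directly against a decoded message's status field.
// The decode call follows the standard protobufjs-generated pattern; infoBytes is a
// hypothetical Uint8Array received from an RPC.
//
//   const info = mysqlctl.BackupInfo.decode(infoBytes);
//   if (info.status === mysqlctl.BackupInfo.Status.COMPLETE) {
//       console.log(`backup ${info.directory} is complete`);
//   }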

File diff not shown because it is too large.