[VEP-4, phase 1] Flag Deprecation Warnings (#9733)

* Add helper function to issue deprecation warnings when parsing flags

This package is internal and extremely temporary. After the
necessary backwards-compatibility deprecation cycle to switch to
`pflag`, it will be removed.

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Update `cmd` callsites that don't parse via `servenv` to use deprecation-aware parser

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Override `Usage` func to show flags with double-dashes in `-h` output

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Extract and rework the deprecation warning text, plus a little cleaner structure

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Fixup import ordering for `context` while I'm here

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Handle positional arguments to filter out the double-dash for backwards compatibility

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Refactor to support subflag usages and custom usage funcs

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Add ability to filter out certain flags from the Usage

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Documentation for SetUsage nonsense

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Override custom Usages, preserving intended effects

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Special consideration for v1 reshard workflow cli

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Update flags in tests

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Filter out double-dash separator when running `vtctl`/`vtctlclient` <= v13.0.0

Signed-off-by: Andrew Mason <andrew@planetscale.com>

* Add release notes for CLI deprecations with advice

Signed-off-by: Andrew Mason <andrew@planetscale.com>
This commit is contained in:
Andrew Mason 2022-03-08 17:27:31 -05:00 committed by GitHub
Parent 99216bd2bb
Commit 8fde81bc42
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
115 changed files with 1371 additions and 1015 deletions


@ -1,5 +1,57 @@
## Major Changes
## Command-line syntax deprecations
Vitess has begun a transition to a new library for CLI flag parsing.
In order to facilitate a smooth transition, certain syntaxes that will not be supported in the future now issue deprecation warnings when used.
The messages you will likely see, along with explanations and migrations, are:
### "Use of single-dash long flags is deprecated"
Single-dash usage will only be possible for short flags (e.g. `-v` is okay, but `-verbose` is not).
To migrate, update your CLI scripts from:
```
$ vttablet -tablet_alias zone1-100 -init_keyspace mykeyspace ... # old way
```
To:
```
$ vttablet --tablet_alias zone1-100 --init_keyspace mykeyspace ... # new way
```
### "Detected a dashed argument after a positional argument."
As the full deprecation text goes on to (attempt to) explain, the way mixed flags and positional arguments are handled will change in a future version, in a way that will break scripts.
Currently, when invoking a binary like
```
$ vtctl --topo_implementation etcd2 AddCellInfo --root "/vitess/global"
```
everything after `AddCellInfo` is treated by `package flag` as positional arguments, and we then use a sub-`FlagSet` to parse flags specific to the subcommand.
So, at the top level, `flag.Args()` returns `["AddCellInfo", "--root", "/vitess/global"]`.
The library we are transitioning to is more flexible, allowing flags and positional arguments to be interwoven on the command-line.
For the above example, this means that we would attempt to parse `--root` as a top-level flag for the `vtctl` binary.
This will cause the program to exit on error, because that flag is only defined on the `AddCellInfo` subcommand.
In order to transition, a standalone double-dash (literally, `--`) will cause the new flag library to treat everything following it as positional arguments; this syntax also works with the current flag-parsing code we use.
So, to transition the above example without breakage, update the command to:
```
$ vtctl --topo_implementation etcd2 AddCellInfo -- --root "/vitess/global"
$ # the following will also work
$ vtctl --topo_implementation etcd2 -- AddCellInfo --root "/vitess/global"
$ # the following will NOT work, because --topo_implementation is a top-level flag, not a sub-command flag
$ vtctl -- --topo_implementation etcd2 AddCellInfo --root "/vitess/global"
```
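The separator handling described above can be sketched in Go. This is an illustrative sketch, not the exact Vitess implementation; the `splitAtDoubleDash` helper is hypothetical, but mirrors what the internal flag package's `Args()` does with the first standalone `--`:

```go
package main

import "fmt"

// splitAtDoubleDash drops the first standalone "--" from the argument
// list: everything before it is kept as-is, and everything after it is
// appended as positional arguments.
func splitAtDoubleDash(args []string) []string {
	out := make([]string, 0, len(args))
	for i, arg := range args {
		if arg == "--" {
			// Keep everything after the separator, minus the "--" itself.
			out = append(out, args[i+1:]...)
			return out
		}
		out = append(out, arg)
	}
	return out
}

func main() {
	args := []string{"AddCellInfo", "--", "--root", "/vitess/global"}
	// The subcommand sees the same arguments it did before the separator
	// was required: [AddCellInfo --root /vitess/global]
	fmt.Println(splitAtDoubleDash(args))
}
```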
### Online DDL changes
#### ddl_strategy: 'vitess'


@ -18,13 +18,13 @@ limitations under the License.
package main
import (
"context"
"flag"
"fmt"
"io"
"os"
"time"
"context"
"vitess.io/vitess/go/cmd"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/flagutil"
@ -34,6 +34,9 @@ import (
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/mysqlctl"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -222,43 +225,45 @@ func main() {
defer exit.Recover()
defer logutil.Flush()
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
fmt.Fprintf(os.Stderr, "\nThe global optional parameters are:\n")
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, "\nThe commands are listed below. Use '%s <command> -h' for more help.\n\n", os.Args[0])
for _, cmd := range commands {
fmt.Fprintf(os.Stderr, " %s", cmd.name)
if cmd.params != "" {
fmt.Fprintf(os.Stderr, " %s", cmd.params)
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Preface: func(w io.Writer) {
fmt.Fprintf(w, "Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
fmt.Fprintf(w, "\nThe global optional parameters are:\n")
},
Epilogue: func(w io.Writer) {
fmt.Fprintf(w, "\nThe commands are listed below. Use '%s <command> -h' for more help.\n\n", os.Args[0])
for _, cmd := range commands {
fmt.Fprintf(w, " %s", cmd.name)
if cmd.params != "" {
fmt.Fprintf(w, " %s", cmd.params)
}
fmt.Fprintf(w, "\n")
}
fmt.Fprintf(os.Stderr, "\n")
}
fmt.Fprintf(os.Stderr, "\n")
}
fmt.Fprintf(w, "\n")
},
})
if cmd.IsRunningAsRoot() {
fmt.Fprintln(os.Stderr, "mysqlctl cannot be run as root. Please run as a different user")
exit.Return(1)
}
dbconfigs.RegisterFlags(dbconfigs.Dba)
flag.Parse()
_flag.Parse()
tabletAddr = netutil.JoinHostPort("localhost", int32(*port))
action := flag.Arg(0)
action := _flag.Arg(0)
for _, cmd := range commands {
if cmd.name == action {
subFlags := flag.NewFlagSet(action, flag.ExitOnError)
subFlags.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage: %s %s %s\n\n", os.Args[0], cmd.name, cmd.params)
fmt.Fprintf(os.Stderr, "%s\n\n", cmd.help)
subFlags.PrintDefaults()
}
_flag.SetUsage(subFlags, _flag.UsageOptions{
Preface: func(w io.Writer) {
fmt.Fprintf(w, "Usage: %s %s %s\n\n", os.Args[0], cmd.name, cmd.params)
fmt.Fprintf(w, "%s\n\n", cmd.help)
},
})
if err := cmd.method(subFlags, flag.Args()[1:]); err != nil {
if err := cmd.method(subFlags, _flag.Args()[1:]); err != nil {
log.Error(err)
exit.Return(1)
}


@ -19,7 +19,6 @@ package main
import (
"bufio"
"bytes"
"flag"
"fmt"
"io"
"os"
@ -28,6 +27,9 @@ import (
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/sqlparser"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -57,8 +59,8 @@ func (a stats) Less(i, j int) bool { return a[i].Count > a[j].Count }
func main() {
defer exit.Recover()
flag.Parse()
for _, filename := range flag.Args() {
_flag.Parse()
for _, filename := range _flag.Args() {
fmt.Printf("processing: %s\n", filename)
if err := processFile(filename); err != nil {
log.Errorf("processFile error: %v", err)


@ -17,17 +17,19 @@ limitations under the License.
package main
import (
"context"
"flag"
"fmt"
"os"
"context"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/topo"
"vitess.io/vitess/go/vt/topo/helpers"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -51,8 +53,8 @@ func main() {
defer exit.RecoverAll()
defer logutil.Flush()
flag.Parse()
args := flag.Args()
_flag.Parse()
args := _flag.Args()
if len(args) != 0 {
flag.Usage()
log.Exitf("topo2topo doesn't take any parameter.")


@ -25,6 +25,9 @@ import (
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/servenv"
"vitess.io/vitess/go/vt/vtaclcheck"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -38,47 +41,20 @@ var (
}
)
func usage() {
fmt.Printf("usage of vtaclcheck:\n")
for _, name := range vtaclcheckFlags {
f := flag.Lookup(name)
if f == nil {
panic("unknown flag " + name)
}
flagUsage(f)
}
}
// Cloned from the source to print out the usage for a given flag
func flagUsage(f *flag.Flag) {
s := fmt.Sprintf(" -%s", f.Name) // Two spaces before -; see next two comments.
name, usage := flag.UnquoteUsage(f)
if len(name) > 0 {
s += " " + name
}
// Boolean flags of one ASCII letter are so common we
// treat them specially, putting their usage on the same line.
if len(s) <= 4 { // space, space, '-', 'x'.
s += "\t"
} else {
// Four spaces before the tab triggers good alignment
// for both 4- and 8-space tab stops.
s += "\n \t"
}
s += usage
if name == "string" {
// put quotes on the value
s += fmt.Sprintf(" (default %q)", f.DefValue)
} else {
s += fmt.Sprintf(" (default %v)", f.DefValue)
}
fmt.Printf(s + "\n")
}
func init() {
logger := logutil.NewConsoleLogger()
flag.CommandLine.SetOutput(logutil.NewLoggerWriter(logger))
flag.Usage = usage
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
FlagFilter: func(f *flag.Flag) bool {
for _, name := range vtaclcheckFlags {
if f.Name == name {
return true
}
}
return false
},
})
}
func main() {


@ -33,6 +33,9 @@ import (
_ "vitess.io/vitess/go/vt/vtgate/grpcvtgateconn"
// Import and register the gRPC tabletconn client
_ "vitess.io/vitess/go/vt/vttablet/grpctabletconn"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
/*
@ -101,7 +104,7 @@ func main() {
defer exit.Recover()
flag.Lookup("logtostderr").Value.Set("true")
flag.Parse()
_flag.Parse()
clientProto := vtbench.MySQL
switch *protocol {


@ -23,6 +23,7 @@ import (
"errors"
"flag"
"fmt"
"io"
"math/rand"
"os"
"sort"
@ -40,6 +41,9 @@ import (
"vitess.io/vitess/go/vt/vtgate/vtgateconn"
vtrpcpb "vitess.io/vitess/go/vt/proto/vtrpc"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -76,11 +80,9 @@ var (
)
func init() {
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprint(os.Stderr, usage)
}
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Epilogue: func(w io.Writer) { fmt.Fprint(w, usage) },
})
}
type bindvars []interface{}
@ -145,8 +147,8 @@ func main() {
}
func run() (*results, error) {
flag.Parse()
args := flag.Args()
_flag.Parse()
args := _flag.Args()
if len(args) == 0 {
flag.Usage()


@ -20,6 +20,7 @@ import (
"context"
"flag"
"fmt"
"io"
"log/syslog"
"os"
"os/signal"
@ -42,6 +43,9 @@ import (
"vitess.io/vitess/go/vt/vttablet/tmclient"
"vitess.io/vitess/go/vt/workflow"
"vitess.io/vitess/go/vt/wrangler"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -53,13 +57,16 @@ var (
func init() {
logger := logutil.NewConsoleLogger()
flag.CommandLine.SetOutput(logutil.NewLoggerWriter(logger))
flag.Usage = func() {
logger.Printf("Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
logger.Printf("\nThe global optional parameters are:\n")
flag.PrintDefaults()
logger.Printf("\nThe commands are listed below, sorted by group. Use '%s <command> -h' for more help.\n\n", os.Args[0])
vtctl.PrintAllCommands(logger)
}
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Preface: func(w io.Writer) {
logger.Printf("Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
logger.Printf("\nThe global optional parameters are:\n")
},
Epilogue: func(w io.Writer) {
logger.Printf("\nThe commands are listed below, sorted by group. Use '%s <command> -h' for more help.\n\n", os.Args[0])
vtctl.PrintAllCommands(logger)
},
})
}
// signal handling, centralized here


@ -17,6 +17,7 @@ limitations under the License.
package main
import (
"context"
"errors"
"flag"
"fmt"
@ -24,8 +25,6 @@ import (
"strings"
"time"
"context"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/trace"
"vitess.io/vitess/go/vt/log"
@ -33,6 +32,9 @@ import (
"vitess.io/vitess/go/vt/vtctl/vtctlclient"
logutilpb "vitess.io/vitess/go/vt/proto/logutil"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
// The default values used by these flags cannot be taken from wrangler and
@ -68,16 +70,16 @@ func checkDeprecations(args []string) {
func main() {
defer exit.Recover()
flag.Parse()
_flag.Parse()
closer := trace.StartTracing("vtctlclient")
defer trace.LogErrorsWhenClosing(closer)
logger := logutil.NewConsoleLogger()
// We can't do much without a -server flag
// We can't do much without a --server flag
if *server == "" {
log.Error(errors.New("please specify -server <vtctld_host:vtctld_port> to specify the vtctld server to connect to"))
log.Error(errors.New("please specify --server <vtctld_host:vtctld_port> to specify the vtctld server to connect to"))
os.Exit(1)
}
@ -87,7 +89,7 @@ func main() {
checkDeprecations(flag.Args())
err := vtctlclient.RunCommandAndWait(
ctx, *server, flag.Args(),
ctx, *server, _flag.Args(),
func(e *logutilpb.Event) {
logutil.LogEvent(logger, e)
})
@ -97,7 +99,7 @@ func main() {
}
errStr := strings.Replace(err.Error(), "remote error: ", "", -1)
fmt.Printf("%s Error: %s\n", flag.Arg(0), errStr)
fmt.Printf("%s Error: %s\n", _flag.Arg(0), errStr)
log.Error(err)
os.Exit(1)
}


@ -28,6 +28,9 @@ import (
"vitess.io/vitess/go/vt/vtexplain"
querypb "vitess.io/vitess/go/vt/proto/query"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -67,47 +70,20 @@ var (
}
)
func usage() {
fmt.Printf("usage of vtexplain:\n")
for _, name := range vtexplainFlags {
f := flag.Lookup(name)
if f == nil {
panic("unknown flag " + name)
}
flagUsage(f)
}
}
// Cloned from the source to print out the usage for a given flag
func flagUsage(f *flag.Flag) {
s := fmt.Sprintf(" -%s", f.Name) // Two spaces before -; see next two comments.
name, usage := flag.UnquoteUsage(f)
if len(name) > 0 {
s += " " + name
}
// Boolean flags of one ASCII letter are so common we
// treat them specially, putting their usage on the same line.
if len(s) <= 4 { // space, space, '-', 'x'.
s += "\t"
} else {
// Four spaces before the tab triggers good alignment
// for both 4- and 8-space tab stops.
s += "\n \t"
}
s += usage
if name == "string" {
// put quotes on the value
s += fmt.Sprintf(" (default %q)", f.DefValue)
} else {
s += fmt.Sprintf(" (default %v)", f.DefValue)
}
fmt.Printf(s + "\n")
}
func init() {
logger := logutil.NewConsoleLogger()
flag.CommandLine.SetOutput(logutil.NewLoggerWriter(logger))
flag.Usage = usage
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
FlagFilter: func(f *flag.Flag) bool {
for _, name := range vtexplainFlags {
if f.Name == name {
return true
}
}
return false
},
})
}
// getFileParam returns a string containing either flag is not "",


@ -14,17 +14,19 @@ limitations under the License.
package main
import (
"context"
"flag"
"strings"
"golang.org/x/net/context"
"vitess.io/vitess/go/vt/vtgr"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
func main() {
clustersToWatch := flag.String("clusters_to_watch", "", "Comma-separated list of keyspaces or keyspace/shards that this instance will monitor and repair. Defaults to all clusters in the topology. Example: \"ks1,ks2/-80\"")
flag.Parse()
_flag.Parse()
// openTabletDiscovery will open up a connection to topo server
// and populate the tablets in memory


@ -19,12 +19,15 @@ package main
import (
"flag"
"fmt"
"os"
"io"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/tlstest"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var doc = `
@ -110,13 +113,11 @@ func cmdCreateSignedCert(subFlags *flag.FlagSet, args []string) {
func main() {
defer exit.Recover()
defer logutil.Flush()
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage of %v:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprint(os.Stderr, doc)
}
flag.Parse()
args := flag.Args()
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Preface: func(w io.Writer) { fmt.Fprint(w, doc) },
})
_flag.Parse()
args := _flag.Args()
if len(args) == 0 {
flag.Usage()
exit.Return(1)


@ -25,12 +25,12 @@ It has two modes: single command or interactive.
package main
import (
"context"
"flag"
"io"
"os"
"time"
"context"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/vt/callerid"
"vitess.io/vitess/go/vt/log"
@ -39,6 +39,9 @@ import (
"vitess.io/vitess/go/vt/topo"
"vitess.io/vitess/go/vt/vtctl/reparentutil"
"vitess.io/vitess/go/vt/worker"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -53,13 +56,16 @@ func init() {
logger := logutil.NewConsoleLogger()
flag.CommandLine.SetOutput(logutil.NewLoggerWriter(logger))
flag.Usage = func() {
logger.Printf("Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
logger.Printf("\nThe global optional parameters are:\n")
flag.PrintDefaults()
logger.Printf("\nThe commands are listed below, sorted by group. Use '%s <command> -h' for more help.\n\n", os.Args[0])
worker.PrintAllCommands(logger)
}
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Preface: func(w io.Writer) {
logger.Printf("Usage: %s [global parameters] command [command parameters]\n", os.Args[0])
logger.Printf("\nThe global optional parameters are:\n")
},
Epilogue: func(w io.Writer) {
logger.Printf("\nThe commands are listed below, sorted by group. Use '%s <command> -h' for more help.\n\n", os.Args[0])
worker.PrintAllCommands(logger)
},
})
}
var (
@ -69,8 +75,8 @@ var (
func main() {
defer exit.Recover()
flag.Parse()
args := flag.Args()
_flag.Parse()
args := _flag.Args()
servenv.Init()
defer servenv.Close()


@ -17,18 +17,20 @@ limitations under the License.
package main
import (
"context"
"flag"
"os"
"os/signal"
"syscall"
"context"
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/worker/vtworkerclient"
logutilpb "vitess.io/vitess/go/vt/proto/logutil"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -36,7 +38,7 @@ var (
)
func main() {
flag.Parse()
_flag.Parse()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -51,7 +53,7 @@ func main() {
logger := logutil.NewConsoleLogger()
err := vtworkerclient.RunCommandAndWait(
ctx, *server, flag.Args(),
ctx, *server, _flag.Args(),
func(e *logutilpb.Event) {
logutil.LogEvent(logger, e)
})


@ -19,6 +19,7 @@ package main
import (
"archive/zip"
"bytes"
"context"
"flag"
"fmt"
"io"
@ -32,8 +33,6 @@ import (
"syscall"
"time"
"context"
"github.com/z-division/go-zookeeper/zk"
"golang.org/x/term"
@ -42,6 +41,9 @@ import (
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/topo/zk2topo"
"vitess.io/vitess/go/vt/vtctl"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var doc = `
@ -137,13 +139,12 @@ var (
func main() {
defer exit.Recover()
defer logutil.Flush()
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage of %v:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprint(os.Stderr, doc)
}
flag.Parse()
args := flag.Args()
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Epilogue: func(w io.Writer) { fmt.Fprint(w, doc) },
})
_flag.Parse()
args := _flag.Args()
if len(args) == 0 {
flag.Usage()
exit.Return(1)
@ -156,6 +157,7 @@ func main() {
log.Exitf("Unknown command %v", cmdName)
}
subFlags := flag.NewFlagSet(cmdName, flag.ExitOnError)
_flag.SetUsage(subFlags, _flag.UsageOptions{})
// Create a context for the command, cancel it if we get a signal.
ctx, cancel := context.WithCancel(context.Background())


@ -21,12 +21,16 @@ import (
"bufio"
"flag"
"fmt"
"io"
"os"
"vitess.io/vitess/go/exit"
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/zkctl"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var usage = `
@ -46,11 +50,9 @@ var (
)
func init() {
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprint(os.Stderr, usage)
}
_flag.SetUsage(flag.CommandLine, _flag.UsageOptions{
Epilogue: func(w io.Writer) { fmt.Fprint(w, usage) },
})
stdin = bufio.NewReader(os.Stdin)
}
@ -58,8 +60,8 @@ func main() {
defer exit.Recover()
defer logutil.Flush()
flag.Parse()
args := flag.Args()
_flag.Parse()
args := _flag.Args()
if len(args) == 0 {
flag.Usage()
@ -69,7 +71,7 @@ func main() {
zkConfig := zkctl.MakeZkConfigFromString(*zkCfg, uint32(*myID))
zkd := zkctl.NewZkd(zkConfig)
action := flag.Arg(0)
action := _flag.Arg(0)
var err error
switch action {
case "init":


@ -29,6 +29,9 @@ import (
"vitess.io/vitess/go/vt/log"
"vitess.io/vitess/go/vt/logutil"
"vitess.io/vitess/go/vt/zkctl"
// Include deprecation warnings for soon-to-be-unsupported flag invocations.
_flag "vitess.io/vitess/go/internal/flag"
)
var (
@ -42,7 +45,7 @@ func main() {
defer exit.Recover()
defer logutil.Flush()
flag.Parse()
_flag.Parse()
zkConfig := zkctl.MakeZkConfigFromString(*zkCfg, uint32(*myID))
zkd := zkctl.NewZkd(zkConfig)

go/internal/flag/flag.go (new file, 151 lines)

@ -0,0 +1,151 @@
/*
Copyright 2022 The Vitess Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package flag is an internal package to allow us to gracefully transition
// from the standard library's flag package to pflag. See VEP-4 for details.
//
// In general, this package should not be imported or depended on, except in the
// cases of package servenv, and entrypoints in go/cmd. This package WILL be
// deleted after the migration to pflag is completed, without any support for
// compatibility.
package flag
import (
goflag "flag"
"os"
"reflect"
"strings"
"vitess.io/vitess/go/vt/log"
)
// Parse wraps the standard library's flag.Parse to perform some sanity checking
// and issue deprecation warnings in advance of our move to pflag.
//
// It also adjusts the global CommandLine's Usage func to print out flags with
// double-dashes when a user requests the help, attempting to otherwise leave
// the default Usage formatting unchanged.
//
// See VEP-4, phase 1 for details: https://github.com/vitessio/enhancements/blob/c766ea905e55409cddeb666d6073cd2ac4c9783e/veps/vep-4.md#phase-1-preparation
func Parse() {
// First, override the Usage func to make flags show in their double-dash
// forms to the user.
SetUsage(goflag.CommandLine, UsageOptions{})
// Then, parse as normal.
goflag.Parse()
// Finally, warn on deprecated flag usage.
warnOnSingleDashLongFlags(goflag.CommandLine, os.Args, log.Warningf)
warnOnMixedPositionalAndFlagArguments(goflag.Args(), log.Warningf)
}
// Args returns the positional arguments with the first double-dash ("--")
// removed. If no double-dash was specified on the command-line, this is
// equivalent to flag.Args() from the standard library flag package.
func Args() (args []string) {
doubleDashIdx := -1
for i, arg := range goflag.Args() {
if arg == "--" {
doubleDashIdx = i
break
}
args = append(args, arg)
}
if doubleDashIdx != -1 {
args = append(args, goflag.Args()[doubleDashIdx+1:]...)
}
return args
}
// Arg returns the ith command-line argument after flags have been processed,
// ignoring the first double-dash ("--") argument separator. If there is no
// i-th argument, the empty string is returned. If no double-dash was
// specified, this is equivalent to flag.Arg(i) from the standard library flag
// package.
func Arg(i int) string {
if args := Args(); len(args) > i {
return args[i]
}
return ""
}
const (
singleDashLongFlagsWarning = "Use of single-dash long flags is deprecated and will be removed in the next version of Vitess. Please use --%s instead"
mixedFlagsAndPosargsWarning = "Detected a dashed argument after a positional argument. " +
"Currently these are treated as posargs that may be parsed by a subcommand, but in the next version of Vitess they will be parsed as top-level flags, which may not be defined, causing errors. " +
"To preserve existing behavior, please update your invocation to include a \"--\" after all top-level flags to continue treating %s as a positional argument."
)
// Check and warn on any single-dash flags.
func warnOnSingleDashLongFlags(fs *goflag.FlagSet, argv []string, warningf func(msg string, args ...interface{})) {
fs.Visit(func(f *goflag.Flag) {
// Boolean flags with single-character names are okay to use the
// single-dash form. I don't _think_ we have any of these, but I'm being
// conservative here.
if bf, ok := f.Value.(maybeBoolFlag); ok && bf.IsBoolFlag() && len(f.Name) == 1 {
return
}
for _, arg := range argv {
if strings.HasPrefix(arg, "-"+f.Name) {
warningf(singleDashLongFlagsWarning, f.Name)
}
}
})
}
// Check and warn for any mixed posarg / dashed-arg on the CLI.
func warnOnMixedPositionalAndFlagArguments(posargs []string, warningf func(msg string, args ...interface{})) {
for _, arg := range posargs {
if arg == "--" {
break
}
if strings.HasPrefix(arg, "-") {
warningf(mixedFlagsAndPosargsWarning, arg)
}
}
}
// From the standard library documentation:
// > If a Value has an IsBoolFlag() bool method returning true, the
// > command-line parser makes -name equivalent to -name=true rather than
// > using the next command-line argument.
//
// This also has less-well-documented implications for the default Usage
// behavior, which is why we are duplicating it.
type maybeBoolFlag interface {
IsBoolFlag() bool
}
// isZeroValue determines whether the string represents the zero
// value for a flag.
// see https://cs.opensource.google/go/go/+/refs/tags/go1.17.7:src/flag/flag.go;l=451-465;drc=refs%2Ftags%2Fgo1.17.7
func isZeroValue(f *goflag.Flag, value string) bool {
typ := reflect.TypeOf(f.Value)
var z reflect.Value
if typ.Kind() == reflect.Ptr {
z = reflect.New(typ.Elem())
} else {
z = reflect.Zero(typ)
}
return value == z.Interface().(goflag.Value).String()
}
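The single-dash check above walks the flags actually set on a `flag.FlagSet`. As a self-contained approximation, the same idea can be expressed as a scan of raw argv (the `singleDashLongFlags` helper is hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// singleDashLongFlags returns the names of multi-character flags passed
// with a single leading dash (e.g. "-verbose" rather than "--verbose").
// Single-character flags and double-dash flags are not deprecated.
func singleDashLongFlags(argv []string) []string {
	var deprecated []string
	for _, arg := range argv {
		if arg == "--" {
			break // everything after the separator is positional
		}
		if !strings.HasPrefix(arg, "-") || strings.HasPrefix(arg, "--") {
			continue
		}
		name := strings.TrimPrefix(arg, "-")
		// Strip any "=value" suffix before measuring the name.
		if i := strings.IndexByte(name, '='); i != -1 {
			name = name[:i]
		}
		if len(name) > 1 {
			deprecated = append(deprecated, name)
		}
	}
	return deprecated
}

func main() {
	argv := []string{"-tablet_alias", "zone1-100", "--port", "15100", "-v"}
	// Only tablet_alias triggers the warning; --port and -v are fine.
	fmt.Println(singleDashLongFlags(argv))
}
```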

go/internal/flag/usage.go (new file, 116 lines)

@ -0,0 +1,116 @@
/*
Copyright 2022 The Vitess Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package flag
import (
goflag "flag"
"fmt"
"io"
"os"
"strings"
)
// UsageOptions controls the custom behavior when overriding the Usage for a
// FlagSet.
type UsageOptions struct {
// Preface determines the beginning of the help text, before flag usages
// and defaults. If this function is nil, the Usage will print "Usage of <os.Args[0]>:\n".
Preface func(w io.Writer)
// Epilogue optionally prints text after the flag usages and defaults. If
// this function is nil, the flag usage/defaults will be the end of the
// Usage text.
Epilogue func(w io.Writer)
// FlagFilter allows certain flags to be omitted from the flag usage and
// defaults. If non-nil, flags for which this function returns false are
// omitted.
FlagFilter func(f *goflag.Flag) bool
}
// SetUsage sets the Usage function for the given FlagSet according to the
// options. For VEP-4, all flags are printed in their double-dash form.
func SetUsage(fs *goflag.FlagSet, opts UsageOptions) {
flagFilter := opts.FlagFilter
if flagFilter == nil {
flagFilter = func(f *goflag.Flag) bool { return true }
}
fs.Usage = func() {
switch opts.Preface {
case nil:
fmt.Fprintf(fs.Output(), "Usage of %s:\n", os.Args[0])
default:
opts.Preface(fs.Output())
}
var buf strings.Builder
fs.VisitAll(func(f *goflag.Flag) {
if !flagFilter(f) {
return
}
defer buf.Reset()
defer func() { fmt.Fprintf(fs.Output(), "%s\n", buf.String()) }()
// See https://cs.opensource.google/go/go/+/refs/tags/go1.17.7:src/flag/flag.go;l=512;drc=refs%2Ftags%2Fgo1.17.7
// for why two leading spaces.
buf.WriteString(" ")
// We use `UnquoteUsage` to preserve the "name override"
// behavior of the standard flag package, documented here:
//
// > The listed type, here int, can be changed by placing a
// > back-quoted name in the flag's usage string; the first
// > such item in the message is taken to be a parameter name
// > to show in the message and the back quotes are stripped
// > from the message when displayed. For instance, given
// >
// > flag.String("I", "", "search `directory` for include files")
// >
// > the output will be
// >
// > -I directory
// > search directory for include files.
name, usage := goflag.UnquoteUsage(f)
// From the standard library documentation:
// > For bool flags, the type is omitted and if the flag name is
// > one byte the usage message appears on the same line.
if bf, ok := f.Value.(maybeBoolFlag); ok && bf.IsBoolFlag() && len(name) == 1 {
fmt.Fprintf(&buf, "-%s\t%s", f.Name, usage)
return
}
// First line: name, and, type or backticked name.
buf.WriteString("--")
buf.WriteString(f.Name)
if name != "" {
fmt.Fprintf(&buf, " %s", name)
}
buf.WriteString("\n\t")
// Second line: usage and optional default, if not the zero value
// for the type.
buf.WriteString(usage)
if !isZeroValue(f, f.DefValue) {
fmt.Fprintf(&buf, " (default %s)", f.DefValue)
}
})
if opts.Epilogue != nil {
opts.Epilogue(fs.Output())
}
}
}


@ -49,14 +49,14 @@ var (
dbCredentialFile string
shardName = "0"
commonTabletArg = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-serving_state_grace_period", "1s"}
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--serving_state_grace_period", "1s"}
)
// TestMainSetup sets up the basic test cluster
@ -68,7 +68,7 @@ func TestMainSetup(m *testing.M, useMysqlctld bool) {
localCluster = cluster.NewCluster(cell, hostname)
defer localCluster.Teardown()
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "-durability_policy=semi_sync")
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := localCluster.StartTopo()
if err != nil {
@ -95,8 +95,8 @@ func TestMainSetup(m *testing.M, useMysqlctld bool) {
sql = sql + initialsharding.GetPasswordUpdateSQL(localCluster)
os.WriteFile(newInitDBFile, []byte(sql), 0666)
extraArgs := []string{"-db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "-db-credentials-file", dbCredentialFile)
extraArgs := []string{"--db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "--db-credentials-file", dbCredentialFile)
// start mysql process for all replicas and primary
var mysqlProcs []*exec.Cmd
@ -193,12 +193,12 @@ func TestBackupTransformImpl(t *testing.T) {
// restart the replica with transform hook parameter
replica1.VttabletProcess.TearDown()
replica1.VttabletProcess.ExtraArgs = []string{
"-db-credentials-file", dbCredentialFile,
"-backup_storage_hook", "test_backup_transform",
"-backup_storage_compress=false",
"-restore_from_backup",
"-backup_storage_implementation", "file",
"-file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
"--db-credentials-file", dbCredentialFile,
"--backup_storage_hook", "test_backup_transform",
"--backup_storage_compress=false",
"--restore_from_backup",
"--backup_storage_implementation", "file",
"--file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
replica1.VttabletProcess.ServingStatus = "SERVING"
err := replica1.VttabletProcess.Setup()
require.Nil(t, err)
@ -243,10 +243,10 @@ func TestBackupTransformImpl(t *testing.T) {
require.Nil(t, err)
replica2.VttabletProcess.CreateDB(keyspaceName)
replica2.VttabletProcess.ExtraArgs = []string{
"-db-credentials-file", dbCredentialFile,
"-restore_from_backup",
"-backup_storage_implementation", "file",
"-file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
"--db-credentials-file", dbCredentialFile,
"--restore_from_backup",
"--backup_storage_implementation", "file",
"--file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
replica2.VttabletProcess.ServingStatus = ""
err = replica2.VttabletProcess.Setup()
require.Nil(t, err)
@ -285,11 +285,11 @@ func TestBackupTransformErrorImpl(t *testing.T) {
require.Nil(t, err)
replica1.VttabletProcess.ExtraArgs = []string{
"-db-credentials-file", dbCredentialFile,
"-backup_storage_hook", "test_backup_error",
"-restore_from_backup",
"-backup_storage_implementation", "file",
"-file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
"--db-credentials-file", dbCredentialFile,
"--backup_storage_hook", "test_backup_error",
"--restore_from_backup",
"--backup_storage_implementation", "file",
"--file_backup_storage_root", localCluster.VtctldProcess.FileBackupStorageRoot}
replica1.VttabletProcess.ServingStatus = "SERVING"
err = replica1.VttabletProcess.Setup()
require.Nil(t, err)


@ -165,9 +165,9 @@ func firstBackupTest(t *testing.T, tabletType string) {
func vtBackup(t *testing.T, initialBackup bool, restartBeforeBackup bool) {
// Take the backup using the vtbackup executable
extraArgs := []string{"-allow_first_backup", "-db-credentials-file", dbCredentialFile}
extraArgs := []string{"--allow_first_backup", "--db-credentials-file", dbCredentialFile}
if restartBeforeBackup {
extraArgs = append(extraArgs, "-restart_before_backup")
extraArgs = append(extraArgs, "--restart_before_backup")
}
log.Infof("starting backup tablet %s", time.Now())
err := localCluster.StartVtbackup(newInitDBFile, initialBackup, keyspaceName, shardName, cell, extraArgs...)
@ -183,8 +183,8 @@ func verifyBackupCount(t *testing.T, shardKsName string, expected int) []string
func listBackups(shardKsName string) ([]string, error) {
backups, err := localCluster.VtctlProcess.ExecuteCommandWithOutput(
"-backup_storage_implementation", "file",
"-file_backup_storage_root",
"--backup_storage_implementation", "file",
"--file_backup_storage_root",
path.Join(os.Getenv("VTDATAROOT"), "tmp", "backupstorage"),
"ListBackups", shardKsName,
)
@ -207,8 +207,8 @@ func removeBackups(t *testing.T) {
require.Nil(t, err)
for _, backup := range backups {
_, err := localCluster.VtctlProcess.ExecuteCommandWithOutput(
"-backup_storage_implementation", "file",
"-file_backup_storage_root",
"--backup_storage_implementation", "file",
"--file_backup_storage_root",
path.Join(os.Getenv("VTDATAROOT"), "tmp", "backupstorage"),
"RemoveBackup", shardKsName, backup,
)
@ -245,7 +245,7 @@ func restore(t *testing.T, tablet *cluster.Vttablet, tabletType string, waitForS
require.Nil(t, err)
// Start tablets
tablet.VttabletProcess.ExtraArgs = []string{"-db-credentials-file", dbCredentialFile}
tablet.VttabletProcess.ExtraArgs = []string{"--db-credentials-file", dbCredentialFile}
tablet.VttabletProcess.TabletType = tabletType
tablet.VttabletProcess.ServingStatus = waitForState
tablet.VttabletProcess.SupportsBackup = true
@ -255,7 +255,7 @@ func restore(t *testing.T, tablet *cluster.Vttablet, tabletType string, waitForS
func resetTabletDirectory(t *testing.T, tablet cluster.Vttablet, initMysql bool) {
extraArgs := []string{"-db-credentials-file", dbCredentialFile}
extraArgs := []string{"--db-credentials-file", dbCredentialFile}
tablet.MysqlctlProcess.ExtraArgs = extraArgs
// Shutdown Mysql
@ -302,7 +302,7 @@ func tearDown(t *testing.T, initMysql bool) {
resetTabletDirectory(t, tablet, initMysql)
// DeleteTablet on a primary will cause tablet to shutdown, so should only call it after tablet is already shut down
err := localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary", tablet.Alias)
err := localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary", tablet.Alias)
require.Nil(t, err)
}
}


@ -43,14 +43,14 @@ var (
shardKsName = fmt.Sprintf("%s/%s", keyspaceName, shardName)
dbCredentialFile string
commonTabletArg = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-serving_state_grace_period", "1s"}
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--serving_state_grace_period", "1s"}
)
func TestMain(m *testing.M) {
@ -61,7 +61,7 @@ func TestMain(m *testing.M) {
localCluster = cluster.NewCluster(cell, hostname)
defer localCluster.Teardown()
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "-durability_policy=semi_sync")
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := localCluster.StartTopo()
if err != nil {
@ -93,8 +93,8 @@ func TestMain(m *testing.M) {
return 1, err
}
extraArgs := []string{"-db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "-db-credentials-file", dbCredentialFile)
extraArgs := []string{"--db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "--db-credentials-file", dbCredentialFile)
primary = localCluster.NewVttabletInstance("replica", 0, "")
replica1 = localCluster.NewVttabletInstance("replica", 0, "")


@ -62,14 +62,14 @@ var (
dbCredentialFile string
shardName = "0"
commonTabletArg = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-serving_state_grace_period", "1s",
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--serving_state_grace_period", "1s",
}
vtInsertTest = `
@ -83,7 +83,7 @@ var (
// LaunchCluster : starts the cluster as per given params.
func LaunchCluster(setupType int, streamMode string, stripes int) (int, error) {
localCluster = cluster.NewCluster(cell, hostname)
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "-durability_policy=semi_sync")
localCluster.VtctldExtraArgs = append(localCluster.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := localCluster.StartTopo()
@ -114,24 +114,24 @@ func LaunchCluster(setupType int, streamMode string, stripes int) (int, error) {
return 1, err
}
extraArgs := []string{"-db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "-db-credentials-file", dbCredentialFile)
extraArgs := []string{"--db-credentials-file", dbCredentialFile}
commonTabletArg = append(commonTabletArg, "--db-credentials-file", dbCredentialFile)
// Update arguments for xtrabackup
if setupType == XtraBackup {
useXtrabackup = true
xtrabackupArgs := []string{
"-backup_engine_implementation", "xtrabackup",
fmt.Sprintf("-xtrabackup_stream_mode=%s", streamMode),
"-xtrabackup_user=vt_dba",
fmt.Sprintf("-xtrabackup_stripes=%d", stripes),
"-xtrabackup_backup_flags", fmt.Sprintf("--password=%s", dbPassword),
"--backup_engine_implementation", "xtrabackup",
fmt.Sprintf("--xtrabackup_stream_mode=%s", streamMode),
"--xtrabackup_user=vt_dba",
fmt.Sprintf("--xtrabackup_stripes=%d", stripes),
"--xtrabackup_backup_flags", fmt.Sprintf("--password=%s", dbPassword),
}
// if streamMode is xbstream, add some additional args to test other xtrabackup flags
if streamMode == "xbstream" {
xtrabackupArgs = append(xtrabackupArgs, "-xtrabackup_prepare_flags", fmt.Sprintf("--use-memory=100M")) //nolint
xtrabackupArgs = append(xtrabackupArgs, "--xtrabackup_prepare_flags", fmt.Sprintf("--use-memory=100M")) //nolint
}
commonTabletArg = append(commonTabletArg, xtrabackupArgs...)
@ -293,7 +293,7 @@ func primaryBackup(t *testing.T) {
localCluster.VerifyBackupCount(t, shardKsName, 0)
err = localCluster.VtctlclientProcess.ExecuteCommand("Backup", "-allow_primary=true", primary.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("Backup", "--", "--allow_primary=true", primary.Alias)
require.Nil(t, err)
// We'll restore this on the primary later to test restores using a backup timestamp
@ -314,7 +314,7 @@ func primaryBackup(t *testing.T) {
cluster.VerifyRowsInTablet(t, replica2, keyspaceName, 2)
cluster.VerifyLocalMetadata(t, replica2, keyspaceName, shardName, cell)
err = localCluster.VtctlclientProcess.ExecuteCommand("Backup", "-allow_primary=true", primary.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("Backup", "--", "--allow_primary=true", primary.Alias)
require.Nil(t, err)
backups = localCluster.VerifyBackupCount(t, shardKsName, 2)
@ -322,20 +322,20 @@ func primaryBackup(t *testing.T) {
// Perform PRS to demote the primary tablet (primary) so that we can do a restore there and verify we don't have the
// data from after the older/first backup
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard",
"-keyspace_shard", shardKsName,
"-new_primary", replica2.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--",
"--keyspace_shard", shardKsName,
"--new_primary", replica2.Alias)
require.Nil(t, err)
// Delete the current primary tablet (replica2) so that the original primary tablet (primary) can be restored from the
// older/first backup w/o it replicating the subsequent insert done after the first backup was taken
err = localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary=true", replica2.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary=true", replica2.Alias)
require.Nil(t, err)
err = replica2.VttabletProcess.TearDown()
require.Nil(t, err)
// Restore the older/first backup -- using the timestamp we saved -- on the original primary tablet (primary)
err = localCluster.VtctlclientProcess.ExecuteCommand("RestoreFromBackup", "-backup_timestamp", firstBackupTimestamp, primary.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("RestoreFromBackup", "--", "--backup_timestamp", firstBackupTimestamp, primary.Alias)
require.Nil(t, err)
// Re-init the shard -- making the original primary tablet (primary) primary again -- for subsequent tests
@ -378,9 +378,9 @@ func primaryReplicaSameBackup(t *testing.T) {
cluster.VerifyRowsInTablet(t, replica2, keyspaceName, 2)
// Promote replica2 to primary
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard",
"-keyspace_shard", shardKsName,
"-new_primary", replica2.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--",
"--keyspace_shard", shardKsName,
"--new_primary", replica2.Alias)
require.Nil(t, err)
// insert more data on replica2 (current primary)
@ -450,9 +450,9 @@ func testRestoreOldPrimary(t *testing.T, method restoreMethod) {
require.Nil(t, err)
// reparent to replica1
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard",
"-keyspace_shard", shardKsName,
"-new_primary", replica1.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--",
"--keyspace_shard", shardKsName,
"--new_primary", replica1.Alias)
require.Nil(t, err)
// insert more data to new primary
@ -518,12 +518,12 @@ func stopAllTablets() {
tablet.VttabletProcess.TearDown()
if tablet.MysqlctldProcess.TabletUID > 0 {
tablet.MysqlctldProcess.Stop()
localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary", tablet.Alias)
localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary", tablet.Alias)
continue
}
proc, _ := tablet.MysqlctlProcess.StopProcess()
mysqlProcs = append(mysqlProcs, proc)
localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary", tablet.Alias)
localCluster.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary", tablet.Alias)
}
for _, proc := range mysqlProcs {
proc.Wait()
@ -551,9 +551,9 @@ func terminatedRestore(t *testing.T) {
require.Nil(t, err)
// reparent to replica1
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard",
"-keyspace_shard", shardKsName,
"-new_primary", replica1.Alias)
err = localCluster.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--",
"--keyspace_shard", shardKsName,
"--new_primary", replica1.Alias)
require.Nil(t, err)
// insert more data to new primary
@ -643,9 +643,9 @@ func restoreWaitForBackup(t *testing.T, tabletType string) {
replica2.Type = tabletType
replica2.ValidateTabletRestart(t)
replicaTabletArgs := commonTabletArg
replicaTabletArgs = append(replicaTabletArgs, "-backup_engine_implementation", "fake_implementation")
replicaTabletArgs = append(replicaTabletArgs, "-wait_for_backup_interval", "1s")
replicaTabletArgs = append(replicaTabletArgs, "-init_tablet_type", tabletType)
replicaTabletArgs = append(replicaTabletArgs, "--backup_engine_implementation", "fake_implementation")
replicaTabletArgs = append(replicaTabletArgs, "--wait_for_backup_interval", "1s")
replicaTabletArgs = append(replicaTabletArgs, "--init_tablet_type", tabletType)
replica2.VttabletProcess.ExtraArgs = replicaTabletArgs
replica2.VttabletProcess.ServingStatus = ""
err := replica2.VttabletProcess.Setup()
@ -704,7 +704,7 @@ func terminateRestore(t *testing.T) {
useXtrabackup = false
}
args := append([]string{"-server", localCluster.VtctlclientProcess.Server, "-alsologtostderr"}, "RestoreFromBackup", primary.Alias)
args := append([]string{"--server", localCluster.VtctlclientProcess.Server, "--alsologtostderr"}, "RestoreFromBackup", "--", primary.Alias)
tmpProcess := exec.Command(
"vtctlclient",
args...,


@ -54,16 +54,16 @@ var (
) Engine=InnoDB
`
commonTabletArg = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-serving_state_grace_period", "1s",
"-binlog_player_protocol", "grpc",
"-enable-autocommit",
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--serving_state_grace_period", "1s",
"--binlog_player_protocol", "grpc",
"--enable-autocommit",
}
vSchema = `
{
@ -246,12 +246,12 @@ func TestAlias(t *testing.T) {
sharding.CheckSrvKeyspace(t, cell2, keyspaceName, "", 0, expectedPartitions, *localCluster)
// Adds alias so vtgate can route to replica/rdonly tablets that are not in the same cell, but same alias
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias",
"-cells", allCells,
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias", "--",
"--cells", allCells,
"region_east_coast")
require.NoError(t, err)
err = localCluster.VtctlclientProcess.ExecuteCommand("UpdateCellsAlias",
"-cells", allCells,
err = localCluster.VtctlclientProcess.ExecuteCommand("UpdateCellsAlias", "--",
"--cells", allCells,
"region_east_coast")
require.NoError(t, err)
@ -325,8 +325,8 @@ func TestAddAliasWhileVtgateUp(t *testing.T) {
testQueriesOnTabletType(t, "rdonly", vtgateInstance.GrpcPort, true)
// Adds alias so vtgate can route to replica/rdonly tablets that are not in the same cell, but same alias
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias",
"-cells", allCells,
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias", "--",
"--cells", allCells,
"region_east_coast")
require.NoError(t, err)
@ -349,9 +349,9 @@ func waitTillAllTabletsAreHealthyInVtgate(t *testing.T, vtgateInstance cluster.V
}
func testQueriesOnTabletType(t *testing.T, tabletType string, vtgateGrpcPort int, shouldFail bool) {
output, err := localCluster.VtctlProcess.ExecuteCommandWithOutput("VtGateExecute", "-json",
"-server", fmt.Sprintf("%s:%d", localCluster.Hostname, vtgateGrpcPort),
"-target", "@"+tabletType,
output, err := localCluster.VtctlProcess.ExecuteCommandWithOutput("VtGateExecute", "--", "--json",
"--server", fmt.Sprintf("%s:%d", localCluster.Hostname, vtgateGrpcPort),
"--target", "@"+tabletType,
fmt.Sprintf(`select * from %s`, tableName))
if shouldFail {
require.Error(t, err)


@ -265,3 +265,19 @@ func NewConnParams(port int, password, socketPath, keyspace string) mysql.ConnPa
return cp
}
func filterDoubleDashArgs(args []string, version int) (filtered []string) {
if version > 13 {
return args
}
for _, arg := range args {
if arg == "--" {
continue
}
filtered = append(filtered, arg)
}
return filtered
}


@ -49,13 +49,13 @@ type MysqlctlProcess struct {
// InitDb executes mysqlctl command to add cell info
func (mysqlctl *MysqlctlProcess) InitDb() (err error) {
args := []string{"-log_dir", mysqlctl.LogDirectory,
"-tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
"-mysql_port", fmt.Sprintf("%d", mysqlctl.MySQLPort),
"init",
"-init_db_sql_file", mysqlctl.InitDBFile}
args := []string{"--log_dir", mysqlctl.LogDirectory,
"--tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
"--mysql_port", fmt.Sprintf("%d", mysqlctl.MySQLPort),
"init", "--",
"--init_db_sql_file", mysqlctl.InitDBFile}
if *isCoverage {
args = append([]string{"-test.coverprofile=" + getCoveragePath("mysql-initdb.out"), "-test.v"}, args...)
args = append([]string{"--test.coverprofile=" + getCoveragePath("mysql-initdb.out"), "--test.v"}, args...)
}
tmpProcess := exec.Command(
mysqlctl.Binary,
@ -76,12 +76,12 @@ func (mysqlctl *MysqlctlProcess) Start() (err error) {
func (mysqlctl *MysqlctlProcess) StartProcess() (*exec.Cmd, error) {
tmpProcess := exec.Command(
mysqlctl.Binary,
"-log_dir", mysqlctl.LogDirectory,
"-tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
"-mysql_port", fmt.Sprintf("%d", mysqlctl.MySQLPort),
"--log_dir", mysqlctl.LogDirectory,
"--tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
"--mysql_port", fmt.Sprintf("%d", mysqlctl.MySQLPort),
)
if *isCoverage {
tmpProcess.Args = append(tmpProcess.Args, []string{"-test.coverprofile=" + getCoveragePath("mysql-start.out")}...)
tmpProcess.Args = append(tmpProcess.Args, []string{"--test.coverprofile=" + getCoveragePath("mysql-start.out")}...)
}
if len(mysqlctl.ExtraArgs) > 0 {
@ -120,8 +120,8 @@ ssl_key={{.Dir}}/server-001-key.pem
tmpProcess.Env = append(tmpProcess.Env, "VTDATAROOT="+os.Getenv("VTDATAROOT"))
}
tmpProcess.Args = append(tmpProcess.Args, "init",
"-init_db_sql_file", mysqlctl.InitDBFile)
tmpProcess.Args = append(tmpProcess.Args, "init", "--",
"--init_db_sql_file", mysqlctl.InitDBFile)
}
tmpProcess.Args = append(tmpProcess.Args, "start")
log.Infof("Starting mysqlctl with command: %v", tmpProcess.Args)
@ -171,11 +171,11 @@ func (mysqlctl *MysqlctlProcess) Stop() (err error) {
func (mysqlctl *MysqlctlProcess) StopProcess() (*exec.Cmd, error) {
tmpProcess := exec.Command(
mysqlctl.Binary,
"-log_dir", mysqlctl.LogDirectory,
"-tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
"--log_dir", mysqlctl.LogDirectory,
"--tablet_uid", fmt.Sprintf("%d", mysqlctl.TabletUID),
)
if *isCoverage {
tmpProcess.Args = append(tmpProcess.Args, []string{"-test.coverprofile=" + getCoveragePath("mysql-stop.out")}...)
tmpProcess.Args = append(tmpProcess.Args, []string{"--test.coverprofile=" + getCoveragePath("mysql-stop.out")}...)
}
if len(mysqlctl.ExtraArgs) > 0 {
tmpProcess.Args = append(tmpProcess.Args, mysqlctl.ExtraArgs...)


@ -50,10 +50,10 @@ type MysqlctldProcess struct {
func (mysqlctld *MysqlctldProcess) InitDb() (err error) {
tmpProcess := exec.Command(
mysqlctld.Binary,
"-log_dir", mysqlctld.LogDirectory,
"-tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
"-mysql_port", fmt.Sprintf("%d", mysqlctld.MySQLPort),
"-init_db_sql_file", mysqlctld.InitDBFile,
"--log_dir", mysqlctld.LogDirectory,
"--tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
"--mysql_port", fmt.Sprintf("%d", mysqlctld.MySQLPort),
"--init_db_sql_file", mysqlctld.InitDBFile,
)
return tmpProcess.Run()
}
@ -66,16 +66,16 @@ func (mysqlctld *MysqlctldProcess) Start() error {
_ = createDirectory(mysqlctld.LogDirectory, 0700)
tempProcess := exec.Command(
mysqlctld.Binary,
"-log_dir", mysqlctld.LogDirectory,
"-tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
"-mysql_port", fmt.Sprintf("%d", mysqlctld.MySQLPort),
"--log_dir", mysqlctld.LogDirectory,
"--tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
"--mysql_port", fmt.Sprintf("%d", mysqlctld.MySQLPort),
)
tempProcess.Args = append(tempProcess.Args, mysqlctld.ExtraArgs...)
if mysqlctld.InitMysql {
tempProcess.Args = append(tempProcess.Args,
"-init_db_sql_file", mysqlctld.InitDBFile)
"--init_db_sql_file", mysqlctld.InitDBFile)
}
errFile, _ := os.Create(path.Join(mysqlctld.LogDirectory, "mysqlctld-stderr.txt"))
@ -130,7 +130,7 @@ func (mysqlctld *MysqlctldProcess) Stop() error {
mysqlctld.exitSignalReceived = true
tmpProcess := exec.Command(
"mysqlctl",
"-tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
"--tablet_uid", fmt.Sprintf("%d", mysqlctld.TabletUID),
)
tmpProcess.Args = append(tmpProcess.Args, mysqlctld.ExtraArgs...)
tmpProcess.Args = append(tmpProcess.Args, "shutdown")


@ -57,25 +57,25 @@ func (vtbackup *VtbackupProcess) Setup() (err error) {
vtbackup.proc = exec.Command(
vtbackup.Binary,
"-topo_implementation", vtbackup.CommonArg.TopoImplementation,
"-topo_global_server_address", vtbackup.CommonArg.TopoGlobalAddress,
"-topo_global_root", vtbackup.CommonArg.TopoGlobalRoot,
"-log_dir", vtbackup.LogDir,
"--topo_implementation", vtbackup.CommonArg.TopoImplementation,
"--topo_global_server_address", vtbackup.CommonArg.TopoGlobalAddress,
"--topo_global_root", vtbackup.CommonArg.TopoGlobalRoot,
"--log_dir", vtbackup.LogDir,
//initDBfile is required to run vtbackup
"-mysql_port", fmt.Sprintf("%d", vtbackup.MysqlPort),
"-init_db_sql_file", vtbackup.initDBfile,
"-init_keyspace", vtbackup.Keyspace,
"-init_shard", vtbackup.Shard,
"--mysql_port", fmt.Sprintf("%d", vtbackup.MysqlPort),
"--init_db_sql_file", vtbackup.initDBfile,
"--init_keyspace", vtbackup.Keyspace,
"--init_shard", vtbackup.Shard,
//Backup Arguments are not optional
"-backup_storage_implementation", "file",
"-file_backup_storage_root",
"--backup_storage_implementation", "file",
"--file_backup_storage_root",
path.Join(os.Getenv("VTDATAROOT"), "tmp", "backupstorage"),
)
if vtbackup.initialBackup {
vtbackup.proc.Args = append(vtbackup.proc.Args, "-initial_backup")
vtbackup.proc.Args = append(vtbackup.proc.Args, "--initial_backup")
}
if vtbackup.ExtraArgs != nil {
vtbackup.proc.Args = append(vtbackup.proc.Args, vtbackup.ExtraArgs...)


@ -35,24 +35,26 @@ type VtctlProcess struct {
TopoGlobalRoot string
TopoServerAddress string
TopoRootPath string
VtctlMajorVersion int
}
// AddCellInfo executes vtctl command to add cell info
func (vtctl *VtctlProcess) AddCellInfo(Cell string) (err error) {
tmpProcess := exec.Command(
vtctl.Binary,
"-topo_implementation", vtctl.TopoImplementation,
"-topo_global_server_address", vtctl.TopoGlobalAddress,
"-topo_global_root", vtctl.TopoGlobalRoot,
"--topo_implementation", vtctl.TopoImplementation,
"--topo_global_server_address", vtctl.TopoGlobalAddress,
"--topo_global_root", vtctl.TopoGlobalRoot,
)
if *isCoverage {
tmpProcess.Args = append(tmpProcess.Args, "-test.coverprofile="+getCoveragePath("vtctl-addcell.out"))
}
tmpProcess.Args = append(tmpProcess.Args,
"AddCellInfo",
"-root", vtctl.TopoRootPath+Cell,
"-server_address", vtctl.TopoServerAddress,
"AddCellInfo", "--",
"--root", vtctl.TopoRootPath+Cell,
"--server_address", vtctl.TopoServerAddress,
Cell)
tmpProcess.Args = filterDoubleDashArgs(tmpProcess.Args, vtctl.VtctlMajorVersion)
log.Infof("Adding CellInfo for cell %v with command: %v", Cell, strings.Join(tmpProcess.Args, " "))
return tmpProcess.Run()
}
@ -69,17 +71,17 @@ func (vtctl *VtctlProcess) CreateKeyspace(keyspace string) (err error) {
// ExecuteCommandWithOutput executes any vtctlclient command and returns output
func (vtctl *VtctlProcess) ExecuteCommandWithOutput(args ...string) (result string, err error) {
args = append([]string{
"-log_dir", vtctl.LogDir,
"-enable_queries",
"-topo_implementation", vtctl.TopoImplementation,
"-topo_global_server_address", vtctl.TopoGlobalAddress,
"-topo_global_root", vtctl.TopoGlobalRoot}, args...)
"--log_dir", vtctl.LogDir,
"--enable_queries",
"--topo_implementation", vtctl.TopoImplementation,
"--topo_global_server_address", vtctl.TopoGlobalAddress,
"--topo_global_root", vtctl.TopoGlobalRoot}, args...)
if *isCoverage {
args = append([]string{"-test.coverprofile=" + getCoveragePath("vtctl-o-"+args[0]+".out"), "-test.v"}, args...)
args = append([]string{"--test.coverprofile=" + getCoveragePath("vtctl-o-"+args[0]+".out"), "--test.v"}, args...)
}
tmpProcess := exec.Command(
vtctl.Binary,
args...,
filterDoubleDashArgs(args, vtctl.VtctlMajorVersion)...,
)
log.Info(fmt.Sprintf("Executing vtctlclient with arguments %v", strings.Join(tmpProcess.Args, " ")))
resultByte, err := tmpProcess.CombinedOutput()
@ -89,16 +91,16 @@ func (vtctl *VtctlProcess) ExecuteCommandWithOutput(args ...string) (result stri
// ExecuteCommand executes any vtctlclient command
func (vtctl *VtctlProcess) ExecuteCommand(args ...string) (err error) {
args = append([]string{
"-enable_queries",
"-topo_implementation", vtctl.TopoImplementation,
"-topo_global_server_address", vtctl.TopoGlobalAddress,
"-topo_global_root", vtctl.TopoGlobalRoot}, args...)
"--enable_queries",
"--topo_implementation", vtctl.TopoImplementation,
"--topo_global_server_address", vtctl.TopoGlobalAddress,
"--topo_global_root", vtctl.TopoGlobalRoot}, args...)
if *isCoverage {
args = append([]string{"-test.coverprofile=" + getCoveragePath("vtctl-"+args[0]+".out"), "-test.v"}, args...)
args = append([]string{"--test.coverprofile=" + getCoveragePath("vtctl-"+args[0]+".out"), "--test.v"}, args...)
}
tmpProcess := exec.Command(
vtctl.Binary,
args...,
filterDoubleDashArgs(args, vtctl.VtctlMajorVersion)...,
)
log.Info(fmt.Sprintf("Executing vtctlclient with arguments %v", strings.Join(tmpProcess.Args, " ")))
return tmpProcess.Run()
@ -125,6 +127,11 @@ func VtctlProcessInstance(topoPort int, hostname string) *VtctlProcess {
topoRootPath = ""
}
version, err := getMajorVersion("vtctl")
if err != nil {
log.Warningf("failed to get major vtctl version; interop with CLI changes for VEP-4 may not work: %s", err)
}
vtctl := &VtctlProcess{
Name: "vtctl",
Binary: "vtctl",
@ -133,6 +140,7 @@ func VtctlProcessInstance(topoPort int, hostname string) *VtctlProcess {
TopoGlobalRoot: topoGlobalRoot,
TopoServerAddress: fmt.Sprintf("%s:%d", hostname, topoPort),
TopoRootPath: topoRootPath,
VtctlMajorVersion: version,
}
return vtctl
}


@ -29,11 +29,12 @@ import (
// VtctlClientProcess is a generic handle for a running vtctlclient command .
// It can be spawned manually
type VtctlClientProcess struct {
Name string
Binary string
Server string
TempDirectory string
ZoneName string
Name string
Binary string
Server string
TempDirectory string
ZoneName string
VtctlClientMajorVersion int
}
// VtctlClientParams encapsulated params to provide if non-default
@ -48,8 +49,8 @@ type VtctlClientParams struct {
// InitShardPrimary executes vtctlclient command to make specified tablet the primary for the shard.
func (vtctlclient *VtctlClientProcess) InitShardPrimary(Keyspace string, Shard string, Cell string, TabletUID int) (err error) {
output, err := vtctlclient.ExecuteCommandWithOutput(
"InitShardPrimary",
"-force", "-wait_replicas_timeout", "31s",
"InitShardPrimary", "--",
"--force", "--wait_replicas_timeout", "31s",
fmt.Sprintf("%s/%s", Keyspace, Shard),
fmt.Sprintf("%s-%d", Cell, TabletUID))
if err != nil {
@ -61,10 +62,10 @@ func (vtctlclient *VtctlClientProcess) InitShardPrimary(Keyspace string, Shard s
// InitializeShard executes vtctlclient command to make specified tablet the primary for the shard.
func (vtctlclient *VtctlClientProcess) InitializeShard(Keyspace string, Shard string, Cell string, TabletUID int) (err error) {
output, err := vtctlclient.ExecuteCommandWithOutput(
"PlannedReparentShard",
"-keyspace_shard", fmt.Sprintf("%s/%s", Keyspace, Shard),
"-wait_replicas_timeout", "31s",
"-new_primary", fmt.Sprintf("%s-%d", Cell, TabletUID))
"PlannedReparentShard", "--",
"--keyspace_shard", fmt.Sprintf("%s/%s", Keyspace, Shard),
"--wait_replicas_timeout", "31s",
"--new_primary", fmt.Sprintf("%s-%d", Cell, TabletUID))
if err != nil {
log.Errorf("error in PlannedReparentShard output %s, err %s", output, err.Error())
}
@ -74,24 +75,24 @@ func (vtctlclient *VtctlClientProcess) InitializeShard(Keyspace string, Shard st
// ApplySchemaWithOutput applies SQL schema to the keyspace
func (vtctlclient *VtctlClientProcess) ApplySchemaWithOutput(Keyspace string, SQL string, params VtctlClientParams) (result string, err error) {
args := []string{
"ApplySchema",
"-sql", SQL,
"ApplySchema", "--",
"--sql", SQL,
}
if params.MigrationContext != "" {
args = append(args, "-migration_context", params.MigrationContext)
args = append(args, "--migration_context", params.MigrationContext)
}
if params.DDLStrategy != "" {
args = append(args, "-ddl_strategy", params.DDLStrategy)
args = append(args, "--ddl_strategy", params.DDLStrategy)
}
if params.UUIDList != "" {
args = append(args, "-uuid_list", params.UUIDList)
args = append(args, "--uuid_list", params.UUIDList)
}
if params.SkipPreflight {
args = append(args, "-skip_preflight")
args = append(args, "--skip_preflight")
}
if params.CallerId != "" {
args = append(args, "-caller_id", params.CallerId)
args = append(args, "--caller_id", params.CallerId)
}
args = append(args, Keyspace)
return vtctlclient.ExecuteCommandWithOutput(args...)
@@ -107,15 +108,15 @@ func (vtctlclient *VtctlClientProcess) ApplySchema(Keyspace string, SQL string)
// ApplyVSchema applies vitess schema (JSON format) to the keyspace
func (vtctlclient *VtctlClientProcess) ApplyVSchema(Keyspace string, JSON string) (err error) {
return vtctlclient.ExecuteCommand(
"ApplyVSchema",
"-vschema", JSON,
"ApplyVSchema", "--",
"--vschema", JSON,
Keyspace,
)
}
// ApplyRoutingRules does it
func (vtctlclient *VtctlClientProcess) ApplyRoutingRules(JSON string) (err error) {
return vtctlclient.ExecuteCommand("ApplyRoutingRules", "-rules", JSON)
return vtctlclient.ExecuteCommand("ApplyRoutingRules", "--", "--rules", JSON)
}
// OnlineDDLShowRecent responds with recent schema migration list
@@ -189,14 +190,14 @@ func (vtctlclient *VtctlClientProcess) ExecuteCommand(args ...string) (err error
// ExecuteCommandWithOutput executes any vtctlclient command and returns output
func (vtctlclient *VtctlClientProcess) ExecuteCommandWithOutput(args ...string) (result string, err error) {
pArgs := []string{"-server", vtctlclient.Server}
pArgs := []string{"--server", vtctlclient.Server}
if *isCoverage {
pArgs = append(pArgs, "-test.coverprofile="+getCoveragePath("vtctlclient-"+args[0]+".out"), "-test.v")
pArgs = append(pArgs, "--test.coverprofile="+getCoveragePath("vtctlclient-"+args[0]+".out"), "--test.v")
}
pArgs = append(pArgs, args...)
tmpProcess := exec.Command(
vtctlclient.Binary,
pArgs...,
filterDoubleDashArgs(pArgs, vtctlclient.VtctlClientMajorVersion)...,
)
log.Infof("Executing vtctlclient with command: %v", strings.Join(tmpProcess.Args, " "))
resultByte, err := tmpProcess.CombinedOutput()
@@ -206,11 +207,17 @@ func (vtctlclient *VtctlClientProcess) ExecuteCommandWithOutput(args ...string)
// VtctlClientProcessInstance returns a VtctlProcess handle for vtctlclient process
// configured with the given Config.
func VtctlClientProcessInstance(hostname string, grpcPort int, tmpDirectory string) *VtctlClientProcess {
version, err := getMajorVersion("vtctl") // `vtctlclient` does not have a --version flag, so we assume both vtctl/vtctlclient have the same version
if err != nil {
log.Warningf("failed to get major vtctlclient version; interop with CLI changes for VEP-4 may not work: %s", err)
}
vtctlclient := &VtctlClientProcess{
Name: "vtctlclient",
Binary: "vtctlclient",
Server: fmt.Sprintf("%s:%d", hostname, grpcPort),
TempDirectory: tmpDirectory,
Name: "vtctlclient",
Binary: "vtctlclient",
Server: fmt.Sprintf("%s:%d", hostname, grpcPort),
TempDirectory: tmpDirectory,
VtctlClientMajorVersion: version,
}
return vtctlclient
}
@@ -221,15 +228,15 @@ func (vtctlclient *VtctlClientProcess) InitTablet(tablet *Vttablet, cell string,
if tablet.Type == "rdonly" {
tabletType = "rdonly"
}
args := []string{"InitTablet", "-hostname", hostname,
"-port", fmt.Sprintf("%d", tablet.HTTPPort), "-allow_update", "-parent",
"-keyspace", keyspaceName,
"-shard", shardName}
args := []string{"InitTablet", "--", "--hostname", hostname,
"--port", fmt.Sprintf("%d", tablet.HTTPPort), "--allow_update", "--parent",
"--keyspace", keyspaceName,
"--shard", shardName}
if tablet.MySQLPort > 0 {
args = append(args, "-mysql_port", fmt.Sprintf("%d", tablet.MySQLPort))
args = append(args, "--mysql_port", fmt.Sprintf("%d", tablet.MySQLPort))
}
if tablet.GrpcPort > 0 {
args = append(args, "-grpc_port", fmt.Sprintf("%d", tablet.GrpcPort))
args = append(args, "--grpc_port", fmt.Sprintf("%d", tablet.GrpcPort))
}
args = append(args, fmt.Sprintf("%s-%010d", cell, tablet.TabletUID), tabletType)
return vtctlclient.ExecuteCommand(args...)
@@ -54,25 +54,25 @@ func (vtctld *VtctldProcess) Setup(cell string, extraArgs ...string) (err error)
_ = createDirectory(path.Join(vtctld.Directory, "backups"), 0700)
vtctld.proc = exec.Command(
vtctld.Binary,
"-enable_queries",
"-topo_implementation", vtctld.CommonArg.TopoImplementation,
"-topo_global_server_address", vtctld.CommonArg.TopoGlobalAddress,
"-topo_global_root", vtctld.CommonArg.TopoGlobalRoot,
"-cell", cell,
"-workflow_manager_init",
"-workflow_manager_use_election",
"-service_map", vtctld.ServiceMap,
"-backup_storage_implementation", vtctld.BackupStorageImplementation,
"-file_backup_storage_root", vtctld.FileBackupStorageRoot,
"--enable_queries",
"--topo_implementation", vtctld.CommonArg.TopoImplementation,
"--topo_global_server_address", vtctld.CommonArg.TopoGlobalAddress,
"--topo_global_root", vtctld.CommonArg.TopoGlobalRoot,
"--cell", cell,
"--workflow_manager_init",
"--workflow_manager_use_election",
"--service_map", vtctld.ServiceMap,
"--backup_storage_implementation", vtctld.BackupStorageImplementation,
"--file_backup_storage_root", vtctld.FileBackupStorageRoot,
// hard-code these two soon-to-be deprecated drain values.
"-wait_for_drain_sleep_rdonly", "1s",
"-wait_for_drain_sleep_replica", "1s",
"-log_dir", vtctld.LogDir,
"-port", fmt.Sprintf("%d", vtctld.Port),
"-grpc_port", fmt.Sprintf("%d", vtctld.GrpcPort),
"--wait_for_drain_sleep_rdonly", "1s",
"--wait_for_drain_sleep_replica", "1s",
"--log_dir", vtctld.LogDir,
"--port", fmt.Sprintf("%d", vtctld.Port),
"--grpc_port", fmt.Sprintf("%d", vtctld.GrpcPort),
)
if *isCoverage {
vtctld.proc.Args = append(vtctld.proc.Args, "-test.coverprofile="+getCoveragePath("vtctld.out"))
vtctld.proc.Args = append(vtctld.proc.Args, "--test.coverprofile="+getCoveragePath("vtctld.out"))
}
vtctld.proc.Args = append(vtctld.proc.Args, extraArgs...)
@@ -71,34 +71,34 @@ const defaultVtGatePlannerVersion = planbuilder.Gen4CompareV3
func (vtgate *VtgateProcess) Setup() (err error) {
args := []string{
"-topo_implementation", vtgate.CommonArg.TopoImplementation,
"-topo_global_server_address", vtgate.CommonArg.TopoGlobalAddress,
"-topo_global_root", vtgate.CommonArg.TopoGlobalRoot,
"-log_dir", vtgate.LogDir,
"-log_queries_to_file", vtgate.FileToLogQueries,
"-port", fmt.Sprintf("%d", vtgate.Port),
"-grpc_port", fmt.Sprintf("%d", vtgate.GrpcPort),
"-mysql_server_port", fmt.Sprintf("%d", vtgate.MySQLServerPort),
"-mysql_server_socket_path", vtgate.MySQLServerSocketPath,
"-cell", vtgate.Cell,
"-cells_to_watch", vtgate.CellsToWatch,
"-tablet_types_to_wait", vtgate.TabletTypesToWait,
"-gateway_implementation", vtgate.GatewayImplementation,
"-service_map", vtgate.ServiceMap,
"-mysql_auth_server_impl", vtgate.MySQLAuthServerImpl,
"--topo_implementation", vtgate.CommonArg.TopoImplementation,
"--topo_global_server_address", vtgate.CommonArg.TopoGlobalAddress,
"--topo_global_root", vtgate.CommonArg.TopoGlobalRoot,
"--log_dir", vtgate.LogDir,
"--log_queries_to_file", vtgate.FileToLogQueries,
"--port", fmt.Sprintf("%d", vtgate.Port),
"--grpc_port", fmt.Sprintf("%d", vtgate.GrpcPort),
"--mysql_server_port", fmt.Sprintf("%d", vtgate.MySQLServerPort),
"--mysql_server_socket_path", vtgate.MySQLServerSocketPath,
"--cell", vtgate.Cell,
"--cells_to_watch", vtgate.CellsToWatch,
"--tablet_types_to_wait", vtgate.TabletTypesToWait,
"--gateway_implementation", vtgate.GatewayImplementation,
"--service_map", vtgate.ServiceMap,
"--mysql_auth_server_impl", vtgate.MySQLAuthServerImpl,
}
if vtgate.PlannerVersion > 0 {
args = append(args, "-planner_version", vtgate.PlannerVersion.String())
args = append(args, "--planner_version", vtgate.PlannerVersion.String())
}
if vtgate.SysVarSetEnabled {
args = append(args, "-enable_system_settings")
args = append(args, "--enable_system_settings")
}
vtgate.proc = exec.Command(
vtgate.Binary,
args...,
)
if *isCoverage {
vtgate.proc.Args = append(vtgate.proc.Args, "-test.coverprofile="+getCoveragePath("vtgate.out"))
vtgate.proc.Args = append(vtgate.proc.Args, "--test.coverprofile="+getCoveragePath("vtgate.out"))
}
vtgate.proc.Args = append(vtgate.proc.Args, vtgate.ExtraArgs...)
@@ -50,18 +50,18 @@ func (vtgr *VtgrProcess) Start(alias string) (err error) {
*/
vtgr.proc = exec.Command(
vtgr.Binary,
"-topo_implementation", vtgr.TopoImplementation,
"-topo_global_server_address", vtgr.TopoGlobalAddress,
"-topo_global_root", vtgr.TopoGlobalRoot,
"-tablet_manager_protocol", "grpc",
"-scan_repair_timeout", "50s",
"-clusters_to_watch", strings.Join(vtgr.clusters, ","),
"--topo_implementation", vtgr.TopoImplementation,
"--topo_global_server_address", vtgr.TopoGlobalAddress,
"--topo_global_root", vtgr.TopoGlobalRoot,
"--tablet_manager_protocol", "grpc",
"--scan_repair_timeout", "50s",
"--clusters_to_watch", strings.Join(vtgr.clusters, ","),
)
if vtgr.config != "" {
vtgr.proc.Args = append(vtgr.proc.Args, fmt.Sprintf("-config=%s", vtgr.config))
vtgr.proc.Args = append(vtgr.proc.Args, fmt.Sprintf("--config=%s", vtgr.config))
}
if vtgr.grPort != 0 {
vtgr.proc.Args = append(vtgr.proc.Args, fmt.Sprintf("-gr_port=%d", vtgr.grPort))
vtgr.proc.Args = append(vtgr.proc.Args, fmt.Sprintf("--gr_port=%d", vtgr.grPort))
}
vtgr.proc.Args = append(vtgr.proc.Args, vtgr.ExtraArgs...)
errFile, _ := os.Create(path.Join(vtgr.LogDir, fmt.Sprintf("vtgr-stderr-%v.txt", alias)))
@@ -49,18 +49,18 @@ func (orc *VtorcProcess) Setup() (err error) {
*/
orc.proc = exec.Command(
orc.Binary,
"-topo_implementation", orc.TopoImplementation,
"-topo_global_server_address", orc.TopoGlobalAddress,
"-topo_global_root", orc.TopoGlobalRoot,
"-config", orc.Config,
"-orc_web_dir", path.Join(os.Getenv("VTROOT"), "web", "orchestrator"),
"--topo_implementation", orc.TopoImplementation,
"--topo_global_server_address", orc.TopoGlobalAddress,
"--topo_global_root", orc.TopoGlobalRoot,
"--config", orc.Config,
"--orc_web_dir", path.Join(os.Getenv("VTROOT"), "web", "orchestrator"),
)
if *isCoverage {
orc.proc.Args = append(orc.proc.Args, "-test.coverprofile="+getCoveragePath("orc.out"))
orc.proc.Args = append(orc.proc.Args, "--test.coverprofile="+getCoveragePath("orc.out"))
}
orc.proc.Args = append(orc.proc.Args, orc.ExtraArgs...)
orc.proc.Args = append(orc.proc.Args, "-alsologtostderr", "http")
orc.proc.Args = append(orc.proc.Args, "--alsologtostderr", "http")
errFile, _ := os.Create(path.Join(orc.LogDir, fmt.Sprintf("orc-stderr-%d.txt", time.Now().UnixNano())))
orc.proc.Stderr = errFile
@@ -85,43 +85,43 @@ func (vttablet *VttabletProcess) Setup() (err error) {
vttablet.proc = exec.Command(
vttablet.Binary,
"-topo_implementation", vttablet.CommonArg.TopoImplementation,
"-topo_global_server_address", vttablet.CommonArg.TopoGlobalAddress,
"-topo_global_root", vttablet.CommonArg.TopoGlobalRoot,
"-log_queries_to_file", vttablet.FileToLogQueries,
"-tablet-path", vttablet.TabletPath,
"-port", fmt.Sprintf("%d", vttablet.Port),
"-grpc_port", fmt.Sprintf("%d", vttablet.GrpcPort),
"-init_shard", vttablet.Shard,
"-log_dir", vttablet.LogDir,
"-tablet_hostname", vttablet.TabletHostname,
"-init_keyspace", vttablet.Keyspace,
"-init_tablet_type", vttablet.TabletType,
"-health_check_interval", fmt.Sprintf("%ds", vttablet.HealthCheckInterval),
"-enable_replication_reporter",
"-backup_storage_implementation", vttablet.BackupStorageImplementation,
"-file_backup_storage_root", vttablet.FileBackupStorageRoot,
"-service_map", vttablet.ServiceMap,
"-vtctld_addr", vttablet.VtctldAddress,
"-vtctld_addr", vttablet.VtctldAddress,
"-vreplication_tablet_type", vttablet.VreplicationTabletType,
"-db_charset", vttablet.Charset,
"--topo_implementation", vttablet.CommonArg.TopoImplementation,
"--topo_global_server_address", vttablet.CommonArg.TopoGlobalAddress,
"--topo_global_root", vttablet.CommonArg.TopoGlobalRoot,
"--log_queries_to_file", vttablet.FileToLogQueries,
"--tablet-path", vttablet.TabletPath,
"--port", fmt.Sprintf("%d", vttablet.Port),
"--grpc_port", fmt.Sprintf("%d", vttablet.GrpcPort),
"--init_shard", vttablet.Shard,
"--log_dir", vttablet.LogDir,
"--tablet_hostname", vttablet.TabletHostname,
"--init_keyspace", vttablet.Keyspace,
"--init_tablet_type", vttablet.TabletType,
"--health_check_interval", fmt.Sprintf("%ds", vttablet.HealthCheckInterval),
"--enable_replication_reporter",
"--backup_storage_implementation", vttablet.BackupStorageImplementation,
"--file_backup_storage_root", vttablet.FileBackupStorageRoot,
"--service_map", vttablet.ServiceMap,
"--vtctld_addr", vttablet.VtctldAddress,
"--vtctld_addr", vttablet.VtctldAddress,
"--vreplication_tablet_type", vttablet.VreplicationTabletType,
"--db_charset", vttablet.Charset,
)
if *isCoverage {
vttablet.proc.Args = append(vttablet.proc.Args, "-test.coverprofile="+getCoveragePath("vttablet.out"))
vttablet.proc.Args = append(vttablet.proc.Args, "--test.coverprofile="+getCoveragePath("vttablet.out"))
}
if *PerfTest {
vttablet.proc.Args = append(vttablet.proc.Args, "-pprof", fmt.Sprintf("cpu,waitSig,path=vttablet_pprof_%s", vttablet.Name))
vttablet.proc.Args = append(vttablet.proc.Args, "--pprof", fmt.Sprintf("cpu,waitSig,path=vttablet_pprof_%s", vttablet.Name))
}
if vttablet.SupportsBackup {
vttablet.proc.Args = append(vttablet.proc.Args, "-restore_from_backup")
vttablet.proc.Args = append(vttablet.proc.Args, "--restore_from_backup")
}
if vttablet.EnableSemiSync {
vttablet.proc.Args = append(vttablet.proc.Args, "-enable_semi_sync")
vttablet.proc.Args = append(vttablet.proc.Args, "--enable_semi_sync")
}
if vttablet.DbFlavor != "" {
vttablet.proc.Args = append(vttablet.proc.Args, fmt.Sprintf("-db_flavor=%s", vttablet.DbFlavor))
vttablet.proc.Args = append(vttablet.proc.Args, fmt.Sprintf("--db_flavor=%s", vttablet.DbFlavor))
}
vttablet.proc.Args = append(vttablet.proc.Args, vttablet.ExtraArgs...)
@@ -57,21 +57,21 @@ func (vtworker *VtworkerProcess) Setup(cell string) (err error) {
vtworker.proc = exec.Command(
vtworker.Binary,
"-log_dir", vtworker.LogDir,
"-port", fmt.Sprintf("%d", vtworker.Port),
"-executefetch_retry_time", vtworker.ExecuteRetryTime,
"-tablet_manager_protocol", "grpc",
"-tablet_protocol", "grpc",
"-topo_implementation", vtworker.CommonArg.TopoImplementation,
"-topo_global_server_address", vtworker.CommonArg.TopoGlobalAddress,
"-topo_global_root", vtworker.CommonArg.TopoGlobalRoot,
"-service_map", vtworker.ServiceMap,
"-grpc_port", fmt.Sprintf("%d", vtworker.GrpcPort),
"-cell", cell,
"-command_display_interval", "10ms",
"--log_dir", vtworker.LogDir,
"--port", fmt.Sprintf("%d", vtworker.Port),
"--executefetch_retry_time", vtworker.ExecuteRetryTime,
"--tablet_manager_protocol", "grpc",
"--tablet_protocol", "grpc",
"--topo_implementation", vtworker.CommonArg.TopoImplementation,
"--topo_global_server_address", vtworker.CommonArg.TopoGlobalAddress,
"--topo_global_root", vtworker.CommonArg.TopoGlobalRoot,
"--service_map", vtworker.ServiceMap,
"--grpc_port", fmt.Sprintf("%d", vtworker.GrpcPort),
"--cell", cell,
"--command_display_interval", "10ms",
)
if *isCoverage {
vtworker.proc.Args = append(vtworker.proc.Args, "-test.coverprofile=vtworker.out", "-test.v")
vtworker.proc.Args = append(vtworker.proc.Args, "--test.coverprofile=vtworker.out", "--test.v")
}
vtworker.proc.Args = append(vtworker.proc.Args, vtworker.ExtraArgs...)
@@ -143,10 +143,10 @@ func (vtworker *VtworkerProcess) TearDown() error {
// ExecuteCommand executes any vtworker command
func (vtworker *VtworkerProcess) ExecuteCommand(args ...string) (err error) {
args = append([]string{"-vtworker_client_protocol", "grpc",
"-server", vtworker.Server, "-log_dir", vtworker.LogDir, "-stderrthreshold", "info"}, args...)
args = append([]string{"--vtworker_client_protocol", "grpc",
"--server", vtworker.Server, "--log_dir", vtworker.LogDir, "--stderrthreshold", "info"}, args...)
if *isCoverage {
args = append([]string{"-test.coverprofile=" + getCoveragePath("vtworkerclient-exec-cmd.out")}, args...)
args = append([]string{"--test.coverprofile=" + getCoveragePath("vtworkerclient-exec-cmd.out")}, args...)
}
tmpProcess := exec.Command(
"vtworkerclient",
@@ -157,8 +157,8 @@ func (vtworker *VtworkerProcess) ExecuteCommand(args ...string) (err error) {
}
func (vtworker *VtworkerProcess) ExecuteCommandInBg(args ...string) (*exec.Cmd, error) {
args = append([]string{"-vtworker_client_protocol", "grpc",
"-server", vtworker.Server, "-log_dir", vtworker.LogDir, "-stderrthreshold", "info"}, args...)
args = append([]string{"--vtworker_client_protocol", "grpc",
"--server", vtworker.Server, "--log_dir", vtworker.LogDir, "--stderrthreshold", "info"}, args...)
tmpProcess := exec.Command(
"vtworkerclient",
args...,
@@ -170,19 +170,19 @@ func (vtworker *VtworkerProcess) ExecuteCommandInBg(args ...string) (*exec.Cmd,
// ExecuteVtworkerCommand executes any vtworker command
func (vtworker *VtworkerProcess) ExecuteVtworkerCommand(port int, grpcPort int, args ...string) (err error) {
args = append([]string{
"-port", fmt.Sprintf("%d", port),
"-executefetch_retry_time", vtworker.ExecuteRetryTime,
"-tablet_manager_protocol", "grpc",
"-tablet_protocol", "grpc",
"-topo_implementation", vtworker.CommonArg.TopoImplementation,
"-topo_global_server_address", vtworker.CommonArg.TopoGlobalAddress,
"-topo_global_root", vtworker.CommonArg.TopoGlobalRoot,
"-service_map", vtworker.ServiceMap,
"-grpc_port", fmt.Sprintf("%d", grpcPort),
"-cell", vtworker.Cell,
"-log_dir", vtworker.LogDir, "-stderrthreshold", "1"}, args...)
"--port", fmt.Sprintf("%d", port),
"--executefetch_retry_time", vtworker.ExecuteRetryTime,
"--tablet_manager_protocol", "grpc",
"--tablet_protocol", "grpc",
"--topo_implementation", vtworker.CommonArg.TopoImplementation,
"--topo_global_server_address", vtworker.CommonArg.TopoGlobalAddress,
"--topo_global_root", vtworker.CommonArg.TopoGlobalRoot,
"--service_map", vtworker.ServiceMap,
"--grpc_port", fmt.Sprintf("%d", grpcPort),
"--cell", vtworker.Cell,
"--log_dir", vtworker.LogDir, "--stderrthreshold", "1"}, args...)
if *isCoverage {
args = append([]string{"-test.coverprofile=" + getCoveragePath("vtworker-exec-cmd.out")}, args...)
args = append([]string{"--test.coverprofile=" + getCoveragePath("vtworker-exec-cmd.out")}, args...)
}
tmpProcess := exec.Command(
"vtworker",
@@ -101,8 +101,8 @@ func testListAllTablets(t *testing.T) {
// now filtering with the first keyspace and tablet type of primary, in
// addition to the cell
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"ListAllTablets", "-keyspace", clusterInstance.Keyspaces[0].Name,
"-tablet_type", "primary",
"ListAllTablets", "--", "--keyspace", clusterInstance.Keyspaces[0].Name,
"--tablet_type", "primary",
clusterInstance.Cell)
require.Nil(t, err)
@@ -49,6 +49,6 @@ func TestDeleteTablet(t *testing.T) {
defer cluster.PanicHandler(t)
primary := clusterInstance.Keyspaces[0].Shards[0].PrimaryTablet()
require.NotNil(t, primary)
_, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("DeleteTablet", "-allow_primary", primary.Alias)
_, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("DeleteTablet", "--", "--allow_primary", primary.Alias)
require.Nil(t, err, "Error: %v", err)
}
@@ -60,10 +60,10 @@ func testReplicationBase(t *testing.T, isClientCertPassed bool) {
replicaTablet := *clusterInstance.Keyspaces[0].Shards[0].Vttablets[1]
if isClientCertPassed {
replicaTablet.VttabletProcess.ExtraArgs = append(replicaTablet.VttabletProcess.ExtraArgs, "-db_flags", "2048",
"-db_ssl_ca", path.Join(certDirectory, "ca-cert.pem"),
"-db_ssl_cert", path.Join(certDirectory, "client-cert.pem"),
"-db_ssl_key", path.Join(certDirectory, "client-key.pem"),
replicaTablet.VttabletProcess.ExtraArgs = append(replicaTablet.VttabletProcess.ExtraArgs, "--db_flags", "2048",
"--db_ssl_ca", path.Join(certDirectory, "ca-cert.pem"),
"--db_ssl_cert", path.Join(certDirectory, "client-cert.pem"),
"--db_ssl_key", path.Join(certDirectory, "client-key.pem"),
)
}
@@ -95,13 +95,13 @@ func initializeCluster(t *testing.T) (int, error) {
certDirectory = path.Join(clusterInstance.TmpDirectory, "certs")
_ = encryption.CreateDirectory(certDirectory, 0700)
err := encryption.ExecuteVttlstestCommand("-root", certDirectory, "CreateCA")
err := encryption.ExecuteVttlstestCommand("--root", certDirectory, "CreateCA")
require.NoError(t, err)
err = encryption.ExecuteVttlstestCommand("-root", certDirectory, "CreateSignedCert", "-common_name", "Mysql Server", "-serial", "01", "server")
err = encryption.ExecuteVttlstestCommand("--root", certDirectory, "CreateSignedCert", "--", "--common_name", "Mysql Server", "--serial", "01", "server")
require.NoError(t, err)
err = encryption.ExecuteVttlstestCommand("-root", certDirectory, "CreateSignedCert", "-common_name", "Mysql Client", "-serial", "02", "client")
err = encryption.ExecuteVttlstestCommand("--root", certDirectory, "CreateSignedCert", "--", "--common_name", "Mysql Client", "--serial", "02", "client")
require.NoError(t, err)
extraMyCnf := path.Join(certDirectory, "secure.cnf")
@@ -131,7 +131,7 @@ func TestSecureTransport(t *testing.T) {
// start the tablets
for _, tablet := range []cluster.Vttablet{primaryTablet, replicaTablet} {
tablet.VttabletProcess.ExtraArgs = append(tablet.VttabletProcess.ExtraArgs, "-table-acl-config", tableACLConfigJSON, "-queryserver-config-strict-table-acl")
tablet.VttabletProcess.ExtraArgs = append(tablet.VttabletProcess.ExtraArgs, "--table-acl-config", tableACLConfigJSON, "--queryserver-config-strict-table-acl")
tablet.VttabletProcess.ExtraArgs = append(tablet.VttabletProcess.ExtraArgs, serverExtraArguments("vttablet-server-instance", "vttablet-client")...)
err = tablet.VttabletProcess.Setup()
require.NoError(t, err)
@@ -143,12 +143,12 @@ func TestSecureTransport(t *testing.T) {
vtctlClientTmArgs := append(vtctlClientArgs, tmclientExtraArgs("vttablet-client-1")...)
// Reparenting
vtctlClientArgs = append(vtctlClientTmArgs, "InitShardPrimary", "-force", "test_keyspace/0", primaryTablet.Alias)
vtctlClientArgs = append(vtctlClientTmArgs, "InitShardPrimary", "--", "--force", "test_keyspace/0", primaryTablet.Alias)
err = clusterInstance.VtctlProcess.ExecuteCommand(vtctlClientArgs...)
require.NoError(t, err)
// Apply schema
var vtctlApplySchemaArgs = append(vtctlClientTmArgs, "ApplySchema", "-sql", createVtInsertTest, "test_keyspace")
var vtctlApplySchemaArgs = append(vtctlClientTmArgs, "ApplySchema", "--", "--sql", createVtInsertTest, "test_keyspace")
err = clusterInstance.VtctlProcess.ExecuteCommand(vtctlApplySchemaArgs...)
require.NoError(t, err)
@@ -195,7 +195,7 @@ func TestSecureTransport(t *testing.T) {
// now restart vtgate in the mode where we don't use SSL
// for client connections, but we copy effective caller id
// into immediate caller id.
clusterInstance.VtGateExtraArgs = []string{"-grpc_use_effective_callerid"}
clusterInstance.VtGateExtraArgs = []string{"--grpc_use_effective_callerid"}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, tabletConnExtraArgs("vttablet-client-1")...)
err = clusterInstance.RestartVtgate()
require.NoError(t, err)
@@ -256,7 +256,7 @@ func clusterSetUp(t *testing.T) (int, error) {
certDirectory = path.Join(clusterInstance.TmpDirectory, "certs")
_ = encryption.CreateDirectory(certDirectory, 0700)
err := encryption.ExecuteVttlstestCommand("-root", certDirectory, "CreateCA")
err := encryption.ExecuteVttlstestCommand("--root", certDirectory, "CreateCA")
require.NoError(t, err)
err = createSignedCert("ca", "01", "vttablet-server", "vttablet server CA")
@@ -341,37 +341,37 @@ func createSignedCert(ca string, serial string, name string, commonName string)
log.Infof("Creating signed cert and key %s", commonName)
tmpProcess := exec.Command(
"vttlstest",
"-root", certDirectory,
"CreateSignedCert",
"-parent", ca,
"-serial", serial,
"-common_name", commonName,
"--root", certDirectory,
"CreateSignedCert", "--",
"--parent", ca,
"--serial", serial,
"--common_name", commonName,
name)
return tmpProcess.Run()
}
func serverExtraArguments(name string, ca string) []string {
args := []string{"-grpc_cert", certDirectory + "/" + name + "-cert.pem",
"-grpc_key", certDirectory + "/" + name + "-key.pem",
"-grpc_ca", certDirectory + "/" + ca + "-cert.pem"}
args := []string{"--grpc_cert", certDirectory + "/" + name + "-cert.pem",
"--grpc_key", certDirectory + "/" + name + "-key.pem",
"--grpc_ca", certDirectory + "/" + ca + "-cert.pem"}
return args
}
func tmclientExtraArgs(name string) []string {
ca := "vttablet-server"
var args = []string{"-tablet_manager_grpc_cert", certDirectory + "/" + name + "-cert.pem",
"-tablet_manager_grpc_key", certDirectory + "/" + name + "-key.pem",
"-tablet_manager_grpc_ca", certDirectory + "/" + ca + "-cert.pem",
"-tablet_manager_grpc_server_name", "vttablet server instance"}
var args = []string{"--tablet_manager_grpc_cert", certDirectory + "/" + name + "-cert.pem",
"--tablet_manager_grpc_key", certDirectory + "/" + name + "-key.pem",
"--tablet_manager_grpc_ca", certDirectory + "/" + ca + "-cert.pem",
"--tablet_manager_grpc_server_name", "vttablet server instance"}
return args
}
func tabletConnExtraArgs(name string) []string {
ca := "vttablet-server"
args := []string{"-tablet_grpc_cert", certDirectory + "/" + name + "-cert.pem",
"-tablet_grpc_key", certDirectory + "/" + name + "-key.pem",
"-tablet_grpc_ca", certDirectory + "/" + ca + "-cert.pem",
"-tablet_grpc_server_name", "vttablet server instance"}
args := []string{"--tablet_grpc_cert", certDirectory + "/" + name + "-cert.pem",
"--tablet_grpc_key", certDirectory + "/" + name + "-key.pem",
"--tablet_grpc_ca", certDirectory + "/" + ca + "-cert.pem",
"--tablet_grpc_server_name", "vttablet server instance"}
return args
}
@@ -120,7 +120,7 @@ func TestMain(m *testing.M) {
if err := clusterForKSTest.StartKeyspace(*keyspaceUnsharded, []string{keyspaceUnshardedName}, 1, false); err != nil {
return 1
}
if err := clusterForKSTest.VtctlclientProcess.ExecuteCommand("SetKeyspaceShardingInfo", "-force", keyspaceUnshardedName, "keyspace_id", "uint64"); err != nil {
if err := clusterForKSTest.VtctlclientProcess.ExecuteCommand("SetKeyspaceShardingInfo", "--", "--force", keyspaceUnshardedName, "keyspace_id", "uint64"); err != nil {
return 1
}
if err := clusterForKSTest.VtctlclientProcess.ExecuteCommand("RebuildKeyspaceGraph", keyspaceUnshardedName); err != nil {
@@ -202,25 +202,25 @@ func TestDeleteKeyspace(t *testing.T) {
defer cluster.PanicHandler(t)
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("CreateKeyspace", "test_delete_keyspace")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("CreateShard", "test_delete_keyspace/0")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("InitTablet", "-keyspace=test_delete_keyspace", "-shard=0", "zone1-0000000100", "primary")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("InitTablet", "--", "--keyspace=test_delete_keyspace", "--shard=0", "zone1-0000000100", "primary")
// Can't delete keyspace if there are shards present.
err := clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "test_delete_keyspace")
require.Error(t, err)
// Can't delete shard if there are tablets present.
err = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteShard", "-even_if_serving", "test_delete_keyspace/0")
err = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteShard", "--", "--even_if_serving", "test_delete_keyspace/0")
require.Error(t, err)
// Use recursive DeleteShard to remove tablets.
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteShard", "-even_if_serving", "-recursive", "test_delete_keyspace/0")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteShard", "--", "--even_if_serving", "--recursive", "test_delete_keyspace/0")
// Now non-recursive DeleteKeyspace should work.
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "test_delete_keyspace")
// Start over and this time use recursive DeleteKeyspace to do everything.
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("CreateKeyspace", "test_delete_keyspace")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("CreateShard", "test_delete_keyspace/0")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("InitTablet", "-port=1234", "-keyspace=test_delete_keyspace", "-shard=0", "zone1-0000000100", "primary")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("InitTablet", "--", "--port=1234", "--keyspace=test_delete_keyspace", "--shard=0", "zone1-0000000100", "primary")
// Create the serving/replication entries and check that they exist,
// so we can later check they're deleted.
@@ -229,7 +229,7 @@ func TestDeleteKeyspace(t *testing.T) {
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("GetSrvKeyspace", cell, "test_delete_keyspace")
// Recursive DeleteKeyspace
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "-recursive", "test_delete_keyspace")
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "--", "--recursive", "test_delete_keyspace")
// Check that everything is gone.
err = clusterForKSTest.VtctlclientProcess.ExecuteCommand("GetKeyspace", "test_delete_keyspace")
@@ -245,6 +245,7 @@ func TestDeleteKeyspace(t *testing.T) {
}
// TODO: Fix this test, not running in CI
// TODO: (ajm188) if this test gets fixed, the flags need to be updated to comply with VEP-4 as well.
// tells that in zone2 after deleting shard, there is no shard #264 and in zone1 there is only 1 #269
/*func RemoveKeyspaceCell(t *testing.T) {
_ = clusterForKSTest.VtctlclientProcess.ExecuteCommand("CreateKeyspace", "test_delete_keyspace_removekscell")
@@ -277,9 +277,9 @@ func TestReparenting(t *testing.T) {
// do planned reparenting, make one replica as primary
// and validate client connection count in correspond tablets
clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"PlannedReparentShard",
"-keyspace_shard", userKeyspace+"/-80",
"-new_primary", shard0Replica.Alias)
"PlannedReparentShard", "--",
"--keyspace_shard", userKeyspace+"/-80",
"--new_primary", shard0Replica.Alias)
// validate topology
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Validate")
require.Nil(t, err)
@@ -300,9 +300,9 @@ func TestReparenting(t *testing.T) {
// make old primary again as new primary
clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"PlannedReparentShard",
"-keyspace_shard", userKeyspace+"/-80",
"-new_primary", shard0Primary.Alias)
"PlannedReparentShard", "--",
"--keyspace_shard", userKeyspace+"/-80",
"--new_primary", shard0Primary.Alias)
// validate topology
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Validate")
require.Nil(t, err)
@@ -109,17 +109,17 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtGateExtraArgs = []string{
"-vschema_ddl_authorized_users=%",
"-mysql_server_query_timeout", "1s",
"-mysql_auth_server_impl", "static",
"-mysql_auth_server_static_file", clusterInstance.TmpDirectory + mysqlAuthServerStatic,
"-mysql_server_version", "8.0.16-7",
"-warn_sharded_only=true",
"--vschema_ddl_authorized_users=%",
"--mysql_server_query_timeout", "1s",
"--mysql_auth_server_impl", "static",
"--mysql_auth_server_static_file", clusterInstance.TmpDirectory + mysqlAuthServerStatic,
"--mysql_server_version", "8.0.16-7",
"--warn_sharded_only=true",
}
clusterInstance.VtTabletExtraArgs = []string{
"-table-acl-config", clusterInstance.TmpDirectory + tableACLConfig,
"-queryserver-config-strict-table-acl",
"--table-acl-config", clusterInstance.TmpDirectory + tableACLConfig,
"--queryserver-config-strict-table-acl",
}
// Start keyspace
@@ -215,19 +215,19 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1"}
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1"}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "online",
"--ddl_strategy", "online",
}
if err := clusterInstance.StartTopo(); err != nil {
@@ -157,21 +157,21 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"-gh-ost-path", os.Getenv("VITESS_ENDTOEND_GH_OST_PATH"), // leave env variable empty/unset to get the default behavior. Override in Mac.
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
"--gh-ost-path", os.Getenv("VITESS_ENDTOEND_GH_OST_PATH"), // leave env variable empty/unset to get the default behavior. Override in Mac.
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "gh-ost",
"--ddl_strategy", "gh-ost",
}
if err := clusterInstance.StartTopo(); err != nil {


@@ -179,20 +179,20 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "online",
"--ddl_strategy", "online",
}
if err := clusterInstance.StartTopo(); err != nil {


@@ -198,15 +198,15 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1"}
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1"}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
}
clusterInstance.VtGateExtraArgs = []string{}


@@ -95,15 +95,15 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1"}
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1"}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
}
clusterInstance.VtGateExtraArgs = []string{}


@@ -93,15 +93,15 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1"}
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1"}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
}
clusterInstance.VtGateExtraArgs = []string{}


@@ -161,20 +161,20 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "online",
"--ddl_strategy", "online",
}
if err := clusterInstance.StartTopo(); err != nil {
@@ -477,7 +477,7 @@ func TestSchemaChange(t *testing.T) {
})
t.Run("PRS shard -80", func(t *testing.T) {
// migration has started and is throttled. We now run PRS
err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "-keyspace_shard", keyspaceName+"/-80", "-new_primary", shards[0].Vttablets[reparentTabletIndex].Alias)
err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--", "--keyspace_shard", keyspaceName+"/-80", "--new_primary", shards[0].Vttablets[reparentTabletIndex].Alias)
require.NoError(t, err, "failed PRS: %v", err)
})

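Several call sites above also gain a bare `--` between the vtctlclient subcommand name and its flags (e.g. `PlannedReparentShard -- --keyspace_shard …`). A minimal sketch of what that separator does; the `server` flag here is illustrative, not vtctlclient's real flag set:

```go
package main

import (
	"flag"
	"fmt"
)

// splitArgs mimics a top-level CLI that parses its own flags and passes
// the remainder (the subcommand's flags and arguments) through untouched.
// A bare "--" terminates flag parsing in Go's flag package: everything
// after it is returned as positional arguments, so subcommand flags like
// --keyspace_shard are never mistaken for unknown top-level flags.
func splitArgs(args []string) []string {
	fs := flag.NewFlagSet("vtctlclient-sketch", flag.ContinueOnError)
	fs.String("server", "", "server address (hypothetical top-level flag)")
	if err := fs.Parse(args); err != nil {
		panic(err)
	}
	return fs.Args()
}

func main() {
	rest := splitArgs([]string{"--server", "localhost:15999", "--",
		"--keyspace_shard", "ks/-80"})
	fmt.Println(rest)
}
```

Without the separator, a parser that keeps scanning for flags past positional arguments (as `pflag` does by default) would try to consume `--keyspace_shard` itself and fail on the unknown flag; the bare `--` forces pass-through to the subcommand.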

@@ -209,20 +209,20 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "online",
"--ddl_strategy", "online",
}
if err := clusterInstance.StartTopo(); err != nil {


@@ -406,24 +406,24 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
// -vstream_packet_size is set to a small value that ensures we get multiple stream iterations,
// --vstream_packet_size is set to a small value that ensures we get multiple stream iterations,
// thereby examining lastPK on vcopier side. We will be iterating tables using non-PK order throughout
// this test suite, and so the low setting ensures we hit the more interesting code paths.
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"-vstream_packet_size", "4096", // Keep this value small and below 10k to ensure multiple vstream iterations
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
"--vstream_packet_size", "4096", // Keep this value small and below 10k to ensure multiple vstream iterations
}
clusterInstance.VtGateExtraArgs = []string{
"-ddl_strategy", "online",
"--ddl_strategy", "online",
}
if err := clusterInstance.StartTopo(); err != nil {


@@ -76,17 +76,17 @@ func TestMain(m *testing.M) {
}
clusterInstance.VtctldExtraArgs = []string{
"-schema_change_dir", schemaChangeDirectory,
"-schema_change_controller", "local",
"-schema_change_check_interval", "1",
"--schema_change_dir", schemaChangeDirectory,
"--schema_change_controller", "local",
"--schema_change_check_interval", "1",
}
clusterInstance.VtTabletExtraArgs = []string{
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-migration_check_interval", "5s",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--migration_check_interval", "5s",
}
if err := clusterInstance.StartTopo(); err != nil {


@@ -182,7 +182,7 @@ func TestERSForInitialization(t *testing.T) {
clusterInstance := cluster.NewCluster("zone1", "localhost")
defer clusterInstance.Teardown()
keyspace := &cluster.Keyspace{Name: utils.KeyspaceName}
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := clusterInstance.StartTopo()
require.NoError(t, err)
@@ -196,10 +196,10 @@ func TestERSForInitialization(t *testing.T) {
shard := &cluster.Shard{Name: utils.ShardName}
shard.Vttablets = tablets
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-enable_semi_sync",
"-init_populate_metadata",
"-track_schema_versions=true",
"--lock_tables_timeout", "5s",
"--enable_semi_sync",
"--init_populate_metadata",
"--track_schema_versions=true",
}
// Initialize Cluster


@@ -236,8 +236,8 @@ func reparentFromOutside(t *testing.T, clusterInstance *cluster.LocalProcessClus
if downPrimary {
err := tablets[0].VttabletProcess.TearDownWithTimeout(30 * time.Second)
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet",
"-allow_primary", tablets[0].Alias)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--",
"--allow_primary", tablets[0].Alias)
require.NoError(t, err)
}


@@ -89,9 +89,9 @@ func setupCluster(ctx context.Context, t *testing.T, shardName string, cells []s
keyspace := &cluster.Keyspace{Name: KeyspaceName}
if enableSemiSync {
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "-enable_semi_sync")
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "--enable_semi_sync")
if clusterInstance.VtctlMajorVersion >= 13 {
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
}
}
@@ -123,16 +123,16 @@ func setupCluster(ctx context.Context, t *testing.T, shardName string, cells []s
shard.Vttablets = tablets
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs,
"-lock_tables_timeout", "5s",
"-init_populate_metadata",
"-track_schema_versions=true",
"--lock_tables_timeout", "5s",
"--init_populate_metadata",
"--track_schema_versions=true",
// disabling online-ddl for reparent tests. This is done to reduce flakiness.
// All the tests in this package reparent frequently between different tablets
// This means that Promoting a tablet to primary is sometimes immediately followed by a DemotePrimary call.
// In this case, the close method and initSchema method of the onlineDDL executor race.
// If the initSchema acquires the lock, then it takes about 30 seconds for it to run during which time the
// DemotePrimary rpc is stalled!
"-queryserver_enable_online_ddl=false")
"--queryserver_enable_online_ddl=false")
if clusterInstance.VtTabletMajorVersion >= 13 && clusterInstance.VtctlMajorVersion >= 13 {
// disabling active reparents on the tablet since we don't want the replication manager
@@ -141,7 +141,7 @@ func setupCluster(ctx context.Context, t *testing.T, shardName string, cells []s
// tests in this test suite should work irrespective of this flag. Each run of ERS, PRS should be
// setting up the replication correctly.
// However, due to the bugs in old vitess components we can only do this for version >= 13.
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "-disable_active_reparents")
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "--disable_active_reparents")
}
// Initialize Cluster
@@ -207,9 +207,9 @@ func setupClusterLegacy(ctx context.Context, t *testing.T, shardName string, cel
keyspace := &cluster.Keyspace{Name: KeyspaceName}
if enableSemiSync {
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "-enable_semi_sync")
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "--enable_semi_sync")
if clusterInstance.VtctlMajorVersion >= 13 {
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
}
}
@@ -241,16 +241,16 @@ func setupClusterLegacy(ctx context.Context, t *testing.T, shardName string, cel
shard.Vttablets = tablets
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs,
"-lock_tables_timeout", "5s",
"-init_populate_metadata",
"-track_schema_versions=true",
"--lock_tables_timeout", "5s",
"--init_populate_metadata",
"--track_schema_versions=true",
// disabling online-ddl for reparent tests. This is done to reduce flakiness.
// All the tests in this package reparent frequently between different tablets
// This means that Promoting a tablet to primary is sometimes immediately followed by a DemotePrimary call.
// In this case, the close method and initSchema method of the onlineDDL executor race.
// If the initSchema acquires the lock, then it takes about 30 seconds for it to run during which time the
// DemotePrimary rpc is stalled!
"-queryserver_enable_online_ddl=false")
"--queryserver_enable_online_ddl=false")
if clusterInstance.VtTabletMajorVersion >= 13 && clusterInstance.VtctlMajorVersion >= 13 {
// disabling active reparents on the tablet since we don't want the replication manager
@@ -259,7 +259,7 @@ func setupClusterLegacy(ctx context.Context, t *testing.T, shardName string, cel
// tests in this test suite should work irrespective of this flag. Each run of ERS, PRS should be
// setting up the replication correctly.
// However, due to the bugs in old vitess components we can only do this for version >= 13.
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "-disable_active_reparents")
clusterInstance.VtTabletExtraArgs = append(clusterInstance.VtTabletExtraArgs, "--disable_active_reparents")
}
// Initialize Cluster
@@ -305,8 +305,8 @@ func setupShardLegacy(ctx context.Context, t *testing.T, clusterInstance *cluste
}
// Force the replica to reparent assuming that all the datasets are identical.
err := clusterInstance.VtctlclientProcess.ExecuteCommand("InitShardPrimary",
"-force", fmt.Sprintf("%s/%s", KeyspaceName, shardName), tablets[0].Alias)
err := clusterInstance.VtctlclientProcess.ExecuteCommand("InitShardPrimary", "--",
"--force", fmt.Sprintf("%s/%s", KeyspaceName, shardName), tablets[0].Alias)
require.NoError(t, err)
ValidateTopology(t, clusterInstance, true)
@@ -368,18 +368,18 @@ func PrsAvoid(t *testing.T, clusterInstance *cluster.LocalProcessCluster, tab *c
// PrsWithTimeout runs PRS
func PrsWithTimeout(t *testing.T, clusterInstance *cluster.LocalProcessCluster, tab *cluster.Vttablet, avoid bool, actionTimeout, waitTimeout string) (string, error) {
args := []string{
"PlannedReparentShard",
"-keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName)}
"PlannedReparentShard", "--",
"--keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName)}
if actionTimeout != "" {
args = append(args, "-action_timeout", actionTimeout)
args = append(args, "--action_timeout", actionTimeout)
}
if waitTimeout != "" {
args = append(args, "-wait_replicas_timeout", waitTimeout)
args = append(args, "--wait_replicas_timeout", waitTimeout)
}
if avoid {
args = append(args, "-avoid_tablet")
args = append(args, "--avoid_tablet")
} else {
args = append(args, "-new_primary")
args = append(args, "--new_primary")
}
args = append(args, tab.Alias)
out, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(args...)
@@ -395,17 +395,17 @@ func Ers(clusterInstance *cluster.LocalProcessCluster, tab *cluster.Vttablet, to
func ErsIgnoreTablet(clusterInstance *cluster.LocalProcessCluster, tab *cluster.Vttablet, timeout, waitReplicasTimeout string, tabletsToIgnore []*cluster.Vttablet, preventCrossCellPromotion bool) (string, error) {
var args []string
if timeout != "" {
args = append(args, "-action_timeout", timeout)
args = append(args, "--action_timeout", timeout)
}
args = append(args, "EmergencyReparentShard", "-keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName))
args = append(args, "EmergencyReparentShard", "--", "--keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName))
if tab != nil {
args = append(args, "-new_primary", tab.Alias)
args = append(args, "--new_primary", tab.Alias)
}
if waitReplicasTimeout != "" {
args = append(args, "-wait_replicas_timeout", waitReplicasTimeout)
args = append(args, "--wait_replicas_timeout", waitReplicasTimeout)
}
if preventCrossCellPromotion {
args = append(args, "-prevent_cross_cell_promotion=true")
args = append(args, "--prevent_cross_cell_promotion=true")
}
if len(tabletsToIgnore) != 0 {
tabsString := ""
@@ -416,16 +416,16 @@ func ErsIgnoreTablet(clusterInstance *cluster.LocalProcessCluster, tab *cluster.
tabsString = tabsString + "," + vttablet.Alias
}
}
args = append(args, "-ignore_replicas", tabsString)
args = append(args, "--ignore_replicas", tabsString)
}
return clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(args...)
}
// ErsWithVtctl runs ERS via vtctl binary
func ErsWithVtctl(clusterInstance *cluster.LocalProcessCluster) (string, error) {
args := []string{"EmergencyReparentShard", "-keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName)}
args := []string{"EmergencyReparentShard", "--", "--keyspace_shard", fmt.Sprintf("%s/%s", KeyspaceName, ShardName)}
if clusterInstance.VtctlMajorVersion >= 13 {
args = append([]string{"-durability_policy=semi_sync"}, args...)
args = append([]string{"--durability_policy=semi_sync"}, args...)
}
return clusterInstance.VtctlProcess.ExecuteCommandWithOutput(args...)
}
@@ -439,7 +439,7 @@ func ValidateTopology(t *testing.T, clusterInstance *cluster.LocalProcessCluster
args := []string{"Validate"}
if pingTablets {
args = append(args, "-ping-tablets=true")
args = append(args, "--", "--ping-tablets=true")
}
out, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(args...)
require.Empty(t, out)
@@ -479,7 +479,7 @@ func CheckPrimaryTablet(t *testing.T, clusterInstance *cluster.LocalProcessClust
assert.Equal(t, topodatapb.TabletType_PRIMARY, tabletInfo.GetType())
// make sure the health stream is updated
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", tablet.Alias)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", tablet.Alias)
require.NoError(t, err)
var streamHealthResponse querypb.StreamHealthResponse
@@ -503,7 +503,7 @@ func isHealthyPrimaryTablet(t *testing.T, clusterInstance *cluster.LocalProcessC
}
// make sure the health stream is updated
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", tablet.Alias)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", tablet.Alias)
require.Nil(t, err)
var streamHealthResponse querypb.StreamHealthResponse
@@ -593,8 +593,8 @@ func ResurrectTablet(ctx context.Context, t *testing.T, clusterInstance *cluster
// DeleteTablet is used to delete the given tablet
func DeleteTablet(t *testing.T, clusterInstance *cluster.LocalProcessCluster, tab *cluster.Vttablet) {
err := clusterInstance.VtctlclientProcess.ExecuteCommand(
"DeleteTablet",
"-allow_primary",
"DeleteTablet", "--",
"--allow_primary",
tab.Alias)
require.NoError(t, err)
}
@@ -665,8 +665,8 @@ func CheckReparentFromOutside(t *testing.T, clusterInstance *cluster.LocalProces
require.NoError(t, err)
streamHealth, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"VtTabletStreamHealth",
"-count", "1", tablet.Alias)
"VtTabletStreamHealth", "--",
"--count", "1", tablet.Alias)
require.NoError(t, err)
var streamHealthResponse querypb.StreamHealthResponse


@@ -190,7 +190,7 @@ func checkStreamHealthEqualsBinlogPlayerVars(t *testing.T, vttablet cluster.Vtta
// Enforce health check because it's not running by default as
// tablets may not be started with it, or may not run it in time.
_ = ci.VtctlclientProcess.ExecuteCommand("RunHealthCheck", vttablet.Alias)
streamHealth, err := ci.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", vttablet.Alias)
streamHealth, err := ci.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", vttablet.Alias)
require.Nil(t, err)
var streamHealthResponse querypb.StreamHealthResponse
@@ -409,7 +409,7 @@ func checkThrottlerServiceMaxRates(t *testing.T, server string, names []string,
startTime := time.Now()
msg := fmt.Sprintf("%d active throttler(s)", len(names))
for {
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerMaxRates", "--server", server)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerMaxRates", "--", "--server", server)
require.Nil(t, err)
if strings.Contains(output, msg) || (time.Now().After(startTime.Add(2 * time.Minute))) {
break
@@ -425,11 +425,11 @@ func checkThrottlerServiceMaxRates(t *testing.T, server string, names []string,
// Check that it's possible to change the max rate on the throttler.
newRate := "unlimited"
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerSetMaxRate", "--server", server, newRate)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerSetMaxRate", "--", "--server", server, newRate)
require.Nil(t, err)
assert.Contains(t, output, msg)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerMaxRates", "--server", server)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ThrottlerMaxRates", "--", "--server", server)
require.Nil(t, err)
for _, name := range names {
str := fmt.Sprintf("| %s | %s |", name, newRate)
@@ -441,7 +441,7 @@ func checkThrottlerServiceMaxRates(t *testing.T, server string, names []string,
// checkThrottlerServiceConfiguration checks the vtctl (Get|Update|Reset)ThrottlerConfiguration commands.
func checkThrottlerServiceConfiguration(t *testing.T, server string, names []string, ci cluster.LocalProcessCluster) {
output, err := ci.VtctlclientProcess.ExecuteCommandWithOutput(
"UpdateThrottlerConfiguration", "--server", server,
"UpdateThrottlerConfiguration", "--", "--server", server,
"--copy_zero_values",
"target_replication_lag_sec:12345 "+
"max_replication_lag_sec:65789 "+
@@ -461,7 +461,7 @@ func checkThrottlerServiceConfiguration(t *testing.T, server string, names []str
msg := fmt.Sprintf("%d active throttler(s)", len(names))
assert.Contains(t, output, msg)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("GetThrottlerConfiguration", "--server", server)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("GetThrottlerConfiguration", "--", "--server", server)
require.Nil(t, err)
for _, name := range names {
str := fmt.Sprintf("| %s | target_replication_lag_sec:12345 ", name)
@@ -471,12 +471,12 @@ func checkThrottlerServiceConfiguration(t *testing.T, server string, names []str
assert.Contains(t, output, msg)
// Reset clears our configuration values.
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ResetThrottlerConfiguration", "--server", server)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("ResetThrottlerConfiguration", "--", "--server", server)
require.Nil(t, err)
assert.Contains(t, output, msg)
// Check that the reset configuration no longer has our values.
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("GetThrottlerConfiguration", "--server", server)
output, err = ci.VtctlclientProcess.ExecuteCommandWithOutput("GetThrottlerConfiguration", "--", "--server", server)
require.Nil(t, err)
assert.NotContains(t, output, "target_replication_lag_sec:12345")
assert.Contains(t, output, msg)


@@ -96,7 +96,7 @@ func ClusterWrapper(isMulti bool) (int, error) {
ClusterInstance = nil
ClusterInstance = cluster.NewCluster(cell, hostname)
ClusterInstance.VtctldExtraArgs = append(ClusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
ClusterInstance.VtctldExtraArgs = append(ClusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
if err := ClusterInstance.StartTopo(); err != nil {
return 1, err
@@ -134,7 +134,7 @@ func initClusterForInitialSharding(keyspaceName string, shardNames []string, tot
var mysqlProcesses []*exec.Cmd
var extraArgs []string
if isMulti {
extraArgs = []string{"-db-credentials-file", dbCredentialFile}
extraArgs = []string{"--db-credentials-file", dbCredentialFile}
}
for _, shardName := range shardNames {
@@ -226,17 +226,17 @@ func AssignMysqlPortFromKs1ToKs2() {
func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType querypb.Type, isMulti bool, isExternal bool) {
defer cluster.PanicHandler(t)
if isExternal {
commonTabletArg = append(commonTabletArg, "-db_host", "127.0.0.1")
commonTabletArg = append(commonTabletArg, "-disable_active_reparents")
commonTabletArg = append(commonTabletArg, "--db_host", "127.0.0.1")
commonTabletArg = append(commonTabletArg, "--disable_active_reparents")
for _, shard := range keyspace.Shards {
for _, tablet := range shard.Vttablets {
tablet.VttabletProcess.ExtraArgs = append(tablet.VttabletProcess.ExtraArgs, "-db_port", fmt.Sprintf("%d", tablet.MySQLPort))
tablet.VttabletProcess.ExtraArgs = append(tablet.VttabletProcess.ExtraArgs, "--db_port", fmt.Sprintf("%d", tablet.MySQLPort))
tablet.VttabletProcess.DbPassword = dbPwd
}
}
}
if isMulti {
commonTabletArg = append(commonTabletArg, "-db-credentials-file", dbCredentialFile)
commonTabletArg = append(commonTabletArg, "--db-credentials-file", dbCredentialFile)
}
// Start the primary and rdonly of 1st shard
shard1 := keyspace.Shards[0]
@@ -309,7 +309,7 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
}
vtgateInstance := ClusterInstance.NewVtgateInstance()
vtgateInstance.MySQLServerSocketPath = path.Join(ClusterInstance.TmpDirectory, fmt.Sprintf("mysql-%s.sock", keyspaceName))
vtgateInstance.ExtraArgs = []string{"-retry-count", fmt.Sprintf("%d", 2), "-tablet_protocol", "grpc", "-normalize_queries", "-tablet_refresh_interval", "2s"}
vtgateInstance.ExtraArgs = []string{"--retry-count", fmt.Sprintf("%d", 2), "--tablet_protocol", "grpc", "--normalize_queries", "--tablet_refresh_interval", "2s"}
err = vtgateInstance.Setup()
vtgateInstances = append(vtgateInstances, vtgateInstance)
require.NoError(t, err)
@@ -393,12 +393,12 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
expectedPartitions[topodata.TabletType_RDONLY] = []string{shard1.Name}
checkSrvKeyspaceForSharding(t, keyspaceName, expectedPartitions)
err = ClusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard",
err = ClusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--",
"--exclude_tables", "unrelated",
shard1.Rdonly().Alias, fmt.Sprintf("%s/%s", keyspaceName, shard21.Name))
require.NoError(t, err)
err = ClusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard",
err = ClusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--",
"--exclude_tables", "unrelated",
shard1.Rdonly().Alias, fmt.Sprintf("%s/%s", keyspaceName, shard22.Name))
require.NoError(t, err)
@@ -407,7 +407,7 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
require.NoError(t, err)
// Initial clone (online).
_ = ClusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
_ = ClusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--offline=false",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
@@ -430,7 +430,7 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
insertSQL := fmt.Sprintf(sharding.InsertTabletTemplateKsID, tableName, ksid, "msg4", ksid)
sharding.ExecuteOnTablet(t, insertSQL, *shard22.PrimaryTablet(), keyspaceName, true)
_ = ClusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
_ = ClusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
@@ -508,7 +508,7 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
err = ClusterInstance.VtworkerProcess.ExecuteVtworkerCommand(ClusterInstance.GetAndReservePort(),
ClusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"SplitDiff",
"SplitDiff", "--",
"--min_healthy_rdonly_tablets", "1",
fmt.Sprintf("%s/%s", keyspaceName, shard))
require.NoError(t, err)
@@ -547,7 +547,7 @@ func TestInitialSharding(t *testing.T, keyspace *cluster.Keyspace, keyType query
checkSrvKeyspaceForSharding(t, keyspaceName, expectedPartitions)
//move replica back and forth
_ = ClusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "-reverse", shard1Ks, "replica")
_ = ClusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "--", "--reverse", shard1Ks, "replica")
// After a backwards migration, queryservice should be enabled on source and disabled on destinations
sharding.CheckTabletQueryService(t, *sourceTablet, "SERVING", false, *ClusterInstance)


@@ -107,7 +107,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// Launch keyspace
keyspace := &cluster.Keyspace{Name: keyspaceName}
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := clusterInstance.StartTopo()
@@ -136,16 +136,16 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
shard3.Vttablets = []*cluster.Vttablet{shard3Primary, shard3Replica, shard3Rdonly}
clusterInstance.VtTabletExtraArgs = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_semi_sync",
"-enable_replication_reporter",
"-enable-tx-throttler",
"-binlog_use_v3_resharding_mode=true",
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_semi_sync",
"--enable_replication_reporter",
"--enable-tx-throttler",
"--binlog_use_v3_resharding_mode=true",
}
shardingColumnType := "bigint(20) unsigned"
@@ -274,7 +274,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
require.NoError(t, err)
// Initial clone (online).
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--offline=false",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
@@ -304,7 +304,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
insertValue(t, shard3.PrimaryTablet(), keyspaceName, tableName, 4, "msg4", key3)
err = clusterInstance.VtworkerProcess.ExecuteCommand(
"SplitClone",
"SplitClone", "--",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
"--min_healthy_rdonly_tablets", "1",
@@ -378,7 +378,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"SplitDiff",
"SplitDiff", "--",
"--exclude_tables", "unrelated",
"--min_healthy_rdonly_tablets", "1",
"--source_uid", "1",
@ -395,7 +395,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"SplitDiff",
"SplitDiff", "--",
"--exclude_tables", "unrelated",
"--min_healthy_rdonly_tablets", "1",
"--source_uid", "2",
@ -409,8 +409,8 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
sharding.CheckTabletQueryService(t, *shard3Primary, "NOT_SERVING", false, *clusterInstance)
streamHealth, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"VtTabletStreamHealth",
"-count", "1", shard3Primary.Alias)
"VtTabletStreamHealth", "--",
"--count", "1", shard3Primary.Alias)
require.NoError(t, err)
log.Info("Got health: ", streamHealth)
@ -487,7 +487,7 @@ func TestMergesharding(t *testing.T, useVarbinaryShardingKeyType bool) {
}
for _, tablet := range []cluster.Vttablet{*shard0Primary, *shard1Primary} {
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary", tablet.Alias)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary", tablet.Alias)
require.NoError(t, err)
}
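The edits above all follow one pattern: subcommand-specific flags move behind a literal `--` separator so the top-level argument parser stops interpreting them. A minimal sketch of how such a separator might be handled (hypothetical helper, not the actual Vitess implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// splitAtDashDash splits a CLI argument list at the first literal "--"
// separator: everything before it belongs to the outer command, and
// everything after it is passed through to the subcommand untouched.
func splitAtDashDash(args []string) (outer, sub []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	// No separator: the whole list belongs to the outer command.
	return args, nil
}

func main() {
	outer, sub := splitAtDashDash([]string{"SplitClone", "--", "--offline=false", "--chunk_count", "10"})
	fmt.Println(strings.Join(outer, " ")) // SplitClone
	fmt.Println(strings.Join(sub, " "))   // --offline=false --chunk_count 10
}
```

This mirrors the POSIX convention that `--` ends option parsing, which is why each test invocation above inserts it between the command name and its flags.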


@ -190,7 +190,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// Launch keyspace
keyspace := &cluster.Keyspace{Name: keyspaceName}
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "-durability_policy=semi_sync")
clusterInstance.VtctldExtraArgs = append(clusterInstance.VtctldExtraArgs, "--durability_policy=semi_sync")
// Start topo server
err := clusterInstance.StartTopo()
@ -228,16 +228,16 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
shard3.Vttablets = []*cluster.Vttablet{shard3Primary, shard3Replica, shard3Rdonly}
clusterInstance.VtTabletExtraArgs = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_semi_sync",
"-enable_replication_reporter",
"-enable-tx-throttler",
"-binlog_use_v3_resharding_mode=true",
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_semi_sync",
"--enable_replication_reporter",
"--enable-tx-throttler",
"--binlog_use_v3_resharding_mode=true",
}
shardingColumnType := "bigint(20) unsigned"
@ -377,10 +377,10 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
require.Nil(t, err)
// we need to create the schema, and the worker will do data copying
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--exclude_tables", "unrelated",
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--", "--exclude_tables", "unrelated",
shard1.Rdonly().Alias, fmt.Sprintf("%s/%s", keyspaceName, shard2.Name))
require.Nil(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--exclude_tables", "unrelated",
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--", "--exclude_tables", "unrelated",
shard1.Rdonly().Alias, fmt.Sprintf("%s/%s", keyspaceName, shard3.Name))
require.Nil(t, err)
@ -394,7 +394,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// the rate limit is set very high.
// Initial clone (online).
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--offline=false",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
@ -419,7 +419,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
_, err = shard1Primary.VttabletProcess.QueryTablet(sql, keyspaceName, true)
require.Nil(t, err)
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--offline=false",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
@ -442,7 +442,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
_, err = shard1Primary.VttabletProcess.QueryTablet(sql, keyspaceName, true)
require.Nil(t, err)
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--offline=false",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
@ -474,7 +474,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
insertValue(t, shard3.PrimaryTablet(), keyspaceName, tableName, 4, "msg4", key3)
insertValue(t, shard3.PrimaryTablet(), keyspaceName, tableName, 5, "msg5", key3)
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone",
err = clusterInstance.VtworkerProcess.ExecuteCommand("SplitClone", "--",
"--exclude_tables", "unrelated",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
@ -495,7 +495,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
checkStartupValues(t, shardingKeyType)
// check the schema too
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateSchemaKeyspace", "--exclude_tables=unrelated", keyspaceName)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateSchemaKeyspace", "--", "--exclude_tables=unrelated", keyspaceName)
require.Nil(t, err)
// Verify vreplication table entries
@ -567,7 +567,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"SplitDiff",
"SplitDiff", "--",
"--exclude_tables", "unrelated",
"--min_healthy_rdonly_tablets", "1",
shard3Ks)
@ -578,7 +578,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"MultiSplitDiff",
"MultiSplitDiff", "--",
"--exclude_tables", "unrelated",
shard1Ks)
require.Nil(t, err)
@ -622,8 +622,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
for _, primary := range []cluster.Vttablet{*shard2Primary, *shard3Primary} {
sharding.CheckTabletQueryService(t, primary, "NOT_SERVING", false, *clusterInstance)
streamHealth, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"VtTabletStreamHealth",
"-count", "1", primary.Alias)
"VtTabletStreamHealth", "--",
"--count", "1", primary.Alias)
require.Nil(t, err)
log.Info("Got health: ", streamHealth)
@ -637,7 +637,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// now serve rdonly from the split shards, in cell1 only
err = clusterInstance.VtctlclientProcess.ExecuteCommand(
"MigrateServedTypes", fmt.Sprintf("--cells=%s", cell1),
"MigrateServedTypes", "--", fmt.Sprintf("--cells=%s", cell1),
shard1Ks, "rdonly")
require.Nil(t, err)
@ -667,9 +667,9 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// rerun migrate to ensure it doesn't fail
// skip refresh to make it go faster
err = clusterInstance.VtctlclientProcess.ExecuteCommand(
"MigrateServedTypes",
"MigrateServedTypes", "--",
fmt.Sprintf("--cells=%s", cell1),
"-skip-refresh-state=true",
"--skip-refresh-state=true",
shard1Ks, "rdonly")
require.Nil(t, err)
@ -694,8 +694,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// rerun migrate to ensure it doesn't fail
// skip refresh to make it go faster
err = clusterInstance.VtctlclientProcess.ExecuteCommand(
"MigrateServedTypes",
"-skip-refresh-state=true",
"MigrateServedTypes", "--",
"--skip-refresh-state=true",
shard1Ks, "rdonly")
require.Nil(t, err)
@ -713,7 +713,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// move replica back and forth
err = clusterInstance.VtctlclientProcess.ExecuteCommand(
"MigrateServedTypes", "-reverse",
"MigrateServedTypes", "--", "--reverse",
shard1Ks, "replica")
require.Nil(t, err)
@ -754,8 +754,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
sharding.CheckSrvKeyspace(t, cell1, keyspaceName, "", 0, expectedPartitions, *clusterInstance)
// reparent shard2 to shard2Replica1, then insert more data and see it flow through still
err = clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "-keyspace_shard", shard2Ks,
"-new_primary", shard2Replica1.Alias)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--", "--keyspace_shard", shard2Ks,
"--new_primary", shard2Replica1.Alias)
require.Nil(t, err)
// update our test variables to point at the new primary
@ -773,7 +773,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"SplitDiff",
"SplitDiff", "--",
"--exclude_tables", "unrelated",
"--min_healthy_rdonly_tablets", "1",
shard3Ks)
@ -784,7 +784,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtworkerProcess.ExecuteVtworkerCommand(clusterInstance.GetAndReservePort(),
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"MultiSplitDiff",
"MultiSplitDiff", "--",
"--exclude_tables", "unrelated",
shard1Ks)
require.Nil(t, err)
@ -799,7 +799,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// mock with the SourceShard records to test 'vtctl SourceShardDelete' and 'vtctl SourceShardAdd'
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SourceShardDelete", shard3Ks, "1")
require.Nil(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SourceShardAdd", "--key_range=80-",
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SourceShardAdd", "--", "--key_range=80-",
shard3Ks, "1", shard1Ks)
require.Nil(t, err)
@ -811,8 +811,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// which should cause the Migrate to be canceled and the source
// primary to be serving again.
// This is the legacy resharding migration command
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes",
"-filtered_replication_wait_time", "0s", shard1Ks, "primary")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "--",
"--filtered_replication_wait_time", "0s", shard1Ks, "primary")
require.Error(t, err)
expectedPartitions = map[topodata.TabletType][]string{}
@ -824,8 +824,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
sharding.CheckTabletQueryService(t, *shard1Primary, "SERVING", false, *clusterInstance)
// sabotage primary migration and make it fail in an unfinished state.
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl",
"-denied_tables=t",
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--",
"--denied_tables=t",
shard3Ks, "primary")
require.Nil(t, err)
@ -850,11 +850,11 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// remove sabotage, but make it fail early. This should not result in the source primary serving,
// because this failure is past the point of no return.
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "-denied_tables=t",
"-remove", shard3Ks, "primary")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--denied_tables=t",
"--remove", shard3Ks, "primary")
require.Nil(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes",
"-filtered_replication_wait_time", "0s", shard1Ks, "primary")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "--",
"--filtered_replication_wait_time", "0s", shard1Ks, "primary")
require.Error(t, err)
sharding.CheckTabletQueryService(t, *shard1Primary, "NOT_SERVING", true, *clusterInstance)
@ -893,7 +893,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
3, false, "resharding2", fixedParentID, keyspaceName, shardingKeyType, nil)
// repeat the migration with reverse_replication
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "-reverse_replication=true",
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedTypes", "--", "--reverse_replication=true",
shard1Ks, "primary")
require.Nil(t, err)
// look for the rows in the original primary after a short wait
@ -905,8 +905,8 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
// retry the migration to ensure it now fails
err = clusterInstance.VtctlclientProcess.ExecuteCommand(
"MigrateServedTypes",
"-reverse_replication=true",
"MigrateServedTypes", "--",
"--reverse_replication=true",
shard1Ks, "primary")
require.Error(t, err)
@ -932,7 +932,7 @@ func TestResharding(t *testing.T, useVarbinaryShardingKeyType bool) {
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", tablet.Alias)
require.Nil(t, err)
}
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", "-allow_primary", shard1Primary.Alias)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteTablet", "--", "--allow_primary", shard1Primary.Alias)
require.Nil(t, err)
// rebuild the serving graph, all mentions of the old shards should be gone


@ -202,7 +202,7 @@ func TestVerticalSplit(t *testing.T) {
// create the schema on the source keyspace, add some values
insertInitialValues(t, conn, sourcePrimaryTablet, destinationPrimaryTablet)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--tables", "/moving/,view1", sourceRdOnlyTablet1.Alias, "destination_keyspace/0")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("CopySchemaShard", "--", "--tables", "/moving/,view1", sourceRdOnlyTablet1.Alias, "destination_keyspace/0")
require.NoError(t, err, "CopySchemaShard failed")
// starting vtworker
@ -219,7 +219,7 @@ func TestVerticalSplit(t *testing.T) {
"--cell", cellj,
"--command_display_interval", "10ms",
"--use_v3_resharding_mode=true",
"VerticalSplitClone",
"VerticalSplitClone", "--",
"--tables", "/moving/,view1",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
@ -239,7 +239,7 @@ func TestVerticalSplit(t *testing.T) {
"--cell", cellj,
"--command_display_interval", "10ms",
"--use_v3_resharding_mode=true",
"VerticalSplitClone",
"VerticalSplitClone", "--",
"--tables", "/moving/,view1",
"--chunk_count", "10",
"--min_rows_per_chunk", "1",
@ -288,7 +288,7 @@ func TestVerticalSplit(t *testing.T) {
clusterInstance.GetAndReservePort(),
"--use_v3_resharding_mode=true",
"--cell", "test_nj",
"VerticalSplitDiff",
"VerticalSplitDiff", "--",
"--min_healthy_rdonly_tablets", "1",
"destination_keyspace/0")
require.NoError(t, err)
@ -322,7 +322,7 @@ func TestVerticalSplit(t *testing.T) {
validateKeyspaceJSON(t, keyspaceJSON, []string{"test_ca", "test_nj"})
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetKeyspaceServedFrom", "-source=source_keyspace", "-remove", "-cells=test_nj,test_ca", "destination_keyspace", "rdonly")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetKeyspaceServedFrom", "--", "--source=source_keyspace", "--remove", "--cells=test_nj,test_ca", "destination_keyspace", "rdonly")
require.NoError(t, err)
// again validating keyspaceJSON
@ -331,7 +331,7 @@ func TestVerticalSplit(t *testing.T) {
validateKeyspaceJSON(t, keyspaceJSON, nil)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetKeyspaceServedFrom", "-source=source_keyspace", "destination_keyspace", "rdonly")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetKeyspaceServedFrom", "--", "--source=source_keyspace", "destination_keyspace", "rdonly")
require.NoError(t, err)
keyspaceJSON, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("GetKeyspace", "destination_keyspace")
@ -366,7 +366,7 @@ func TestVerticalSplit(t *testing.T) {
checkClientConnRedirectionExecuteKeyrange(ctx, t, gconn, destinationKeyspace, []topodata.TabletType{topodata.TabletType_PRIMARY}, []string{"moving1", "moving2"})
// move replica back and forth
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedFrom", "-reverse", "destination_keyspace/0", "replica")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("MigrateServedFrom", "--", "--reverse", "destination_keyspace/0", "replica")
require.NoError(t, err)
checkSrvKeyspaceServedFrom(t, cellj, destinationKeyspace, "ServedFrom(primary): source_keyspace\nServedFrom(replica): source_keyspace\n", *clusterInstance)
checkDeniedTables(t, sourcePrimaryTablet, sourceKeyspace, nil)
@ -404,11 +404,11 @@ func TestVerticalSplit(t *testing.T) {
// now remove the tables on the source shard. The denied tables
// in the source shard won't match any table, make sure that works.
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ApplySchema", "-sql=drop view view1", "source_keyspace")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ApplySchema", "--", "--sql=drop view view1", "source_keyspace")
require.NoError(t, err)
for _, table := range []string{"moving1", "moving2"} {
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ApplySchema", "--sql=drop table "+table, "source_keyspace")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ApplySchema", "--", "--sql=drop table "+table, "source_keyspace")
require.NoError(t, err)
}
for _, tablet := range []cluster.Vttablet{sourcePrimaryTablet, sourceReplicaTablet, sourceRdOnlyTablet1, sourceRdOnlyTablet2} {
@ -425,21 +425,21 @@ func TestVerticalSplit(t *testing.T) {
func verifyVtctlSetShardTabletControl(t *testing.T) {
// clear the rdonly entry:
err := clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--remove", "source_keyspace/0", "rdonly")
err := clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--remove", "source_keyspace/0", "rdonly")
require.NoError(t, err)
assertTabletControls(t, clusterInstance, []topodata.TabletType{topodata.TabletType_PRIMARY, topodata.TabletType_REPLICA})
// re-add rdonly:
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--denied_tables=/moving/,view1", "source_keyspace/0", "rdonly")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--denied_tables=/moving/,view1", "source_keyspace/0", "rdonly")
require.NoError(t, err)
assertTabletControls(t, clusterInstance, []topodata.TabletType{topodata.TabletType_PRIMARY, topodata.TabletType_REPLICA, topodata.TabletType_RDONLY})
//and then clear all entries:
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--remove", "source_keyspace/0", "rdonly")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--remove", "source_keyspace/0", "rdonly")
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--remove", "source_keyspace/0", "replica")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--remove", "source_keyspace/0", "replica")
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--remove", "source_keyspace/0", "primary")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("SetShardTabletControl", "--", "--remove", "source_keyspace/0", "primary")
require.NoError(t, err)
shardJSON, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("GetShard", "source_keyspace/0")
@ -564,7 +564,7 @@ func checkDeniedTables(t *testing.T, tablet cluster.Vttablet, keyspace string, e
for _, table := range []string{"moving1", "moving2"} {
if expected != nil && strings.Contains(strings.Join(expected, " "), "moving") {
// table is denied, should get error
err := clusterInstance.VtctlclientProcess.ExecuteCommand("VtTabletExecute", "-json", tablet.Alias, fmt.Sprintf("select count(1) from %s", table))
err := clusterInstance.VtctlclientProcess.ExecuteCommand("VtTabletExecute", "--", "--json", tablet.Alias, fmt.Sprintf("select count(1) from %s", table))
require.Error(t, err, "disallowed due to rule: enforce denied tables")
} else {
// table is not part of the denylist, should just work
@ -642,7 +642,7 @@ func initializeCluster() (int, error) {
return 1, err
}
} else {
if err := clusterInstance.VtctlclientProcess.ExecuteCommand("CreateKeyspace", "--served_from", "primary:source_keyspace,replica:source_keyspace,rdonly:source_keyspace", "destination_keyspace"); err != nil {
if err := clusterInstance.VtctlclientProcess.ExecuteCommand("CreateKeyspace", "--", "--served_from", "primary:source_keyspace,replica:source_keyspace,rdonly:source_keyspace", "destination_keyspace"); err != nil {
return 1, err
}
}


@ -216,7 +216,7 @@ func (bt *BufferingTest) createCluster() (*cluster.LocalProcessCluster, int) {
clusterInstance := cluster.NewCluster(cell, hostname)
// Start topo server
clusterInstance.VtctldExtraArgs = []string{"-remote_operation_timeout", "30s", "-topo_etcd_lease_ttl", "40"}
clusterInstance.VtctldExtraArgs = []string{"--remote_operation_timeout", "30s", "--topo_etcd_lease_ttl", "40"}
if err := clusterInstance.StartTopo(); err != nil {
return nil, 1
}
@ -227,22 +227,22 @@ func (bt *BufferingTest) createCluster() (*cluster.LocalProcessCluster, int) {
SchemaSQL: sqlSchema,
VSchema: bt.VSchema,
}
clusterInstance.VtTabletExtraArgs = []string{"-health_check_interval", "1s",
"-queryserver-config-transaction-timeout", "20",
clusterInstance.VtTabletExtraArgs = []string{"--health_check_interval", "1s",
"--queryserver-config-transaction-timeout", "20",
}
if err := clusterInstance.StartUnshardedKeyspace(*keyspace, 1, false); err != nil {
return nil, 1
}
clusterInstance.VtGateExtraArgs = []string{
"-enable_buffer",
"--enable_buffer",
// Long timeout in case failover is slow.
"-buffer_window", "10m",
"-buffer_max_failover_duration", "10m",
"-buffer_min_time_between_failovers", "20m",
"-gateway_implementation", "tabletgateway",
"-buffer_implementation", "keyspace_events",
"-tablet_refresh_interval", "1s",
"--buffer_window", "10m",
"--buffer_max_failover_duration", "10m",
"--buffer_min_time_between_failovers", "20m",
"--gateway_implementation", "tabletgateway",
"--buffer_implementation", "keyspace_events",
"--tablet_refresh_interval", "1s",
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, bt.VtGateExtraArgs...)


@ -92,9 +92,9 @@ func failoverPlannedReparenting(t *testing.T, clusterInstance *cluster.LocalProc
reads.ExpectQueries(10)
writes.ExpectQueries(10)
err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "-keyspace_shard",
err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--", "--keyspace_shard",
fmt.Sprintf("%s/%s", keyspaceUnshardedName, "0"),
"-new_primary", clusterInstance.Keyspaces[0].Shards[0].Vttablets[1].Alias)
"--new_primary", clusterInstance.Keyspaces[0].Shards[0].Vttablets[1].Alias)
require.NoError(t, err)
}


@ -70,7 +70,7 @@ func reshard02(t *testing.T, clusterInstance *cluster.LocalProcessCluster, keysp
workflowName := "buf2buf"
workflow := fmt.Sprintf("%s.%s", keyspaceName, "buf2buf")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "-source_shards", "0", "-target_shards", "-80,80-", "Create", workflow)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "--", "--source_shards", "0", "--target_shards", "-80,80-", "Create", workflow)
require.NoError(t, err)
// Execute the resharding operation
@ -78,10 +78,10 @@ func reshard02(t *testing.T, clusterInstance *cluster.LocalProcessCluster, keysp
writes.ExpectQueries(25)
waitForLowLag(t, clusterInstance, keyspaceName, workflowName)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "-tablet_types=rdonly,replica", "SwitchTraffic", workflow)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "--", "--tablet_types=rdonly,replica", "SwitchTraffic", workflow)
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "-tablet_types=primary", "SwitchTraffic", workflow)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "--", "--tablet_types=primary", "SwitchTraffic", workflow)
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Reshard", "Complete", workflow)


@ -54,16 +54,16 @@ var (
) Engine=InnoDB
`
commonTabletArg = []string{
"-vreplication_healthcheck_topology_refresh", "1s",
"-vreplication_healthcheck_retry_delay", "1s",
"-vreplication_retry_delay", "1s",
"-degraded_threshold", "5s",
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-serving_state_grace_period", "1s",
"-binlog_player_protocol", "grpc",
"-enable-autocommit",
"--vreplication_healthcheck_topology_refresh", "1s",
"--vreplication_healthcheck_retry_delay", "1s",
"--vreplication_retry_delay", "1s",
"--degraded_threshold", "5s",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--serving_state_grace_period", "1s",
"--binlog_player_protocol", "grpc",
"--enable-autocommit",
}
vSchema = `
{
@ -246,12 +246,12 @@ func TestAlias(t *testing.T) {
sharding.CheckSrvKeyspace(t, cell2, keyspaceName, "", 0, expectedPartitions, *localCluster)
// Adds alias so vtgate can route to replica/rdonly tablets that are not in the same cell, but same alias
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias",
"-cells", allCells,
err = localCluster.VtctlclientProcess.ExecuteCommand("AddCellsAlias", "--",
"--cells", allCells,
"region_east_coast")
require.Nil(t, err)
err = localCluster.VtctlclientProcess.ExecuteCommand("UpdateCellsAlias",
"-cells", allCells,
err = localCluster.VtctlclientProcess.ExecuteCommand("UpdateCellsAlias", "--",
"--cells", allCells,
"region_east_coast")
require.Nil(t, err)
@ -303,9 +303,9 @@ func waitTillAllTabletsAreHealthyInVtgate(t *testing.T, vtgateInstance cluster.V
}
func testQueriesOnTabletType(t *testing.T, tabletType string, vtgateGrpcPort int, shouldFail bool) {
output, err := localCluster.VtctlProcess.ExecuteCommandWithOutput("VtGateExecute", "-json",
"-server", fmt.Sprintf("%s:%d", localCluster.Hostname, vtgateGrpcPort),
"-target", "@"+tabletType,
output, err := localCluster.VtctlProcess.ExecuteCommandWithOutput("VtGateExecute", "--", "--json",
"--server", fmt.Sprintf("%s:%d", localCluster.Hostname, vtgateGrpcPort),
"--target", "@"+tabletType,
fmt.Sprintf(`select * from %s`, tableName))
if shouldFail {
require.Error(t, err)


@ -64,7 +64,7 @@ func TestMain(m *testing.M) {
exitCode := func() int {
clusterInstance = cluster.NewCluster(cell, "localhost")
clusterInstance.VtTabletExtraArgs = []string{"-health_check_interval", "1s"}
clusterInstance.VtTabletExtraArgs = []string{"--health_check_interval", "1s"}
defer clusterInstance.Teardown()
// Start topo server


@ -61,9 +61,9 @@ func TestTabletCommands(t *testing.T) {
// test exclude_field_names to vttablet works as expected
sql := "select id, value from t1"
args := []string{
"VtTabletExecute",
"-options", "included_fields:TYPE_ONLY",
"-json",
"VtTabletExecute", "--",
"--options", "included_fields:TYPE_ONLY",
"--json",
primaryTablet.Alias,
sql,
}
@ -73,7 +73,7 @@ func TestTabletCommands(t *testing.T) {
// make sure direct dba queries work
sql = "select * from t1"
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("ExecuteFetchAsDba", "-json", primaryTablet.Alias, sql)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("ExecuteFetchAsDba", "--", "--json", primaryTablet.Alias, sql)
require.Nil(t, err)
assertExecuteFetch(t, result)
@ -87,7 +87,7 @@ func TestTabletCommands(t *testing.T) {
err = clusterInstance.VtctlclientProcess.ExecuteCommand("RefreshStateByShard", keyspaceShard)
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("RefreshStateByShard", "--cells="+cell, keyspaceShard)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("RefreshStateByShard", "--", "--cells="+cell, keyspaceShard)
require.Nil(t, err, "error should be Nil")
// Check basic actions.
@ -107,18 +107,18 @@ func TestTabletCommands(t *testing.T) {
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Validate")
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Validate", "-ping-tablets=true")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("Validate", "--", "--ping-tablets=true")
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateKeyspace", keyspaceName)
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateKeyspace", "-ping-tablets=true", keyspaceName)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateKeyspace", "--", "--ping-tablets=true", keyspaceName)
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateShard", "-ping-tablets=false", keyspaceShard)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateShard", "--", "--ping-tablets=false", keyspaceShard)
require.Nil(t, err, "error should be Nil")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateShard", "-ping-tablets=true", keyspaceShard)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("ValidateShard", "--", "--ping-tablets=true", keyspaceShard)
require.Nil(t, err, "error should be Nil")
}
@ -160,7 +160,7 @@ func TestActionAndTimeout(t *testing.T) {
time.Sleep(1 * time.Second)
// try a frontend RefreshState that should timeout as the tablet is busy running the other one
err = clusterInstance.VtctlclientProcess.ExecuteCommand("RefreshState", primaryTablet.Alias, "-wait-time", "2s")
err = clusterInstance.VtctlclientProcess.ExecuteCommand("RefreshState", "--", primaryTablet.Alias, "--wait-time", "2s")
assert.Error(t, err, "timeout as tablet is in Sleep")
}
@ -168,24 +168,24 @@ func TestHook(t *testing.T) {
// test a regular program works
defer cluster.PanicHandler(t)
runHookAndAssert(t, []string{
"ExecuteHook", primaryTablet.Alias, "test.sh", "--flag1", "--param1=hello"}, "0", false, "")
"ExecuteHook", "--", primaryTablet.Alias, "test.sh", "--flag1", "--param1=hello"}, "0", false, "")
// test stderr output
runHookAndAssert(t, []string{
"ExecuteHook", primaryTablet.Alias, "test.sh", "--to-stderr"}, "0", false, "ERR: --to-stderr\n")
"ExecuteHook", "--", primaryTablet.Alias, "test.sh", "--to-stderr"}, "0", false, "ERR: --to-stderr\n")
// test commands that fail
runHookAndAssert(t, []string{
"ExecuteHook", primaryTablet.Alias, "test.sh", "--exit-error"}, "1", false, "ERROR: exit status 1\n")
"ExecuteHook", "--", primaryTablet.Alias, "test.sh", "--exit-error"}, "1", false, "ERROR: exit status 1\n")
// test hook that is not present
runHookAndAssert(t, []string{
"ExecuteHook", primaryTablet.Alias, "not_here.sh", "--exit-error"}, "-1", false, "missing hook")
"ExecuteHook", "--", primaryTablet.Alias, "not_here.sh", "--exit-error"}, "-1", false, "missing hook")
// test hook with invalid name
runHookAndAssert(t, []string{
"ExecuteHook", primaryTablet.Alias, "/bin/ls"}, "-1", true, "hook name cannot have")
"ExecuteHook", "--", primaryTablet.Alias, "/bin/ls"}, "-1", true, "hook name cannot have")
}
func runHookAndAssert(t *testing.T, params []string, expectedStatus string, expectedError bool, expectedStderr string) {
@@ -235,8 +235,8 @@ func TestShardReplicationFix(t *testing.T) {
func TestGetSchema(t *testing.T) {
defer cluster.PanicHandler(t)
res, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("GetSchema",
"-include-views", "-tables", "t1,v1",
res, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("GetSchema", "--",
"--include-views", "--tables", "t1,v1",
fmt.Sprintf("%s-%d", clusterInstance.Cell, primaryTablet.TabletUID))
require.Nil(t, err)


@@ -56,12 +56,12 @@ func TestTopoCustomRule(t *testing.T) {
require.NoError(t, err)
// Copy config file into topo.
err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "-to_topo", topoCustomRuleFile, topoCustomRulePath)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "--", "--to_topo", topoCustomRuleFile, topoCustomRulePath)
require.Nil(t, err, "error should be Nil")
// Set extra tablet args for topo custom rule
clusterInstance.VtTabletExtraArgs = []string{
"-topocustomrule_path", topoCustomRulePath,
"--topocustomrule_path", topoCustomRulePath,
}
// Start a new Tablet
@@ -98,7 +98,7 @@ func TestTopoCustomRule(t *testing.T) {
err = os.WriteFile(topoCustomRuleFile, data, 0777)
require.NoError(t, err)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "-to_topo", topoCustomRuleFile, topoCustomRulePath)
err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "--", "--to_topo", topoCustomRuleFile, topoCustomRulePath)
require.Nil(t, err, "error should be Nil")
// And wait until the query fails with the right error.
@@ -121,6 +121,6 @@ func TestTopoCustomRule(t *testing.T) {
}
func vtctlExec(sql string, tabletAlias string) (string, error) {
args := []string{"VtTabletExecute", "-json", tabletAlias, sql}
args := []string{"VtTabletExecute", "--", "--json", tabletAlias, sql}
return clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(args...)
}


@@ -94,16 +94,16 @@ func TestMain(m *testing.M) {
// List of users authorized to execute vschema ddl operations
clusterInstance.VtGateExtraArgs = []string{
"-vschema_ddl_authorized_users=%",
"-discovery_low_replication_lag", tabletUnhealthyThreshold.String(),
"--vschema_ddl_authorized_users=%",
"--discovery_low_replication_lag", tabletUnhealthyThreshold.String(),
}
// Set extra tablet args for lock timeout
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-health_check_interval", tabletHealthcheckRefreshInterval.String(),
"-unhealthy_threshold", tabletUnhealthyThreshold.String(),
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--health_check_interval", tabletHealthcheckRefreshInterval.String(),
"--unhealthy_threshold", tabletUnhealthyThreshold.String(),
}
// We do not need semiSync for this test case.
clusterInstance.EnableSemiSync = false


@@ -87,9 +87,9 @@ func TestMain(m *testing.M) {
// Set extra tablet args for lock timeout
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
}
// We do not need semiSync for this test case.
clusterInstance.EnableSemiSync = false
@@ -174,7 +174,7 @@ func TestPrimaryRestartSetsTERTimestamp(t *testing.T) {
// Capture the current TER.
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"VtTabletStreamHealth", "-count", "1", replicaTablet.Alias)
"VtTabletStreamHealth", "--", "--count", "1", replicaTablet.Alias)
require.Nil(t, err)
var streamHealthRes1 querypb.StreamHealthResponse
@@ -201,7 +201,7 @@ func TestPrimaryRestartSetsTERTimestamp(t *testing.T) {
// Make sure that the TER did not change
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput(
"VtTabletStreamHealth", "-count", "1", replicaTablet.Alias)
"VtTabletStreamHealth", "--", "--count", "1", replicaTablet.Alias)
require.Nil(t, err)
var streamHealthRes2 querypb.StreamHealthResponse


@@ -73,7 +73,7 @@ func TestQPS(t *testing.T) {
var qpsIncreased bool
timeout := time.Now().Add(12 * time.Second)
for time.Now().Before(timeout) {
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", primaryTablet.Alias)
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", primaryTablet.Alias)
require.Nil(t, err)
var streamHealthResponse querypb.StreamHealthResponse


@@ -85,14 +85,14 @@ func TestMain(m *testing.M) {
// Set extra tablet args for lock timeout
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"-gc_check_interval", "5s",
"-gc_purge_check_interval", "5s",
"-table_gc_lifecycle", "hold,purge,evac,drop",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
"--gc_check_interval", "5s",
"--gc_purge_check_interval", "5s",
"--table_gc_lifecycle", "hold,purge,evac,drop",
}
// We do not need semiSync for this test case.
clusterInstance.EnableSemiSync = false


@@ -65,11 +65,11 @@ func TestTabletReshuffle(t *testing.T) {
// We have to disable active reparenting to prevent the tablet from trying to fix replication.
// We also have to disable replication reporting because we're pointed at the primary.
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-mycnf_server_id", fmt.Sprintf("%d", rTablet.TabletUID),
"-db_socket", fmt.Sprintf("%s/mysql.sock", primaryTablet.VttabletProcess.Directory),
"-disable_active_reparents",
"-enable_replication_reporter=false",
"--lock_tables_timeout", "5s",
"--mycnf_server_id", fmt.Sprintf("%d", rTablet.TabletUID),
"--db_socket", fmt.Sprintf("%s/mysql.sock", primaryTablet.VttabletProcess.Directory),
"--disable_active_reparents",
"--enable_replication_reporter=false",
}
defer func() { clusterInstance.VtTabletExtraArgs = []string{} }()
@@ -80,9 +80,9 @@ func TestTabletReshuffle(t *testing.T) {
sql := "select value from t1"
args := []string{
"VtTabletExecute",
"-options", "included_fields:TYPE_ONLY",
"-json",
"VtTabletExecute", "--",
"--options", "included_fields:TYPE_ONLY",
"--json",
rTablet.Alias,
sql,
}
@@ -135,7 +135,7 @@ func TestHealthCheck(t *testing.T) {
require.NoError(t, err)
// make sure the health stream is updated
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", rTablet.Alias)
result, err := clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", rTablet.Alias)
require.NoError(t, err)
verifyStreamHealth(t, result, true)
@@ -147,7 +147,7 @@ func TestHealthCheck(t *testing.T) {
checkHealth(t, rTablet.HTTPPort, false)
// now test VtTabletStreamHealth returns the right thing
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "2", rTablet.Alias)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "2", rTablet.Alias)
require.NoError(t, err)
scanner := bufio.NewScanner(strings.NewReader(result))
for scanner.Scan() {
@@ -163,7 +163,7 @@ func TestHealthCheck(t *testing.T) {
time.Sleep(tabletUnhealthyThreshold + tabletHealthcheckRefreshInterval)
// now the replica's VtTabletStreamHealth should show it as unhealthy
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", rTablet.Alias)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", rTablet.Alias)
require.NoError(t, err)
scanner = bufio.NewScanner(strings.NewReader(result))
for scanner.Scan() {
@@ -188,7 +188,7 @@ func TestHealthCheck(t *testing.T) {
time.Sleep(tabletHealthcheckRefreshInterval)
// now the replica's VtTabletStreamHealth should show it as healthy again
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "-count", "1", rTablet.Alias)
result, err = clusterInstance.VtctlclientProcess.ExecuteCommandWithOutput("VtTabletStreamHealth", "--", "--count", "1", rTablet.Alias)
require.NoError(t, err)
scanner = bufio.NewScanner(strings.NewReader(result))
for scanner.Scan() {


@@ -38,7 +38,7 @@ func TestFallbackSecurityPolicy(t *testing.T) {
require.NoError(t, err)
// Requesting an unregistered security_policy should fallback to deny-all.
clusterInstance.VtTabletExtraArgs = []string{"-security_policy", "bogus"}
clusterInstance.VtTabletExtraArgs = []string{"--security_policy", "bogus"}
err = clusterInstance.StartVttablet(mTablet, "SERVING", false, cell, keyspaceName, hostname, shardName)
require.NoError(t, err)
@@ -93,7 +93,7 @@ func TestDenyAllSecurityPolicy(t *testing.T) {
require.NoError(t, err)
// Requesting a deny-all security_policy.
clusterInstance.VtTabletExtraArgs = []string{"-security_policy", "deny-all"}
clusterInstance.VtTabletExtraArgs = []string{"--security_policy", "deny-all"}
err = clusterInstance.StartVttablet(mTablet, "SERVING", false, cell, keyspaceName, hostname, shardName)
require.NoError(t, err)
@@ -125,7 +125,7 @@ func TestReadOnlySecurityPolicy(t *testing.T) {
require.NoError(t, err)
// Requesting a read-only security_policy.
clusterInstance.VtTabletExtraArgs = []string{"-security_policy", "read-only"}
clusterInstance.VtTabletExtraArgs = []string{"--security_policy", "read-only"}
err = clusterInstance.StartVttablet(mTablet, "SERVING", false, cell, keyspaceName, hostname, shardName)
require.NoError(t, err)


@@ -69,8 +69,8 @@ func TestLocalMetadata(t *testing.T) {
rTablet := clusterInstance.NewVttabletInstance("replica", 0, "")
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-init_populate_metadata",
"--lock_tables_timeout", "5s",
"--init_populate_metadata",
}
rTablet.MysqlctlProcess = *cluster.MysqlCtlProcessInstance(rTablet.TabletUID, rTablet.MySQLPort, clusterInstance.TmpDirectory)
err := rTablet.MysqlctlProcess.Start()
@@ -90,7 +90,7 @@ func TestLocalMetadata(t *testing.T) {
// start with -init_populate_metadata false (default)
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"--lock_tables_timeout", "5s",
}
rTablet2.MysqlctlProcess = *cluster.MysqlCtlProcessInstance(rTablet2.TabletUID, rTablet2.MySQLPort, clusterInstance.TmpDirectory)
err = rTablet2.MysqlctlProcess.Start()


@@ -93,13 +93,13 @@ func TestMain(m *testing.M) {
// Set extra tablet args for lock timeout
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-enable-lag-throttler",
"-throttle_threshold", "1s",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--enable-lag-throttler",
"--throttle_threshold", "1s",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
}
// We do not need semiSync for this test case.
clusterInstance.EnableSemiSync = false


@@ -96,15 +96,15 @@ func TestMain(m *testing.M) {
// Set extra tablet args for lock timeout
clusterInstance.VtTabletExtraArgs = []string{
"-lock_tables_timeout", "5s",
"-watch_replication_stream",
"-enable_replication_reporter",
"-enable-lag-throttler",
"-throttle_metrics_query", "show global status like 'threads_running'",
"-throttle_metrics_threshold", fmt.Sprintf("%d", testThreshold),
"-throttle_check_as_check_self",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
"--lock_tables_timeout", "5s",
"--watch_replication_stream",
"--enable_replication_reporter",
"--enable-lag-throttler",
"--throttle_metrics_query", "show global status like 'threads_running'",
"--throttle_metrics_threshold", fmt.Sprintf("%d", testThreshold),
"--throttle_check_as_check_self",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
}
// We do not need semiSync for this test case.
clusterInstance.EnableSemiSync = false


@@ -30,8 +30,8 @@ var (
vtdataroot string
mainClusterConfig *ClusterConfig
externalClusterConfig *ClusterConfig
extraVTGateArgs = []string{"-tablet_refresh_interval", "10ms"}
extraVtctldArgs = []string{"-remote_operation_timeout", "600s", "-topo_etcd_lease_ttl", "120"}
extraVTGateArgs = []string{"--tablet_refresh_interval", "10ms"}
extraVtctldArgs = []string{"--remote_operation_timeout", "600s", "--topo_etcd_lease_ttl", "120"}
)
// ClusterConfig defines the parameters like ports, tmpDir, tablet types which uniquely define a vitess cluster
@@ -401,14 +401,14 @@ func (vc *VitessCluster) AddTablet(t testing.TB, cell *Cell, keyspace *Keyspace,
tablet := &Tablet{}
options := []string{
"-queryserver-config-schema-reload-time", "5",
"-enable-lag-throttler",
"-heartbeat_enable",
"-heartbeat_interval", "250ms",
} //FIXME: for multi-cell initial schema doesn't seem to load without "-queryserver-config-schema-reload-time"
"--queryserver-config-schema-reload-time", "5",
"--enable-lag-throttler",
"--heartbeat_enable",
"--heartbeat_interval", "250ms",
} //FIXME: for multi-cell initial schema doesn't seem to load without "--queryserver-config-schema-reload-time"
if mainClusterConfig.vreplicationCompressGTID {
options = append(options, "-vreplication_store_compressed_gtid=true")
options = append(options, "--vreplication_store_compressed_gtid=true")
}
vttablet := cluster.VttabletProcessInstance(
@@ -531,7 +531,7 @@ func (vc *VitessCluster) DeleteShard(t testing.TB, cellName string, ksName strin
}
log.Infof("Deleting Shard %s", shardName)
//TODO how can we avoid the use of even_if_serving?
if output, err := vc.VtctlClient.ExecuteCommandWithOutput("DeleteShard", "-recursive", "-even_if_serving", ksName+"/"+shardName); err != nil {
if output, err := vc.VtctlClient.ExecuteCommandWithOutput("DeleteShard", "--", "--recursive", "--even_if_serving", ksName+"/"+shardName); err != nil {
t.Fatalf("DeleteShard command failed with error %+v and output %s\n", err, output)
}


@@ -243,7 +243,7 @@ func checkIfTableExists(t *testing.T, vc *VitessCluster, tabletAlias string, tab
var err error
found := false
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("GetSchema", "-tables", table, tabletAlias); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("GetSchema", "--", "--tables", table, tabletAlias); err != nil {
return false, err
}
jsonparser.ArrayEach([]byte(output), func(value []byte, dataType jsonparser.ValueType, offset int, err error) {
@@ -288,7 +288,7 @@ func printShardPositions(vc *VitessCluster, ksShards []string) {
}
func clearRoutingRules(t *testing.T, vc *VitessCluster) error {
if _, err := vc.VtctlClient.ExecuteCommandWithOutput("ApplyRoutingRules", "-rules={}"); err != nil {
if _, err := vc.VtctlClient.ExecuteCommandWithOutput("ApplyRoutingRules", "--", "--rules={}"); err != nil {
return err
}
return nil


@@ -87,16 +87,16 @@ func TestMigrate(t *testing.T) {
ksWorkflow := "product.e1"
t.Run("mount external cluster", func(t *testing.T) {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-topo_type=etcd2",
fmt.Sprintf("-topo_server=localhost:%d", extVc.ClusterConfig.topoPort), "-topo_root=/vitess/global", "ext1"); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--topo_type=etcd2",
fmt.Sprintf("--topo_server=localhost:%d", extVc.ClusterConfig.topoPort), "--topo_root=/vitess/global", "ext1"); err != nil {
t.Fatalf("Mount command failed with %+v : %s\n", err, output)
}
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-list"); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--list"); err != nil {
t.Fatalf("Mount command failed with %+v : %s\n", err, output)
}
expected = "ext1\n"
require.Equal(t, expected, output)
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-show", "ext1"); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--show", "ext1"); err != nil {
t.Fatalf("Mount command failed with %+v : %s\n", err, output)
}
expected = `{"ClusterName":"ext1","topo_config":{"topo_type":"etcd2","server":"localhost:12379","root":"/vitess/global"}}` + "\n"
@@ -104,7 +104,7 @@ func TestMigrate(t *testing.T) {
})
t.Run("migrate from external cluster", func(t *testing.T) {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Migrate", "-all", "-cells=extcell1",
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Migrate", "--", "--all", "--cells=extcell1",
"-source=ext1.rating", "create", ksWorkflow); err != nil {
t.Fatalf("Migrate command failed with %+v : %s\n", err, output)
}
@@ -128,7 +128,7 @@ func TestMigrate(t *testing.T) {
t.Run("cancel migrate workflow", func(t *testing.T) {
execVtgateQuery(t, vtgateConn, "product", "drop table review,rating")
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Migrate", "-all", "-auto_start=false", "-cells=extcell1",
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Migrate", "--", "--all", "--auto_start=false", "--cells=extcell1",
"-source=ext1.rating", "create", ksWorkflow); err != nil {
t.Fatalf("Migrate command failed with %+v : %s\n", err, output)
}
@@ -148,17 +148,17 @@ func TestMigrate(t *testing.T) {
require.False(t, found)
})
t.Run("unmount external cluster", func(t *testing.T) {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-unmount", "ext1"); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--unmount", "ext1"); err != nil {
t.Fatalf("Mount command failed with %+v : %s\n", err, output)
}
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-list"); err != nil {
if output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--list"); err != nil {
t.Fatalf("Mount command failed with %+v : %s\n", err, output)
}
expected = "\n"
require.Equal(t, expected, output)
output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "-type=vitess", "-show", "ext1")
output, err = vc.VtctlClient.ExecuteCommandWithOutput("Mount", "--", "--type=vitess", "--show", "ext1")
require.Errorf(t, err, "there is no vitess cluster named ext1")
})
}


@@ -98,23 +98,26 @@ func tstWorkflowExec(t *testing.T, cells, workflow, sourceKs, targetKs, tables,
} else {
args = append(args, "Reshard")
}
args = append(args, "--")
if BypassLagCheck {
args = append(args, "-max_replication_lag_allowed=2542087h")
args = append(args, "--max_replication_lag_allowed=2542087h")
}
switch action {
case workflowActionCreate:
if currentWorkflowType == wrangler.MoveTablesWorkflow {
args = append(args, "-source", sourceKs, "-tables", tables)
args = append(args, "--source", sourceKs, "--tables", tables)
} else {
args = append(args, "-source_shards", sourceShards, "-target_shards", targetShards)
args = append(args, "--source_shards", sourceShards, "--target_shards", targetShards)
}
}
if cells != "" {
args = append(args, "-cells", cells)
args = append(args, "--cells", cells)
}
if tabletTypes != "" {
args = append(args, "-tablet_types", tabletTypes)
args = append(args, "--tablet_types", tabletTypes)
}
ksWorkflow := fmt.Sprintf("%s.%s", targetKs, workflow)
args = append(args, action, ksWorkflow)
@@ -606,8 +609,8 @@ func TestSwitchReadsWritesInAnyOrder(t *testing.T) {
}
func switchReadsNew(t *testing.T, cells, ksWorkflow string, reverse bool) {
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "-cells="+cells,
"-tablet_types=rdonly,replica", fmt.Sprintf("-reverse=%t", reverse), ksWorkflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "--", "--cells="+cells,
"--tablet_types=rdonly,replica", fmt.Sprintf("-reverse=%t", reverse), ksWorkflow)
require.NoError(t, err, fmt.Sprintf("SwitchReads Error: %s: %s", err, output))
if output != "" {
fmt.Printf("SwitchReads output: %s\n", output)
@@ -730,7 +733,7 @@ func createAdditionalCustomerShards(t *testing.T, shards string) {
}
func tstApplySchemaOnlineDDL(t *testing.T, sql string, keyspace string) {
err := vc.VtctlClient.ExecuteCommand("ApplySchema", "-skip_preflight", "-ddl_strategy=online",
"-sql", sql, keyspace)
err := vc.VtctlClient.ExecuteCommand("ApplySchema", "--", "--skip_preflight", "--ddl_strategy=online",
"--sql", sql, keyspace)
require.NoError(t, err, fmt.Sprintf("ApplySchema Error: %s", err))
}


@@ -215,7 +215,7 @@ func TestCellAliasVreplicationWorkflow(t *testing.T) {
vc.AddKeyspace(t, []*Cell{cell1, cell2}, "product", "0", initialProductVSchema, initialProductSchema, defaultReplicas, defaultRdonly, 100, sourceKsOpts)
// Add cell alias containing only zone2
result, err := vc.VtctlClient.ExecuteCommandWithOutput("AddCellsAlias", "-cells", "zone2", "alias")
result, err := vc.VtctlClient.ExecuteCommandWithOutput("AddCellsAlias", "--", "--cells", "zone2", "alias")
require.NoError(t, err, "command failed with output: %v", result)
vtgate = cell1.Vtgates[0]
@@ -607,7 +607,7 @@ func reshard(t *testing.T, ksName string, tableName string, workflow string, sou
t.Fatal(err)
}
}
if err := vc.VtctlClient.ExecuteCommand("Reshard", "-v1", "-cells="+sourceCellOrAlias, "-tablet_types=replica,primary", ksWorkflow, sourceShards, targetShards); err != nil {
if err := vc.VtctlClient.ExecuteCommand("Reshard", "--", "--v1", "--cells="+sourceCellOrAlias, "--tablet_types=replica,primary", ksWorkflow, sourceShards, targetShards); err != nil {
t.Fatalf("Reshard command failed with %+v\n", err)
}
tablets := vc.getVttabletsInKeyspace(t, defaultCell, ksName, "primary")
@@ -709,7 +709,7 @@ func shardMerchant(t *testing.T) {
func vdiff(t *testing.T, workflow, cells string) {
t.Run("vdiff", func(t *testing.T) {
output, err := vc.VtctlClient.ExecuteCommandWithOutput("VDiff", "-tablet_types=primary", "-source_cell="+cells, "-format", "json", workflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput("VDiff", "--", "--tablet_types=primary", "--source_cell="+cells, "--format", "json", workflow)
log.Infof("vdiff err: %+v, output: %+v", err, output)
require.Nil(t, err)
require.NotNil(t, output)
@@ -986,18 +986,18 @@ func catchup(t *testing.T, vttablet *cluster.VttabletProcess, workflow, info str
}
func moveTables(t *testing.T, cell, workflow, sourceKs, targetKs, tables string) {
if err := vc.VtctlClient.ExecuteCommand("MoveTables", "-v1", "-cells="+cell, "-workflow="+workflow,
if err := vc.VtctlClient.ExecuteCommand("MoveTables", "--", "--v1", "--cells="+cell, "--workflow="+workflow,
"-tablet_types="+"primary,replica,rdonly", sourceKs, targetKs, tables); err != nil {
t.Fatalf("MoveTables command failed with %+v\n", err)
}
}
func applyVSchema(t *testing.T, vschema, keyspace string) {
err := vc.VtctlClient.ExecuteCommand("ApplyVSchema", "-vschema", vschema, keyspace)
err := vc.VtctlClient.ExecuteCommand("ApplyVSchema", "--", "--vschema", vschema, keyspace)
require.NoError(t, err)
}
func switchReadsDryRun(t *testing.T, cells, ksWorkflow string, dryRunResults []string) {
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "-cells="+cells, "-tablet_type=replica", "-dry_run", ksWorkflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "--", "--cells="+cells, "--tablet_type=replica", "--dry_run", ksWorkflow)
require.NoError(t, err, fmt.Sprintf("SwitchReads DryRun Error: %s: %s", err, output))
validateDryRunResults(t, output, dryRunResults)
}
@@ -1005,14 +1005,14 @@ func switchReadsDryRun(t *testing.T, cells, ksWorkflow string, dryRunResults []s
func switchReads(t *testing.T, cells, ksWorkflow string) {
var output string
var err error
output, err = vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "-cells="+cells, "-tablet_type=rdonly", ksWorkflow)
output, err = vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "--", "--cells="+cells, "--tablet_type=rdonly", ksWorkflow)
require.NoError(t, err, fmt.Sprintf("SwitchReads Error: %s: %s", err, output))
output, err = vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "-cells="+cells, "-tablet_type=replica", ksWorkflow)
output, err = vc.VtctlClient.ExecuteCommandWithOutput("SwitchReads", "--", "--cells="+cells, "--tablet_type=replica", ksWorkflow)
require.NoError(t, err, fmt.Sprintf("SwitchReads Error: %s: %s", err, output))
}
func switchWritesDryRun(t *testing.T, ksWorkflow string, dryRunResults []string) {
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchWrites", "-dry_run", ksWorkflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchWrites", "--", "--dry_run", ksWorkflow)
require.NoError(t, err, fmt.Sprintf("SwitchWrites DryRun Error: %s: %s", err, output))
validateDryRunResults(t, output, dryRunResults)
}
@@ -1049,8 +1049,8 @@ func printSwitchWritesExtraDebug(t *testing.T, ksWorkflow, msg string) {
func switchWrites(t *testing.T, ksWorkflow string, reverse bool) {
const SwitchWritesTimeout = "91s" // max: 3 tablet picker 30s waits + 1
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchWrites",
"-filtered_replication_wait_time="+SwitchWritesTimeout, fmt.Sprintf("-reverse=%t", reverse), ksWorkflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput("SwitchWrites", "--",
"--filtered_replication_wait_time="+SwitchWritesTimeout, fmt.Sprintf("--reverse=%t", reverse), ksWorkflow)
if output != "" {
fmt.Printf("Output of SwitchWrites for %s:\n++++++\n%s\n--------\n", ksWorkflow, output)
}
@@ -1060,9 +1060,9 @@ func switchWrites(t *testing.T, ksWorkflow string, reverse bool) {
}
func dropSourcesDryRun(t *testing.T, ksWorkflow string, renameTables bool, dryRunResults []string) {
args := []string{"DropSources", "-dry_run"}
args := []string{"DropSources", "--", "--dry_run"}
if renameTables {
args = append(args, "-rename_tables")
args = append(args, "--rename_tables")
}
args = append(args, ksWorkflow)
output, err := vc.VtctlClient.ExecuteCommandWithOutput(args...)
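The `dropSourcesDryRun` change above shows the argument-building pattern this PR applies everywhere: start the slice with the subcommand and a literal `--`, then append flags, then positionals. A generalized sketch — `buildArgs` and the workflow name are illustrative helpers, not part of Vitess:

```go
package main

import "fmt"

// buildArgs assembles a vtctlclient-style invocation: the subcommand,
// a literal "--" separator, any flags, and finally the positional
// arguments. The separator keeps dash-prefixed positionals from being
// consumed during flag parsing.
func buildArgs(subcmd string, flags []string, positional ...string) []string {
	args := []string{subcmd, "--"}
	args = append(args, flags...)
	return append(args, positional...)
}

func main() {
	args := buildArgs("DropSources",
		[]string{"--dry_run", "--rename_tables"},
		"customer.wf1")
	fmt.Println(args) // [DropSources -- --dry_run --rename_tables customer.wf1]
}
```

Conditional flags (like `-rename_tables` above) are simply appended to the flags slice before the positionals are added.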


@@ -143,7 +143,7 @@ func testVStreamWithFailover(t *testing.T, failover bool) {
case 1:
if failover {
insertMu.Lock()
output, err := vc.VtctlClient.ExecuteCommandWithOutput("PlannedReparentShard", "-keyspace_shard=product/0", "-new_primary=zone1-101")
output, err := vc.VtctlClient.ExecuteCommandWithOutput("PlannedReparentShard", "--", "--keyspace_shard=product/0", "--new_primary=zone1-101")
insertMu.Unlock()
log.Infof("output of first PRS is %s", output)
require.NoError(t, err)
@@ -151,7 +151,7 @@ func testVStreamWithFailover(t *testing.T, failover bool) {
case 2:
if failover {
insertMu.Lock()
output, err := vc.VtctlClient.ExecuteCommandWithOutput("PlannedReparentShard", "-keyspace_shard=product/0", "-new_primary=zone1-100")
output, err := vc.VtctlClient.ExecuteCommandWithOutput("PlannedReparentShard", "--", "--keyspace_shard=product/0", "--new_primary=zone1-100")
insertMu.Unlock()
log.Infof("output of second PRS is %s", output)
require.NoError(t, err)
@@ -401,7 +401,7 @@ func TestVStreamStopOnReshardFalse(t *testing.T) {
func TestVStreamWithKeyspacesToWatch(t *testing.T) {
extraVTGateArgs = append(extraVTGateArgs, []string{
"-keyspaces_to_watch", "product",
"--keyspaces_to_watch", "product",
}...)
testVStreamWithFailover(t, false)


@@ -203,13 +203,13 @@ func createCluster() (*cluster.LocalProcessCluster, int) {
}
clusterInstance.VtGateExtraArgs = []string{
"-enable_buffer",
"--enable_buffer",
// Long timeout in case failover is slow.
"-buffer_window", "10m",
"-buffer_max_failover_duration", "10m",
"-buffer_min_time_between_failovers", "20m",
"--buffer_window", "10m",
"--buffer_max_failover_duration", "10m",
"--buffer_min_time_between_failovers", "20m",
// Use legacy gateway. tabletgateway test is at go/test/endtoend/tabletgateway/buffer/buffer_test.go
"-gateway_implementation", "discoverygateway",
"--gateway_implementation", "discoverygateway",
}
// Start vtgate
@@ -270,9 +270,9 @@ func testBufferBase(t *testing.T, isExternalParent bool) {
externalReparenting(t, clusterInstance)
} else {
//reparent call
if err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "-keyspace_shard",
if err := clusterInstance.VtctlclientProcess.ExecuteCommand("PlannedReparentShard", "--", "--keyspace_shard",
fmt.Sprintf("%s/%s", keyspaceUnshardedName, "0"),
"-new_primary", clusterInstance.Keyspaces[0].Shards[0].Vttablets[1].Alias); err != nil {
"--new_primary", clusterInstance.Keyspaces[0].Shards[0].Vttablets[1].Alias); err != nil {
log.Errorf("clusterInstance.VtctlclientProcess.ExecuteCommand(\"PlannedRepare... caused an error : %v", err)
}
}


@@ -64,7 +64,7 @@ func TestMain(m *testing.M) {
}
// Start vtgate
clusterInstance.VtGateExtraArgs = []string{"-dbddl_plugin", "noop", "-mysql_server_query_timeout", "60s"}
clusterInstance.VtGateExtraArgs = []string{"--dbddl_plugin", "noop", "--mysql_server_query_timeout", "60s"}
vtgateProcess := clusterInstance.NewVtgateInstance()
vtgateProcess.SysVarSetEnabled = true
if err := vtgateProcess.Setup(); err != nil {
@@ -164,7 +164,7 @@ func shutdown(t *testing.T, ksName string) {
}
require.NoError(t,
clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "-recursive", ksName))
clusterInstance.VtctlclientProcess.ExecuteCommand("DeleteKeyspace", "--", "--recursive", ksName))
require.NoError(t,
clusterInstance.VtctlclientProcess.ExecuteCommand("RebuildVSchemaGraph"))


@@ -96,14 +96,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-transaction-timeout", "3"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-transaction-timeout", "3"}
if err := clusterInstance.StartKeyspace(*Keyspace, []string{"-80", "80-"}, 1, false); err != nil {
log.Fatal(err.Error())
return 1
}
// Start vtgate
clusterInstance.VtGateExtraArgs = []string{"-warn_sharded_only=true"}
clusterInstance.VtGateExtraArgs = []string{"--warn_sharded_only=true"}
if err := clusterInstance.StartVtgate(); err != nil {
log.Fatal(err.Error())
return 1


@@ -98,8 +98,8 @@ func createCluster(extraVTGateArgs []string) (*cluster.LocalProcessCluster, int)
}
vtGateArgs := []string{
"-mysql_auth_server_static_file", clusterInstance.TmpDirectory + "/" + mysqlAuthServerStatic,
"-keyspaces_to_watch", keyspaceUnshardedName,
"--mysql_auth_server_static_file", clusterInstance.TmpDirectory + "/" + mysqlAuthServerStatic,
"--keyspaces_to_watch", keyspaceUnshardedName,
}
if extraVTGateArgs != nil {
@@ -147,7 +147,7 @@ func TestVSchemaDDLWithKeyspacesToWatch(t *testing.T) {
defer cluster.PanicHandler(t)
extraVTGateArgs := []string{
"-vschema_ddl_authorized_users", "%",
"--vschema_ddl_authorized_users", "%",
}
clusterInstance, exitCode := createCluster(extraVTGateArgs)
defer clusterInstance.Teardown()


@@ -457,8 +457,8 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
@@ -474,7 +474,7 @@ func TestMain(m *testing.M) {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -105,14 +105,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -59,8 +59,8 @@ func TestMain(m *testing.M) {
clusterInstance.VtGatePlannerVersion = querypb.ExecuteOptions_Gen4
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs,
"-enable_system_settings=true",
"-mysql_server_version=8.0.16-7",
"--enable_system_settings=true",
"--mysql_server_version=8.0.16-7",
)
// Start vtgate
err = clusterInstance.StartVtgate()

@@ -124,7 +124,7 @@ func TestMain(m *testing.M) {
}
// Start vtgate
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-planner_version", "Gen4Fallback")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--planner_version", "Gen4Fallback")
if err := clusterInstance.StartVtgate(); err != nil {
return 1
}

@@ -199,14 +199,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -108,14 +108,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -410,8 +410,8 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
@@ -427,7 +427,7 @@ func TestMain(m *testing.M) {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -150,14 +150,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -149,14 +149,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -149,14 +149,14 @@ func TestMain(m *testing.M) {
SchemaSQL: SchemaSQL,
VSchema: VSchema,
}
clusterInstance.VtGateExtraArgs = []string{"-schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-schema-change-signal", "-queryserver-config-schema-change-signal-interval", "0.1"}
clusterInstance.VtGateExtraArgs = []string{"--schema_change_signal"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-schema-change-signal", "--queryserver-config-schema-change-signal-interval", "0.1"}
err = clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, true)
if err != nil {
return 1
}
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "-enable_system_settings=true")
clusterInstance.VtGateExtraArgs = append(clusterInstance.VtGateExtraArgs, "--enable_system_settings=true")
// Start vtgate
err = clusterInstance.StartVtgate()
if err != nil {

@@ -118,13 +118,13 @@ func TestMain(m *testing.M) {
SchemaSQL: sqlSchema,
VSchema: vSchema,
}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-transaction-timeout", "5"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-transaction-timeout", "5"}
if err := clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, false); err != nil {
return 1
}
// Start vtgate
clusterInstance.VtGateExtraArgs = []string{"-lock_heartbeat_time", "2s"}
clusterInstance.VtGateExtraArgs = []string{"--lock_heartbeat_time", "2s"}
vtgateProcess := clusterInstance.NewVtgateInstance()
vtgateProcess.SysVarSetEnabled = true
if err := vtgateProcess.Setup(); err != nil {

@@ -118,13 +118,13 @@ func TestMain(m *testing.M) {
SchemaSQL: sqlSchema,
VSchema: vSchema,
}
clusterInstance.VtTabletExtraArgs = []string{"-queryserver-config-transaction-timeout", "5", "-mysql_server_version", "5.7.0"}
clusterInstance.VtTabletExtraArgs = []string{"--queryserver-config-transaction-timeout", "5", "--mysql_server_version", "5.7.0"}
if err := clusterInstance.StartKeyspace(*keyspace, []string{"-80", "80-"}, 1, false); err != nil {
return 1
}
// Start vtgate
clusterInstance.VtGateExtraArgs = []string{"-lock_heartbeat_time", "2s", "-enable_system_settings=true"}
clusterInstance.VtGateExtraArgs = []string{"--lock_heartbeat_time", "2s", "--enable_system_settings=true"}
if err := clusterInstance.StartVtgate(); err != nil {
return 1
}

@@ -86,7 +86,7 @@ func TestMain(m *testing.M) {
}
// Start vtgate
clusterInstance.VtGateExtraArgs = []string{"-lock_heartbeat_time", "2s", "-enable_system_settings=true"}
clusterInstance.VtGateExtraArgs = []string{"--lock_heartbeat_time", "2s", "--enable_system_settings=true"}
if err := clusterInstance.StartVtgate(); err != nil {
return 1
}

Some files were not shown because too many files changed in this diff.
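The hunks above uniformly rewrite test flags from the single-dash spelling (`-flag`) to the double-dash spelling (`--flag`). A minimal sketch of why this is backward compatible: Go's standard `flag` package, which these binaries parse with before the planned `pflag` migration, treats one and two leading dashes identically. The `parseWarn` helper below is hypothetical (not part of the Vitess codebase); only the flag name `warn_sharded_only` is taken from the diff.

```go
package main

import (
	"flag"
	"fmt"
)

// parseWarn parses args with Go's standard flag package and returns the
// resulting value of a "warn_sharded_only" boolean flag. Illustrative
// helper only; it mirrors a vtgate flag name from this diff.
func parseWarn(args []string) bool {
	fs := flag.NewFlagSet("vtgate", flag.ContinueOnError)
	warn := fs.Bool("warn_sharded_only", false, "warn on sharded-only constructs")
	if err := fs.Parse(args); err != nil {
		panic(err)
	}
	return *warn
}

func main() {
	// The standard library accepts "-flag" and "--flag" interchangeably,
	// so tests rewritten to the double-dash spelling keep passing under
	// the old parser while matching pflag's preferred syntax.
	fmt.Println(parseWarn([]string{"-warn_sharded_only=true"}))
	fmt.Println(parseWarn([]string{"--warn_sharded_only=true"}))
}
```

Both calls print `true`, so the diff changes only the spelling, not behavior.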